Nginx Architecture Analysis


Nginx (pronounced "engine X") is a free, open source web server developed by Russian software engineer Igor Sysoev. Released in 2004, Nginx focuses on high performance, high concurrency, and low memory consumption, and provides a range of web server features: load balancing, caching, access control, bandwidth control, and efficient integration with a variety of applications. These qualities make Nginx a good fit for modern site architectures. It is currently the second most popular open source web server on the Internet.


14.1 Why high concurrency is important


Compared with ten years ago, the Internet is now widespread to a degree that was once hard to imagine. It has evolved from the clickable HTML text served by NCSA's, and then Apache's, web servers into an online communications medium for more than 2 billion people. With the growing number of permanently connected PCs, mobile devices, and tablets, the Internet is changing rapidly, and entire economic systems have become digitally wired. Online services offering real-time information and entertainment have grown more sophisticated. The security requirements of online business have also changed dramatically. Websites are more complex than ever and must be engineered to be far more robust and scalable.

Concurrency has always been one of the biggest challenges in site architecture, and with the rise of web services its magnitude keeps growing. It is not uncommon for popular sites to serve hundreds of thousands, or even millions, of simultaneous users. Ten years ago the main cause of concurrency was slow clients on ADSL or dial-up connections. Today, concurrency is driven by mobile devices and new application architectures, which often keep persistent connections open to deliver news, tweets, notifications, and other services to the client. Another important factor is the changed behavior of modern browsers, which open four to six simultaneous connections to a site to speed up page loading.

To illustrate the slow-client problem, imagine an Apache-based website generating a response of under 100 KB, a web page with some text or images. Generating the page may take a fraction of a second, but if the client's bandwidth is only 80 kbps (10 KB/s), transmitting it takes 10 seconds. In effect, the web server pushes out 100 KB of data relatively quickly and then waits 10 seconds while that data is sent before closing the connection. Now suppose 1,000 clients request the same page simultaneously: if 1 MB of memory is allocated per client, serving this one page to those 1,000 clients requires 1,000 MB of memory. In fact, a typical Apache-based web server commonly does allocate about 1 MB per connection, and mobile connections often run at only tens of kbps. Although sending data to a slow client can be optimized somewhat by increasing the size of the OS kernel socket buffers, that is not a general solution and can have unintended side effects.

With persistent (keep-alive) connections in use, the problem of handling concurrency is even more pronounced: to avoid the latency of establishing new HTTP connections, clients stay connected, and the web server must reserve a certain amount of memory for each connected client.

Therefore, to handle the load and the ever-higher levels of concurrency brought by continued user growth, a site needs a set of highly efficient components. While hardware (CPU, memory, disks), network bandwidth, and the application and data storage architectures obviously matter, the web server software is clearly important as well: the web server should be able to scale as the number of simultaneous connections and requests per second grows, without its resource consumption growing in proportion.


Is Apache No Longer Suitable?


Apache web server software, which dates back to the 1990s, still holds the largest share of sites on the Internet. Its architecture suited the operating systems and hardware of its day, and it also matched the state of the Internet then, where a website typically ran a single Apache instance on one physical server. By the early 2000s it was obvious that this single-server model could not easily scale to meet growing web service demands. Although Apache provides a solid foundation for developing new features, its practice of spawning a process for each new connection (Apache has supported an event model since version 2.4) is not suited to non-linear scaling of a website. Eventually Apache became a general-purpose web server focused on versatility, third-party extensions, and web application development. However, nothing comes free: with such a broad range of features bundled into a single piece of software, each connection consumes more CPU and memory, and even as hardware costs fell the software no longer scaled.

As a result, when server hardware, operating systems, and network infrastructure stopped being the main limits on website growth, web developers began looking for more efficient ways to run web servers. About ten years ago, the noted software engineer Daniel Kegel declared: "It's time for web servers to handle ten thousand clients simultaneously," anticipating what we now call cloud services. Kegel's C10K vision pushed many people to attack the problem of handling large-scale concurrent client connections by optimizing web server software, and Nginx became one of the most successful attempts.

To solve the C10K problem of 10,000 simultaneous connections, Nginx was built on a completely different architecture, one suited to a non-linearly growing number of simultaneous connections and requests per second. Nginx is event-based and does not follow Apache's practice of spawning a new process or thread for each request. The result is that memory and CPU usage remain predictable even as the load grows. On ordinary hardware, Nginx can handle tens of thousands of concurrent connections on a single server.

After Nginx's first release, it was commonly deployed alongside Apache, with Nginx handling static content such as HTML, CSS, JavaScript, and images to reduce the concurrency and latency pressure on the Apache application servers. As development progressed, Nginx added support for protocols such as FastCGI, uwsgi, and SCGI, and for distributed memory object caching systems such as memcached. Other useful features were added too, such as a reverse proxy with load balancing and caching. These additional features turned Nginx into an efficient toolset for building scalable web infrastructure.

In February 2012, Apache 2.4.x was released. Although it added new concurrency-handling core modules and proxy modules to improve scalability and performance, it is too early to tell whether its performance, concurrency, and resource utilization can catch up with, or surpass, purely event-driven web servers. Better performance in a new Apache version would be welcome; for an Nginx+Apache site architecture it could relieve potential back-end bottlenecks, but it would not solve every problem.


Nginx Has More Advantages


One of the key benefits of deploying Nginx is its ability to handle high concurrency efficiently, with high performance. But there are other interesting benefits as well.

In recent years, web architectures have embraced decoupling, separating application-layer facilities from the web server. Sites that used to be built on LAMP (Linux, Apache, MySQL, PHP, Python, or Perl) are increasingly becoming LEMP-based (the "E" standing for Engine X). An increasingly common practice, though, is to push the web server out to the edge of the infrastructure and to combine the same or updated sets of application and database tools in different ways.

Nginx is well suited to this role. It provides the key features needed to strip the following concerns out of the application layer into a more efficient edge web server layer: concurrency, keep-alive connection handling, SSL, static content, compression and caching, connection and request rate limiting, and HTTP media streaming. Nginx also allows direct integration with memcached, Redis, and other NoSQL solutions to boost performance under large-scale concurrency.
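As an illustration, here is a minimal sketch of such an edge-layer configuration using real Nginx directives; the host name, paths, zone names, and limits are hypothetical values chosen for the example:

```nginx
# Minimal edge-layer sketch: compression, rate limits, SSL, static content.
events {}

http {
    gzip on;                                  # compression filter
    gzip_types text/plain text/css application/json;

    # Request- and connection-rate limiting, keyed by client address
    limit_req_zone  $binary_remote_addr zone=req_per_ip:10m rate=10r/s;
    limit_conn_zone $binary_remote_addr zone=conn_per_ip:10m;

    server {
        listen 443 ssl;
        server_name example.com;                   # hypothetical host
        ssl_certificate     /etc/nginx/cert.pem;   # hypothetical paths
        ssl_certificate_key /etc/nginx/cert.key;

        location /static/ {                # static content served at the edge
            root /var/www;
            expires 1h;
        }

        location / {                       # everything else is proxied
            limit_req  zone=req_per_ip burst=20;
            limit_conn conn_per_ip 10;
            proxy_pass http://127.0.0.1:8080;  # hypothetical app server
        }
    }
}
```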

As modern programming languages and development packages are widely used, more and more companies are changing the way applications are developed and deployed. Nginx has become one of the most important parts of these paradigm changes, and has helped many companies quickly launch and develop their Web services within budget.

Nginx development began in 2002, and it was formally released in 2004 under a two-clause BSD license. Since its release, the number of Nginx users has kept growing; they contribute proposals and submit bug reports, suggestions, and evaluations, which has greatly helped and advanced the development of the community as a whole.

The Nginx code is written entirely in C and has been ported to many architectures and operating systems, including Linux, FreeBSD, Solaris, Mac OS X, AIX, and Microsoft Windows. Nginx has its own function library and, apart from zlib, PCRE, and OpenSSL, its standard modules use only system C library functions. Moreover, those third-party libraries can be left out if they are not needed or if potential licensing conflicts are a concern.

A word about the Windows version of Nginx. Nginx does run on Windows, but the Windows version is more a proof of concept than a full-featured port, owing to current limitations in how Nginx interacts with the Windows kernel architecture. Known issues with Windows Nginx include a low number of concurrent connections, lower performance, and a lack of support for caching and bandwidth policies. Future Windows versions are expected to move closer to the mainstream version in functionality.


14.2 Nginx Architecture Overview


Traditional process- or thread-based models handle concurrent connections by dedicating a separate process or thread to each connection and blocking on network or I/O operations. Depending on the application, this can be very inefficient in both memory and CPU terms. Spawning a process or thread requires preparing a new runtime environment, including allocating heap and stack memory and creating a new execution context. That costs additional CPU time, and the thrashing caused by excessive context switching eventually leads to poor performance. All of this complexity weighs on the older architecture of the Apache web server, which trades off rich, general-purpose application features against optimized use of server resources.

From the start, Nginx was meant to deliver better performance for dynamically growing websites while using server resources densely and efficiently, so it adopted a different model. Driven by advances in event-based technologies across operating systems, the outcome was a modular, event-driven, asynchronous, single-threaded, non-blocking architecture, which became the foundation of the Nginx code.

Nginx makes extensive use of multiplexing and event notification and assigns specific tasks to dedicated processes. A limited number of worker processes handle connections in efficient single-threaded loops; each worker can handle thousands of concurrent connections and requests per second.


Code Structure


The code of an Nginx worker consists of the core and the functional modules. The core is responsible for maintaining a tight event-processing loop and executing the appropriate module code at each stage of request processing. The modules implement most of the presentation- and application-layer functionality: reading from and writing to the network and storage, transforming content, filtering output, SSI (server-side include) processing, and, when proxying is enabled, forwarding requests to a back-end server.

Nginx's modular architecture lets developers extend the web server's features without modifying the Nginx core. Nginx modules come in several kinds: core, event modules, phase handlers, protocols, variable handlers, filters, upstreams, and load balancers. At present Nginx does not support dynamically loaded modules; module code is compiled together with the core. Dynamic module loading and an ABI are planned for a future version. More detail on the roles of the different modules can be found in Section 14.4.

On BSD, Linux, and Solaris systems, Nginx uses kqueue, epoll, and event ports respectively, handling network connections and content retrieval through these event-notification mechanisms, including accepting, processing, and managing connections, and greatly improving disk I/O performance along the way. The aim is to use whatever means the operating system recommends for obtaining timely, asynchronous feedback about network traffic, disk operations, socket reads and writes, and timeouts. Nginx heavily optimizes its use of these multiplexing and advanced I/O facilities for each Unix-based operating system it runs on.
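Nginx normally auto-detects the best available mechanism, but the events context has a real use directive for selecting it explicitly. A small sketch (the value shown assumes a Linux host; on FreeBSD it would be kqueue):

```nginx
# Sketch: choosing the event notification mechanism explicitly.
events {
    use epoll;                # assumes Linux; normally auto-detected anyway
    worker_connections 4096;  # maximum simultaneous connections per worker
}
```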

Figure 14.1 shows the high-level design of the Nginx architecture.


As mentioned earlier, Nginx does not spawn a process or thread per connection. Instead, worker processes accept new requests by listening on shared sockets and run an efficient loop to handle thousands of connections each. Nginx has no arbiter or dispatcher handing connections to workers; that work is done by the operating system kernel. The listen sockets are initialized at startup, and the workers accept connections, read requests, and write responses through them.

The event-processing loop is the most complex part of the Nginx worker code. It contains intricate internal calls and relies heavily on the idea of asynchronous task handling. Asynchronous operations are implemented through modularity, event notification, extensive use of callback functions, and finely tuned timers. The overriding principle is to be as non-blocking as possible. The only situation in which a worker process still blocks is insufficient disk performance.

Because Nginx does not spawn a process or thread per connection, memory usage is very economical and efficient in the vast majority of cases. It conserves CPU time too, since processes and threads are not constantly created and destroyed. What Nginx does is check the state of the network and storage, initialize new connections, add them to the main loop, and process them asynchronously; a connection is not released from the main loop until its request completes. Through careful use of system calls and an accurate implementation of supporting interfaces such as memory pools, Nginx typically achieves low CPU usage even under extreme load.

Nginx spawns several worker processes to handle connections, so it makes good use of multicore CPUs. Typically, a separate worker runs per core, which exploits multicore architectures fully and avoids thread thrashing and lock contention. There is no resource starvation within a single-threaded worker, and the resource-control mechanisms are isolated. This model also scales across physical storage devices, increasing disk utilization and avoiding blocking on disk I/O. Spreading the workload across several workers ultimately makes server resources more efficient to use.

The number of Nginx worker processes should be tuned to the disk-usage and CPU-load patterns. The rules here are fairly basic, and a system administrator should try several configurations under real load. The usual recommendations: if the load pattern is CPU-intensive, for example heavy TCP/IP processing, SSL, or compression, the number of workers should match the number of CPU cores; if the load is mostly disk-bound, for example serving lots of content from storage or heavy proxying, the number of workers may be 1.5 to 2 times the number of cores. Some engineers choose the worker count based on the number of separate storage units instead, though the effectiveness of that approach depends on the type and configuration of the disk storage.
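A hedged sketch of these rules of thumb; the disk-bound figure is illustrative, while worker_processes auto (matching the core count) is a real parameter in modern Nginx versions:

```nginx
# CPU-bound load (TCP/IP handling, SSL, compression): one worker per core.
worker_processes auto;    # "auto" matches the number of CPU cores

# Disk-bound load (serving from storage, heavy proxying) might instead use
# roughly 1.5-2x the core count, e.g. on a 4-core machine:
# worker_processes 8;

events {}
```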

One of the main problems the Nginx developers intend to solve in upcoming versions is how to avoid disk I/O blocking. At present, if storage performance is insufficient to serve a given worker's disk operations, that worker blocks on disk reads and writes. A number of mechanisms and configuration directives exist to mitigate such scenarios, most notably the sendfile and AIO options, which typically yield a substantial improvement in disk performance. An Nginx installation should be planned around the data set, the available memory, and the underlying storage architecture.
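The sendfile and aio directives are real; a hedged fragment combining them (it belongs inside a server block, and the paths and thresholds are illustrative):

```nginx
# Sketch: mitigating disk I/O blocking for large static files.
location /downloads/ {
    root /var/media;         # hypothetical storage path
    sendfile on;             # zero-copy transfers through the kernel
    aio on;                  # asynchronous file I/O where the OS supports it
    directio 4m;             # bypass the page cache for files over 4 MB
    output_buffers 1 64k;    # buffering for the non-sendfile path
}
```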

Another problem with the existing worker model is its limited support for embedded scripting. For the standard Nginx distribution, only Perl is supported as an embedded scripting language. The reason is simple: an embedded script can block on any operation or exit abnormally, and either failure hangs the worker process, affecting thousands of connections at once. Simpler scripting, embedded into Nginx more reliably and suitable for a wider range of applications, is on the roadmap.


Nginx Process Roles


Nginx runs several processes in memory: one master process and several worker processes. There are also special-purpose processes, such as the cache loader and the cache manager. In Nginx 1.x, all processes are single-threaded and use shared memory as the inter-process communication mechanism. The master process runs as root; the other processes run as an unprivileged user.

The master process is responsible for:

- reading and validating the configuration
- creating, binding, and closing sockets
- starting, terminating, and maintaining the configured number of worker processes
- reconfiguring without interrupting service
- non-stop binary upgrades (starting a new binary and rolling back if necessary)
- re-opening log files
- compiling embedded Perl scripts

The worker processes accept and process connections from clients, provide reverse proxying and filtering, and do almost everything else Nginx is capable of. Since the workers are the actual day-to-day executors of web server operations, a system administrator wanting to monitor the behavior of an Nginx instance should keep an eye on the worker processes.

The cache loader process is responsible for checking the cached items on disk and maintaining Nginx's in-memory database of cache metadata. Essentially, the cache loader prepares Nginx to work with the files already stored on disk in a specially allocated directory structure: it traverses the directories, checks the metadata of the cached content, and updates the relevant shared-memory entries once all the data is usable.

The cache manager process is mainly responsible for cache expiry and invalidation. It stays resident in memory during normal Nginx operation, and if it fails, the master process restarts it.


Introduction to Nginx cache


Nginx caches on the filesystem using hierarchical data storage. The cache key is configurable, and different request-specific parameters can control what goes into the cache. The cache keys and metadata are stored in shared memory segments, which the cache loader, the cache manager, and the workers can all access. Caching files in memory is not currently supported, but this can be offset by the operating system's virtual filesystem mechanisms. Each cached response is stored in a separate file on the filesystem, with the hierarchy (levels and naming conventions) controlled by Nginx configuration directives. When a response is written to the cache directory structure, its path and file name are derived from an MD5 hash of the URL.

The process of placing a response into the cache is as follows: as Nginx reads the response from the back-end server, the content is first written to a temporary file outside the cache directory. When Nginx finishes processing the request, it renames the temporary file and moves it into the cache directory. If the temporary directory used for proxying is on a different filesystem, the temporary file gets copied instead, so it is recommended to keep the temporary directory and the cache directory on the same filesystem. It is also quite safe to delete files from the cache directory when the cache needs to be purged. Third-party extensions exist that allow remote cache control, and integrating this capability into the main distribution is planned.
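A hedged sketch of this cache layout using the real proxy_cache_path, proxy_temp_path, and proxy_cache_key directives; paths, zone names, and sizes are hypothetical:

```nginx
# Sketch: on-disk cache with a two-level directory hierarchy.
# Keeping proxy_temp_path on the same filesystem as the cache directory
# lets Nginx rename temporary files into place rather than copy them.
events {}

http {
    proxy_cache_path /var/cache/nginx levels=1:2
                     keys_zone=app_cache:10m     # shared-memory metadata zone
                     max_size=1g inactive=60m;
    proxy_temp_path  /var/cache/nginx_tmp;       # same filesystem as above

    server {
        listen 80;
        location / {
            proxy_cache app_cache;
            proxy_cache_key $scheme$host$request_uri;  # configurable key
            proxy_pass http://127.0.0.1:8080;          # hypothetical backend
        }
    }
}
```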


14.3 The Nginx Configuration File


Nginx's configuration system came out of Igor Sysoev's experience with Apache. He believed that a scalable configuration system is the foundation of a web server. Scalability problems arise when maintaining large, complex configurations with many virtual servers, directories, locations, and data sets. For a relatively large site, a configuration that is not organized properly at the application layer can become a system administrator's nightmare.

Therefore, Nginx configuration is designed to simplify routine maintenance and provides a simple means for future extensions of the Web server.

Configuration files are text files, usually located in /usr/local/etc/nginx or /etc/nginx. The main configuration file is typically named nginx.conf. To keep it tidy, parts of the configuration can be placed in separate files, which are then automatically included in the main file. Note, however, that Nginx does not currently support Apache-style distributed configuration files (such as .htaccess files); all configuration relevant to Nginx's behavior should live in a centralized set of configuration files.

The master process reads and validates these configuration files at startup. Because the worker processes are forked from the master, they can use a compiled, read-only copy of the configuration; the configuration data structures are shared automatically through the usual virtual-memory management mechanisms.

Nginx configuration has a number of distinct contexts, such as the main, http, server, upstream, and location (plus mail, for the mail proxy) blocks of directives. These contexts never overlap; for example, a location block cannot be placed inside the main block. Also, to avoid unnecessary ambiguity, there is nothing like a "global web server" configuration. Nginx configuration is deliberately kept clean and logical, allowing users to create complex configuration files with thousands of directives. In a private conversation, Sysoev said: "Locations, directories, and other blocks in the global server configuration are the features I never liked in Apache, so this is why they were never implemented in Nginx."
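A hedged skeleton showing how these contexts nest; the names and addresses are hypothetical:

```nginx
# Sketch: nesting of Nginx configuration contexts.
worker_processes auto;         # main context: global settings

events {                       # events context
    worker_connections 1024;
}

http {                         # http context
    include mime.types;        # a separate file, included to keep things tidy

    upstream app_backend {     # upstream context
        server 127.0.0.1:8080; # hypothetical application server
    }

    server {                   # server context: one virtual server
        listen 80;
        server_name example.com;

        location / {           # location context; never valid at main level
            proxy_pass http://app_backend;
        }
    }
}
```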

The configuration syntax, formatting, and definitions follow a so-called C-style convention. This way of building configuration files is widely used in both open source and commercial software. By design, C-style configuration lends itself to nested descriptions; it is logical, easy to create, read, and maintain, and well liked by many engineers. Nginx's C-style configuration is also easy to automate.

Although some Nginx configuration directives look like parts of an Apache configuration, setting up an Nginx instance is a quite different experience. For example, Nginx supports rewrite rules, but a system administrator has to adapt Apache rewrite configurations by hand to fit the Nginx style; the rewrite engine implementations differ as well.

In general, Nginx settings also provide support for several original mechanisms that are very useful for an efficient web server configuration. It is worth getting briefly acquainted with variables and the try_files directive, which are more or less unique to Nginx. Variables were developed in Nginx to provide an additional, even more powerful mechanism for controlling the run-time configuration of the web server. Variables are optimized for quick evaluation and are internally precompiled to indices. Evaluation is done on demand: the value of a variable is typically calculated only once over the lifetime of a request and cached thereafter. Variables can be used in different configuration directives, providing additional flexibility for describing conditional request-processing behavior.
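A hedged illustration of variables in use; $remote_addr, $request_uri, $status, and $request_time are real built-in variables, while the my_* names are hypothetical:

```nginx
# Sketch: built-in and user-defined variables.
events {}

http {
    # Built-in variables referenced in a custom log format
    log_format my_timing '$remote_addr "$request_uri" $status $request_time';

    server {
        listen 80;
        access_log /var/log/nginx/timing.log my_timing;

        location / {
            set $my_tag "edge-1";            # user-defined variable
            add_header X-Served-By $my_tag;  # evaluated once per request
            root /var/www;
        }
    }
}
```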

The try_files directive matters as a way of gradually replacing if-style conditional configuration with something more appropriate; it is designed to quickly and efficiently try and match different URI-to-content mappings. In general, the try_files directive is useful and efficient. The reader is encouraged to study it thoroughly and use it wherever applicable.
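A hedged example of the classic try_files pattern; the document root and fallback page are hypothetical:

```nginx
# Sketch: map the request URI to a file, then a directory, then a fallback.
location / {
    root /var/www;
    # Try the exact file, then a directory of the same name,
    # then a hypothetical fallback page.
    try_files $uri $uri/ /fallback.html;
}
```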


14.4 Inside Nginx


As mentioned earlier, the Nginx codebase consists of a core and a set of modules. The core is responsible for providing the foundation of the web server and its web and mail reverse-proxy functionality; it implements the underlying network protocols, builds the necessary run-time environment, and ensures seamless interaction between the different modules. Most protocol- and application-specific features, however, are implemented by the other modules, not the core.

Internally, Nginx processes connections through a pipeline, or chain, of modules. In other words, for every operation there is a module doing the corresponding work: for example, compressing, modifying content, executing SSI, communicating with a back-end application server via FastCGI or uwsgi, or talking to memcached.

Between the core and the actual functional modules sit two modules, http and mail. These provide an extra layer of abstraction between the core and the lower-level components. They handle the sequence of events associated with their respective application-layer protocols, such as HTTP, SMTP, or IMAP. Together with the core, these upper-level modules are responsible for calling the respective functional modules in the correct order. Although the HTTP protocol is currently implemented as part of the http module, there are plans to split it out into a functional module of its own, to support other protocols such as SPDY (see "SPDY: An experimental protocol for a faster web").

The functional modules can be divided into event modules, phase handlers, output filters, variable handlers, protocol modules, upstreams, and load balancers. Although the event and protocol modules are also used by the mail module, most of these modules supplement Nginx's HTTP functionality. The event modules provide an OS-dependent event-notification mechanism, such as kqueue or epoll, depending on the operating system's capabilities and the build configuration. Protocol modules allow Nginx to communicate over HTTPS, TLS/SSL, SMTP, POP3, and IMAP.

A typical HTTP request processing cycle looks like this:

1. The client sends an HTTP request.
2. The Nginx core finds the location matching the request in the configuration and selects the appropriate phase handlers based on that location.
3. If configured as a reverse proxy, the load balancer picks an upstream server for forwarding the request.
4. The phase handler does its job and passes each output buffer to the first filter.
5. The first filter passes its output to the second filter.
6. The second filter passes its output to the third, and so on.
7. The final response is sent to the client.
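A hedged reverse-proxy configuration corresponding to steps 2 and 3 above; the upstream addresses are hypothetical:

```nginx
# Sketch: a location match that hands the request to the load balancer.
events {}

http {
    upstream app_pool {                  # the load balancer picks one of these
        server 10.0.0.1:8080;            # hypothetical upstream servers
        server 10.0.0.2:8080;
    }

    server {
        listen 80;
        location /app/ {                 # step 2: location matching
            proxy_pass http://app_pool;  # step 3: forward to the chosen server
        }
    }
}
```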

Nginx modules are highly customizable. They work through a series of callbacks, pointers to executable functions. A side effect of this design is a heavier burden on third-party developers, who must define precisely how and when their module should run. The Nginx API and the developer documentation are being improved and made more accessible to ease this development challenge.

Some examples of the points where a module can attach itself:

- before the configuration file is read and processed
- for each configuration directive of the location and server where it appears
- when the main server configuration is initialized
- when a server configuration is initialized or merged with the main configuration
- when a location configuration is initialized or merged with its parent server configuration
- when the master process starts or exits
- when a worker process starts or exits
- when handling a request
- when filtering the response header and the response body
- when initiating and re-initiating a request to an upstream server
- when processing the response from an upstream server
- when finishing an interaction with an upstream server

Inside the worker, the sequence of actions that produces a response is:

1. Begin ngx_worker_process_cycle().
2. Process events with OS-specific mechanisms (such as epoll or kqueue).
3. Accept events and dispatch the corresponding actions.
4. Process or proxy the request header and body.
5. Generate the response content and stream it to the client.
6. Finalize the request.
7. Re-initialize timers and events.

The event loop itself (steps 5 and 6) ensures that the response is incrementally generated and streamed to the client.

In more detail, an HTTP request is processed as follows:

1. Initialize request processing.
2. Process the request header.
3. Process the request body.
4. Call the associated handler.
5. Run through the processing phases.

When Nginx handles an HTTP request, it passes it through a number of processing phases. Each phase calls the corresponding handlers. Typically, a phase handler processes a request and produces the relevant output. Phase handlers are attached to the locations defined in the configuration file.

A phase handler typically does four things: get the location configuration, generate an appropriate response, send the header, and send the body. The handler function takes one argument: a structure describing the request. The request structure holds a lot of useful information about the client request, such as the request method, the URI, and the request headers.

After reading the HTTP request header, Nginx looks up the associated virtual server configuration. If a virtual server is found, the request goes through six phases:

1. server rewrite phase
2. location phase
3. location rewrite phase (which can bring the request back to the previous phase)
4. access control phase
5. try_files phase
6. log phase

To generate the necessary response content for a request, Nginx passes the request to the matching content handler. Depending on the location configuration, Nginx first tries unconditional handlers such as perl, proxy_pass, flv, and mp4. If the request does not match one of these content handlers, it is picked up by one of the following handlers, in this exact order: random_index, index, autoindex, gzip_static, static.
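A hedged fragment showing locations that rely on these handlers; it assumes Nginx was built with ngx_http_mp4_module, and the paths are hypothetical:

```nginx
# Sketch: content handlers selected by location configuration.
server {
    listen 80;

    location /videos/ {
        root /var/media;
        mp4;                   # unconditional handler (ngx_http_mp4_module)
    }

    location /files/ {
        root /var/www;
        autoindex on;          # directory listings
    }

    location / {
        root /var/www;
        index index.html;      # index handler for trailing-slash requests
        # anything else falls through to the static handler
    }
}
```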

The index module, whose details are in the Nginx documentation, handles only requests ending with a trailing slash. If the request matches neither the mp4 nor the autoindex module, the content is considered to be simply a file or directory on disk (that is, static) and is served by the static content handler. In the directory case, the URI is automatically rewritten so that it always ends with a slash (which triggers an HTTP redirect).

The content produced by a content handler is passed to the filters. Filters are also attached to locations, and a location can have several filters configured. Filters transform the output produced by the handlers. The order of filter execution is determined at compile time: for the built-in filters it is predefined, and for a third-party filter it can be set at the build stage. In the current Nginx implementation, filters can only modify outgoing data; it is not possible to write a filter that modifies incoming data. Input filters will appear in a future version.

Filters follow a particular design pattern. A filter gets called, starts working, and calls the next filter, until the last filter in the chain is called; after that, Nginx finalizes the response. A filter does not have to wait for the previous filter to finish: the next filter can start its own work as soon as the input from the previous one is available (much like a Unix pipeline). In turn, the output response being generated can be streamed to the client before the entire response has been received from the upstream server.

There are header filters and body filters; Nginx feeds the header and the body of the response to the associated filters separately.

A header filter consists of three basic steps:

1. Decide whether to operate on this response.
2. Operate on the response.
3. Call the next filter.

Body filters transform the generated content. Some examples of body filters:

- SSI processing
- XSLT filtering
- image filtering (for instance, resizing images on the fly)
- charset conversion
- gzip compression
- chunked encoding
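A hedged fragment enabling several of these real filter modules; image_filter assumes a build with ngx_http_image_filter_module, and the sizes are illustrative:

```nginx
# Sketch: body filters configured for one location.
location /gallery/ {
    root /var/www;
    ssi on;                        # server-side includes
    charset utf-8;                 # charset conversion filter
    image_filter resize 300 200;   # resize images on the fly
    gzip on;                       # gzip compression filter
}
```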

After the filter chain has run, the response is passed to the writer. Along with the writer go two additional filters with special roles: the copy filter and the postpone filter. The copy filter is responsible for filling memory buffers with the relevant response content, which may be stored in the reverse proxy's temporary directory. The postpone filter is used for subrequests.

Subrequests are one of the most important mechanisms of request/response processing and one of Nginx's most powerful features. With subrequests, Nginx can return a response from a URL different from the one the client originally requested. Some web frameworks call this an internal redirect, but Nginx goes further: not only can it run multiple subrequests and combine their responses into one, subrequests can also be nested and hierarchical. A subrequest can issue its own sub-subrequest, and a sub-subrequest can issue a sub-sub-subrequest. Subrequests can map to files on disk, to other handlers, or to upstream servers. They are useful for inserting additional content based on data in the original response. For example, the SSI module uses a filter to parse the contents of the returned document and then replaces each include directive with the content of the referenced URL. Similarly, one can write a filter that appends new content to the response body produced for a URL.
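For instance, a hedged SSI sketch: with ssi on configured, the include directive shown in the comment (real SSI syntax; the fragment path is hypothetical) causes Nginx to issue a subrequest for /fragments/footer.html and splice its response into the page:

```nginx
# The served HTML page would contain an SSI directive such as:
#   <!--# include virtual="/fragments/footer.html" -->
# Each such directive triggers a subrequest whose response replaces it.
location / {
    root /var/www;
    ssi on;    # parse SSI directives and resolve them via subrequests
}
```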

Upstreams and load balancers also deserve a brief mention. Upstreams are used to implement the reverse-proxy content handler (the proxy_pass handler). An upstream module assembles the request to be forwarded to an upstream server (the "back end") and then receives the response it returns. No output filters are called in this process. The upstream module merely sets callbacks to be invoked when the upstream server is ready to be written to or read from. The callbacks implement the following functionality:

- crafting the request buffer (or buffer chain) to be sent to the upstream server
- re-initializing or resetting the connection to the upstream server (which happens right before the request is retried)
- processing the first bytes of the upstream server's response and keeping pointers to the response payload
- aborting the request (when the client closes the connection prematurely)
- finalizing the request (when Nginx finishes reading the upstream response)
- trimming the response body (for example, removing whitespace)

When there is more than one upstream server, a load-balancer module can attach itself to the proxy_pass handler to provide the ability to choose an upstream server. A load balancer registers a configuration directive, provides additional upstream-server initialization (resolving upstream server names via DNS and the like), initializes the connection structures, decides where to route a request, and updates state information. Currently, Nginx supports two standard load-balancing disciplines for upstream servers: round-robin and IP hash.
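A hedged sketch of the two standard disciplines; the addresses are hypothetical:

```nginx
# Sketch: round-robin (the default) versus IP-hash balancing.
upstream rr_pool {
    server 10.0.0.1:8080;    # requests alternate between these servers
    server 10.0.0.2:8080;
}

upstream sticky_pool {
    ip_hash;                 # pin each client IP to one upstream server
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
}
```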

The upstream modules and load-balancing algorithms can detect a failing upstream server and re-route new requests to the servers still available, and more work is planned to strengthen this capability. In short, many load-balancer improvements are planned, and the next versions of Nginx will significantly improve the mechanisms for distributing load among different upstream servers and for health checking.

There are also a couple of interesting modules that provide an additional set of variables for use in the configuration file. While variables in Nginx are created and updated by many different modules, two modules are entirely dedicated to variables: geo and map. The geo module eases IP-address-based tracking of clients; it can create arbitrary variables that depend on the client's IP address. The map module allows one variable to be generated from another, providing a simple way to map host names and other run-time variables. Modules of this kind may be called variable handlers.
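A hedged example of the real geo and map directives (both live in the http context); the network ranges and mapped values are hypothetical:

```nginx
# Sketch: variable handlers.
geo $in_office {             # derive a variable from the client IP address
    default        0;
    192.168.0.0/24 1;        # hypothetical office network
}

map $http_host $backend_pool {    # derive one variable from another
    default          generic_pool;
    api.example.com  api_pool;    # hypothetical host-to-pool mapping
}
```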

The memory-allocation mechanisms inside an Nginx worker were, in one way or another, inspired by Apache. A high-level description of Nginx memory management: for each connection, the necessary memory buffers are dynamically allocated for storing and manipulating the request and response headers and bodies, and they are released when the connection is closed. Importantly, Nginx avoids copying data in memory wherever it can; most data is passed around by pointers rather than by calls to memcpy.

Going a bit deeper: when a module produces a response, the content is placed in a memory buffer, which is added to a buffer chain. The buffer chain is used for request processing as well. Because the processing scenarios differ by module type, buffer chains in Nginx are quite complicated. For example, it is tricky to manage buffers precisely when implementing a body filter module: such a module can operate on only one buffer of the chain at a time, and it must decide whether to overwrite the input buffer, replace it with a newly allocated one, or insert a new buffer before or after it. To complicate matters, a module sometimes receives data that needs several buffers to store, so it has to deal with an incomplete buffer chain. However, since Nginx provides only a low-level API for manipulating buffer chains, third-party developers really should master this obscure corner of Nginx before attempting to develop a module.

One thing to note about the above: memory buffers are allocated for the whole life of a connection, so long-lived connections need extra memory. At the same time, an idle keep-alive connection consumes only about 550 bytes of memory in Nginx. A possible optimization for future Nginx versions is to let long-lived connections reuse and share memory buffers.

The task of managing memory allocations is done by the Nginx pool allocator. Shared memory areas are used to hold the accept mutex, cache metadata, the SSL session cache, and information related to bandwidth policies (rate limiting). Nginx implements a slab allocator to manage this shared memory and provides a series of locking mechanisms (mutexes and semaphores) so that shared memory can be used safely and concurrently. To organize complex data structures, Nginx also provides a red-black tree implementation. Red-black trees are used to keep cache metadata in shared memory, to look up non-regex location definitions, and for a few other tasks.

Unfortunately, none of the above has ever been described in a consistent, simple way, which makes the work of developing third-party modules quite complicated. Good documents on Nginx internals do exist, for example those written by Evan Miller, but they required an enormous amount of reverse-engineering, and developing Nginx modules still feels like a black art to many.

Difficult as third-party module development is, the Nginx community has recently produced a good number of useful third-party modules: for example, an embedded Lua interpreter for Nginx, additional load-balancing methods, full WebDAV support, and advanced cache control. The authors of this chapter encourage and support this and other interesting third-party work in the future.


14.5 Good Practices


When Igor Sysoev started writing Nginx, most of the software that runs the Internet already existed, and the architecture of that software typically followed the definitions of legacy server and network hardware, operating systems, and old Internet architecture in general. That did not stop Igor from thinking he could improve things in the web server area. So the first obvious lesson is: there is always room for improvement.

With the idea of better web software in mind, Igor spent a lot of time developing the initial code structure and studying different ways of optimizing code under several operating systems. Ten years later, with version 1.x having been under active development for a decade, he is developing a prototype of version 2.0. Clearly, the initial prototype and code structure of a new architecture matter enormously for the subsequent life of a piece of software.

Another point worth mentioning is focused development. The Windows version of Nginx is a good example of how it is worth avoiding the dilution of development effort on work outside the developer's core competence or the application's target goals. Strengthening the ability of the Nginx rewrite engine to remain backward compatible with existing legacy configurations was equally worthwhile.

Finally, it is worth mentioning that although the Nginx developer community is not large, Nginx's third-party modules and extensions have been an important factor in its popularity. The Nginx user community and the authors are grateful to Evan Miller, Piotr Sikora, Valery Kholodkov, Zhang Yichun (agentzh), and other excellent software engineers for their work.


English Original: http://www.aosabook.org/en/nginx.html

Chinese reference: http://www.ituring.com.cn/article/4436

