Nginx Learning Notes (11): Nginx Architecture Design (Understanding Nginx)

Source: Internet
Author: User
Tags: data structures
Preface
I have started the third part of the book, Understanding Nginx in Depth, and I keep seeing in Nginx the shadow of the system developed at the company where I previously interned; I am grateful for that experience. I have also increasingly found CSDN to be a great place: reading other people's blogs always gets my blood pumping, shows me the qualities I lack and should value, and gives me people to learn from.
Nginx Architecture Design

First, the key points of Nginx's design:

Performance: including network performance, per-request latency, and network efficiency.
Scalability: components can be added to extend the service, or interaction between components can be allowed.
Simplicity: components should be simple, easy to understand, and easy to implement.
Modifiability: includes evolvability, extensibility, customizability, and reusability.
Visibility: the operation of key components can be monitored.
Portability: runs across platforms.
Reliability: the degree to which the architecture is vulnerable to system-level failure when an individual service fails.

Modular Design

Highly modular design is the architectural foundation of Nginx. In Nginx, everything other than a small amount of core code is a module. I already got a feel for this in the earlier module-development practice.
All modules follow the ngx_module_t interface design, and all modules are layered and categorized.
There are five major types of modules in official Nginx: core modules, configuration modules, event modules, HTTP modules, and mail modules.
They all share the same ngx_module_t interface, but they differ in which level of the request-processing flow they work at.
The relationship between the commonly used Nginx modules is shown in the figure below:

[Figure: relationships among the common Nginx module types]
The configuration module and the core modules are defined by the Nginx framework code. The configuration module is the foundation of all the others: it implements the most basic parsing functionality, i.e., parsing the nginx.conf file. The Nginx framework then invokes the core modules; the other three types of modules have no direct relationship with the framework. As the figure shows, the event, HTTP, and mail module types each have a "spokesperson" among the core modules, and within each type there is one module that carries the core business and management duties. For example, the event module type is represented by its spokesperson, the ngx_events_module core module, while all event modules are loaded by ngx_event_core_module.
In the figure, both the configuration module and the core modules are closely tied to the Nginx framework and form the basis of the other modules. The event modules in turn underpin the HTTP and mail modules. The HTTP and mail modules are closer to the application level and have similar standing.
Event-Driven Architecture

Simply put, in an event-driven architecture, events are produced by a number of event sources, collected and distributed by one or more event collectors/distributors, and then the event handlers that registered interest in particular events "consume" them.
Unlike a traditional web server, Nginx handles its business with a fully event-driven architecture. The two diagrams below illustrate the difference:

[Figure: event-handling model of a traditional web server]

[Figure: event-handling model of Nginx]
The most important difference between the two: in a traditional web server, each event consumer exclusively occupies a process, while in Nginx an event consumer is only invoked briefly by the event-distributor process.
This design improves network performance and user-perceived request latency, but it also imposes a constraint: no event consumer may ever block, because a consumer that occupies the distributor process for a long time keeps other events from being handled. Furthermore, the distributor process must not go to sleep or enter a wait state.
Multi-Stage Asynchronous Processing of Requests

Multi-stage asynchronous processing of a request can only be built on top of an event-driven architecture: the processing of a request is divided into stages according to the ways events are triggered, and each stage can be triggered by the event collector/distributor.
Asynchronous processing and multi-stage division reinforce each other. It can be understood like this: when an event is dispatched to an event consumer, the consumer handles the current event as just one stage of processing the request, then waits for the kernel's next notification, at which point the consumer is invoked again for the next stage, and so on.
Advantages of multi-stage asynchronous processing: this design lets every process run at full capacity, with little or no time spent sleeping. That matters for two reasons:

When a process sleeps, the number of events handled concurrently inevitably drops, which reduces network performance while increasing the average request-processing latency. If sleeping leaves network performance short of business requirements, the only remedy is to add processes, and an excessive number of processes brings extra operating-system work in the form of inter-process context switches; frequent switching seriously wastes CPU resources.
A sleeping process holds on to its memory without releasing it, which shrinks the system's available memory and thus lowers the maximum number of concurrent connections the system can handle.

How, then, is a request divided into stages?

1. Decompose a blocking method into two stages around its triggering event. A method or system call that can put the process to sleep can usually be broken into several smaller methods or calls. In most cases the split is: first stage, change the blocking call into a non-blocking call; second stage, handle the result that the non-blocking call returns.

2. Decompose a blocking method call into multiple stages by time. If the approach above yields triggering events that the event collector/distributor cannot handle, split by execution time instead. For example, when reading a 10 MB file, the file may not be laid out contiguously on disk, so the drive must seek, and during seeking the process may sleep or wait.
At this point we would like to apply the method-division approach above. But, for example, on Linux, reading a disk file asynchronously has to go through kernel asynchronous I/O, and when asynchronous I/O is not enabled, the Nginx event modules on Linux do not support this approach. So we can only rewrite the read call: divide the 10 MB into 1,000 parts and read 10 KB at a time. Each stage then reads just 10 KB, which splits the operation across events.
3. Use a timer to divide the stages when the code is "doing nothing" but must wait for the system to respond, leaving the process spinning. A code snippet may contain no blocking method itself and yet effectively block the process: for example, after issuing a non-blocking system call, the code may repeatedly check a flag bit to decide whether to proceed, looping indefinitely while the flag is unset. In such cases, use a timer for control: if the timer expires and the flag is still not set, trigger the next stage's event anyway.
4. If the blocking method truly cannot be divided further, a separate process must execute it. When a method call can put the process to sleep, or occupy it for too long, and cannot be decomposed into non-blocking calls, it usually runs counter to the event-driven architecture itself. In that case the blocking method must run either in a newly spawned process or in a designated process that is not an event distributor; when it completes, it sends an event notification to the event collector/distributor so that processing can continue. This yields at least two stages: the stage before the blocking method executes and the stage after it, with the blocking method itself executed by the separate process, which posts an event notification when the method returns. Whenever this design appears, one should ask whether the event consumer is reasonable in the first place. (This part is rather abstract and the book gives no example, so I do not fully understand this last sentence.)
Master Management Process, Multiple Worker Processes

Nginx uses the design of one master management process plus several worker processes, as shown in the figure below:

[Figure: Nginx master/worker process model]
The advantages of this design:

1. It exploits the concurrent processing capability of multi-core systems. Multiple worker processes can occupy different CPU cores, improving network performance and reducing request latency.
2. Load balancing. When a request arrives, it is assigned to a less-loaded worker process.
3. The management process monitors the status of the worker processes and manages their behavior. The master process consumes few system resources; it is only responsible for starting, stopping, and monitoring the workers, and for controlling their behavior in other ways.
Platform-Independent Code

Nginx encapsulates logging, basic data structures (recorded in earlier notes), common algorithms, and so on. The core code is implemented in an operating-system-independent way, while the code related to system calls has a separate implementation for each operating system. This is what gives Nginx its portability.
Memory Pool Design

When I first heard of memory pools I thought they were something very high-end; now I suddenly understand why the design exists. Simply put, a memory pool consolidates multiple memory requests into a single request to the system, reducing the number of times the operating system is asked for memory and avoiding memory fragmentation. The pool is not responsible for reclaiming memory that has already been allocated from it; instead, everything is released together at the end. Unified allocation, unified release: this is extremely convenient.
The HTTP Filter Modules Use a Unified Pipe-and-Filter Pattern

This refers to the filter modules; see the earlier note on the HTTP filter modules. Every filter module has the same input and output interface; the modules are strung together into a chain by a linked-list structure, yet each module remains independent.
The overall flow is that each filter module processes the input data and passes it through the unified interface to the next module.
The design advantages of this unified filter arrangement: the input/output of the entire HTTP filtering system is reduced to simple combinations of individual filter modules; it provides good reusability, since any two HTTP filter modules can be connected together; it is easy to maintain and enhance; it is friendly to testing and verification, since the filter pipeline can be flexibly reassembled to verify functionality; and it fully supports concurrent execution.

Some Other User Modules

Nginx also has a number of modules for specific purposes, used to improve on the design key points above. For example, the ngx_http_stub_status_module provides monitoring of all HTTP connection states, enhancing the visibility of the Nginx system.
Summary
These notes on Nginx's architecture design were written over a span of two or three days. I first skimmed the section, then read it carefully while taking notes, redrawing some of the examples myself. I now have a basic grasp of this part of the architecture design. Two points stand out overall: multi-stage asynchronous processing, and the master management process with multiple worker processes.
Main reference: Understanding Nginx in Depth
