Nginx Architecture Note 2

Source: Internet
Author: User
Tags: epoll

Reprint: please credit theviper, http://www.cnblogs.com/TheViper

From Tengine's "Nginx Development from Beginner to Proficient"

Honestly, I still can't understand most of the content; I'm writing it down for now and will come back to it until it makes sense.

Nginx works in a multi-process mode: one master process and a number of worker processes. Nginx also supports a multi-threaded mode, but the mainstream way, and Nginx's default, is the multi-process one. (See http://www.cnblogs.com/TheViper/p/4180551.html for the difference between a process and a thread.)

The master process is mainly used to manage the worker processes: it receives signals from the outside, forwards signals to the workers, monitors the workers' health, and automatically starts a new worker when one exits (under exceptional circumstances). The basic network events are handled in the worker processes. The workers are peers: they compete equally for requests from clients, and each is independent of the others. A request is processed in exactly one worker process, and a worker cannot handle requests belonging to another worker.
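To make the master/worker split concrete, here is a minimal sketch, assuming Linux/POSIX; it is nothing like nginx's real code, which also handles signals and graceful reloads. The master forks a fixed set of workers, then sits in waitpid() and re-forks whenever a worker dies:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    #define WORKERS 4

    static void worker_loop(void) {
        for (;;) pause();          /* a real worker would run the event loop here */
    }

    static pid_t spawn_worker(void) {
        pid_t pid = fork();
        if (pid == 0) {            /* child: become a worker */
            worker_loop();
            _exit(0);
        }
        return pid;                /* parent: the master keeps the worker's pid */
    }

    int main(void) {
        for (int i = 0; i < WORKERS; i++)
            spawn_worker();

        /* Master loop: reap dead workers and replace them. */
        for (;;) {
            int status;
            pid_t dead = waitpid(-1, &status, 0);
            if (dead > 0) {
                fprintf(stderr, "worker %d exited, restarting\n", (int)dead);
                spawn_worker();
            }
        }
    }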

Worker processes are equal, and each has the same opportunity to process a request. Suppose we serve HTTP on port 80 and a connection request comes in: every worker could potentially handle it, so how is that resolved? First, every worker is forked from the master. In the master, the socket that needs to be listened on (listenfd) is established first, and then the workers are forked, so they all inherit the same listenfd. When a new connection arrives, the listenfd of every worker becomes readable. To guarantee that only one process handles the connection, the workers race for a lock called accept_mutex before registering the listenfd read event; only the process that grabs the mutex registers the event, and it accepts the connection inside the read-event handler. Once a worker has accepted the connection, it reads the request, parses it, processes it, generates the response data, and returns it to the client; that is one complete request. As we can see, a request is handled entirely by one worker process, and only inside that one worker process.
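nginx keeps its accept_mutex in shared memory; as a stand-in, this hedged sketch serializes accept() with flock() on a lock file. It shows the sequence described above: the listening socket is created in the master before forking, every worker inherits the same listenfd, and only the worker holding the lock accepts the next connection. (Port 8080 and the lock-file path are arbitrary choices for the sketch.)

    #include <string.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <sys/file.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    int main(void) {
        /* Master: create and listen on the socket *before* forking,
           so every worker inherits the same listenfd. */
        int listenfd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);       /* port 80 needs root; 8080 for the sketch */
        bind(listenfd, (struct sockaddr *)&addr, sizeof(addr));
        listen(listenfd, 128);

        int lockfd = open("/tmp/accept.lock", O_CREAT | O_RDWR, 0600);

        for (int i = 0; i < 4; i++) {
            if (fork() == 0) {             /* worker process */
                for (;;) {
                    flock(lockfd, LOCK_EX);   /* grab the "accept mutex" */
                    int conn = accept(listenfd, NULL, NULL);
                    flock(lockfd, LOCK_UN);   /* release before handling */
                    if (conn < 0) continue;
                    /* read the request, process it, write the response ... */
                    const char *resp = "HTTP/1.0 200 OK\r\nContent-Length: 2\r\n\r\nok";
                    write(conn, resp, strlen(resp));
                    close(conn);
                }
            }
        }
        for (;;) pause();                   /* master just waits in this sketch */
    }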

Nginx handles requests with multiple workers; inside each worker there is only one main thread, and the worker processes requests internally in an asynchronous, non-blocking way.

Contrast this with Apache's common way of working (Apache also has an asynchronous non-blocking version, but it conflicts with some modules, so it is not commonly used): each request exclusively occupies one worker thread, so when concurrency climbs into the thousands, thousands of threads are processing requests at the same time. That is a big challenge for the operating system: the threads' memory footprint is very large, their context switches cost a lot of CPU, performance naturally suffers, and all of that overhead accomplishes nothing useful.

What does asynchronous non-blocking mean here? Take a read on a connection as the event (the event consumers are nginx's various processing modules). With a plain non-blocking call, when the event is not ready the call returns EAGAIN immediately, telling you: the event is not ready yet, no need to panic, come back later. So you check the event again after a while, and again, until it is ready; in the meantime you can go do other things. You are never blocked, but you have to keep coming back to check the event's status. You get more done, yet the cost of all that checking is not small. That is why an asynchronous non-blocking event handling mechanism exists.
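At the call level, "returns EAGAIN" looks like this minimal sketch: put a descriptor into non-blocking mode with fcntl(), and a read() that would otherwise block fails immediately with errno set to EAGAIN (or EWOULDBLOCK). Using stdin here is just for illustration.

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* Make stdin non-blocking. */
        int flags = fcntl(STDIN_FILENO, F_GETFL, 0);
        fcntl(STDIN_FILENO, F_SETFL, flags | O_NONBLOCK);

        char buf[256];
        ssize_t n = read(STDIN_FILENO, buf, sizeof(buf));
        if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            /* Not ready yet: do something else and try again later,
               or hand the fd to epoll instead of polling by hand. */
            puts("read would block: EAGAIN");
        } else if (n >= 0) {
            printf("read %zd bytes\n", n);
        }
        return 0;
    }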

At the system-call level this means calls like select/poll/epoll/kqueue. They provide a mechanism to monitor many events at once. The call itself blocks, but you can set a timeout, and it returns as soon as any event becomes ready within that period. This mechanism solves both of the problems above. Take epoll as an example: while an event is not ready, its descriptor sits inside epoll; when the event becomes ready, we go read or write it; and when the read or write returns EAGAIN, we put it back into epoll. Thus as soon as any event is ready we handle it, and we only wait inside epoll when no event is ready at all. In this way we can handle a large amount of concurrency.
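A minimal epoll loop along these lines (Linux only, error handling omitted; the caller is assumed to pass in a non-blocking listening socket such as the one from the earlier sketch): descriptors sit in epoll while they are not ready, epoll_wait() blocks with a timeout until something becomes ready, and after a read comes up empty with EAGAIN the descriptor simply stays in epoll for next time.

    #include <errno.h>
    #include <unistd.h>
    #include <sys/epoll.h>
    #include <sys/socket.h>

    void event_loop(int listenfd) {
        int epfd = epoll_create1(0);
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = listenfd };
        epoll_ctl(epfd, EPOLL_CTL_ADD, listenfd, &ev);

        struct epoll_event ready[64];
        for (;;) {
            /* Block for up to 1000 ms; returns as soon as any fd is ready. */
            int n = epoll_wait(epfd, ready, 64, 1000);
            for (int i = 0; i < n; i++) {
                int fd = ready[i].data.fd;
                if (fd == listenfd) {
                    /* New connection: accept it and watch it too. */
                    int conn = accept(listenfd, NULL, NULL);
                    struct epoll_event cev = { .events = EPOLLIN, .data.fd = conn };
                    epoll_ctl(epfd, EPOLL_CTL_ADD, conn, &cev);
                } else {
                    char buf[4096];
                    ssize_t r = read(fd, buf, sizeof(buf));
                    if (r < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
                        continue;      /* not ready after all: leave it in epoll */
                    } else if (r <= 0) {
                        epoll_ctl(epfd, EPOLL_CTL_DEL, fd, NULL);
                        close(fd);     /* error or peer closed */
                    } else {
                        /* parse the request, process it, write the response ... */
                    }
                }
            }
        }
    }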

Of course, the concurrency here means concurrent requests in flight, not requests completed in parallel: there is only one thread, so only one request is actually being processed at any instant. The thread simply switches between requests continuously, and it switches voluntarily, because some asynchronous event is not yet ready. Switching here costs essentially nothing; you can think of it as a loop over whichever events happen to be ready, which is in fact what it is. Compared with multithreading, this style of event handling has great advantages: no threads to create, little memory per request, no context switches, and very lightweight event handling. Any level of concurrency causes no unnecessary resource waste (context switches); higher concurrency just consumes more memory.

The recommended number of workers is the number of CPU cores, which is easy to understand: more workers than cores only makes the processes compete for CPU and causes unnecessary context switching. Furthermore, to make better use of multi-core hardware, nginx provides a CPU affinity binding option: we can bind a worker to a core so it does not lose its cache warmth by migrating between cores.
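In the configuration file these are the worker_processes and worker_cpu_affinity directives. Under the hood, pinning a process to a core comes down to one system call; a Linux-only sketch:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* Pin this process to CPU core 0 so its working set
           stays warm in that core's cache. */
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        printf("pid %d pinned to core 0\n", (int)getpid());
        return 0;
    }

With each worker pinned like this, a worker's hot data tends to stay in one core's cache instead of being reloaded after every migration.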
