Original: http://tengine.taobao.org/book/chapter_02.html
Nginx handles requests with multiple worker processes, and each worker has only a single main thread inside, so the concurrency a worker can handle looks quite limited: however many workers there are, that is how many requests can be handled at once, so where does the high concurrency come from?
The answer is that Nginx processes requests asynchronously and in a non-blocking way, and this asynchronous, non-blocking event-handling mechanism comes down to system calls such as select/poll/epoll/kqueue. These calls provide a mechanism for monitoring many events at once. The call itself blocks, but you can set a timeout, and it returns within that timeout as soon as any event becomes ready.
Take epoll as an example (in what follows we use epoll to stand in for this whole class of functions):
- When an event is not ready, it is placed into epoll.
- When an event is ready, we read from and write to it.
- When a read or write returns EAGAIN, we add it back to epoll.
Thus, as soon as an event is ready we handle it, and we only wait inside epoll when none of the events are ready. In this way we can handle a large number of concurrent requests, as sketched below.
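To make this concrete, here is a minimal sketch of such an event loop in C (not nginx's actual source; it assumes the connection sockets have already been set to non-blocking mode and registered with the epoll instance via epoll_ctl()):

```c
#include <errno.h>
#include <sys/epoll.h>
#include <unistd.h>

/* Minimal epoll-style event loop: block in epoll_wait() only while no
 * event is ready, handle every ready descriptor, and on EAGAIN leave the
 * descriptor registered until the next readiness notification. */
void event_loop(int epfd, int timeout_ms)
{
    struct epoll_event events[512];

    for ( ;; ) {
        /* waits at most timeout_ms; returns as soon as something is ready */
        int n = epoll_wait(epfd, events, 512, timeout_ms);

        for (int i = 0; i < n; i++) {
            int     fd = events[i].data.fd;
            char    buf[4096];
            ssize_t r;

            /* the event is ready: read until the kernel says EAGAIN */
            while ((r = read(fd, buf, sizeof(buf))) > 0) {
                /* process the request data, queue a response, ... */
            }

            if (r == -1 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
                continue;   /* not ready any more: the fd simply stays in epoll */
            }

            close(fd);      /* peer closed or real error; closing removes it from epoll */
        }
    }
}
```

A single thread running a loop like this can keep many connections in flight, because it only ever spends CPU time on descriptors that are actually ready.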
Of course, the concurrency here refers to requests that are in flight but not yet finished. There is only one thread, so only one request can actually be processed at any instant; the thread simply keeps switching between requests, and it yields voluntarily only because an asynchronous event is not ready. The switching here is essentially free: you can think of it as a loop over the events that are ready, which is in fact what happens. Compared with multithreading, this style of event handling has big advantages: no threads have to be created, each request takes very little memory, there are no context switches, and handling an event is very lightweight. No matter how high the concurrency goes, it does not lead to wasted resources (context switches); more concurrency simply uses more memory.
As mentioned earlier, the number of worker processes is configurable; we usually set it equal to the number of CPU cores on the machine, and the reason is inseparable from Nginx's process model and event-handling model.
Recommending as many workers as CPU cores is easy to understand: more workers would only make the processes compete for CPU resources and cause unnecessary context switches. Furthermore, to make better use of multi-core hardware, Nginx provides an option to bind workers to CPUs: we can pin a worker process to a particular core so that its CPU cache is not invalidated by process migration. Small optimizations like this are common in Nginx, and they show how much care the Nginx author has put in. For example, when Nginx compares a 4-byte string, it converts the four characters into an int and compares them in a single operation to reduce the number of CPU instructions, and so on.
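As a rough illustration of that 4-byte trick, here is a hedged sketch in the spirit of the string-comparison macros in nginx's HTTP parser (not a copy of them); it assumes a little-endian CPU that tolerates unaligned 32-bit loads, which nginx checks for at build time:

```c
#include <stdint.h>

/* Compare four bytes at once: the four characters are packed into a single
 * 32-bit integer, so the check costs one load and one compare instead of a
 * byte-by-byte loop. Assumes little-endian byte order and that unaligned
 * 32-bit reads are allowed on this platform. */
#define str4cmp(p, c0, c1, c2, c3)                                          \
    (*(uint32_t *) (p) == (((uint32_t) (c3) << 24)                          \
                         | ((uint32_t) (c2) << 16)                          \
                         | ((uint32_t) (c1) << 8)                           \
                         |  (uint32_t) (c0)))

/* e.g. recognizing the request method in a buffer that starts with "GET " */
int is_get(const char *m)
{
    return str4cmp(m, 'G', 'E', 'T', ' ');
}
```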