We know that processes and threads consume memory and other system resources, and switching between them requires context switches. A modern server can run hundreds of processes or threads at once, but once memory runs out performance degrades, and heavy I/O load causes frequent context switching.
The usual way to handle network connections is to create a process or thread per connection. This is easy to implement, but it scales poorly.
So how does NGINX do it? How does NGINX actually work?
After NGINX starts, there is one master process and several worker processes. The master process mainly manages the worker processes: it receives signals from the outside, forwards signals to the workers, monitors their health, and automatically starts a new worker when one exits abnormally. The basic network events are handled in the worker processes. The workers are all equivalent: they compete equally for requests from clients, and each is independent of the others. A request can only be handled in one worker process, and a worker cannot handle requests belonging to another worker. The number of worker processes is configurable, and it is usually set to match the number of CPU cores on the machine; the reason for this is closely tied to NGINX's process model and its event-handling model. The NGINX process model, then, is a master process supervising a set of equivalent worker processes.
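To make the model concrete, here is a minimal sketch in C of the idea described above: a master process forks one worker per CPU core and respawns any worker that dies. It is only an illustration of the process model, not NGINX's actual source code.

```c
/* Simplified sketch (not NGINX source): a master process forks one
 * worker per CPU core, then supervises them. */
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

static void worker_loop(void) {
    /* A real worker would set up its event loop and handle connections. */
    for (;;) pause();
}

int main(void) {
    long ncpu = sysconf(_SC_NPROCESSORS_ONLN);   /* workers == CPU cores */

    for (long i = 0; i < ncpu; i++) {
        if (fork() == 0) {        /* child: become a worker */
            worker_loop();
            exit(0);
        }
    }

    /* Master: wait for workers; restart any worker that dies unexpectedly. */
    for (;;) {
        if (wait(NULL) < 0) break;
        if (fork() == 0) {        /* respawn a replacement worker */
            worker_loop();
            exit(0);
        }
    }
    return 0;
}
```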
As described above, the worker processes are managed by the master process. The master receives signals from the outside world and acts on them accordingly, so to control NGINX we only need to send a signal to the master process. For example, ./nginx -s reload gracefully reloads NGINX: the command sends a signal to the master process, which first reloads the configuration file, then starts new worker processes and signals all the old workers that they can retire gracefully. The new workers begin accepting new requests, while the old workers, having received the signal from the master, stop accepting new requests and exit once all outstanding requests they are handling have been completed.
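As a rough illustration of that reload flow (not NGINX's real implementation), the sketch below shows a master process catching the reload signal (nginx -s reload sends SIGHUP to the master), notionally re-reading its configuration, starting new workers, and asking the old workers to exit gracefully with SIGQUIT. The worker bookkeeping here is an assumption made for the example.

```c
/* Minimal sketch of the reload idea (not NGINX code). */
#include <signal.h>
#include <unistd.h>

#define MAX_WORKERS 64

static pid_t old_workers[MAX_WORKERS];   /* filled when workers are forked (omitted) */
static int   n_old;

static volatile sig_atomic_t reload_requested;

static void on_reload(int sig) {
    (void)sig;
    reload_requested = 1;        /* do the real work outside the handler */
}

static void reload(void) {
    /* 1. Re-read the configuration file (omitted here).            */
    /* 2. Fork a fresh set of workers with the new configuration.   */
    /* 3. Tell the old workers to stop accepting new connections    */
    /*    and exit once their in-flight requests are done.          */
    for (int i = 0; i < n_old; i++)
        kill(old_workers[i], SIGQUIT);   /* graceful-shutdown signal */
    n_old = 0;
}

int main(void) {
    signal(SIGHUP, on_reload);   /* `nginx -s reload` sends SIGHUP to the master */
    for (;;) {
        pause();
        if (reload_requested) {
            reload_requested = 0;
            reload();
        }
    }
}
```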
So how does a worker process handle a request?
As we mentioned earlier, the worker processes are equal, and each has the same opportunity to handle a request. When we serve HTTP on port 80 and a connection request arrives, every worker could potentially handle that connection. How does this work? Each worker process is forked from the master process; in the master, the socket to be listened on is created first, and the worker processes are forked afterwards, so every worker can call accept() on that socket (each process holds its own descriptor for it, but they all listen on the same IP address and port, which is allowed). NGINX also provides accept_mutex, which, as the name suggests, is a shared lock around accept: with this lock, only one process at a time can accept a connection. accept_mutex is a configurable option that can be explicitly turned off; it is on by default. Once a worker has accepted a connection, it reads the request, parses it, processes it, generates the response data, and returns it to the client; that is a complete request. As we can see, a request is handled entirely by one worker process and only within that worker process.
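Here is a minimal sketch of that setup, using plain POSIX sockets rather than NGINX's internals: the listening socket is created before fork(), so every worker inherits it and can accept() connections arriving on the same IP address and port. The accept_mutex locking itself is omitted and only noted in a comment.

```c
/* Illustrative sketch only (not NGINX source): master opens the listen
 * socket first, then forks workers; every worker accepts on it. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int lfd = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(80);        /* the HTTP port from the text */

    bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
    listen(lfd, 511);

    for (int i = 0; i < 4; i++) {            /* fork the worker processes */
        if (fork() == 0) {
            for (;;) {
                /* Workers compete to accept connections on the shared socket;
                 * with accept_mutex enabled, NGINX lets only one worker
                 * accept at a time (that lock is omitted in this sketch). */
                int cfd = accept(lfd, NULL, NULL);
                if (cfd < 0) continue;
                /* read request, parse, process, write response ... */
                close(cfd);
            }
        }
    }
    for (;;) pause();                        /* master supervises workers */
}
```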
So what are the benefits of this process model that NGINX uses?
1) Each worker process is independent and does not need to lock shared state while handling requests, which saves locking overhead and keeps the programming simple.
2) The processes are independent of each other; when one process exits (for example, because of an exception), the others keep working and the service is not interrupted.
3) No unnecessary context switching, which reduces system overhead.
NGINX uses an event-driven processing mechanism; on Linux this is the epoll family of system calls. It can monitor many events at the same time, supports a timeout, and returns whichever events are ready. So whenever an event is ready we handle it, and only when no events are ready does epoll block and wait. In this way we can handle high concurrency: we are constantly switching between requests, but we switch only because an event is not yet ready and we voluntarily yield. Switching here has essentially no cost; it can simply be thought of as looping over the set of ready events.
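Below is a bare-bones sketch of such an epoll-based event loop, again as an illustration rather than NGINX's actual code: register the descriptors of interest, then repeatedly ask epoll for whichever events are ready and handle only those.

```c
/* A minimal epoll loop (illustrative): block only while nothing is
 * ready, then iterate over the ready events. */
#include <sys/epoll.h>

#define MAX_EVENTS 64

void event_loop(int listen_fd) {
    int epfd = epoll_create1(0);

    struct epoll_event ev = { .events = EPOLLIN, .data.fd = listen_fd };
    epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

    struct epoll_event ready[MAX_EVENTS];
    for (;;) {
        /* Blocks only while no event is ready; -1 means no timeout. */
        int n = epoll_wait(epfd, ready, MAX_EVENTS, -1);
        for (int i = 0; i < n; i++) {
            int fd = ready[i].data.fd;
            if (fd == listen_fd) {
                /* accept the new connection and register it with epoll ... */
            } else {
                /* read/write on an already-accepted connection ... */
            }
        }
    }
}
```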
Compared with multithreading, this style of event handling has major advantages: no threads need to be created, each request consumes very little memory, there are no context switches, and event handling is very lightweight. Higher concurrency does not cause unnecessary resource waste (context switching); it only consumes more memory. This is the main reason NGINX is so efficient.
The following is from the Nginx official website:
When an NGINX server is active, only the worker processes are busy. Each worker process handles multiple connections in a non-blocking fashion, reducing the number of context switches.
Each worker process is single-threaded and runs independently, grabbing new connections and processing them. The processes can communicate using shared memory for shared cache data, session persistence data, and other shared resources.
Each NGINX worker process is initialized with the NGINX configuration and is provided with a set of listen sockets by the master process.
The NGINX worker processes begin by waiting for events on the listen sockets (accept_mutex and kernel socket sharding). Events are initiated by new incoming connections. These connections are assigned to a state machine: the HTTP state machine is the most commonly used, but NGINX also implements state machines for stream (raw TCP) traffic and for a number of mail protocols (SMTP, IMAP, and POP3).