We know that processes and threads consume memory and other system resources, and that switching between them requires context switches. Most modern servers can handle hundreds of processes or threads at a time, but performance degrades once memory is exhausted, and heavy I/O load causes frequent context switching.
The conventional approach to network processing is to create a process or thread for each connection, which is easy to implement but hard to scale.
So how does NGINX work?
After nginx starts, there is one master process and several worker processes. The master process manages the workers: it receives signals from the outside, forwards signals to the workers, monitors their running status, and automatically starts a new worker when one exits abnormally. The basic network events are handled in the worker processes. The workers are peers: they compete equally for client connections, and each is independent of the others. A request is processed entirely within a single worker process; one worker cannot handle another worker's requests. The number of workers is configurable and is usually set equal to the number of CPU cores on the machine, for reasons tied to nginx's process model and event-handling model, which are inseparable. The process model can be pictured as one master process supervising a set of worker processes.
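The worker count described above is set in nginx.conf. A minimal, illustrative fragment (the values are examples, not taken from this article):

```nginx
# Size the worker pool to the machine's CPUs, as recommended above:
worker_processes auto;        # "auto" starts one worker per CPU core

events {
    worker_connections 1024;  # connections each worker may hold open
}
```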
As you can see, the worker processes are managed by the master. The master process receives signals from the outside and acts on them, so controlling nginx only requires sending a signal to the master. For example, ./nginx -s reload gracefully reloads nginx: the command sends a signal to the master process. On receiving it, the master first re-loads the configuration file, then starts new worker processes and signals all the old workers that they can retire with honor. The new workers begin accepting new requests; the old workers, once signaled, stop accepting new requests and exit after finishing all the requests they are currently handling.
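The reload sequence above can be driven with nginx's standard control commands or by signaling the master process directly (the pid file path varies by installation):

```shell
nginx -s reload    # master re-reads the config, starts new workers,
                   # and tells old workers to finish up and exit
nginx -s quit      # graceful shutdown of the whole server

# Equivalent raw signals to the master process:
kill -HUP  "$(cat /var/run/nginx.pid)"   # same as -s reload
kill -QUIT "$(cat /var/run/nginx.pid)"   # same as -s quit
```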
So how does a worker process handle a request?
As mentioned above, worker processes are equal, and each has the same opportunity to handle a request. When nginx serves HTTP on port 80 and a connection request arrives, any worker might handle it. How is this coordinated? Each worker is forked from the master process: the master first creates the sockets that require listen, then forks the workers, so every worker can call accept on the listening socket. (Each process has its own file descriptor, but they all refer to the same listener bound to one IP address and port, which the network stack allows.) To coordinate this, nginx provides accept_mutex; as the name suggests, it is a shared lock around accept. With the lock held, only one worker calls accept at a time. accept_mutex is a configurable option that can be switched off explicitly; it was on by default in older nginx versions (newer versions default it to off). Once a worker accepts a connection, it reads the request, parses it, processes it, generates the response, returns it to the client, and closes the connection; that is one complete request. A request is handled entirely by one worker, and only within that one worker.
So what are the advantages of nginx adopting this process model?
1) Each worker process is independent and needs no locks for request handling, which saves lock overhead and simplifies programming.
2) Processes are isolated from one another: if one worker exits (for example, after an exception), the others keep working and service is not interrupted.
3) No context switching is needed between requests, avoiding unnecessary system overhead.
Nginx adopts an event-driven processing mechanism built on system calls such as epoll on Linux. epoll lets a process monitor many events at once with an optional timeout: if any event becomes ready within the timeout, the call returns the ready events. In this way, a worker handles whichever events are ready and blocks in epoll only when nothing is ready. This enables high-concurrency processing: the worker keeps switching between requests, but it switches voluntarily, because an event is not ready, so the switch costs almost nothing. It can be understood simply as looping over the set of ready events.
Compared with multithreading, this event-driven approach has great advantages: no threads need to be created, each request occupies very little memory, there is no context switching, and event handling is very lightweight. Large numbers of concurrent connections do not waste resources on context switches; more concurrency simply uses more memory. This is the main reason for nginx's performance and efficiency.
The following is taken from the official Nginx website:
When an NGINX server is active, only the worker processes are busy. Each worker process handles multiple connections in a non-blocking fashion, which reduces the number of context switches.
Each worker process is single-threaded and runs independently, grabbing new connections and processing them. The processes can communicate using shared memory for shared cache data, session persistence data, and other shared resources.
Each NGINX worker process is initialized with the NGINX configuration and is provided with a set of listen sockets by the master process.
The NGINX worker processes begin by waiting for events on the listen sockets (accept_mutex and kernel socket sharding). Events are initiated by new incoming connections. These connections are assigned to a state machine; the HTTP state machine is the most commonly used, but NGINX also implements state machines for stream (raw TCP) traffic and for a number of mail protocols (SMTP, IMAP, and POP3).
Copyright Disclaimer: This article is an original article by the blogger and cannot be reproduced without the permission of the blogger.