Nginx Core Architecture Overview

Before graduation, after finishing my final project, I got bored and played with socket programming, using the C++ Qt framework to write toy TCP and UDP communication clients. Chatting on the phone with my prospective supervisor, I was advised to dig deep into sockets and work toward a back-end or architect's career path. How to dig deeper? The answer was source code: studying a server's source code is the most appropriate way to learn socket-related knowledge. As for which server to choose, after some investigation I found nginx more compact and elegant than the heavyweight Apache. So, before getting into the source code itself, I did some background study, summarized below.

1. Process model

Like other servers, nginx on Unix runs continuously in the background as a daemon. For debugging, background mode can be turned off so that nginx runs in the foreground, and the master process can even be cancelled through configuration (explained in detail later) so that nginx works as a single process. These modes, however, have little to do with the architecture nginx is proud of. Nginx also supports multithreading, but here we will study its default multi-process mode.

After nginx starts, it creates one master process (main process) and several worker processes (slave processes). The master process is mainly responsible for managing the worker processes: it receives signals from the administrator and forwards them to the corresponding worker processes, and it monitors the workers' status, recreating and starting a worker whenever one exits abnormally. The worker processes handle the basic network events. Workers have equal priority and are independent of one another; they compete fairly for client requests, and each request is handled by exactly one worker process. The process model is shown in Figure 1.

Figure 1: nginx process model
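As a rough illustration of this pattern, here is a minimal sketch of a daemon that forks workers and restarts any that die; this is illustrative only and not nginx's actual source:

    /* Minimal sketch of the daemon-plus-fork master/worker pattern
       (illustrative only, not nginx source). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define WORKERS 4

    static void worker_loop(void) {
        /* A real worker would register listenfd with the event loop
           and process network events here. */
        for (;;)
            pause();
    }

    int main(void) {
        if (daemon(0, 0) < 0) {               /* run in the background */
            perror("daemon");
            exit(1);
        }
        for (int i = 0; i < WORKERS; i++)
            if (fork() == 0)                  /* child becomes a worker */
                worker_loop();
        /* Master: if a worker dies abnormally, start a replacement. */
        for (;;)
            if (wait(NULL) > 0 && fork() == 0)
                worker_loop();
        return 0;
    }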

The number of worker processes is configurable (the worker_processes directive) and is usually set to match the number of CPU cores; the reason for this principle lies in nginx's event processing model, which we introduce later.
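For instance, a program can discover the core count the same way an administrator might before setting the directive (a small sketch; nginx's own logic differs):

    /* Sketch: query the number of online CPU cores. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        long cores = sysconf(_SC_NPROCESSORS_ONLN);
        printf("worker_processes %ld;\n", cores);  /* mirrors the nginx directive */
        return 0;
    }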

2. Signals and requests

Nginx interacts with the outside world through two interfaces: signals from the administrator and requests from clients. The following describes, by example, how nginx processes each.

To control nginx, the administrator communicates with the master process by sending it command signals. For example, before version 0.8, nginx was restarted with the kill -HUP [pid] command, which performs a graceful restart without service interruption. After receiving the HUP signal, the master process reloads the configuration file, starts new worker processes, and sends a stop signal to the old worker processes. The new workers then begin accepting network requests while the old workers stop accepting new ones; once an old worker finishes the requests it is currently handling, it exits and is destroyed. Since version 0.8, nginx has provided a series of command line parameters to simplify server management, such as ./nginx -s reload and ./nginx -s stop, which restart and stop nginx respectively. Executing such a command actually starts a new nginx process, which parses the parameters in the command and sends the corresponding signal to the master process, achieving the same effect as manually sending the signal.
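A bare-bones sketch of this signaling pattern follows; it is a minimal simplification, not nginx's implementation, and the -s flag here merely mimics nginx's command-line interface:

    /* Sketch: a master installs a SIGHUP handler; a controller sends the signal. */
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    static volatile sig_atomic_t reload = 0;

    static void on_hup(int sig) {
        (void)sig;
        reload = 1;                     /* ask the main loop to reload config */
    }

    int main(int argc, char **argv) {
        if (argc == 3) {                /* controller mode: ./a.out -s <master-pid> */
            kill((pid_t)atoi(argv[2]), SIGHUP);  /* same as: kill -HUP <pid> */
            return 0;
        }
        signal(SIGHUP, on_hup);         /* master mode: wait for commands */
        for (;;) {
            pause();
            if (reload) {
                reload = 0;
                puts("reload config, start new workers, retire old ones");
            }
        }
    }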

3. Requests and events

The most common case is a server processing HTTP requests on port 80, so let us use it to describe how nginx handles a request. First, recall that every worker process is forked from the master process. The master process creates the socket to listen on (an IP address plus port number) and obtains the corresponding listenfd (listening file descriptor, or handle); because the workers are forked from the master, they all inherit this listenfd. When a new connection arrives, the listenfd of every worker process becomes readable. To guarantee that only one worker handles the connection, each worker must first acquire accept_mutex (the mutex lock for accepting connections) before registering a read event on listenfd. The worker that grabs the connection then accepts it and begins to read the request, parse it, process it, and send the response data back to the client.
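The skeleton below sketches this flow, using an flock() file lock as a stand-in for accept_mutex (nginx implements the lock differently, e.g. in shared memory); port 8080 is used only because binding port 80 requires privileges:

    /* Sketch: the master creates listenfd before forking; workers compete
       via a lock to accept connections (not nginx source). */
    #include <fcntl.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/file.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int listenfd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);      /* port 80 would require privileges */
        bind(listenfd, (struct sockaddr *)&addr, sizeof(addr));
        listen(listenfd, 128);

        for (int i = 0; i < 4; i++) {
            if (fork() == 0) {            /* worker: inherits listenfd */
                /* each worker opens its own lock fd: stand-in for accept_mutex */
                int lockfd = open("/tmp/accept.lock", O_CREAT | O_RDWR, 0600);
                for (;;) {
                    flock(lockfd, LOCK_EX);           /* grab the "mutex"   */
                    int conn = accept(listenfd, NULL, NULL);
                    flock(lockfd, LOCK_UN);           /* release it         */
                    const char *msg = "HTTP/1.0 200 OK\r\n\r\nhi\n";
                    write(conn, msg, strlen(msg));    /* read/parse elided  */
                    close(conn);
                }
            }
        }
        for (;;)
            pause();                      /* master would monitor workers */
    }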

4. Process model analysis

Nginx thus uses, though as we will see not only uses, the multi-process request processing model (PPC): each worker process handles only one request at a time, so resources need not be locked between requests, and processes handle requests in parallel without affecting one another. A failure while processing a request causes at most one worker process to exit abnormally, without interrupting service; the master process immediately starts a new worker in its place, which reduces the overall risk the server faces and makes the service more stable. Compared with the multi-threaded model (TPC), however, the system overhead is somewhat larger and the efficiency somewhat lower, which must be improved by other means.

5. Nginx's high-concurrency mechanism: the asynchronous non-blocking event mechanism

IIS's event handling mechanism is multi-threaded: each request exclusively occupies one worker thread. Because of the memory consumed by many threads and the CPU overhead of context switching between them (repeatedly saving and restoring register state), a multi-threaded server facing thousands of concurrent requests puts great pressure on the system, and its high-concurrency performance is not ideal. Of course, if the hardware is good enough to supply sufficient system resources, the pressure on the system ceases to be a problem.

Let us go deeper into the system and discuss the differences between the multi-process and multi-threaded models, and between blocking and non-blocking mechanisms.

Anyone familiar with operating systems knows that multithreading exists to make full use of CPU scheduling when resources are sufficient, and especially to improve the utilization of multi-core CPUs. But a thread is the smallest unit of task scheduling, while a process is the smallest unit of resource allocation, which means multithreading faces a serious problem: as the number of threads grows, so does the demand for resources, and the parent process of those threads may not be able to obtain enough resources for all of them at once. When the system cannot satisfy a process's resource request, it makes the entire process wait; at that point, even if the available resources would let some threads work normally, the parent process cannot acquire them, and all of its threads wait together. Bluntly put, multithreading allows flexible scheduling among the threads within a process (though it adds deadlock risk and thread-switching overhead), but it cannot ensure that the ever-heavier parent process itself keeps getting scheduled properly by the system. So multithreading can indeed raise CPU utilization, but it is not an ideal solution to the high-concurrency server problem, not to mention that high CPU utilization cannot even be sustained under high concurrency. This is IIS's multi-threaded, synchronous blocking event mechanism.

Nginx's multi-process mechanism ensures that each request applies for system resources independently: once the conditions are met, each request can be processed immediately, i.e. asynchronous, non-blocking processing. However, creating a process costs more than creating a thread, so to keep the number of processes down, nginx adds process scheduling algorithms on top of the multi-process mechanism: I/O events are handled not by the multi-process mechanism alone, but by an asynchronous, non-blocking multi-process mechanism. Next we introduce nginx's asynchronous non-blocking event processing mechanism.

6. epoll

In Linux, high-concurrency, high-performance network programming invariably means epoll, and nginx uses the epoll model as its mechanism for processing network events. Let us first look at how epoll came about.

The earliest scheduling scheme was asynchronous busy polling: continuously polling for I/O events by traversing the socket set and checking each socket's state. Obviously, this scheme incurs needless CPU overhead whenever the server is idle.
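A caricature of busy polling (a sketch; the busy_poll helper name is illustrative, and the fds array of non-blocking sockets is assumed to be set up elsewhere):

    /* Sketch: busy polling keeps re-scanning sockets even when nothing is
       happening, burning CPU while the server is idle. */
    #include <stdio.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    void busy_poll(int *fds, int nfds) {
        char buf[4096];
        for (;;) {                                /* spin forever, idle or not */
            for (int i = 0; i < nfds; i++) {
                /* non-blocking read: returns -1 (EAGAIN) when no data waits */
                ssize_t n = recv(fds[i], buf, sizeof(buf), MSG_DONTWAIT);
                if (n > 0)
                    printf("fd %d: %zd bytes ready\n", fds[i], n);
                /* otherwise there is nothing to do, yet the loop keeps spinning */
            }
        }
    }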

Later, select and poll appeared in succession as scheduling proxies that improved CPU utilization. Literally, one "selects" and the other "polls", but they are essentially the same: both traverse the socket set and process the requests found there. The difference from busy polling is that they monitor I/O events and block the polling thread while idle, waking it when one or more I/O events arrive; rather than "busy" polling, they are an asynchronous polling method. The select/poll model still scans the entire FD (file descriptor) set, that is, the socket set, so the efficiency of network event handling decreases linearly as concurrent requests grow, and select further caps the number of monitored descriptors with a macro (FD_SETSIZE). At the same time, the select/poll model copies memory between kernel space and user space, resulting in high overhead. These disadvantages gave birth to a new model.
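A minimal select()-based loop looks roughly like this (a sketch; the select_loop helper name is illustrative, and listenfd is assumed to be an already-listening socket):

    /* Sketch: a select() loop; the fd_set is rebuilt and the whole set is
       rescanned on every iteration, and FD_SETSIZE caps the descriptors. */
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <unistd.h>

    void select_loop(int listenfd) {
        for (;;) {
            fd_set fds;
            FD_ZERO(&fds);                  /* the set is rebuilt every pass */
            FD_SET(listenfd, &fds);
            /* blocks until a descriptor is readable: no busy spinning ... */
            if (select(listenfd + 1, &fds, NULL, NULL, NULL) < 0)
                continue;
            if (FD_ISSET(listenfd, &fds)) { /* ... but we still scan the set */
                int conn = accept(listenfd, NULL, NULL);
                close(conn);                /* real handling elided */
            }
        }
    }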

Epoll can be considered short for "event poll". It is the Linux kernel's improvement on poll for handling large numbers of file descriptors, an enhanced version of the select/poll multiplexed I/O interface in Linux, and it significantly raises a program's CPU efficiency when only a few of a large number of concurrent connections are active. First, epoll imposes no fixed limit on the number of concurrent connections: the upper limit is the maximum number of files that can be opened, which is related to system memory and can be inspected via /proc/sys/fs/file-max. Second, and this is epoll's most significant advantage, it operates only on "active" sockets: only the sockets that the kernel's I/O read/write events asynchronously wake up are placed into the ready queue, prepared for the worker process to handle. In real production environments this saves a great deal of polling overhead and greatly improves event handling efficiency. Finally, epoll uses shared memory (mmap) for communication between kernel space and user space, saving the overhead of memory copies.

Additionally, epoll's ET (edge-triggered) working mode is the fast mode used by nginx. ET mode supports only non-blocking sockets: when an FD becomes ready, the kernel sends one notification through epoll, and it will not send another notification for that FD until some operation causes it to stop being ready and become ready again; if no I/O is performed on the FD, no further notification arrives even though it remains ready. In general, then, nginx in Linux processes network events with epoll, driven by events.
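A compact sketch of an edge-triggered epoll loop (the epoll_loop helper name is illustrative; listenfd is assumed to be a non-blocking listening socket, and a production server would also set each accepted connection non-blocking and handle errors):

    /* Sketch: edge-triggered epoll; one notification per readiness change,
       so each ready fd must be drained until EAGAIN. */
    #include <sys/epoll.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #define MAX_EVENTS 64

    void epoll_loop(int listenfd) {
        int epfd = epoll_create1(0);
        struct epoll_event ev = { .events = EPOLLIN | EPOLLET,
                                  .data.fd = listenfd };
        epoll_ctl(epfd, EPOLL_CTL_ADD, listenfd, &ev);

        struct epoll_event events[MAX_EVENTS];
        char buf[4096];
        for (;;) {
            /* blocks until the kernel reports fds that just became ready */
            int n = epoll_wait(epfd, events, MAX_EVENTS, -1);
            for (int i = 0; i < n; i++) {
                int fd = events[i].data.fd;
                if (fd == listenfd) {
                    int conn;              /* ET: accept until EAGAIN */
                    while ((conn = accept(listenfd, NULL, NULL)) >= 0) {
                        /* a real server would set conn non-blocking here */
                        struct epoll_event cev = { .events = EPOLLIN | EPOLLET,
                                                   .data.fd = conn };
                        epoll_ctl(epfd, EPOLL_CTL_ADD, conn, &cev);
                    }
                } else {
                    ssize_t r;             /* ET: read until EAGAIN */
                    while ((r = read(fd, buf, sizeof(buf))) > 0) { /* ... */ }
                    if (r == 0)
                        close(fd);         /* peer closed the connection */
                }
            }
        }
    }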
