Reprint: 2.1 The relationship between Nginx processes in operation, from "Understanding Nginx In-Depth" (Tao Hui)

Source: Internet
Author: User

Original: https://book.2cto.com/201304/19624.html

In a production environment that provides real service, Nginx is deployed with one master process managing multiple worker processes; in general, the number of worker processes equals the number of CPU cores on the server. The worker processes are the busy ones, actually serving Internet requests, while the master process is "idle" and responsible only for monitoring and managing the worker processes. The worker processes use inter-process communication mechanisms such as shared memory and atomic operations to implement load balancing (Chapter 9 introduces the load-balancing mechanism, and Chapter 14 introduces the implementation of the load-balancing lock).

The relationship between Nginx processes after deployment is shown in Figure 2-1.

Nginx does support running as a single process (the master process alone) to provide service, so why do production environments start multiple processes in master-worker mode? The main benefits are the following two points:

Because the master process does not serve user requests and only manages the worker processes that actually provide service, it can be unique, focusing solely on purely administrative work and providing command-line services for administrators, including starting the service, stopping the service, reloading the configuration file, smoothly upgrading the program, and so on. The master process needs greater permissions; for example, it is typically started as the root user. The worker processes run with permissions less than or equal to the master's, so that the master process can fully manage them. When any worker process exits abnormally due to an error (producing a coredump), the master process immediately starts a new worker process to continue the service.
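The administrative operations described above are driven by signals that the master process handles. As a sketch, the standard control commands look like this (paths assume the default /usr/local/nginx install prefix used later in this section):

```shell
# Start the service: the master reads nginx.conf and forks the workers
/usr/local/nginx/sbin/nginx

# Stop the service
/usr/local/nginx/sbin/nginx -s stop   # fast shutdown (TERM)
/usr/local/nginx/sbin/nginx -s quit   # graceful shutdown: workers finish in-flight requests

# Reload the configuration file without dropping connections (HUP):
# the master starts new workers with the new config and drains the old ones
/usr/local/nginx/sbin/nginx -s reload

# Smooth (binary) upgrade: USR2 tells the running master to exec the
# new binary as a new master while the old one keeps serving
kill -USR2 $(cat /usr/local/nginx/logs/nginx.pid)
```

Note that `-s stop`, `-s quit`, and `-s reload` are just convenience wrappers that read the pid file and send the corresponding signal to the master, which is why only the master needs the larger permissions.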

Having multiple worker processes handle Internet requests not only improves the robustness of the service (after one worker process fails, the other worker processes can still provide service) but, most importantly, takes full advantage of today's common SMP multi-core architectures, achieving true multi-core concurrency at the micro level. A single process (the master process alone) handling Internet requests would therefore certainly be inappropriate. Why, then, set the number of worker processes to match the number of CPU cores? This is where Nginx differs from a server such as Apache. In Apache's model, each process handles only one request at a time, so if you want the Web server to handle more concurrent requests, you must configure many more Apache processes or threads, often hundreds of worker processes on a single server; such a large number of inter-process switches causes unnecessary system resource consumption. In Nginx, by contrast, the number of requests a single worker process can handle concurrently is limited only by memory size, the architecture imposes very few synchronization-lock constraints between worker processes handling concurrent requests, and a worker process normally does not go to sleep. Therefore, when the number of Nginx worker processes equals the number of CPU cores (ideally with each worker process bound to a specific CPU core), the cost of inter-process switching is minimal.

For example, if the production server has 8 CPU cores, you would configure 8 worker processes (see Figure 2-2).
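As a sketch, the 8-core deployment above would be expressed in nginx.conf roughly like this; the `worker_cpu_affinity` bitmasks shown are illustrative, binding each worker to one distinct core:

```nginx
# One worker per CPU core on an 8-core server
worker_processes  8;

# Bind each worker to its own core (one bitmask per worker process)
worker_cpu_affinity 00000001 00000010 00000100 00001000
                    00010000 00100000 01000000 10000000;
```

With this configuration, `nginx -s reload` respawns the workers with the same affinity, so the minimal-switching property described above survives configuration reloads.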

If you use the default configuration for the path options, the Nginx run directory is /usr/local/nginx, and its directory structure is as follows.
|---sbin
| |---nginx
|---conf
| |---koi-win
| |---koi-utf
| |---win-utf
| |---mime.types
| |---mime.types.default
| |---fastcgi_params
| |---fastcgi_params.default
| |---fastcgi.conf
| |---fastcgi.conf.default
| |---uwsgi_params
| |---uwsgi_params.default
| |---scgi_params
| |---scgi_params.default
| |---nginx.conf
| |---nginx.conf.default
|---logs
| |---error.log
| |---access.log
| |---nginx.pid
|---html
| |---50x.html
| |---index.html
|---client_body_temp
|---proxy_temp
|---fastcgi_temp
|---uwsgi_temp
|---scgi_temp
