When reprinting, please credit theviper: http://www.cnblogs.com/TheViper
Excerpt from the book "Understanding Nginx: Modules Development and Architecture Parsing"
Nginx Inter-process relationships
Nginx uses one master process to manage multiple worker processes. In general, the number of worker processes equals the number of CPU cores on the server. The worker processes provide the actual service; the master process is only responsible for monitoring the worker processes.
Benefits of starting multiple processes in the master-worker style:
1. Because the master process focuses only on management, it can fully supervise the worker processes: whenever a worker process fails, the master process can immediately start a new worker process to keep the service running.
2. Multiple worker processes take advantage of today's multi-core architectures to achieve true concurrency across cores. Setting the number of worker processes equal to the number of CPU cores avoids extra workers competing for CPU resources, which would only cause unnecessary context switching. Furthermore, to make better use of multi-core hardware, Nginx provides a CPU affinity binding option: we can bind a worker process to a specific core so that its CPU cache is not invalidated by process switching. In addition, the number of requests a worker process can handle concurrently is limited only by memory size, there are almost no synchronization locks between different worker processes handling concurrent requests, and a worker process usually does not go to sleep. Thus, the cost of inter-process switching is minimal.
Configuration items that optimize performance
1. Number of nginx worker processes
Syntax: worker_processes number;
Default: worker_processes 1;
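As a sketch, this directive belongs in the main (top-level) context of nginx.conf. Recent nginx versions (1.3.8/1.2.5 and later) also accept the special value `auto`, which matches the worker count to the detected CPU cores:

```nginx
# main context of nginx.conf
worker_processes auto;   # or an explicit number, e.g. worker_processes 4;
```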
2. Binding Nginx worker processes to specific CPU cores (this configuration is only valid on Linux)
Why bind? If every worker process is very busy and multiple worker processes contend for the same CPU, synchronization problems arise. Conversely, if each worker process has a CPU to itself, full concurrency is achieved on top of the kernel's scheduling policy.
For example, with 4 CPU cores, it can be configured as:
worker_processes 4;
worker_cpu_affinity 1000 0100 0010 0001;
3. SSL hardware acceleration
Syntax: ssl_engine device;
You can run `openssl engine -t` to check whether an SSL hardware acceleration device is available.
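As a sketch, the device name passed to ssl_engine must be one of the engines that `openssl engine -t` reports as available; the engine name below is only a hypothetical example and depends on your hardware:

```nginx
# main context of nginx.conf; "qat" is a hypothetical engine name --
# substitute whatever `openssl engine -t` lists as available on your system
ssl_engine qat;
```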
4. Execution frequency of the gettimeofday system call
Syntax: timer_resolution t;
By default, every time a kernel event call (epoll, select, poll, kqueue, etc.) returns, nginx executes a gettimeofday to update its cached clock from the kernel's clock.
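For example, to update the cached time at most once every 100 ms instead of on every event-loop wakeup:

```nginx
# main context: gettimeofday() is then called at most every 100ms,
# reducing system-call overhead at the cost of coarser timestamps
timer_resolution 100ms;
```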
5. Nginx worker process priority
Syntax: worker_priority nice;
Default: worker_priority 0;
On Unix systems, when many processes are in a runnable state, the kernel decides which process to execute according to the priority of each process. The CPU time slice allocated to a process is also related to its priority: the higher the priority, the larger the time slice it is assigned. In this way, higher-priority processes occupy more system resources.
Priority is determined by the static priority plus dynamic adjustments the kernel makes based on how the process executes. The nice value is the process's static priority; its range is -20 to +19, with -20 being the highest priority. Therefore, if you want Nginx to occupy more system resources, you can set a smaller value, but it is not recommended to go below the nice value of kernel processes (typically -5).
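For example, to raise the workers' scheduling priority while staying at the recommended lower bound of -5:

```nginx
# main context: a negative nice value means higher scheduling priority;
# going below -5 risks starving kernel processes
worker_priority -5;
```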
Event-Driven architecture
Definition: events are generated by event sources and are collected and distributed by one or more event collectors; many event handlers register the events they are interested in and then consume those events.
In Nginx, events are usually generated by the network card or the disk, and Nginx's event modules are responsible for collecting and distributing them. All modules may be consumers of events: a module first registers the event type it is interested in with the event module, and when such an event occurs, the event module distributes it to the appropriate module for processing.
In a traditional web server, after a connection is established, all operations until it is closed are performed sequentially in batch mode, so each request occupies system resources from the moment the connection is established until it is closed. This period may be very long, and the memory, CPU, and other resources held during it may do no useful work, which badly wastes server resources and limits the number of concurrent connections the system can handle. A traditional server uses a process or thread as the consumer of events.
Nginx processes requests in an asynchronous, non-blocking way, and the event consumers are modules. Only the event collector and dispatcher are entitled to occupy process resources; when distributing an event, they invoke the event-consuming module, which briefly uses the process resources they currently hold.
From the above, the difference between the two is clear: in a traditional server, each event consumer monopolizes a process, while in Nginx an event consumer is only called briefly by the event-dispatching process. As a result, the events generated by each user request are answered promptly and server throughput rises greatly. Note, however, that an event consumer must not block; otherwise it would occupy the event-dispatching process for a long time and other events would not be answered in time.
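The collector/dispatcher/consumer pattern described above can be sketched in a few lines of Python. This is an illustration of the general event-driven model, not nginx's actual C implementation: a single selector loop collects ready I/O events and briefly calls a registered, non-blocking callback for each one.

```python
import selectors
import socket

# Minimal sketch of the event-driven model: one dispatcher loop (playing
# the role of nginx's event module) collects ready events and briefly
# invokes the registered consumer callback for each. Consumers must not
# block, or they would stall the whole dispatcher.

sel = selectors.DefaultSelector()

def echo_consumer(sock):
    # The "event consumer": runs briefly inside the dispatcher's process.
    data = sock.recv(1024)
    if data:
        sock.sendall(b"echo:" + data)

def dispatch_once(timeout=1.0):
    # One iteration of the dispatcher: collect ready events, call consumers.
    for key, _mask in sel.select(timeout):
        key.data(key.fileobj)  # key.data holds the consumer callback

# A connected socket pair stands in for a client connection.
server_side, client_side = socket.socketpair()
server_side.setblocking(False)
sel.register(server_side, selectors.EVENT_READ, echo_consumer)

client_side.sendall(b"hi")
dispatch_once()
reply = client_side.recv(1024)
print(reply)  # b'echo:hi'
```

Note that `echo_consumer` borrows the dispatcher's process only for the duration of one callback, which is exactly why a blocking consumer would delay every other pending event.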