This analysis of the nginx multi-process connection-handling model is of great reference value.
Address: http://blog.dccmx.com/2011/02/nginx-conn-handling/
As everyone knows, the number of concurrent connections is a key performance indicator that no server program can escape. How to handle a large number of concurrent connections is undoubtedly the first issue to consider when designing a server program. Here, let's look at how nginx handles concurrent HTTP connections.
[Figure omitted: the overall master-worker structure of nginx]
For a server, the goal of handling concurrent connections is high concurrency and fast response. The nginx architecture adopts the master-worker multi-process collaboration model, so it is also important to consider how to distribute connections evenly across the worker processes.
The listening socket is created when the master process initializes, and it is naturally inherited by the child processes across fork(). Let's look at the code.
In the main() function of src/core/nginx.c, the following two lines are called in sequence:
cycle = ngx_init_cycle(&init_cycle);
ngx_master_process_cycle(cycle);
In the nginx code, a cycle represents a process instance; all process-related variables (including connections) live in this struct. main() first calls ngx_init_cycle to initialize the master process's cycle. The listening socket on port 80 is also created in this function:
if (ngx_open_listening_sockets(cycle) != NGX_OK) {
    goto failed;
}
In the code of the ngx_open_listening_sockets function, you can see calls to socket functions such as bind and listen. The listening sockets it creates end up in the listening field of the cycle struct. The related code is as follows:
ls = cycle->listening.elts;

s = ngx_socket(ls[i].sockaddr->sa_family, ls[i].type, 0);

if (setsockopt(s, SOL_SOCKET, SO_REUSEADDR,
               (const void *) &reuseaddr, sizeof(int)) == -1) {
    ...
}

if (bind(s, ls[i].sockaddr, ls[i].socklen) == -1) {
    ...
}

if (listen(s, ls[i].backlog) == -1) {
    ...
}

ls[i].listen = 1;
ls[i].fd = s;
As you can see, this is the standard socket()/setsockopt()/bind()/listen() sequence.
The ngx_master_process_cycle function called from main() is where the worker processes are created.
ngx_master_process_cycle calls ngx_start_worker_processes(cycle, ccf->worker_processes, NGX_PROCESS_RESPAWN). Inside ngx_start_worker_processes you can see:
for (i = 0; i < n; i++) {
    cpu_affinity = ngx_get_cpu_affinity(i);

    ngx_spawn_process(cycle, ngx_worker_process_cycle,
                      NULL, "worker process", type);

    ch.pid = ngx_processes[ngx_process_slot].pid;
    ch.slot = ngx_process_slot;
    ch.fd = ngx_processes[ngx_process_slot].channel[0];

    ngx_pass_open_channel(cycle, &ch);
}
At this point the master's setup is essentially done, and the master process and worker processes enter their own event loops. The master's loop receives signals and manages the worker processes; a worker's loop listens for and handles network events (new connections, closed connections, processing requests and sending responses, and so on). So the real connections ultimately land in the worker processes. How does a worker receive a connection, i.e., call accept()? Every worker holds the listening socket, so any of them could accept a given connection. For this reason nginx prepares an accept lock: all child processes compete for this lock when handling new connections, and only the worker that wins it may call accept() on the new connection. The purpose is to prevent multiple processes from being woken up and calling accept() simultaneously when a single connection arrives, the so-called thundering herd problem (BTW: it is said that newer kernels no longer exhibit this for accept(); to be verified). The related code is in ngx_process_events_and_timers in src/event/ngx_event.c:
if (ngx_use_accept_mutex) {
    if (ngx_accept_disabled > 0) {
        ngx_accept_disabled--;
        /* ngx_accept_disabled is 1/8 of the process's maximum number of
         * connections (set in the configuration file) minus the number of
         * free connections; it is computed in src/event/ngx_event_accept.c:
         *     ngx_accept_disabled = ngx_cycle->connection_n / 8
         *                           - ngx_cycle->free_connection_n;
         * When the number of free connections drops below 1/8 of the
         * maximum, the process is considered too loaded, so it gives up
         * trying to take the lock for a while. */
    } else {
        if (ngx_trylock_accept_mutex(cycle) == NGX_ERROR) {
            /* ngx_trylock_accept_mutex is the lock-contention function:
             * on success it sets the global ngx_accept_mutex_held to 1,
             * otherwise to 0 */
            return;
        }

        if (ngx_accept_mutex_held) {
            /* the process holding the accept lock posts events to a queue
             * before processing them, so the lock can be released as soon
             * as possible */
            flags |= NGX_POST_EVENTS;
        } else {
            /* processes that did not win the lock need no two-step event
             * handling, but cap the event-processing timer at
             * ngx_accept_mutex_delay */
            if (timer == NGX_TIMER_INFINITE
                || timer > ngx_accept_mutex_delay)
            {
                timer = ngx_accept_mutex_delay;
            }
        }
    }
}

delta = ngx_current_msec;

/* the following call processes events, including new-connection
 * (accept) events, network I/O events, and so on */
(void) ngx_process_events(cycle, timer, flags);

delta = ngx_current_msec - delta;

ngx_log_debug1(NGX_LOG_DEBUG_EVENT, cycle->log, 0,
               "timer delta: %M", delta);

if (ngx_posted_accept_events) {
    /* handle the posted accept events; ngx_event_accept is called here
     * to create the new connection */
    ngx_event_process_posted(cycle, &ngx_posted_accept_events);
}

if (ngx_accept_mutex_held) {
    ngx_shmtx_unlock(&ngx_accept_mutex);   /* release the accept lock */
}
Well, that is how it works. In this way, nginx distributes concurrent connections across the worker processes (in practice each worker ends up with a roughly even share). The actual accept() call and connection initialization are in void ngx_event_accept(ngx_event_t *ev) in src/event/ngx_event_accept.c.