Nginx Learning Notes: The Event-Driven Framework's Processing Flow


The ngx_event_process_init method of the ngx_event_core_module module performs some initialization for the event module. This includes setting the handler for read events on listening sockets (i.e., "new connection" requests) to the ngx_event_accept function, and adding those events to the epoll module. When a new connection event occurs, ngx_event_accept is invoked. The approximate flow is as follows:
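For reference, the relevant part of ngx_event_process_init looks roughly like this (a simplified excerpt; details differ across nginx versions):

    /* simplified excerpt from ngx_event_process_init() (ngx_event.c);
     * error handling and most fields omitted */
    ls = cycle->listening.elts;
    for (i = 0; i < cycle->listening.nelts; i++) {

        c = ngx_get_connection(ls[i].fd, cycle->log);
        rev = c->read;

        /* a read event on a listening socket means "new connection":
         * route it to ngx_event_accept */
        rev->handler = ngx_event_accept;

        if (ngx_use_accept_mutex) {
            /* with accept_mutex on, the event is added later,
             * in ngx_trylock_accept_mutex() */
            continue;
        }

        if (ngx_add_event(rev, NGX_READ_EVENT, 0) == NGX_ERROR) {
            return NGX_ERROR;
        }
    }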

In the ngx_worker_process_cycle method, the worker process repeatedly calls the ngx_process_events_and_timers function to handle events; this function is the general entry point for event processing.

ngx_process_events_and_timers calls ngx_process_events, which is a macro equivalent to ngx_event_actions.process_events. ngx_event_actions is a global structure that stores the ten function interfaces of the event-driven module in use (here, the epoll module). So the call here resolves to ngx_epoll_module_ctx.actions.process_events, i.e., the ngx_epoll_process_events function, which handles the events.
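For reference, ngx_event_actions_t gathers those ten interfaces, and ngx_process_events is simply a macro over one of them. This is the definition from src/event/ngx_event.h in the nginx 1.x line these notes follow (the exact field set varies slightly across versions):

    typedef struct {
        ngx_int_t  (*add)(ngx_event_t *ev, ngx_int_t event, ngx_uint_t flags);
        ngx_int_t  (*del)(ngx_event_t *ev, ngx_int_t event, ngx_uint_t flags);

        ngx_int_t  (*enable)(ngx_event_t *ev, ngx_int_t event, ngx_uint_t flags);
        ngx_int_t  (*disable)(ngx_event_t *ev, ngx_int_t event, ngx_uint_t flags);

        ngx_int_t  (*add_conn)(ngx_connection_t *c);
        ngx_int_t  (*del_conn)(ngx_connection_t *c, ngx_uint_t flags);

        ngx_int_t  (*process_changes)(ngx_cycle_t *cycle, ngx_uint_t nowait);
        ngx_int_t  (*process_events)(ngx_cycle_t *cycle, ngx_msec_t timer,
                                     ngx_uint_t flags);

        ngx_int_t  (*init)(ngx_cycle_t *cycle, ngx_msec_t timer);
        void       (*done)(ngx_cycle_t *cycle);
    } ngx_event_actions_t;

    /* ngx_process_events dispatches to the active event module's interface */
    #define ngx_process_events   ngx_event_actions.process_events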

ngx_epoll_process_events calls the Linux function epoll_wait to obtain the "new connection" event, and then calls the event's handler function to process it.
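Stripped of nginx specifics, the pattern is an epoll_wait loop that dispatches each ready event to a per-event handler. A minimal standalone sketch (not nginx code; my_event_t is a hypothetical stand-in for ngx_event_t):

    #include <sys/epoll.h>

    /* my_event_t mimics the handler field of ngx_event_t */
    typedef struct my_event_s  my_event_t;
    struct my_event_s {
        void  (*handler)(my_event_t *ev);
        int     fd;
    };

    void
    event_loop(int ep)
    {
        struct epoll_event  events[512];
        int                 i, n;

        for ( ;; ) {
            /* block until some registered fd becomes ready */
            n = epoll_wait(ep, events, 512, -1);

            for (i = 0; i < n; i++) {
                my_event_t  *ev = events[i].data.ptr;

                /* for a listening socket, this handler would be
                 * the equivalent of ngx_event_accept */
                ev->handler(ev);
            }
        }
    }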

As noted above, the handler has already been set to the ngx_event_accept function, so ngx_event_accept is called to do the actual processing.

The following is an analysis of the ngx_event_accept method. Its streamlined code is shown below; the numbered comments (1-7) mark the main steps of the flow:

void
ngx_event_accept(ngx_event_t *ev)
{
    socklen_t          socklen;
    ngx_err_t          err;
    ngx_log_t         *log;
    ngx_uint_t         level;
    ngx_socket_t       s;
    ngx_event_t       *rev, *wev;
    ngx_listening_t   *ls;
    ngx_connection_t  *c, *lc;
    ngx_event_conf_t  *ecf;
    u_char             sa[NGX_SOCKADDRLEN];

    if (ev->timedout) {
        if (ngx_enable_accept_events((ngx_cycle_t *) ngx_cycle) != NGX_OK) {
            return;
        }

        ev->timedout = 0;
    }

    ecf = ngx_event_get_conf(ngx_cycle->conf_ctx, ngx_event_core_module);

    if (ngx_event_flags & NGX_USE_RTSIG_EVENT) {
        ev->available = 1;

    } else if (!(ngx_event_flags & NGX_USE_KQUEUE_EVENT)) {
        ev->available = ecf->multi_accept;
    }

    lc = ev->data;
    ls = lc->listening;
    ev->ready = 0;

    do {
        socklen = NGX_SOCKADDRLEN;

        /* 1. accept() attempts to establish a connection (non-blocking call) */
        s = accept(lc->fd, (struct sockaddr *) sa, &socklen);

        if (s == (ngx_socket_t) -1) {
            err = ngx_socket_errno;

            if (err == NGX_EAGAIN) {
                /* no connection is pending; return directly */
                return;
            }

            level = NGX_LOG_ALERT;

            if (err == NGX_ECONNABORTED) {
                level = NGX_LOG_ERR;

            } else if (err == NGX_EMFILE || err == NGX_ENFILE) {
                level = NGX_LOG_CRIT;
            }

            if (err == NGX_ECONNABORTED) {
                if (ngx_event_flags & NGX_USE_KQUEUE_EVENT) {
                    ev->available--;
                }

                if (ev->available) {
                    continue;
                }
            }

            if (err == NGX_EMFILE || err == NGX_ENFILE) {
                if (ngx_disable_accept_events((ngx_cycle_t *) ngx_cycle)
                    != NGX_OK)
                {
                    return;
                }

                if (ngx_use_accept_mutex) {
                    if (ngx_accept_mutex_held) {
                        ngx_shmtx_unlock(&ngx_accept_mutex);
                        ngx_accept_mutex_held = 0;
                    }

                    ngx_accept_disabled = 1;

                } else {
                    ngx_add_timer(ev, ecf->accept_mutex_delay);
                }
            }

            return;
        }

        /* 2. update the load-balancing threshold */
        ngx_accept_disabled = ngx_cycle->connection_n / 8
                              - ngx_cycle->free_connection_n;

        /* 3. get a connection object from the connection pool */
        c = ngx_get_connection(s, ev->log);

        /* 4. create a memory pool for this connection */
        c->pool = ngx_create_pool(ls->pool_size, ev->log);

        c->sockaddr = ngx_palloc(c->pool, socklen);
        ngx_memcpy(c->sockaddr, sa, socklen);

        log = ngx_palloc(c->pool, sizeof(ngx_log_t));

        /* 5. set the socket to blocking mode for AIO,
         *    and to non-blocking mode otherwise */
        if (ngx_inherited_nonblocking) {
            if (ngx_event_flags & NGX_USE_AIO_EVENT) {
                if (ngx_blocking(s) == -1) {
                    ngx_log_error(NGX_LOG_ALERT, ev->log, ngx_socket_errno,
                                  ngx_blocking_n " failed");
                    ngx_close_accepted_connection(c);
                    return;
                }
            }

        } else {
            if (!(ngx_event_flags & (NGX_USE_AIO_EVENT|NGX_USE_RTSIG_EVENT))) {
                if (ngx_nonblocking(s) == -1) {
                    ngx_log_error(NGX_LOG_ALERT, ev->log, ngx_socket_errno,
                                  ngx_nonblocking_n " failed");
                    ngx_close_accepted_connection(c);
                    return;
                }
            }
        }

        *log = ls->log;

        c->recv = ngx_recv;
        c->send = ngx_send;
        c->recv_chain = ngx_recv_chain;
        c->send_chain = ngx_send_chain;

        c->log = log;
        c->pool->log = log;

        c->socklen = socklen;
        c->listening = ls;
        c->local_sockaddr = ls->sockaddr;
        c->local_socklen = ls->socklen;

        c->unexpected_eof = 1;

        rev = c->read;
        wev = c->write;

        wev->ready = 1;

        if (ngx_event_flags & (NGX_USE_AIO_EVENT|NGX_USE_RTSIG_EVENT)) {
            /* rtsig, aio, iocp */
            rev->ready = 1;
        }

        if (ev->deferred_accept) {
            rev->ready = 1;
        }

        rev->log = log;
        wev->log = log;

        /*
         * TODO: MT: - ngx_atomic_fetch_add()
         *             or protection by critical section or light mutex
         * TODO: MP: - allocated in a shared memory
         *           - ngx_atomic_fetch_add()
         *             or protection by critical section or light mutex
         */

        c->number = ngx_atomic_fetch_add(ngx_connection_counter, 1);

        if (ls->addr_ntop) {
            c->addr_text.data = ngx_pnalloc(c->pool, ls->addr_text_max_len);
            if (c->addr_text.data == NULL) {
                ngx_close_accepted_connection(c);
                return;
            }

            c->addr_text.len = ngx_sock_ntop(c->sockaddr, c->socklen,
                                             c->addr_text.data,
                                             ls->addr_text_max_len, 0);
            if (c->addr_text.len == 0) {
                ngx_close_accepted_connection(c);
                return;
            }
        }

        /* 6. add the read/write events of the new connection
         *    to the epoll object */
        if (ngx_add_conn && (ngx_event_flags & NGX_USE_EPOLL_EVENT) == 0) {
            if (ngx_add_conn(c) == NGX_ERROR) {
                ngx_close_accepted_connection(c);
                return;
            }
        }

        log->data = NULL;
        log->handler = NULL;

        /* 7. the method called once the TCP connection is established;
         *    it is stored in the ngx_listening_t structure */
        ls->handler(c);

        /* 'available' means "accept as many connections as possible",
         * as decided by the multi_accept configuration item */
    } while (ev->available);
}
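For context, the ev->available flag (multi_accept) and the accept_mutex_delay timer used above correspond to directives in the events block of nginx.conf. A typical configuration might look like this:

    events {
        use epoll;                 # select the epoll event module
        worker_connections 1024;   # per-worker connection pool size (connection_n)
        multi_accept on;           # keep accepting while connections are pending
        accept_mutex on;           # serialize accepts across workers (see below)
        accept_mutex_delay 500ms;  # wait before retrying when the mutex is busy
    }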

The problem of "startled group" in Nginx

Nginx typically runs multiple worker processes that listen on the same port at the same time. When a new connection arrives, the kernel wakes all of these processes, but only one of them can successfully accept the connection; the others wake up for nothing and waste a significant amount of overhead. This is the "thundering herd" phenomenon. Nginx's solution is to make the processes compete for a mutex, ngx_accept_mutex, so that only one process at a time may enter a certain critical section. Inside that critical section, the process adds the read events of its listening connections to the epoll module, so that it alone responds when a "new connection" event occurs. This lock-then-add-events procedure is implemented in the function ngx_trylock_accept_mutex. When another process enters that function to add its read events, it finds the mutex already held and can only return; its listening events are not added to the epoll module, so it does not respond to "new connection" events. But this raises a question: when does the process holding the mutex release it? If it waited until it had processed all of its events before releasing the lock, that could take quite a long time, during which other worker processes could not establish new connections, which is clearly undesirable. Nginx's solution is that the process holding the mutex (obtained through ngx_trylock_accept_mutex), after returning from epoll_wait with the set of ready read/write events, does not process them immediately but sorts them into queues:

New connection events go into the ngx_posted_accept_events queue.
Events on existing connections go into the ngx_posted_events queue.

The code is as follows:

if (Flags & ngx_post_events)
{
 */* defer processing of this batch of events/
 queue = (ngx_event_t * *) (rev->accept? &ngx_posted_ Accept_events: &ngx_posted_events);
 
 /* Add the event to the deferred execution queue
 /ngx_locked_post_event (rev, queue)
;
else
{
 Rev->handler (rev);/* Do not need to postpone, then handle the event immediately/
}

Write events are handled in a similar way. The process then handles the events in the ngx_posted_accept_events queue and releases the mutex immediately afterwards, which minimizes the time the lock is held.
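Putting the pieces together, that ordering in ngx_process_events_and_timers looks roughly like this (a simplified excerpt; mutex acquisition, timer handling, and version-specific details of the posted-queue helpers are omitted):

    /* simplified excerpt from ngx_process_events_and_timers() */
    (void) ngx_process_events(cycle, timer, flags);

    /* run the queued "new connection" events first ... */
    ngx_event_process_posted(cycle, &ngx_posted_accept_events);

    /* ... then release the mutex as early as possible ... */
    if (ngx_accept_mutex_held) {
        ngx_shmtx_unlock(&ngx_accept_mutex);
    }

    /* ... and only after that run events on existing connections */
    ngx_event_process_posted(cycle, &ngx_posted_events);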

The problem of load balancing in Nginx

Each worker process in Nginx uses a threshold, ngx_accept_disabled, to implement load balancing. It is initialized in step 2 of the code above:

ngx_accept_disabled = ngx_cycle->connection_n / 8 - ngx_cycle->free_connection_n;

Its initial value is negative, with an absolute value equal to 7/8 of the total number of connections. When the threshold is below 0, the process responds to new connection events normally; when it is above 0, the process no longer responds to new connection events and instead decrements ngx_accept_disabled by 1, as the following code shows:

if (ngx_accept_disabled > 0) {
    ngx_accept_disabled--;

} else {
    if (ngx_trylock_accept_mutex(cycle) == NGX_ERROR) {
        return;
    }

    ...
}

This shows that once a process's current connection count reaches 7/8 of the total number of connections it can handle, the load-balancing mechanism is triggered and the process stops responding to new connections.
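As a worked example, suppose worker_connections is 1024, so connection_n is 1024. The threshold starts at 1024/8 - 1024 = -896, and it only becomes positive once free_connection_n drops below 128, i.e., once the worker holds more than 896 (7/8 of 1024) active connections. At that point the worker skips the accept mutex entirely, leaving new connections to less-loaded workers.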

Reference:

"Deep understanding of Nginx" p328-p334.
