Nginx Event-Driven Module: Connection Processing

Source: Internet
Author: User
Tags: epoll

Overview

Nginx works in a master-worker multi-process model. If all worker processes listen on the same port, then whenever a new connection event arrives on that port every worker calls the function ngx_event_accept and tries to establish the connection; all workers are woken up at once. This is known as the "thundering herd" problem, and it degrades system performance. Nginx avoids it with the ngx_accept_mutex synchronization lock: only the worker that acquires the lock handles new connection events, so at any moment only one worker is actually listening on the port.

Solving the thundering-herd problem introduces another one. If new connection events are always handled by whichever worker happens to grab the lock, the workers can become unbalanced: some processes handle large numbers of connections while others stay almost idle. Such load imbalance hurts Nginx's overall performance. To address it, Nginx adds a load threshold, ngx_accept_disabled, on top of the lock. When a worker's threshold is greater than 0, the process is considered heavily loaded and is kept from competing for new connection events, which gives the less-loaded processes the opportunity to handle them and thus balances the load across workers.

Connection Event Handling

New connection events are handled by the function ngx_event_accept.

/* Handle a new connection event */
void
ngx_event_accept(ngx_event_t *ev)
{
    socklen_t          socklen;
    ngx_err_t          err;
    ngx_log_t         *log;
    ngx_uint_t         level;
    ngx_socket_t       s;
    ngx_event_t       *rev, *wev;
    ngx_listening_t   *ls;
    ngx_connection_t  *c, *lc;
    ngx_event_conf_t  *ecf;
    u_char             sa[NGX_SOCKADDRLEN];
#if (NGX_HAVE_ACCEPT4)
    static ngx_uint_t  use_accept4 = 1;
#endif

    if (ev->timedout) {
        if (ngx_enable_accept_events((ngx_cycle_t *) ngx_cycle) != NGX_OK) {
            return;
        }

        ev->timedout = 0;
    }

    /* get the configuration of the ngx_event_core_module module */
    ecf = ngx_event_get_conf(ngx_cycle->conf_ctx, ngx_event_core_module);

    if (ngx_event_flags & NGX_USE_RTSIG_EVENT) {
        ev->available = 1;

    } else if (!(ngx_event_flags & NGX_USE_KQUEUE_EVENT)) {
        ev->available = ecf->multi_accept;
    }

    lc = ev->data;         /* get the connection object of this event */
    ls = lc->listening;    /* get the listening object of that connection */
    ev->ready = 0;         /* mark the event as not ready */

    ngx_log_debug2(NGX_LOG_DEBUG_EVENT, ev->log, 0,
                   "accept on %V, ready: %d", &ls->addr_text, ev->available);

    do {
        socklen = NGX_SOCKADDRLEN;

        /* accept() the new connection */
#if (NGX_HAVE_ACCEPT4)
        if (use_accept4) {
            s = accept4(lc->fd, (struct sockaddr *) sa, &socklen,
                        SOCK_NONBLOCK);
        } else {
            s = accept(lc->fd, (struct sockaddr *) sa, &socklen);
        }
#else
        s = accept(lc->fd, (struct sockaddr *) sa, &socklen);
#endif

        /* handle errors from accept() */
        if (s == (ngx_socket_t) -1) {
            err = ngx_socket_errno;

            if (err == NGX_EAGAIN) {
                ngx_log_debug0(NGX_LOG_DEBUG_EVENT, ev->log, err,
                               "accept() not ready");
                return;
            }

            level = NGX_LOG_ALERT;

            if (err == NGX_ECONNABORTED) {
                level = NGX_LOG_ERR;

            } else if (err == NGX_EMFILE || err == NGX_ENFILE) {
                level = NGX_LOG_CRIT;
            }

#if (NGX_HAVE_ACCEPT4)
            ngx_log_error(level, ev->log, err,
                          use_accept4 ? "accept4() failed" : "accept() failed");

            if (use_accept4 && err == NGX_ENOSYS) {
                use_accept4 = 0;
                ngx_inherited_nonblocking = 0;
                continue;
            }
#else
            ngx_log_error(level, ev->log, err, "accept() failed");
#endif

            if (err == NGX_ECONNABORTED) {
                if (ngx_event_flags & NGX_USE_KQUEUE_EVENT) {
                    ev->available--;
                }

                if (ev->available) {
                    continue;
                }
            }

            if (err == NGX_EMFILE || err == NGX_ENFILE) {
                if (ngx_disable_accept_events((ngx_cycle_t *) ngx_cycle)
                    != NGX_OK)
                {
                    return;
                }

                if (ngx_use_accept_mutex) {
                    if (ngx_accept_mutex_held) {
                        ngx_shmtx_unlock(&ngx_accept_mutex);
                        ngx_accept_mutex_held = 0;
                    }

                    ngx_accept_disabled = 1;

                } else {
                    ngx_add_timer(ev, ecf->accept_mutex_delay);
                }
            }

            return;
        }

#if (NGX_STAT_STUB)
        (void) ngx_atomic_fetch_add(ngx_stat_accepted, 1);
#endif

        /*
         * ngx_accept_disabled is the load-balancing threshold that indicates
         * whether the process is overloaded; it is one eighth of the maximum
         * number of connections per process minus the number of free
         * connections, so once more than 7/8 of the connections are in use,
         * ngx_accept_disabled becomes greater than 0 and the process is
         * considered overloaded
         */
        ngx_accept_disabled = ngx_cycle->connection_n / 8
                              - ngx_cycle->free_connection_n;

        /* get a connection from the connections array for the new connection */
        c = ngx_get_connection(s, ev->log);

        if (c == NULL) {
            if (ngx_close_socket(s) == -1) {
                ngx_log_error(NGX_LOG_ALERT, ev->log, ngx_socket_errno,
                              ngx_close_socket_n " failed");
            }

            return;
        }

#if (NGX_STAT_STUB)
        (void) ngx_atomic_fetch_add(ngx_stat_active, 1);
#endif

        /* create a memory pool for the new connection;
         * it lives until the connection is closed */
        c->pool = ngx_create_pool(ls->pool_size, ev->log);

        if (c->pool == NULL) {
            ngx_close_accepted_connection(c);
            return;
        }

        c->sockaddr = ngx_palloc(c->pool, socklen);
        if (c->sockaddr == NULL) {
            ngx_close_accepted_connection(c);
            return;
        }

        ngx_memcpy(c->sockaddr, sa, socklen);

        log = ngx_palloc(c->pool, sizeof(ngx_log_t));
        if (log == NULL) {
            ngx_close_accepted_connection(c);
            return;
        }

        /* set a blocking mode for aio and non-blocking mode for others */
        /* set the socket properties */

        if (ngx_inherited_nonblocking) {
            if (ngx_event_flags & NGX_USE_AIO_EVENT) {
                if (ngx_blocking(s) == -1) {
                    ngx_log_error(NGX_LOG_ALERT, ev->log, ngx_socket_errno,
                                  ngx_blocking_n " failed");
                    ngx_close_accepted_connection(c);
                    return;
                }
            }

        } else {
            /* with the epoll model the socket is set to non-blocking mode */
            if (!(ngx_event_flags & (NGX_USE_AIO_EVENT|NGX_USE_RTSIG_EVENT))) {
                if (ngx_nonblocking(s) == -1) {
                    ngx_log_error(NGX_LOG_ALERT, ev->log, ngx_socket_errno,
                                  ngx_nonblocking_n " failed");
                    ngx_close_accepted_connection(c);
                    return;
                }
            }
        }

        *log = ls->log;

        /* initialize the new connection */
        c->recv = ngx_recv;
        c->send = ngx_send;
        c->recv_chain = ngx_recv_chain;
        c->send_chain = ngx_send_chain;

        c->log = log;
        c->pool->log = log;

        c->socklen = socklen;
        c->listening = ls;
        c->local_sockaddr = ls->sockaddr;
        c->local_socklen = ls->socklen;

        c->unexpected_eof = 1;

#if (NGX_HAVE_UNIX_DOMAIN)
        if (c->sockaddr->sa_family == AF_UNIX) {
            c->tcp_nopush = NGX_TCP_NOPUSH_DISABLED;
            c->tcp_nodelay = NGX_TCP_NODELAY_DISABLED;

#if (NGX_SOLARIS)
            /* Solaris's sendfilev() supports AF_NCA, AF_INET, and AF_INET6 */
            c->sendfile = 0;
#endif
        }
#endif

        /* get the read event and write event of the new connection */
        rev = c->read;
        wev = c->write;

        /* the write event is ready */
        wev->ready = 1;

        if (ngx_event_flags & (NGX_USE_AIO_EVENT|NGX_USE_RTSIG_EVENT)) {
            /* rtsig, aio, iocp */
            rev->ready = 1;
        }

        if (ev->deferred_accept) {
            rev->ready = 1;
#if (NGX_HAVE_KQUEUE)
            rev->available = 1;
#endif
        }

        rev->log = log;
        wev->log = log;

        /*
         * TODO: MT: - ngx_atomic_fetch_add()
         *             or protection by critical section or light mutex
         *
         * TODO: MP: - allocated in a shared memory
         *           - ngx_atomic_fetch_add()
         *             or protection by critical section or light mutex
         */

        c->number = ngx_atomic_fetch_add(ngx_connection_counter, 1);

#if (NGX_STAT_STUB)
        (void) ngx_atomic_fetch_add(ngx_stat_handled, 1);
#endif

#if (NGX_THREADS)
        rev->lock = &c->lock;
        wev->lock = &c->lock;
        rev->own_lock = &c->lock;
        wev->own_lock = &c->lock;
#endif

        if (ls->addr_ntop) {
            c->addr_text.data = ngx_pnalloc(c->pool, ls->addr_text_max_len);
            if (c->addr_text.data == NULL) {
                ngx_close_accepted_connection(c);
                return;
            }

            c->addr_text.len = ngx_sock_ntop(c->sockaddr, c->socklen,
                                             c->addr_text.data,
                                             ls->addr_text_max_len, 0);
            if (c->addr_text.len == 0) {
                ngx_close_accepted_connection(c);
                return;
            }
        }

#if (NGX_DEBUG)
        {
        struct sockaddr_in   *sin;
        ngx_cidr_t           *cidr;
        ngx_uint_t            i;
#if (NGX_HAVE_INET6)
        struct sockaddr_in6  *sin6;
        ngx_uint_t            n;
#endif

        cidr = ecf->debug_connection.elts;
        for (i = 0; i < ecf->debug_connection.nelts; i++) {
            if (cidr[i].family != (ngx_uint_t) c->sockaddr->sa_family) {
                goto next;
            }

            switch (cidr[i].family) {

#if (NGX_HAVE_INET6)
            case AF_INET6:
                sin6 = (struct sockaddr_in6 *) c->sockaddr;
                for (n = 0; n < 16; n++) {
                    if ((sin6->sin6_addr.s6_addr[n]
                         & cidr[i].u.in6.mask.s6_addr[n])
                        != cidr[i].u.in6.addr.s6_addr[n])
                    {
                        goto next;
                    }
                }
                break;
#endif

#if (NGX_HAVE_UNIX_DOMAIN)
            case AF_UNIX:
                break;
#endif

            default: /* AF_INET */
                sin = (struct sockaddr_in *) c->sockaddr;
                if ((sin->sin_addr.s_addr & cidr[i].u.in.mask)
                    != cidr[i].u.in.addr)
                {
                    goto next;
                }
                break;
            }

            log->log_level = NGX_LOG_DEBUG_CONNECTION|NGX_LOG_DEBUG_ALL;
            break;

        next:
            continue;
        }
        }
#endif

        ngx_log_debug3(NGX_LOG_DEBUG_EVENT, log, 0,
                       "*%uA accept: %V fd:%d", c->number, &c->addr_text, s);

        /* register the read event of the new connection
         * with the event mechanism (e.g. epoll) */
        if (ngx_add_conn && (ngx_event_flags & NGX_USE_EPOLL_EVENT) == 0) {
            if (ngx_add_conn(c) == NGX_ERROR) {
                ngx_close_accepted_connection(c);
                return;
            }
        }

        log->data = NULL;
        log->handler = NULL;

        /*
         * call the handler to finish initializing the new connection;
         * for HTTP this is the function ngx_http_init_connection()
         */
        ls->handler(c);

        /* adjust the available flag; a value of 1 tells nginx to accept
         * as many new connections as possible */
        if (ngx_event_flags & NGX_USE_KQUEUE_EVENT) {
            ev->available--;
        }

    } while (ev->available);
}


/* Add the read events of the listening sockets to the event mechanism */
static ngx_int_t
ngx_enable_accept_events(ngx_cycle_t *cycle)
{
    ngx_uint_t         i;
    ngx_listening_t   *ls;
    ngx_connection_t  *c;

    /* get the first element of the listening array */
    ls = cycle->listening.elts;

    /* traverse the whole listening array */
    for (i = 0; i < cycle->listening.nelts; i++) {

        /* get the connection of the current listening socket */
        c = ls[i].connection;

        if (c->read->active) {
            /* the read event of this connection is already
             * registered with the event monitor; skip it */
            continue;
        }

        /* the connection has not been added to the event monitor yet,
         * so register it */
        if (ngx_event_flags & NGX_USE_RTSIG_EVENT) {
            if (ngx_add_conn(c) == NGX_ERROR) {
                return NGX_ERROR;
            }

        } else {
            /* the read event of this connection is not in the
             * event monitor; add it */
            if (ngx_add_event(c->read, NGX_READ_EVENT, 0) == NGX_ERROR) {
                return NGX_ERROR;
            }
        }
    }

    return NGX_OK;
}


/* Remove the read events of the listening sockets from the event mechanism */
static ngx_int_t
ngx_disable_accept_events(ngx_cycle_t *cycle)
{
    ngx_uint_t         i;
    ngx_listening_t   *ls;
    ngx_connection_t  *c;

    /* get the listening array */
    ls = cycle->listening.elts;

    for (i = 0; i < cycle->listening.nelts; i++) {

        /* get the connection of the current listening socket */
        c = ls[i].connection;

        if (!c->read->active) {
            continue;
        }

        /* remove the connection from the event mechanism */
        if (ngx_event_flags & NGX_USE_RTSIG_EVENT) {
            if (ngx_del_conn(c, NGX_DISABLE_EVENT) == NGX_ERROR) {
                return NGX_ERROR;
            }

        } else {
            /* remove the read event of the connection
             * from the event mechanism */
            if (ngx_del_event(c->read, NGX_READ_EVENT, NGX_DISABLE_EVENT)
                == NGX_ERROR)
            {
                return NGX_ERROR;
            }
        }
    }

    return NGX_OK;
}

When a new connection event occurs, only the process that has acquired the synchronization lock handles it, which avoids the thundering-herd problem. A process attempts to acquire the lock through the function ngx_trylock_accept_mutex.

/* Try to acquire the accept mutex before handling new connection events */
ngx_int_t
ngx_trylock_accept_mutex(ngx_cycle_t *cycle)
{
    /* try to acquire the ngx_accept_mutex lock:
     * returns 1 on success, 0 on failure */
    if (ngx_shmtx_trylock(&ngx_accept_mutex)) {

        ngx_log_debug0(NGX_LOG_DEBUG_EVENT, cycle->log, 0,
                       "accept mutex locked");

        /*
         * the flag ngx_accept_mutex_held == 1 means the current process
         * already holds the ngx_accept_mutex lock; when the conditions
         * below are met, the process obtained the lock on a previous
         * round, so the listening events are already registered and we
         * can return directly
         */
        if (ngx_accept_mutex_held
            && ngx_accept_events == 0
            && !(ngx_event_flags & NGX_USE_RTSIG_EVENT))
        {
            return NGX_OK;
        }

        /* add the read events of all listening connections to the
         * current event mechanism (e.g. epoll) */
        if (ngx_enable_accept_events(cycle) == NGX_ERROR) {
            /* if adding fails, release the lock */
            ngx_shmtx_unlock(&ngx_accept_mutex);
            return NGX_ERROR;
        }

        /* mark that the current process has acquired the lock */
        ngx_accept_events = 0;
        ngx_accept_mutex_held = 1;  /* this process now holds ngx_accept_mutex */

        return NGX_OK;
    }

    ngx_log_debug1(NGX_LOG_DEBUG_EVENT, cycle->log, 0,
                   "accept mutex lock failed: %ui", ngx_accept_mutex_held);

    /*
     * the current process failed to acquire ngx_accept_mutex, but
     * ngx_accept_mutex_held is still 1: it held the lock on a previous
     * round, so its listening events must be deregistered now
     */
    if (ngx_accept_mutex_held) {
        /* remove the read events of all listening connections
         * from the event mechanism */
        if (ngx_disable_accept_events(cycle) == NGX_ERROR) {
            return NGX_ERROR;
        }

        ngx_accept_mutex_held = 0;
    }

    return NGX_OK;
}

Nginx uses the ngx_accept_disabled load threshold to control whether a process competes for new connection events, which mitigates load imbalance between worker processes.

if (ngx_accept_disabled > 0) {
    ngx_accept_disabled--;

} else {
    if (ngx_trylock_accept_mutex(cycle) == NGX_ERROR) {
        return;
    }

    ...
}

