The relationship between accept_mutex and performance (Nginx)

Note: Operating environment: CentOS 6+

Background

When a stress test was run against Nginx with 20 workers, we found that if the accept_mutex switch in the events block of the configuration file is on (the default before version 1.11.3), the load is spread very unevenly across the workers: a small number of workers reach about 98% CPU utilization while most of them sit at roughly 1%. With accept_mutex turned off, the load on all workers is about the same and QPS improves significantly.

Analysis Process
    1. Nginx uses a 1 (master) + n (worker) multi-process model. During startup, the master reads the listening ports configured in nginx.conf and adds them to the cycle->listening array.
    2. The init_cycle function then invokes init_module, which calls the module_init function of every registered module so that each module can request the resources it needs and do its other setup work. The module_init function of the event module requests a piece of shared memory that stores the accept_mutex lock and the connection-count information:
/* nginx/src/event/ngx_event.c */

shm.size = size;
shm.name.len = sizeof("nginx_shared_zone") - 1;
shm.name.data = (u_char *) "nginx_shared_zone";
shm.log = cycle->log;

if (ngx_shm_alloc(&shm) != NGX_OK) {
    return NGX_ERROR;
}

shared = shm.addr;

ngx_accept_mutex_ptr = (ngx_atomic_t *) shared;
ngx_accept_mutex.spin = (ngx_uint_t) -1;

if (ngx_shmtx_create(&ngx_accept_mutex, (ngx_shmtx_sh_t *) shared,
                     cycle->lock_file.data)
    != NGX_OK)
{
    return NGX_ERROR;
}
All worker processes are created by the master process via fork(), so every worker inherits all of the master's open file descriptors (including the shared-memory fd created above) and its variable data (including the accept_mutex lock created above). During worker startup the process_init function of each module is invoked; the process_init function of the event module adds the listening sockets configured by the master to the worker's epoll. At this initial stage, therefore, the epoll set of every worker contains the fds of the listening array. (A minimal standalone sketch of this inherited-fd pattern follows the nginx excerpt below.)
/* nginx/src/event/ngx_event.c */

/* for each listening socket */

ls = cycle->listening.elts;
for (i = 0; i < cycle->listening.nelts; i++) {

#if (NGX_HAVE_REUSEPORT)
    if (ls[i].reuseport && ls[i].worker != ngx_worker) {
        continue;
    }
#endif

    c = ngx_get_connection(ls[i].fd, cycle->log);

    if (c == NULL) {
        return NGX_ERROR;
    }

    c->type = ls[i].type;
    c->log = &ls[i].log;

    c->listening = &ls[i];
    ls[i].connection = c;

    rev = c->read;

    rev->log = c->log;
    rev->accept = 1;

    ...
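For readers less familiar with the underlying mechanism, the following is a minimal standalone sketch (plain POSIX/Linux code, not nginx source; the port, backlog, and worker count are arbitrary) of the pattern nginx relies on: the parent opens and binds the listening socket, forks the workers, and each worker registers the inherited listening fd in its own epoll instance.

/* sketch: fork workers that share one inherited listening socket */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* "master": open, bind and listen (error handling omitted for brevity) */
    int ls = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in sa;
    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons(8080);
    sa.sin_addr.s_addr = htonl(INADDR_ANY);

    bind(ls, (struct sockaddr *) &sa, sizeof(sa));
    listen(ls, 511);

    for (int i = 0; i < 4; i++) {              /* fork 4 "workers" */
        if (fork() == 0) {
            int ep = epoll_create1(0);         /* each worker: its own epoll */

            struct epoll_event ev;
            ev.events = EPOLLIN;
            ev.data.fd = ls;                   /* inherited listening fd */
            epoll_ctl(ep, EPOLL_CTL_ADD, ls, &ev);

            /* a real worker would loop on epoll_wait() and accept() here */
            _exit(0);
        }
    }

    return 0;
}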

When a worker actually runs, it executes the ngx_process_events_and_timers function. The worker that acquires the accept_mutex lock sets the NGX_POST_EVENTS flag, so all of its events are posted to queues instead of being handled immediately. Workers that fail to get the lock remove the fds of the listening array from their epoll set, so that only the worker holding accept_mutex responds the next time a new connection arrives on a socket in the listening array.
/* nginx/src/event/ngx_event.c */

if (ngx_use_accept_mutex) {
    if (ngx_accept_disabled > 0) {
        ngx_accept_disabled--;

    } else {
        if (ngx_trylock_accept_mutex(cycle) == NGX_ERROR) {
            return;
        }

        if (ngx_accept_mutex_held) {
            flags |= NGX_POST_EVENTS;    /* add the NGX_POST_EVENTS flag */

        } else {
            if (timer == NGX_TIMER_INFINITE
                || timer > ngx_accept_mutex_delay)
            {
                timer = ngx_accept_mutex_delay;
            }
        }
    }
}

/* ... */

ngx_int_t
ngx_trylock_accept_mutex(ngx_cycle_t *cycle)
{
    if (ngx_shmtx_trylock(&ngx_accept_mutex)) {

        ngx_log_debug0(NGX_LOG_DEBUG_EVENT, cycle->log, 0,
                       "accept mutex locked");

        if (ngx_accept_mutex_held && ngx_accept_events == 0) {
            return NGX_OK;
        }

        /* add cycle->listening to the epoll of the current worker process */
        if (ngx_enable_accept_events(cycle) == NGX_ERROR) {
            ngx_shmtx_unlock(&ngx_accept_mutex);
            return NGX_ERROR;
        }

        ngx_accept_events = 0;
        ngx_accept_mutex_held = 1;

        return NGX_OK;
    }

    ngx_log_debug1(NGX_LOG_DEBUG_EVENT, cycle->log, 0,
                   "accept mutex lock failed: %ui", ngx_accept_mutex_held);

    if (ngx_accept_mutex_held) {
        /* remove cycle->listening from the epoll of the current worker process */
        if (ngx_disable_accept_events(cycle, 0) == NGX_ERROR) {
            return NGX_ERROR;
        }

        ngx_accept_mutex_held = 0;
    }

    return NGX_OK;
}
For the worker process that holds the accept_mutex lock, the fds returned by epoll are placed into queues rather than handled on the spot, so that the worker can accept as many new connections from epoll as possible. The queued accept events are processed first, and the remaining queued events are handled after the accept_mutex lock has been released.
/* nginx/src/event/modules/ngx_epoll_module.c */

if (flags & NGX_POST_EVENTS) {
    queue = rev->accept ? &ngx_posted_accept_events
                        : &ngx_posted_events;

    ngx_post_event(rev, queue);

} else {
    rev->handler(rev);
}
Each time the worker pulls new-connection events out of epoll, it calls accept() to take the new connections and invokes each new connection's asynchronous callback. At the same time it recomputes ngx_accept_disabled as the difference (1/8 * worker_connections) - free_connections. After the accept events have been handled, the accept_mutex lock is released and the ordinary connection requests are processed. On the next iteration of its loop, the worker first checks ngx_accept_disabled: a value greater than 0 means the worker is already handling more than 7/8 of its connection quota, so it skips the accept_mutex competition for that iteration.
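For reference, the threshold described above is computed in the accept handler (nginx/src/event/ngx_event_accept.c) essentially as follows; connection_n corresponds to worker_connections, so the value only turns positive once fewer than 1/8 of the worker's connections are free.

/* nginx/src/event/ngx_event_accept.c (excerpt) */

ngx_accept_disabled = ngx_cycle->connection_n / 8
                      - ngx_cycle->free_connection_n;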
/* nginx/src/event/ngx_event.c */

(void) ngx_process_events(cycle, timer, flags);

delta = ngx_current_msec - delta;

ngx_log_debug1(NGX_LOG_DEBUG_EVENT, cycle->log, 0,
               "timer delta: %M", delta);

ngx_event_process_posted(cycle, &ngx_posted_accept_events);

if (ngx_accept_mutex_held) {
    ngx_shmtx_unlock(&ngx_accept_mutex);
}

if (delta) {
    ngx_event_expire_timers();
}

ngx_event_process_posted(cycle, &ngx_posted_events);
Conclusion
    1. The analysis above mainly covers the case where accept_mutex is on. The case where it is off is simpler: every worker's epoll listens on all fds of the listening array, so when a new connection comes in the workers race for it. For large numbers of short connections, turning accept_mutex on works well: it avoids the context switches caused by workers contending for the same new connections, at the price of only the try_lock overhead on the shared lock. For long-lived TCP connections carrying large amounts of data, however, turning accept_mutex on concentrates the load on a few workers, especially when worker_connections is set very high. So the accept_mutex switch should be chosen according to the actual workload; there is no one-size-fits-all answer.
    2. Analyzing our own stress-test setup, the clients hold long-lived TCP connections and issue HTTP requests over them, and worker_connections is set fairly high, so with accept_mutex on we ran into exactly this uneven worker load, which caused the QPS drop.
    3. Recent Linux kernels added the EPOLLEXCLUSIVE flag, and Nginx has supported it (the NGX_EXCLUSIVE_EVENT event flag) since version 1.11.3, which avoids the thundering-herd effect of multiple workers waiting in epoll on the same listening socket. Since that release, accept_mutex has changed from on by default to off by default. (A generic sketch of how the flag is used follows below.)
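As a generic illustration of the kernel mechanism (plain epoll usage, not nginx source; the helper name is made up for this example): when each worker adds the shared listening fd to its own epoll instance with EPOLLEXCLUSIVE (available since Linux 4.5), the kernel wakes at most one of the waiting workers per incoming connection instead of all of them, so no user-space accept lock is needed.

#include <sys/epoll.h>

/* Register a listening socket with exclusive wakeup: only one of the
 * epoll instances waiting on this fd is woken per incoming connection. */
int add_listen_fd_exclusive(int epfd, int listen_fd)
{
    struct epoll_event ev;

    ev.events = EPOLLIN | EPOLLEXCLUSIVE;
    ev.data.fd = listen_fd;

    return epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);
}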
