Nginx source code learning notes (21) -- event module (2) -- event-driven core ngx_process_events_and_timers


First, recall that the worker process's event loop covered in the previous note called ngx_process_events_and_timers without going into it. Today we study this function.

This article from: http://blog.csdn.net/lengzijian/article/details/7601730

Let's look back at the worker-process flow chart from note (19). Today we will mainly explain the event-driven function, the part that was highlighted in red in that figure (the figure itself is not reproduced here):

src/event/ngx_event.c:

    void
    ngx_process_events_and_timers(ngx_cycle_t *cycle)
    {
        ngx_uint_t  flags;
        ngx_msec_t  timer, delta;

        if (ngx_timer_resolution) {
            timer = NGX_TIMER_INFINITE;
            flags = 0;

        } else {
            timer = ngx_event_find_timer();
            flags = NGX_UPDATE_TIME;
        }

        /* ngx_use_accept_mutex indicates whether the accept mutex is used.
         * It is on by default and can be turned off with the accept_mutex
         * directive. The accept mutex avoids the thundering herd on the
         * listening sockets and provides load balancing among workers. */
        if (ngx_use_accept_mutex) {

            /* ngx_accept_disabled is computed in ngx_event_accept(). When it
             * is greater than 0, this process already holds too many
             * connections, so it gives up one chance to compete for the
             * accept mutex, decrements the counter, and keeps processing
             * events on its existing connections. This is how nginx balances
             * new connections across worker processes. */
            if (ngx_accept_disabled > 0) {
                ngx_accept_disabled--;

            } else {
                /* Try to take the accept mutex. Only the process that
                 * acquires it puts the listening sockets into epoll, so the
                 * other processes block in epoll_wait() without being woken
                 * by new connections -- no thundering herd. */
                if (ngx_trylock_accept_mutex(cycle) == NGX_ERROR) {
                    return;
                }

                if (ngx_accept_mutex_held) {
                    /* The lock holder sets NGX_POST_EVENTS so that every
                     * event returned by epoll is put into a queue and only
                     * processed after the mutex is released. Handling events
                     * inline could be slow, and a process that holds the
                     * lock for a long time starves the other workers and
                     * makes accept less efficient. */
                    flags |= NGX_POST_EVENTS;

                } else {
                    /* A process that did not get the lock does not need
                     * NGX_POST_EVENTS, but it must cap its wait time so it
                     * can compete for the lock again soon. */
                    if (timer == NGX_TIMER_INFINITE
                        || timer > ngx_accept_mutex_delay)
                    {
                        timer = ngx_accept_mutex_delay;
                    }
                }
            }
        }

        delta = ngx_current_msec;

        /* epoll waits for events here; ngx_process_events maps to
         * ngx_epoll_process_events() in the epoll module, which will be
         * explained in detail later. */
        (void) ngx_process_events(cycle, timer, flags);

        /* measure how long this wait took */
        delta = ngx_current_msec - delta;

        ngx_log_debug1(NGX_LOG_DEBUG_EVENT, cycle->log, 0,
                       "timer delta: %M", delta);

        /* ngx_posted_accept_events is the queue holding the accept events
         * that epoll reported on the listening sockets; with NGX_POST_EVENTS
         * set, all accept events were saved into this queue. */
        if (ngx_posted_accept_events) {
            ngx_event_process_posted(cycle, &ngx_posted_accept_events);
        }

        /* once all accept events are processed, release the lock if held */
        if (ngx_accept_mutex_held) {
            ngx_shmtx_unlock(&ngx_accept_mutex);
        }

        /* delta is the time consumed above; if any milliseconds elapsed,
         * scan the timer rbtree, remove the expired timers and call the
         * handler of each corresponding event. */
        if (delta) {
            ngx_event_expire_timers();
        }

        ngx_log_debug1(NGX_LOG_DEBUG_EVENT, cycle->log, 0,
                       "posted events %p", ngx_posted_events);

        /* process the ordinary posted events (read/write events on
         * connections); each event carries its own handler method */
        if (ngx_posted_events) {
            if (ngx_threaded) {
                ngx_wakeup_worker_thread(cycle);

            } else {
                ngx_event_process_posted(cycle, &ngx_posted_events);
            }
        }
    }

As mentioned before, an accept event is simply a new-connection event on a listening socket. The following describes the handler method of the accept event:

Ngx_event_accept:

src/event/ngx_event_accept.c:

    void
    ngx_event_accept(ngx_event_t *ev)
    {
        socklen_t          socklen;
        ngx_err_t          err;
        ngx_log_t         *log;
        ngx_socket_t       s;
        ngx_event_t       *rev, *wev;
        ngx_listening_t   *ls;
        ngx_connection_t  *c, *lc;
        ngx_event_conf_t  *ecf;
        u_char             sa[NGX_SOCKADDRLEN];

        /* ... some code omitted ... */

        lc = ev->data;
        ls = lc->listening;
        ev->ready = 0;

        ngx_log_debug2(NGX_LOG_DEBUG_EVENT, ev->log, 0,
                       "accept on %V, ready: %d",
                       &ls->addr_text, ev->available);

        do {
            socklen = NGX_SOCKADDRLEN;

            /* accept a new connection */
            s = accept(lc->fd, (struct sockaddr *) sa, &socklen);

            /* ... some code omitted ... */

            /* After accepting a new connection, recompute
             * ngx_accept_disabled, which drives the load balancing
             * mentioned before. The formula is "1/8 of the total
             * connections minus the free connections", where connection_n
             * is the per-process maximum that can be set in the
             * configuration file. Once a process uses more than 7/8 of its
             * connections, ngx_accept_disabled becomes greater than zero
             * and the process is considered overloaded. */
            ngx_accept_disabled = ngx_cycle->connection_n / 8
                                  - ngx_cycle->free_connection_n;

            /* obtain a connection object */
            c = ngx_get_connection(s, ev->log);

            /* create a memory pool for the new connection;
             * the pool is released when the connection is closed */
            c->pool = ngx_create_pool(ls->pool_size, ev->log);
            if (c->pool == NULL) {
                ngx_close_accepted_connection(c);
                return;
            }

            c->sockaddr = ngx_palloc(c->pool, socklen);
            if (c->sockaddr == NULL) {
                ngx_close_accepted_connection(c);
                return;
            }

            ngx_memcpy(c->sockaddr, sa, socklen);

            log = ngx_palloc(c->pool, sizeof(ngx_log_t));
            if (log == NULL) {
                ngx_close_accepted_connection(c);
                return;
            }

            /* set blocking mode for AIO and non-blocking mode
             * for the others */

            if (ngx_inherited_nonblocking) {
                if (ngx_event_flags & NGX_USE_AIO_EVENT) {
                    if (ngx_blocking(s) == -1) {
                        ngx_log_error(NGX_LOG_ALERT, ev->log,
                                      ngx_socket_errno,
                                      ngx_blocking_n " failed");
                        ngx_close_accepted_connection(c);
                        return;
                    }
                }

            } else {
                /* with the epoll model the connection is set
                 * non-blocking here */
                if (!(ngx_event_flags
                      & (NGX_USE_AIO_EVENT|NGX_USE_RTSIG_EVENT)))
                {
                    if (ngx_nonblocking(s) == -1) {
                        ngx_log_error(NGX_LOG_ALERT, ev->log,
                                      ngx_socket_errno,
                                      ngx_nonblocking_n " failed");
                        ngx_close_accepted_connection(c);
                        return;
                    }
                }
            }

            *log = ls->log;

            /* initialize the new connection */
            c->recv = ngx_recv;
            c->send = ngx_send;
            c->recv_chain = ngx_recv_chain;
            c->send_chain = ngx_send_chain;

            c->log = log;
            c->pool->log = log;

            c->socklen = socklen;
            c->listening = ls;
            c->local_sockaddr = ls->sockaddr;

            c->unexpected_eof = 1;

    #if (NGX_HAVE_UNIX_DOMAIN)
            if (c->sockaddr->sa_family == AF_UNIX) {
                c->tcp_nopush = NGX_TCP_NOPUSH_DISABLED;
                c->tcp_nodelay = NGX_TCP_NODELAY_DISABLED;
    #if (NGX_SOLARIS)
                /* Solaris's sendfilev() supports AF_NCA,
                 * AF_INET, and AF_INET6 */
                c->sendfile = 0;
    #endif
            }
    #endif

            rev = c->read;
            wev = c->write;

            wev->ready = 1;

            if (ngx_event_flags & (NGX_USE_AIO_EVENT|NGX_USE_RTSIG_EVENT)) {
                /* rtsig, aio, iocp */
                rev->ready = 1;
            }

            if (ev->deferred_accept) {
                rev->ready = 1;
    #if (NGX_HAVE_KQUEUE)
                rev->available = 1;
    #endif
            }

            rev->log = log;
            wev->log = log;

            /*
             * TODO: MT: - ngx_atomic_fetch_add()
             *             or protection by critical section or light mutex
             *
             * TODO: MP: - allocated in a shared memory
             *           - ngx_atomic_fetch_add()
             *             or protection by critical section or light mutex
             */

            c->number = ngx_atomic_fetch_add(ngx_connection_counter, 1);

            if (ngx_add_conn && (ngx_event_flags & NGX_USE_EPOLL_EVENT) == 0) {
                if (ngx_add_conn(c) == NGX_ERROR) {
                    ngx_close_accepted_connection(c);
                    return;
                }
            }

            log->data = NULL;
            log->handler = NULL;

            /* ls->handler is very important here: it completes the final
             * initialization of the newly accepted connection and puts it
             * into epoll. For HTTP the function hung on this handler is
             * ngx_http_init_connection, detailed in the later HTTP module
             * notes. */
            ls->handler(c);

            if (ngx_event_flags & NGX_USE_KQUEUE_EVENT) {
                ev->available--;
            }

        } while (ev->available);
    }

That covers the handler method of the accept event. Next come the handler methods for the read/write events on each connection, which lead us directly into the HTTP module. But there is no hurry: before that we still need to study nginx's classic epoll module.


