Nginx Source Code Analysis -- Thread Pool


Source code version: Nginx 1.13.0 (release)

1. Preface

Nginx uses a multi-process model: the master and worker processes communicate mainly through pipes, and the main advantage of multiple processes is that the processes do not affect one another. A question that comes up often is why Nginx does not use a multi-threaded model instead (a previous article touched on part of the answer; for the rest you would have to ask the Nginx authors). In fact, the Nginx code does provide a core module, thread_pool, for handling tasks with threads. Below I share my understanding of this module (corrections and additions are welcome).

2. Introduction to the thread_pool module

Nginx's functionality is built from modules, and thread_pool is no exception. The thread pool is mainly used for I/O operations such as reading and sending files, so that slow I/O does not stall the worker process. First, look at the configuration syntax from the official documentation:
Syntax:  thread_pool name threads=number [max_queue=number];
Default: thread_pool default threads=32 max_queue=65536;
Context: main
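For instance, a configuration along the following lines (the pool name and sizes here are illustrative, not recommendations, and Nginx must be built with --with-threads) defines a pool in the main context and directs file reads in one location to it:

thread_pool io_pool threads=16 max_queue=1024;

http {
    server {
        location /download/ {
            # read and send files for this location in the "io_pool" threads
            aio threads=io_pool;
        }
    }
}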
A thread_pool is identified by its name, and the thread count and queue size apply to each worker process, not to the total across all workers. All threads in a pool share one task queue whose maximum length is the max_queue defined above; once the queue is full, posting another task fails with an error. Following the module initialization flow described earlier (which runs before the master starts the workers), create_conf --> the directive's set function --> init_conf, let's walk through the thread_pool module initialization:
/******************* nginx/src/core/ngx_thread_pool.c ************************/

/* create the configuration structure needed by the thread pool module */
static void *
ngx_thread_pool_create_conf(ngx_cycle_t *cycle)
{
    ngx_thread_pool_conf_t  *tcf;

    /* allocate from the memory pool pointed to by cycle->pool */
    tcf = ngx_pcalloc(cycle->pool, sizeof(ngx_thread_pool_conf_t));
    if (tcf == NULL) {
        return NULL;
    }

    /* pre-allocate an array of 4 ngx_thread_pool_t pointers;
     * each ngx_thread_pool_t holds the information of one thread pool */
    if (ngx_array_init(&tcf->pools, cycle->pool, 4,
                       sizeof(ngx_thread_pool_t *))
        != NGX_OK)
    {
        return NULL;
    }

    return tcf;
}

/* parse a "thread_pool" directive and save it in an ngx_thread_pool_t */
static char *
ngx_thread_pool(ngx_conf_t *cf, ngx_command_t *cmd, void *conf)
{
    ngx_str_t          *value;
    ngx_uint_t          i;
    ngx_thread_pool_t  *tp;

    value = cf->args->elts;

    /* the pool name is its unique identifier (for a duplicate name only the
     * first definition counts); this also shows that several differently
     * named thread pools may be configured */
    tp = ngx_thread_pool_add(cf, &value[1]);

    ......

    /* handle the remaining parameters of the thread_pool line */
    for (i = 2; i < cf->args->nelts; i++) {

        /* the configured number of threads */
        if (ngx_strncmp(value[i].data, "threads=", 8) == 0) {
            ......
        }

        /* the configured maximum queue length */
        if (ngx_strncmp(value[i].data, "max_queue=", 10) == 0) {
            ......
        }
    }

    ......
}

/* validate the configuration of every thread pool in the array */
static char *
ngx_thread_pool_init_conf(ngx_cycle_t *cycle, void *conf)
{
    ......
    ngx_thread_pool_t  **tpp;

    tpp = tcf->pools.elts;

    /* iterate over all configured thread pools and check them */
    for (i = 0; i < tcf->pools.nelts; i++) {
        ......
    }

    return NGX_CONF_OK;
}
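For orientation, the structures used above look roughly as follows (reconstructed here for illustration; see src/core/ngx_thread_pool.c for the authoritative definitions):

typedef struct {
    ngx_array_t               pools;      /* array of ngx_thread_pool_t * */
} ngx_thread_pool_conf_t;

struct ngx_thread_pool_s {
    ngx_thread_mutex_t        mtx;        /* protects the task queue */
    ngx_thread_pool_queue_t   queue;      /* tasks waiting to be processed */
    ngx_int_t                 waiting;    /* number of queued tasks */
    ngx_thread_cond_t         cond;       /* wakes up idle pool threads */

    ngx_log_t                *log;

    ngx_str_t                 name;       /* thread_pool <name> ... */
    ngx_uint_t                threads;    /* threads=<number> */
    ngx_int_t                 max_queue;  /* max_queue=<number> */

    u_char                   *file;       /* where the pool was configured */
    ngx_uint_t                line;
};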
After the steps above, Nginx's master process holds the configuration of the thread pools (tcf->pools), which the workers inherit when they are forked. Each worker then calls the init_process hook (if any) of every core module; the thread_pool module registers its hooks in the declaration sketched below.
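The relevant hooks are wired up roughly like this (abridged from ngx_thread_pool.c):

static ngx_core_module_t  ngx_thread_pool_module_ctx = {
    ngx_string("thread_pool"),
    ngx_thread_pool_create_conf,           /* create configuration */
    ngx_thread_pool_init_conf              /* init configuration */
};

ngx_module_t  ngx_thread_pool_module = {
    NGX_MODULE_V1,
    &ngx_thread_pool_module_ctx,           /* module context */
    ngx_thread_pool_commands,              /* module directives */
    NGX_CORE_MODULE,                       /* module type */
    NULL,                                  /* init master */
    NULL,                                  /* init module */
    ngx_thread_pool_init_worker,           /* init process */
    NULL,                                  /* init thread */
    NULL,                                  /* exit thread */
    ngx_thread_pool_exit_worker,           /* exit process */
    NULL,                                  /* exit master */
    NGX_MODULE_V1_PADDING
};

In each worker, ngx_thread_pool_init_worker then builds the configured pools: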
/******************* nginx/src/core/ngx_thread_pool.c ************************/

/* called in every worker: create the thread pools described by the conf */
static ngx_int_t
ngx_thread_pool_init_worker(ngx_cycle_t *cycle)
{
    ngx_uint_t                i;
    ngx_thread_pool_t       **tpp;
    ngx_thread_pool_conf_t   *tcf;

    /* only worker (or single) processes create thread pools */
    if (ngx_process != NGX_PROCESS_WORKER
        && ngx_process != NGX_PROCESS_SINGLE)
    {
        return NGX_OK;
    }

    ......

    /* initialize the global queue of completed tasks */
    ngx_thread_pool_queue_init(&ngx_thread_pool_done);

    tpp = tcf->pools.elts;

    for (i = 0; i < tcf->pools.nelts; i++) {
        /* initialize each configured thread pool */
        if (ngx_thread_pool_init(tpp[i], cycle->log, cycle->pool) != NGX_OK) {
            return NGX_ERROR;
        }
    }

    return NGX_OK;
}

/* initialize one thread pool */
static ngx_int_t
ngx_thread_pool_init(ngx_thread_pool_t *tp, ngx_log_t *log, ngx_pool_t *pool)
{
    ......

    /* initialize the pool's task queue */
    ngx_thread_pool_queue_init(&tp->queue);

    /* create the pool mutex */
    if (ngx_thread_mutex_create(&tp->mtx, log) != NGX_OK) {
        return NGX_ERROR;
    }

    /* create the condition variable */
    if (ngx_thread_cond_create(&tp->cond, log) != NGX_OK) {
        (void) ngx_thread_mutex_destroy(&tp->mtx, log);
        return NGX_ERROR;
    }

    ......

    for (n = 0; n < tp->threads; n++) {
        /* create each thread of the pool */
        err = pthread_create(&tid, &attr, ngx_thread_pool_cycle, tp);
        if (err) {
            ngx_log_error(NGX_LOG_ALERT, log, err, "pthread_create() failed");
            return NGX_ERROR;
        }
    }

    ......
}

/* main function of every pool thread */
static void *
ngx_thread_pool_cycle(void *data)
{
    ......

    for ( ;; ) {
        /* take the pool mutex (blocks until acquired) */
        if (ngx_thread_mutex_lock(&tp->mtx, tp->log) != NGX_OK) {
            return NULL;
        }

        /* the number may become negative */
        tp->waiting--;

        /* if the task queue is empty, wait on the condition variable until
         * cond_signal/broadcast announces a new task */
        while (tp->queue.first == NULL) {
            if (ngx_thread_cond_wait(&tp->cond, &tp->mtx, tp->log) != NGX_OK) {
                (void) ngx_thread_mutex_unlock(&tp->mtx, tp->log);
                return NULL;
            }
        }

        /* take a task from the head of the queue and remove it */
        task = tp->queue.first;
        tp->queue.first = task->next;

        if (tp->queue.first == NULL) {
            tp->queue.last = &tp->queue.first;
        }

        if (ngx_thread_mutex_unlock(&tp->mtx, tp->log) != NGX_OK) {
            return NULL;
        }

        ......

        /* run the task's handler */
        task->handler(task->ctx, tp->log);

        ......

        ngx_spinlock(&ngx_thread_pool_done_lock, 1, 2048);

        /* append the finished task to the "done" queue, where the event
         * callback will pick it up and continue processing */
        *ngx_thread_pool_done.last = task;
        ngx_thread_pool_done.last = &task->next;

        /* memory barrier: prevent the compiler from reordering the unlock
         * before the statements above */
        ngx_memory_barrier();

        ngx_unlock(&ngx_thread_pool_done_lock);

        (void) ngx_notify(ngx_thread_pool_handler);
    }
}

/* handle the event of every task sitting in the "done" queue */
static void
ngx_thread_pool_handler(ngx_event_t *ev)
{
    ......

    ngx_spinlock(&ngx_thread_pool_done_lock, 1, 2048);

    /* detach the whole list of finished tasks */
    task = ngx_thread_pool_done.first;
    ngx_thread_pool_done.first = NULL;
    ngx_thread_pool_done.last = &ngx_thread_pool_done.first;

    ngx_memory_barrier();

    ngx_unlock(&ngx_thread_pool_done_lock);

    while (task) {
        ngx_log_debug1(NGX_LOG_DEBUG_CORE, ev->log, 0,
                       "run completion handler for task #%ui", task->id);

        /* walk the list and fire the completion event of each task */
        event = &task->event;
        task = task->next;

        event->complete = 1;
        event->active = 0;

        /* call the completion handler registered for this event */
        event->handler(event);
    }
}
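Two small structures make the producer/consumer logic above easier to follow (again reconstructed for illustration; see src/core/ngx_thread_pool.h and ngx_thread_pool.c):

typedef struct {
    ngx_thread_task_t        *first;
    ngx_thread_task_t       **last;    /* address of the tail's next pointer */
} ngx_thread_pool_queue_t;

struct ngx_thread_task_s {
    ngx_thread_task_t        *next;
    ngx_uint_t                id;
    void                     *ctx;     /* argument passed to handler() */
    void                    (*handler)(void *data, ngx_log_t *log);
                                       /* runs inside a pool thread */
    ngx_event_t               event;   /* event.handler runs back in the
                                          worker once the task completes */
};

Each pool has its own tp->queue of pending tasks, while ngx_thread_pool_done is a single queue of completed tasks shared by all pools of the worker; ngx_notify() makes the event loop call ngx_thread_pool_handler(), which drains it.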
3. Example of using the thread pool

As described above, the thread pool in Nginx is mainly used for file I/O. A real usage example can therefore be found in the ngx_http_file_cache.c module shipped with Nginx.
/********************* nginx/src/http/ngx_http_file_cache.c ******************/

/* the file cache's AIO read function (this is where the thread pool comes in) */
static ssize_t
ngx_http_file_cache_aio_read(ngx_http_request_t *r, ngx_http_cache_t *c)
{
    ......

#if (NGX_THREADS)

    if (clcf->aio == NGX_HTTP_AIO_THREADS) {
        c->file.thread_task = c->thread_task;
        /* the handler registered here is called from ngx_thread_read()
         * below; it chooses the thread pool and posts the task */
        c->file.thread_handler = ngx_http_cache_thread_handler;
        c->file.thread_ctx = r;

        /* select the proper thread pool for this task and initialize the
         * members of the task structure */
        n = ngx_thread_read(&c->file, c->buf->pos, c->body_start, 0, r->pool);

        c->thread_task = c->file.thread_task;
        c->reading = (n == NGX_AGAIN);

        return n;
    }

#endif

    return ngx_read_file(&c->file, c->buf->pos, c->body_start, 0);
}

/* thread handler for the task: choose the pool and post the task to it */
static ngx_int_t
ngx_http_cache_thread_handler(ngx_thread_task_t *task, ngx_file_t *file)
{
    ......

    tp = clcf->thread_pool;

    ......

    task->event.data = r;
    /* this completion handler is called when the event is taken from the
     * "done" queue by ngx_thread_pool_handler() */
    task->event.handler = ngx_http_cache_thread_event_handler;

    /* put the task on the thread pool's task queue */
    if (ngx_thread_task_post(tp, task) != NGX_OK) {
        return NGX_ERROR;
    }

    ......
}

/*********************** nginx/src/core/ngx_thread_pool.c ********************/

/* add a task to a pool's task queue */
ngx_int_t
ngx_thread_task_post(ngx_thread_pool_t *tp, ngx_thread_task_t *task)
{
    /* a task that is already being processed cannot be posted again */
    if (task->event.active) {
        ngx_log_error(NGX_LOG_ALERT, tp->log, 0,
                      "task #%ui already active", task->id);
        return NGX_ERROR;
    }

    if (ngx_thread_mutex_lock(&tp->mtx, tp->log) != NGX_OK) {
        return NGX_ERROR;
    }

    /* reject the task if the queue has already reached max_queue */
    if (tp->waiting >= tp->max_queue) {
        (void) ngx_thread_mutex_unlock(&tp->mtx, tp->log);

        ngx_log_error(NGX_LOG_ERR, tp->log, 0,
                      "thread pool \"%V\" queue overflow: %i tasks waiting",
                      &tp->name, tp->waiting);
        return NGX_ERROR;
    }

    /* mark the task active */
    task->event.active = 1;

    task->id = ngx_thread_pool_task_id++;
    task->next = NULL;

    /* wake up a thread blocked in ngx_thread_cond_wait() */
    if (ngx_thread_cond_signal(&tp->cond, tp->log) != NGX_OK) {
        (void) ngx_thread_mutex_unlock(&tp->mtx, tp->log);
        return NGX_ERROR;
    }

    *tp->queue.last = task;
    tp->queue.last = &task->next;

    tp->waiting++;

    (void) ngx_thread_mutex_unlock(&tp->mtx, tp->log);

    ngx_log_debug2(NGX_LOG_DEBUG_CORE, tp->log, 0,
                   "task #%ui added to thread pool \"%V\"",
                   task->id, &tp->name);

    return NGX_OK;
}
The example above shows how Nginx itself uses the thread pool today: handing slow operations such as file I/O to pool threads keeps the worker's main thread responsive. When developing your own module you can follow the same pattern as the file cache module and use a thread pool to improve performance; a rough sketch follows. (Comments and corrections are welcome.)
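As a rough illustration (this is not code from the Nginx source; my_work_ctx_t, my_work_handler, my_done_handler and my_post_task are hypothetical names), a third-party module might obtain a pool by name and post a task like this:

#include <ngx_config.h>
#include <ngx_core.h>
#include <ngx_http.h>
#include <ngx_thread_pool.h>

/* hypothetical context handed to the pool thread */
typedef struct {
    ngx_http_request_t  *request;
    /* ... inputs and outputs of the blocking work ... */
} my_work_ctx_t;

/* runs inside a pool thread: blocking work only, no event-loop calls */
static void
my_work_handler(void *data, ngx_log_t *log)
{
    my_work_ctx_t  *ctx = data;

    /* perform the slow, blocking operation here */
}

/* runs back on the worker's event loop once the task has completed */
static void
my_done_handler(ngx_event_t *ev)
{
    my_work_ctx_t  *ctx = ev->data;

    /* resume request processing, e.g. finalize ctx->request */
}

static ngx_int_t
my_post_task(ngx_http_request_t *r)
{
    ngx_str_t           name = ngx_string("default");
    ngx_thread_pool_t  *tp;
    ngx_thread_task_t  *task;
    my_work_ctx_t      *ctx;

    /* look up a pool defined by "thread_pool default ...;" */
    tp = ngx_thread_pool_get((ngx_cycle_t *) ngx_cycle, &name);
    if (tp == NULL) {
        return NGX_ERROR;
    }

    /* allocate the task together with its context in one call */
    task = ngx_thread_task_alloc(r->pool, sizeof(my_work_ctx_t));
    if (task == NULL) {
        return NGX_ERROR;
    }

    ctx = task->ctx;
    ctx->request = r;

    task->handler = my_work_handler;        /* runs in the pool thread */
    task->event.data = ctx;
    task->event.handler = my_done_handler;  /* runs in the worker again */

    return ngx_thread_task_post(tp, task);
}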
