Understanding and a simple implementation of a thread pool


Pool

Because a server's hardware resources are relatively abundant, a very direct way to improve its performance is to trade space for time, that is, to "waste" hardware resources in exchange for running efficiency. This is the idea behind a pool.

A pool is a collection of resources that are created and initialized when the server starts; this is called static resource allocation.

When the server enters its normal run phase, that is, when it starts processing client requests, it can fetch any resources it needs directly from the pool instead of allocating them dynamically. Obviously, taking a ready resource from the pool is much faster than allocating one on demand, because the system calls that allocate system resources are time-consuming.

When the server finishes handling a client connection, it puts the related resources back into the pool instead of making a system call to release them. In effect, the pool acts as the server's own manager of system resources: it avoids frequent trips into the kernel and improves efficiency.

Pools come in many kinds; common ones are process pools, thread pools, and memory pools.

Memory pool

A memory pool is a way of allocating memory. Normally we allocate memory with calls such as new or malloc. Their drawback is that, because the requested blocks vary in size, frequent use produces a lot of memory fragmentation and degrades performance.

A memory pool instead pre-allocates a number of equally sized memory blocks as a reserve before the memory is actually needed. When a new memory request arrives, a block is handed out from the pool; if the pool runs out of blocks, it requests more memory from the system. The notable advantage is that memory allocation becomes much more efficient.
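As a rough illustration of the idea (this is not the thread pool code shown later in the article; the names mpool_init, mpool_alloc, mpool_free and the 64-byte block size are made up for this sketch, and it is not thread-safe), a fixed-size block pool might look like this:

#include <stdio.h>
#include <stdlib.h>

/* A minimal fixed-size block pool: every block is BLOCK_SIZE bytes, and all
   blocks are reserved up front. Free blocks are kept on a singly linked
   free list threaded through the blocks themselves. */
#define BLOCK_SIZE  64
#define BLOCK_COUNT 128

typedef union block {
    union block *next;          /* valid only while the block is free */
    char payload[BLOCK_SIZE];   /* user data while the block is in use */
} block_t;

static block_t blocks[BLOCK_COUNT];
static block_t *free_list = NULL;

static void mpool_init (void)
{
    int i;
    for (i = 0; i < BLOCK_COUNT; i++) {  /* chain every block onto the free list */
        blocks[i].next = free_list;
        free_list = &blocks[i];
    }
}

static void *mpool_alloc (void)
{
    if (free_list == NULL)
        return NULL;            /* pool exhausted; a real pool might grow here */
    block_t *b = free_list;
    free_list = b->next;
    return b->payload;
}

static void mpool_free (void *p)
{
    block_t *b = (block_t *) p; /* valid because payload sits at offset 0 */
    b->next = free_list;
    free_list = b;
}

int main (void)
{
    mpool_init ();
    char *buf = mpool_alloc (); /* grab a block instead of calling malloc */
    snprintf (buf, BLOCK_SIZE, "hello from the pool");
    printf ("%s\n", buf);
    mpool_free (buf);           /* return the block instead of calling free */
    return 0;
}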

Process Pool && thread pool

In object-oriented programming, creating and destroying objects is an involved and time-consuming process, so to improve a program's running efficiency we should minimize how often objects are created and destroyed, especially resource-heavy ones.
This is why we create a process pool (or thread pool): some processes (threads) are put in ahead of time, taken out when needed, and returned to the pool afterwards. This saves the cost of creating and deleting them each time, at the price of some extra bookkeeping overhead.
Using a thread pool or process pool also hands the management of the processes and threads over to the pool itself, so the programmer does not have to manage each thread or process individually.

Take a process pool as an example

A process pool is a set of child processes pre-created by the server; their number typically lies between 3 and 10 (this is of course just a common case). For a thread pool, the number of threads should roughly match the number of CPUs.
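On Linux the CPU count can be queried at runtime with sysconf(). A small sketch of sizing a pool that way (the clamp to the 3~10 range simply mirrors the typical numbers mentioned above and is otherwise an arbitrary choice):

#include <stdio.h>
#include <unistd.h>

int main (void)
{
    /* Number of processors currently online (POSIX/glibc facility) */
    long ncpu = sysconf (_SC_NPROCESSORS_ONLN);
    if (ncpu < 1)
        ncpu = 1;               /* fall back if the query fails */

    /* One worker per CPU for CPU-bound tasks; clamp to the 3~10 range
       used as the typical example above. */
    long nworkers = ncpu;
    if (nworkers < 3)  nworkers = 3;
    if (nworkers > 10) nworkers = 10;

    printf ("CPUs online: %ld, pool size: %ld\n", ncpu, nworkers);
    return 0;
}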

All child processes in the process pool run the same code and have the same attributes, such as priority and process group ID (PGID).

When a new task arrives, the master process selects one child process from the pool to serve it. Selecting an existing child process is far cheaper than creating one dynamically. There are two common ways for the master process to pick the child that will handle the new task:

    • The master process actively selects a child process using some algorithm. The simplest and most commonly used ones are the random algorithm and round-robin (rotation).
    • The master process and all child processes synchronize through a shared work queue on which the child processes sleep. When a new task arrives, the master process adds it to the work queue. This wakes the child processes waiting on the queue, but only one of them "takes over" the new task: it removes the task from the work queue and executes it, while the other child processes go back to sleeping on the queue.

Once a child process has been selected, the master process still needs some notification mechanism to tell the target child that a new task needs handling, and to pass it the necessary data. The simplest way is to set up a pipe between the parent and each child in advance and do all inter-process communication through these pipes. Passing data between parent and child threads is much simpler, because threads share the process's address space: data declared as global is visible to all of them.
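To make this concrete, here is a minimal, self-contained sketch (not taken from the thread pool implementation below): a master process creates a few children, keeps one pipe per child, picks a child by round-robin, and "notifies" it by writing a single byte on its pipe. The constant NCHILD and the one-byte-per-task protocol are assumptions made purely for illustration; a real server would also pass task data (or a file descriptor) along with the notification.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

#define NCHILD 3

int main (void)
{
    int pipes[NCHILD][2];
    int i;

    /* One pipe per child: the master writes one byte to tell the chosen
       child that a new task has arrived. */
    for (i = 0; i < NCHILD; i++) {
        if (pipe (pipes[i]) < 0) { perror ("pipe"); exit (1); }
        if (fork () == 0) {             /* child process */
            int j;
            for (j = 0; j <= i; j++)
                close (pipes[j][1]);    /* children never write */
            char c;
            while (read (pipes[i][0], &c, 1) == 1)
                printf ("child %d (pid %d) handles a task\n", i, getpid ());
            exit (0);                   /* EOF on the pipe: master is done */
        }
        close (pipes[i][0]);            /* the master only writes */
    }

    /* Round-robin selection: task t goes to child t % NCHILD. */
    for (i = 0; i < 6; i++) {
        char c = 1;
        write (pipes[i % NCHILD][1], &c, 1);
    }

    for (i = 0; i < NCHILD; i++)
        close (pipes[i][1]);            /* children see EOF and exit */
    for (i = 0; i < NCHILD; i++)
        wait (NULL);
    return 0;
}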

Application of thread pool

Thread pools are primarily used for:
1. A large number of tasks that each take only a short time to complete.
Thread pooling is very well suited to jobs such as serving web server requests,
because each individual task is small while the number of tasks is huge; a popular site receives an enormous number of hits.
For long-running tasks, however, such as a Telnet connection, the advantage of a thread pool is not obvious, because the Telnet session lasts far longer than the time needed to create a thread.

2. Performance-demanding applications, such as servers that must respond to client requests quickly.

3. Absorbing sudden bursts of requests without forcing the server to create a large number of threads on the fly.

Benefits of thread pooling && process pooling

A process pool or thread pool cuts out the time spent creating and destroying processes or threads, which improves efficiency.

Simulating a thread pool in C

Below is a thread pool written in C for Linux. The thread pool maintains a linked list of tasks (each CThread_worker structure represents one task).
The pool_init() function pre-creates max_thread_num threads, each running the thread_routine() function. Inside that function, the loop

while (pool->cur_queue_size == 0 && !pool->shutdown)
{
    pthread_cond_wait (&(pool->queue_ready), &(pool->queue_lock));
}

means that a thread blocks and waits as long as there are no tasks in the task list; otherwise it removes a task from the queue and executes it.
The pool_add_worker() function adds a task to the thread pool's task list and, after adding it, wakes up one blocked thread (if there is one) by calling pthread_cond_signal(&(pool->queue_ready)).
The pool_destroy() function destroys the thread pool. Tasks still waiting in the task list will no longer be executed, but threads that are already running a task will finish it and then exit.

The complete code:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <pthread.h>
#include <assert.h>

/*
 * Every task in the thread pool, whether running or waiting, is a
 * CThread_worker. Since all tasks are kept in a linked list, the
 * structure is a list node.
 */
typedef struct worker
{
    /* Callback function: this is called when the task runs.
       It could also be declared in another form. */
    void *(*process) (void *arg);
    void *arg;                  /* argument passed to the callback */
    struct worker *next;
} CThread_worker;

/* Thread pool structure */
typedef struct
{
    pthread_mutex_t queue_lock;
    pthread_cond_t queue_ready;

    /* Linked list of all tasks waiting in the thread pool */
    CThread_worker *queue_head;

    /* Whether the thread pool is being destroyed */
    int shutdown;
    pthread_t *threadid;

    /* Number of active threads allowed in the thread pool */
    int max_thread_num;
    /* Number of tasks currently waiting in the queue */
    int cur_queue_size;
} CThread_pool;

int pool_add_worker (void *(*process) (void *arg), void *arg);
void *thread_routine (void *arg);

static CThread_pool *pool = NULL;

void pool_init (int max_thread_num)
{
    pool = (CThread_pool *) malloc (sizeof (CThread_pool));

    pthread_mutex_init (&(pool->queue_lock), NULL);
    pthread_cond_init (&(pool->queue_ready), NULL);

    pool->queue_head = NULL;

    pool->max_thread_num = max_thread_num;
    pool->cur_queue_size = 0;

    pool->shutdown = 0;

    pool->threadid = (pthread_t *) malloc (max_thread_num * sizeof (pthread_t));
    int i = 0;
    for (i = 0; i < max_thread_num; i++)
    {
        pthread_create (&(pool->threadid[i]), NULL, thread_routine, NULL);
    }
}

/* Add a task to the thread pool */
int pool_add_worker (void *(*process) (void *arg), void *arg)
{
    /* Construct a new task */
    CThread_worker *newworker = (CThread_worker *) malloc (sizeof (CThread_worker));
    newworker->process = process;
    newworker->arg = arg;
    newworker->next = NULL;     /* do not forget to set this to NULL */

    pthread_mutex_lock (&(pool->queue_lock));
    /* Append the task to the waiting queue */
    CThread_worker *member = pool->queue_head;
    if (member != NULL)
    {
        while (member->next != NULL)
            member = member->next;
        member->next = newworker;
    }
    else
    {
        pool->queue_head = newworker;
    }

    assert (pool->queue_head != NULL);

    pool->cur_queue_size++;
    pthread_mutex_unlock (&(pool->queue_lock));

    /* There is now a task waiting in the queue, so wake up one waiting
       thread. Note that if all threads are busy, this call has no effect. */
    pthread_cond_signal (&(pool->queue_ready));
    return 0;
}

/* Destroy the thread pool. Tasks still waiting in the queue will not be
   executed, but threads that are already running a task will finish it
   and then exit. */
int pool_destroy ()
{
    if (pool->shutdown)
        return -1;              /* prevent a second call */
    pool->shutdown = 1;

    /* Wake up all waiting threads; the thread pool is about to be destroyed */
    pthread_cond_broadcast (&(pool->queue_ready));

    /* Wait for the threads to exit, otherwise they become zombies */
    int i;
    for (i = 0; i < pool->max_thread_num; i++)
        pthread_join (pool->threadid[i], NULL);
    free (pool->threadid);

    /* Destroy the waiting queue */
    CThread_worker *head = NULL;
    while (pool->queue_head != NULL)
    {
        head = pool->queue_head;
        pool->queue_head = pool->queue_head->next;
        free (head);
    }
    /* Do not forget to destroy the condition variable and the mutex */
    pthread_mutex_destroy (&(pool->queue_lock));
    pthread_cond_destroy (&(pool->queue_ready));

    free (pool);
    /* Setting the pointer to NULL after freeing it is a good habit */
    pool = NULL;
    return 0;
}

void *thread_routine (void *arg)
{
    printf ("starting thread 0x%x\n", (unsigned int) pthread_self ());
    while (1)
    {
        pthread_mutex_lock (&(pool->queue_lock));
        /* If the waiting queue is empty and the pool is not being destroyed,
           block here. Note that pthread_cond_wait is an atomic operation:
           it releases the lock before waiting and re-acquires it on wake-up. */
        while (pool->cur_queue_size == 0 && !pool->shutdown)
        {
            printf ("thread 0x%x is waiting\n", (unsigned int) pthread_self ());
            pthread_cond_wait (&(pool->queue_ready), &(pool->queue_lock));
        }

        /* The thread pool is being destroyed */
        if (pool->shutdown)
        {
            /* Before break, continue, return or any other jump statement,
               do not forget to unlock first */
            pthread_mutex_unlock (&(pool->queue_lock));
            printf ("thread 0x%x will exit\n", (unsigned int) pthread_self ());
            pthread_exit (NULL);
        }

        printf ("thread 0x%x is starting to work\n", (unsigned int) pthread_self ());

        /* assert is a good helper for debugging */
        assert (pool->cur_queue_size != 0);
        assert (pool->queue_head != NULL);

        /* Decrement the queue length and take the head element off the list */
        pool->cur_queue_size--;
        CThread_worker *worker = pool->queue_head;
        pool->queue_head = worker->next;
        pthread_mutex_unlock (&(pool->queue_lock));

        /* Call the callback function to perform the task */
        (*(worker->process)) (worker->arg);
        free (worker);
        worker = NULL;
    }
    /* This line should be unreachable */
    pthread_exit (NULL);
}

/* Test code below */
void *myprocess (void *arg)
{
    printf ("threadid is 0x%x, working on task %d\n",
            (unsigned int) pthread_self (), *(int *) arg);
    sleep (1);                  /* sleep one second to lengthen the task */
    return NULL;
}

int main (int argc, char **argv)
{
    pool_init (3);              /* at most three active threads in the pool */

    /* Submit ten tasks in a row */
    int *workingnum = (int *) malloc (sizeof (int) * 10);
    int i;
    for (i = 0; i < 10; i++)
    {
        workingnum[i] = i;
        pool_add_worker (myprocess, &workingnum[i]);
    }
    /* Wait for all tasks to complete */
    sleep (5);
    /* Destroy the thread pool */
    pool_destroy ();

    free (workingnum);
    return 0;
}
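Assuming the listing is saved as threadpool.c (the file name is arbitrary), it can be built and run on Linux with:

gcc -Wall -o threadpool threadpool.c -lpthread
./threadpool

With three worker threads and ten one-second tasks, the output shows which thread picks up each task; after the five-second sleep in main() the pool is destroyed and the remaining threads exit.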

  
