Using C++ to Design a Scalable Thread Pool

Source: Internet
Author: User
Abstract: In the design of business solutions, the efficiency with which a server processes tasks is an important criterion for evaluating a solution. Processing tasks concurrently with multiple threads is a major means of improving server efficiency. However, frequent thread creation, destruction, and task allocation also reduce system efficiency. This article designs a general-purpose thread pool; by setting the pool's parameters to match the characteristics of the tasks a given server processes, system performance can be maximized.

Keywords: thread pool, multithreading, task, virtual function, exception


In the design of business solutions, the efficiency with which a server processes tasks often determines the success or failure of the solution. Multithreaded task processing is the main means of improving server efficiency: it increases the utilization of server resources and allows tasks to be processed concurrently. However, if the server's tasks are lightweight and arrive at high frequency, threads will be created and destroyed constantly, and the overhead the system spends on thread creation and destruction will account for a considerable proportion of the total, reducing system efficiency. Thread pool technology can reduce the impact of frequent thread creation and destruction on system performance.

A thread pool is a technique for creating threads in advance. Before any task arrives, the pool creates a certain number of threads (N1) and places them in an idle queue. These threads are all blocked: they consume no CPU, only a small amount of memory. When a task arrives, the pool selects an idle thread and hands the task to it for execution. When all N1 threads are busy processing tasks, the pool automatically creates a certain number of new threads to handle the additional load. When the system becomes idle and most threads are blocked waiting, the pool automatically destroys some threads and reclaims their system resources.

A general-purpose thread pool design must not only implement the functions above but also take portability into account, to reduce repeated development. The key points to consider in the design are:

Universality of task objects;
The thread creation and destruction policy;
The task allocation policy.

Analysis and Design

1. Universality of Task Objects

Different business solutions have their own unique ways of processing tasks, and how tasks are divided varies greatly. To make the handling of task objects as universal as possible, the design of the task object must be completely independent of the actual task-processing logic. From the perspective of execution, a task is simply one or more runs of a processing routine. The task interface can therefore be defined as follows:

class Task
{
public:
    Task();
    virtual ~Task();
    virtual bool Run() = 0;
};
The Task class is the base class of all task classes, and the pure virtual function Run() is the entry point of the task's processing flow: when a worker thread processes a task, it executes Run(). To design a new task, you only need to inherit from the Task interface, and the new task can then be executed in the thread pool.

Task creation, execution, and destruction are designed as follows:

(1) Tasks are created as needed. The new operator dynamically creates a concrete task object, which is then passed to the thread pool; the pool automatically allocates a thread to execute it.

(2) Whether a task is complete is determined by the task itself. It is impossible to predict how long an unknown task will take, so completion must be reported by the task, via the return value of Task::Run(). When a worker thread executes a task, a return value of true indicates the task is complete, and the delete operator is used to destroy it; a return value of false indicates the task is not yet complete and should continue to be executed.

With this policy, when designing a new task-processing flow you need not worry much about interface conventions: initialize resources in the new task class's constructor, reclaim them in its destructor, and implement the main processing logic in Run(). The new task class can then be executed in the thread pool.
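To illustrate this convention, here is a hypothetical concrete task built on the Task interface above. The SumTask class and its slicing behavior are invented for the example: Run() processes a small slice per call and returns false until the whole range is consumed, so a worker thread keeps calling it.

```cpp
// Base class matching the Task interface defined in the article.
class Task {
public:
    Task() {}
    virtual ~Task() {}
    virtual bool Run() = 0;   // return true when the task has finished
};

// Illustrative task: sums the integers 1..limit in slices of 10.
// Run() returns false while work remains, true once the sum is complete,
// at which point the pool may delete the task.
class SumTask : public Task {
public:
    explicit SumTask(int limit) : next_(1), limit_(limit), sum_(0) {}
    bool Run() override {
        // process a slice of at most 10 numbers per call
        for (int i = 0; i < 10 && next_ <= limit_; ++i, ++next_)
            sum_ += next_;
        return next_ > limit_;   // done? the pool is free to delete us
    }
    long Sum() const { return sum_; }
private:
    int next_, limit_;
    long sum_;
};
```

Resources would be acquired in the constructor and released in the destructor, exactly as the policy above prescribes; a worker thread simply loops on Run() until it returns true.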

2. Thread Creation and Destruction

The number of threads in the thread pool should be determined by the task-processing requirements.

When the pool is first created, it contains a certain number of threads (N1) so that new tasks can be executed promptly. For example, when a client sends a login request to the server, the server usually needs to create several associated tasks; that is, a single client-server interaction typically produces a certain number of tasks. For a given server's business, suppose the average number of tasks generated by one interaction is estimated to be N2. Then N1 should be an integer multiple of N2, N1 = N2 × k1 for some integer k1, which reduces the probability of having to create a thread because none is idle and keeps the server at its most efficient in the initial stage of business processing.

When all threads in the pool are busy, the pool creates N3 new threads. By the analysis above, to reduce the probability of thread creation due to an insufficient number of threads, N3 should also be an integer multiple of N2: N3 = N2 × k3 for some integer k3.

When server load decreases and a large number of threads sit idle, some threads should be destroyed. A timeout policy is clearly appropriate here: when threads remain idle after a period t expires, N4 idle threads are destroyed. Again, to reduce the probability of thread creation due to insufficient threads, N4 should be an integer multiple of N2: N4 = N2 × k4 for some integer k4. Even when the server stays idle, however, N1 threads should be retained so that new tasks can be handled promptly.
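The sizing rules above can be sketched as a few pure functions. The struct and the multiplier names k1, k3, k4 are assumptions introduced for the example (the article only says each count is an integer multiple of N2); the shrink rule enforces the floor of N1 retained threads:

```cpp
// Sketch of the pool-sizing rules; names are illustrative.
struct PoolParams {
    int n2;            // average tasks produced per client-server interaction
    int k1, k3, k4;    // integer multipliers chosen for the workload
};

// N1: threads created when the pool starts.
int InitialThreads(const PoolParams& p) { return p.n2 * p.k1; }

// N3: threads added when every existing thread is busy.
int GrowBy(const PoolParams& p) { return p.n2 * p.k3; }

// N4 idle threads are destroyed after the idle timeout, but the pool
// never drops below the initial N1 threads.
int ShrinkBy(const PoolParams& p, int currentThreads) {
    int n4 = p.n2 * p.k4;
    int destroyable = currentThreads - InitialThreads(p);
    if (destroyable <= 0) return 0;
    return n4 < destroyable ? n4 : destroyable;
}
```

For example, with N2 = 5, k1 = 2, k3 = 1, k4 = 1, the pool starts with 10 threads, grows by 5 when saturated, and after the timeout destroys at most 5 idle threads while keeping at least the original 10.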

3. Task Allocation Policy

Service processing involves a variety of task objects, and these objects consume system resources differently. Setting aside their space complexity, from the perspective of a thread executing tasks the main concern is time complexity.

When a new task arrives, the thread pool must first find an idle thread, pass the task in, execute it, delete it, and mark the thread idle again. Finding an idle thread, passing in the task, and the final cleanup are all overhead beyond the task's own execution. If most of the executed tasks are lightweight, the resources wasted on this overhead become very noticeable. To address this, the pool can pass N5 lightweight tasks to a single thread, which executes the N5 tasks in turn. Because each completes quickly, this does not hurt the timeliness of task responses. Obviously, N5 is greater than or equal to 1.
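A sketch of this batching step, with invented names: instead of dispatching one task per handoff, the pool pops up to N5 tasks at once, so the per-dispatch overhead (idle-thread lookup, handoff, cleanup) is amortized across N5 executions.

```cpp
#include <queue>
#include <vector>

// Pop up to n5 pending tasks as one batch for a single worker thread.
// Tasks are modeled as ints here purely for illustration.
std::vector<int> TakeBatch(std::queue<int>& pending, int n5) {
    std::vector<int> batch;
    while (!pending.empty() && static_cast<int>(batch.size()) < n5) {
        batch.push_back(pending.front());
        pending.pop();
    }
    return batch;
}
```

With n5 = 1 this degenerates to the one-task-per-dispatch case; larger n5 trades a small amount of per-task latency for much less dispatch overhead on lightweight tasks.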


Because of the length of the source code, not all of it can be listed here. The thread pool's thread creation, thread destruction, task allocation, and task execution are given below in pseudocode.

(1) The thread pool's main loop for allocating tasks (itself a thread)

Besides the task allocation algorithm, it also contains the logic for creating and destroying threads.

for (;;) {
    pThread = GetIdleThread();                     // check the idle thread queue
    if (pThread != NULL) {
        if (CheckNewTask()) {                      // a new task exists
            TaskList tl;
            GetTask(tl);                           // obtain a certain number of tasks
            AddTaskToThread(pThread, tl);          // pass the tasks to the thread
            continue;                              // continue the loop
        }
    }
    if (pThread == NULL && nThread < THREAD_MAX) { // no idle thread
        CreateNewThread();                         // create a thread
        continue;                                  // continue the loop
    }
    // no task to process, or the maximum number of threads has been reached
    if (WaitForTaskOrThreadTimeout()) {
        if (IncrIdleTime() > IDLE_MAX) {           // the system is idle; keep timing
            // the system has remained idle for a long time:
            // destroy a certain number of idle threads
            DecrIdleThread();
        }
        continue;
    }
    return 0;                                      // thread terminates
}


(2) The worker thread's task execution loop

for (;;) {
    // check whether there is a task in the task queue
    if (!CheckTaskQueue()) {           // no task in the queue
        pPool->OnTaskIdle(this);       // notify the pool that this thread is idle
        if (WaitForTask())
            continue;                  // continue the loop
        return 0;                      // terminate the thread
    } else {                           // there is a task to run
        pTask = GetTask();             // get a new task
        try {
            while (!pTask->Run()) {
                // empty loop body: keep running until the task completes
            }
        }
        catch (...) {
            WriteLog(...);             // the task threw an exception; log it
        }
        delete pTask;                  // after the task finishes, delete it
    }
}


A try-catch block guards the core of task execution. Although exception handling may slightly slow the program, the tasks being executed are unknown and may fail in arbitrary ways, and a server crash caused by one faulty task is absolutely unacceptable. Catching exceptions not only keeps the server process running smoothly but also lets exception information be saved to a log file for error tracking.

Performance Testing

To check whether the thread pool performs as expected, and to analyze how different parameter configurations affect system performance, a test program was written to compare three parameter sets. The test results are shown in Figure 1 (not reproduced here):

The x-axis is the number of tasks; the y-axis is the time consumed, in seconds.

Parameter set 1: N2 = 1, N5 = 1; parameter set 2: N2 = 5, N5 = 1; parameter set 3: N2 = 5, N5 = 5.

During the test, the total number of threads was limited to 500, and every task took 5 ms. Only N2 and N5 were varied here: N2 is the average number of tasks added to the pool at a time, and N5 is the number of tasks each thread executes per dispatch.

When the number of tasks is small, the three configurations consume roughly the same system resources. When the task volume is large, however, parameter set 1 is slightly more efficient than parameter set 2, and parameter set 3 executes almost twice as fast as the other two.

Because the tasks are lightweight, changing N2 has little impact on system efficiency, while N5 has a significant impact.


The tests show that simply using a thread pool on a server does not necessarily improve performance. Tasks differ in character from system to system, so the pool's key parameters must be tuned to the characteristics of the server's tasks to maximize system efficiency. These parameters are N1, N2, N3, N4, and N5, together with the idle timeout t and the integer multipliers from the analysis above.
