Process pool, thread pool


The concept of pool

Because a server's hardware resources are relatively abundant, a very direct way to improve its performance is to trade space for time, that is, to "waste" hardware resources in exchange for running efficiency. This is the idea behind a pool. A pool is a collection of resources that are created and fully initialized when the server starts, which is called static resource allocation. When the server enters its normal run phase, i.e., begins processing client requests, it can take whatever resources it needs directly from the pool instead of allocating them dynamically. Taking a resource from the pool is clearly much faster than allocating it dynamically, because the system calls that allocate resources are time-consuming. Likewise, when the server finishes serving a client connection, it can return the associated resources to the pool instead of issuing system calls to release them. In effect, the pool acts as an application-level facility for managing system resources, sparing the server frequent trips into the kernel.

Pools come in many varieties; common ones include memory pools, process pools, thread pools, and connection pools.

Memory pool

A memory pool is a way of allocating memory. We are usually accustomed to allocating memory directly through library calls such as new and malloc. The disadvantage is that because the requested blocks vary in size, frequent allocation and deallocation cause heavy memory fragmentation and thus degrade performance.

A memory pool instead reserves a certain number of memory blocks, generally of equal size, before the memory is actually needed. When a new memory requirement arises, a block is handed out from the pool; if the pooled blocks run out, the pool requests more memory from the system. The notable advantage of this approach is that memory allocation becomes much more efficient.
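The fixed-size-block scheme described above can be sketched in a few lines of C. This is a minimal illustration of our own (the article itself gives no code; the names `pool_t`, `pool_alloc`, and the block sizes are assumptions): blocks are pre-allocated in one chunk and chained into a free list, so allocation and release are each a single pointer operation with no trip into the allocator.

```c
#include <stdlib.h>

/* Fixed-size block count and size are arbitrary choices for the sketch;
 * BLOCK_SIZE must be at least sizeof(void *) to hold the free-list link. */
#define BLOCK_SIZE  64
#define BLOCK_COUNT 128

typedef struct pool {
    unsigned char *chunk;   /* one backing allocation for all blocks */
    void *free_list;        /* head of the chain of free blocks */
} pool_t;

static int pool_init(pool_t *p) {
    p->chunk = malloc((size_t)BLOCK_SIZE * BLOCK_COUNT);
    if (!p->chunk) return -1;
    p->free_list = NULL;
    /* Chain every block: the first bytes of a free block store the
     * pointer to the next free block. */
    for (int i = 0; i < BLOCK_COUNT; i++) {
        void *block = p->chunk + (size_t)i * BLOCK_SIZE;
        *(void **)block = p->free_list;
        p->free_list = block;
    }
    return 0;
}

static void *pool_alloc(pool_t *p) {
    if (!p->free_list) return NULL;     /* pool exhausted */
    void *block = p->free_list;
    p->free_list = *(void **)block;     /* pop from the free list */
    return block;
}

static void pool_free(pool_t *p, void *block) {
    *(void **)block = p->free_list;     /* push back onto the free list */
    p->free_list = block;
}
```

Note the trade-off this makes explicit: all blocks are the same size, so there is no fragmentation, but a request larger than `BLOCK_SIZE` cannot be served from this pool.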

Process pool and thread pool

The process pool and the thread pool are similar in principle, so we take the process pool as our example here. Unless stated otherwise, the following discussion of process pools also applies to thread pools.

A process pool is a set of child processes pre-created by the server, typically numbering somewhere between 3 and 10 (a typical range, of course, not a rule). The number of threads in a thread pool is usually chosen to be roughly equal to the number of CPUs.

All child processes in the process pool run the same code and have the same attributes, such as priority and process group ID (PGID).

When a new task arrives, the master process selects one of the child processes in the pool to serve it. Choosing an existing child process is far cheaper than creating a child process dynamically. There are two common ways for the master process to pick which child will serve a new task:

1) The master process actively selects a child process using some algorithm. The simplest and most commonly used choices are random selection and round-robin rotation.

2) The master process and all child processes synchronize through a shared work queue, on which the child processes sleep. When a new task arrives, the master process adds it to the work queue. This wakes the child processes waiting on the queue, but only one of them will win the new task: it removes the task from the work queue and executes it, while the other children go back to sleep on the queue.
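The shared-queue scheme in method 2 is easiest to show in its thread-pool form, where the queue lives in shared memory for free. The sketch below is our own illustration (names like `work_queue_t` and the queue capacity are assumptions, not from the article): workers sleep in `pthread_cond_wait`, and each push signals the condition variable, which wakes at most one sleeping worker; the `while` loop re-checks the predicate, so a worker that loses the race simply goes back to sleep.

```c
#include <pthread.h>

#define QUEUE_CAP 16

/* A bounded FIFO of integer task ids, protected by a mutex, with a
 * condition variable that workers sleep on while the queue is empty. */
typedef struct {
    int tasks[QUEUE_CAP];
    int head, tail, count;
    pthread_mutex_t lock;
    pthread_cond_t not_empty;
} work_queue_t;

static void queue_init(work_queue_t *q) {
    q->head = q->tail = q->count = 0;
    pthread_mutex_init(&q->lock, NULL);
    pthread_cond_init(&q->not_empty, NULL);
}

/* Master side: enqueue a task and wake one sleeping worker. */
static void queue_push(work_queue_t *q, int task) {
    pthread_mutex_lock(&q->lock);
    q->tasks[q->tail] = task;
    q->tail = (q->tail + 1) % QUEUE_CAP;
    q->count++;
    pthread_cond_signal(&q->not_empty);   /* wakes at most one waiter */
    pthread_mutex_unlock(&q->lock);
}

/* Worker side: sleep until a task is available, then take it.
 * The while loop handles both spurious wakeups and losing the race
 * to another worker that was woken first. */
static int queue_pop(work_queue_t *q) {
    pthread_mutex_lock(&q->lock);
    while (q->count == 0)
        pthread_cond_wait(&q->not_empty, &q->lock);
    int task = q->tasks[q->head];
    q->head = (q->head + 1) % QUEUE_CAP;
    q->count--;
    pthread_mutex_unlock(&q->lock);
    return task;
}
```

In the process-pool version of the same idea, the queue and its synchronization objects would have to be placed in memory shared between the processes (e.g. via `mmap` with `MAP_SHARED`), with the mutex and condition variable initialized as process-shared.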

Once a child process is selected, the master process still needs some notification mechanism to tell the target child that a new task must be processed, and to pass it the necessary data. The simplest way is to establish a pipe between the parent and each child process in advance and perform all inter-process communication through those pipes. Passing data between the main thread and worker threads is much simpler: because threads share the same address space, the data can be made global and is then directly visible to all of them.

Thread pools are primarily used for:

1) Tasks that require a large number of threads but complete in a relatively short time. For example, a web server handling page requests is very well suited to thread pool technology: each individual task is small, but the number of tasks is huge. For long-lived tasks, such as a Telnet connection, the thread pool's advantage is not obvious, because the session lasts far longer than the time it takes to create a thread.

2) Performance-critical applications, such as servers that must respond to client requests quickly.

3) Applications that receive large bursts of requests but must not respond by spawning a correspondingly large number of threads.

Reprinted from: http://blog.csdn.net/u011012049/article/details/48436427
