Basic concepts of memory pools, process pools, and thread pools


When implementing a concurrent server, dynamically creating a child process (or thread) for each request has obvious drawbacks. We ran into this problem when implementing the multi-process (multi-threaded) TCP server in the previous article. To review the disadvantages mentioned there:

1. Dynamically creating a process (or thread) is time-consuming, which slows the response to clients.

2. Switching between processes (or threads) consumes a large amount of CPU time.

3. System resources are limited, so the number of child processes (or threads) that can be created is bounded.

4. A dynamically created child process is a full image of the current process. The server must manage system resources such as open file descriptors and heap memory carefully; otherwise each child process duplicates these resources, sharply reducing the resources available to the system and in turn hurting server performance.

So we consider whether we can create some processes (or threads) ahead of time to improve performance. If the server's hardware resources are "abundant", a very direct way to improve server performance is to trade space for time, that is, to "waste" hardware resources in exchange for running efficiency. This requires understanding the concept of a pool. What is a pool?

A pool is a collection of resources that is fully created and initialized when the server starts; this is called static resource allocation. When the server enters its formal running phase, that is, when it begins processing client requests, any needed resource can be obtained directly from the pool without dynamic allocation. Obviously, taking a resource directly from the pool is much faster than allocating it dynamically, because the system calls that allocate system resources are time-consuming. When the server finishes processing a client connection, it can put the related resources back into the pool without making a system call to release them. In effect, the pool is an application-level facility through which the server manages system resources, and it spares the server frequent trips into the kernel.

Pools come in several varieties; the common ones are memory pools, process pools, thread pools, and connection pools.

Note: Because a process pool and a thread pool are similar, the discussion of process pools below applies to thread pools as well.

I. Memory pool

We are accustomed to allocating memory with APIs such as new and malloc, and we know that because the size of each requested block varies, frequent use causes heavy memory fragmentation, which in turn degrades performance.

1. Concept

A memory pool is a way of allocating memory. Before memory is actually needed, the pool pre-allocates a number of memory blocks of (generally) equal size as a reserve. When a new memory request arrives, a block is handed out from the pool; when the pooled blocks are not enough, the pool requests new memory from the system.

That is, when the application needs memory, it requests it from the memory pool rather than from the operating system directly; likewise, when the program frees memory, it does not really return the memory to the operating system but returns it to the pool. Only when the program exits (or at some specific moment) does the pool release the memory it actually requested earlier. A notable advantage of this approach is that memory fragmentation is avoided as far as possible, and allocation efficiency improves.
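To make this concrete, here is a minimal fixed-size-block memory pool in C++. This is an illustrative sketch, not a production allocator, and the class and method names (`FixedPool`, `acquire`, `release`) are invented for this example. It carves equal-sized blocks out of one pre-allocated slab and threads the free blocks into a linked list, so handing out and returning a block is just pointer manipulation with no system call:

```cpp
#include <cstddef>
#include <vector>

// A minimal fixed-size-block memory pool (illustrative sketch).
// All blocks are carved from one pre-allocated slab; free blocks are
// linked into an intrusive free list (the first bytes of each free
// block store the pointer to the next free block).
class FixedPool {
public:
    FixedPool(std::size_t block_size, std::size_t block_count)
        // Each block must be big enough to hold the free-list pointer.
        : block_size_(block_size < sizeof(void*) ? sizeof(void*) : block_size),
          slab_(block_size_ * block_count) {
        // Thread every block onto the free list up front.
        for (std::size_t i = 0; i < block_count; ++i)
            release(slab_.data() + i * block_size_);
    }

    // Hand out a block from the pool, or nullptr if it is exhausted.
    void* acquire() {
        if (!free_list_) return nullptr;
        void* p = free_list_;
        free_list_ = *static_cast<void**>(free_list_);
        return p;
    }

    // Return a block to the pool instead of to the OS.
    void release(void* p) {
        *static_cast<void**>(p) = free_list_;
        free_list_ = p;
    }

private:
    std::size_t block_size_;
    std::vector<char> slab_;       // the one real allocation
    void* free_list_ = nullptr;    // head of the intrusive free list
};
```

A real pool would additionally handle alignment, grow by requesting more memory when the free list runs out (as described above), and consider thread safety; this sketch simply returns nullptr when exhausted.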

2. Classification

Custom memory pools come in different types depending on the usage scenario. From the standpoint of thread safety, memory pools divide into single-threaded and multi-threaded pools. A single-threaded pool is used by only one thread for its entire lifecycle, so mutually exclusive access need not be considered; a multi-threaded pool may be shared by multiple threads, so a lock must be taken on every allocation and release.
In contrast, single-threaded pools have higher performance, while multi-threaded pools are more widely applicable.
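As a sketch of the multi-threaded case, the whole acquire/release path can simply be guarded by one mutex. This is the coarse-grained approach implied above (real multi-threaded pools often add per-thread caches to reduce lock contention); the pool here just hands out indices to pre-allocated slots, and all names are invented for illustration:

```cpp
#include <mutex>
#include <vector>

// A coarse-grained thread-safe pool of slot indices: every acquire
// and release takes the same lock, as described in the text.
class LockedSlotPool {
public:
    explicit LockedSlotPool(int slots) {
        for (int i = 0; i < slots; ++i) free_.push_back(i);
    }

    // Returns a free slot index, or -1 if the pool is empty.
    int acquire() {
        std::lock_guard<std::mutex> lk(m_);
        if (free_.empty()) return -1;
        int s = free_.back();
        free_.pop_back();
        return s;
    }

    void release(int slot) {
        std::lock_guard<std::mutex> lk(m_);
        free_.push_back(slot);
    }

private:
    std::mutex m_;
    std::vector<int> free_;
};
```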

II. Process pool (thread pool)

1. Concept

(1) A process pool is a set of child processes created by the server in advance; their number generally lies between 3 and 10, and the number of threads in a thread pool is generally the same as the number of CPUs.

(2) All child processes in the process pool run the same code and have the same attributes, such as priority and process group ID (PGID).

(3) Compared with dynamically creating a child process, simply selecting an existing one from the pool costs far less.
This raises a new question: when a new task arrives, how does the main process select a child process from the pool to serve it?

2. Child-process selection algorithms

(1) The main process actively selects a child process using some algorithm. The simplest and most common algorithms are random selection and round-robin selection.
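These two selection algorithms can be sketched in a few lines of C++. The sketch is illustrative: a worker is represented by a plain int id, and the function names are invented:

```cpp
#include <cstddef>
#include <cstdlib>
#include <vector>

// Random selection: any worker may be picked. (Seed with std::srand
// elsewhere if varied results are wanted; not thread-safe.)
int pick_random(const std::vector<int>& workers) {
    return workers[std::rand() % workers.size()];
}

// Round-robin selection: successive calls cycle through the workers
// in order. The static counter makes this a single-selector sketch.
int pick_round_robin(const std::vector<int>& workers) {
    static std::size_t next = 0;
    return workers[next++ % workers.size()];
}
```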

(2) The main process and all child processes synchronize through a shared work queue, on which the child processes sleep. When a new task arrives, the main process adds it to the work queue, which wakes the child processes waiting for a task.

However, only one child process "takes over" the new task: it removes the task from the work queue and executes it, while the other child processes go back to sleep on the queue.
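In a thread pool, this "sleep on the queue, wake exactly one" behavior maps directly onto a condition variable: workers block in a wait, and the master's notify_one() wakes a single waiter. A minimal sketch (the class name `WorkQueue` is invented for this example):

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>

// A shared work queue: worker threads sleep in pop() on the condition
// variable; the master pushes a task and notify_one() wakes exactly
// one sleeping worker, which removes the task and runs it.
class WorkQueue {
public:
    void push(std::function<void()> task) {
        {
            std::lock_guard<std::mutex> lk(m_);
            tasks_.push(std::move(task));
        }
        cv_.notify_one();  // wake a single sleeping worker
    }

    // Blocks until a task is available, then removes and returns it.
    std::function<void()> pop() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !tasks_.empty(); });
        auto t = std::move(tasks_.front());
        tasks_.pop();
        return t;
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> tasks_;
};
```

Each worker thread simply loops calling pop() and running the returned task; the other workers remain blocked in cv_.wait() until the next push.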

Once a child process is selected, the main process still needs some notification mechanism to tell the target child process that a new task needs handling, and to pass it the necessary data. The simplest way is to set up a pipe between the parent process and each child process and carry all inter-process communication over it (of course, a set of protocols should be defined in advance to standardize use of the pipe). Passing data between a parent thread and child threads is much simpler, because data defined globally is shared by all threads.

