Thread Pool Mode Comparison: the Ice Thread Pool Model and the L/F (Leader/Followers) Mode

Thread pool designs generally fall into two patterns: the L/F (Leader/Followers) mode and the HS/HA (Half-Sync/Half-Async) mode.
HS/HA (Half-Sync/Half-Async) mode: the design is divided into three layers: the synchronous layer, the queue layer, and the asynchronous layer. It is also known as the producer-consumer model. The main thread handles I/O events and parses incoming messages, then drops the resulting data into a queue; consumer (worker) threads read the data from the queue and perform the application-logic processing (a minimal sketch follows the notes below).
Advantages: programming is simplified because low-level asynchronous I/O is separated from high-level synchronous application services without degrading the performance of the low-level services, and inter-layer communication is centralized in a single point (the queue).
Disadvantages: data must be passed between threads, so dynamic memory allocation, data copying, and context switching introduce overhead, and the high-level services cannot benefit from the efficiency of the underlying asynchronous services.
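To make the three-layer structure concrete, here is a minimal C++ sketch of the queue layer under the producer-consumer design described above, with one listening thread producing parsed requests and a pool of worker threads consuming them. The Request type, RequestQueue class, and all other names are illustrative placeholders, not taken from Ice:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

// Hypothetical parsed message; a real server would hold the decoded request.
struct Request { std::string payload; };

// The queue layer: the listening (asynchronous) side pushes,
// the worker (synchronous) side pops.
class RequestQueue {
public:
    void push(Request r) {
        {
            // Lock acquisition/release here is part of T(synchronization).
            std::lock_guard<std::mutex> lk(m_);
            q_.push(std::move(r));           // data is transferred between threads
        }
        cv_.notify_one();                    // listener notifies a waiting worker
    }
    Request pop() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        Request r = std::move(q_.front());
        q_.pop();
        return r;                            // the hand-off implies a context switch
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<Request> q_;
};

int main() {
    RequestQueue queue;
    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i) {
        workers.emplace_back([&queue] {
            for (;;) {
                Request r = queue.pop();     // synchronous layer: blocking read
                if (r.payload == "stop") break;
                // ... application-logic processing would happen here ...
            }
        });
    }
    // Stand-in for the asynchronous layer: "parse" I/O events and enqueue them.
    for (int i = 0; i < 100; ++i) queue.push(Request{"msg"});
    for (int i = 0; i < 4; ++i)   queue.push(Request{"stop"});
    for (auto& w : workers) w.join();
}
```

Note how every request pays for a lock acquire/release, a notification, and a move through the queue; these are exactly the data-transfer and synchronization costs counted against HS/HA in the analysis below.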
L/F (Leader/Followers) mode: in an L/F thread pool, a thread is in one of three states: leader, follower, or processor. The thread in the leader state listens on the network port. When a message arrives, that thread demultiplexes it, promotes one of the follower threads to be the new leader (by a mechanism such as FIFO order or priority), and then handles the event itself in the processor state. After processing completes, the thread sets its state back to follower and waits to become leader again. Only one thread in the pool can be in the leader state at any moment, which guarantees that the same event is never processed by multiple threads (a minimal sketch follows the notes below).
Disadvantages: implementation complexity and a lack of flexibility.
Advantages: it improves CPU cache affinity and eliminates dynamic memory allocation and data exchange between threads.
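A minimal C++ sketch of this role rotation, assuming a placeholder wait_for_event() standing in for blocking on the shared event source (e.g. a select/epoll call) and a placeholder handle() for the application processing; none of these names come from Ice:

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <thread>
#include <vector>

// Each thread cycles through follower -> leader -> processor -> follower.
class LeaderFollowersPool {
public:
    LeaderFollowersPool(int n,
                        std::function<int()> wait_for_event,
                        std::function<void(int)> handle)
        : wait_for_event_(std::move(wait_for_event)),
          handle_(std::move(handle)) {
        for (int i = 0; i < n; ++i)
            threads_.emplace_back([this] { run(); });
    }
    // Shutdown and joining are omitted to keep the sketch short.
private:
    void run() {
        for (;;) {
            {
                // Follower state: wait until the leader role is free.
                std::unique_lock<std::mutex> lk(m_);
                cv_.wait(lk, [this] { return !has_leader_; });
                has_leader_ = true;          // this thread is now the leader
            }
            int ev = wait_for_event_();      // leader demultiplexes the event source
            {
                // Give up the leader role *before* processing the event.
                std::lock_guard<std::mutex> lk(m_);
                has_leader_ = false;
            }
            cv_.notify_one();                // promote one follower to leader
            handle_(ev);                     // processor state: handle the event
        }                                    // then loop back to the follower state
    }
    std::mutex m_;
    std::condition_variable cv_;
    bool has_leader_ = false;
    std::vector<std::thread> threads_;
    std::function<int()> wait_for_event_;
    std::function<void(int)> handle_;
};
```

Because the event is handled by the very thread that detected it, no request data ever crosses a thread boundary, which is the source of the cache-affinity and zero-copy advantages noted above.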
Performance analysis of the two modes:
In L/F mode, processing one message involves demultiplexing, dispatching, and processing, plus thread-management time. Because multiple threads in the L/F pool share a single event source, their actions must be coordinated, which introduces synchronization overhead. In L/F this synchronization overhead is only the cost of acquiring and releasing a lock, and no thread context switch occurs while a request is being processed; however, a context switch is required when a thread is promoted from follower to leader. When two requests arrive at the same time, this switch delays the processing of the second request, so it contributes a certain context-switching overhead.
T(L/F) = T(demultiplexing) + T(dispatch) + T(processing) + T(synchronization) + T(context switch)
In HS/HA mode, the listening thread and the worker threads exchange data through a message queue, which introduces data-transfer overhead. Both the listening thread and the worker threads must access the message queue, creating resource contention, so additional synchronization is required to coordinate them: the listening thread acquires and releases the queue lock, each worker thread likewise acquires and releases it, and the listening thread must notify a worker after putting a request into the queue. Together these costs form the synchronization overhead, and the synchronization overhead in HS/HA mode is greater than in L/F mode. Moreover, a request is placed into the message queue by the listening thread but processed by a worker thread, so each request causes a thread context switch; the resulting cost is the context overhead.
T(HS/HA) = T(demultiplexing) + T(dispatch) + T(processing) + T(synchronization) + T(data transfer) + T(context switch)
From the analysis above, in the absence of concurrency the L/F thread pool outperforms the HS/HA thread pool.
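Subtracting the two expressions makes the gap explicit; the demultiplexing, dispatch, and processing terms cancel:

T(HS/HA) - T(L/F) = T(data transfer) + ΔT(synchronization) + ΔT(context switch)

Both Δ terms are non-negative by the analysis above: HS/HA adds queue locking and a per-request notification on top of L/F's single lock acquire/release, and it forces a context switch on every request rather than only on the follower-to-leader hand-off.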
Concurrency performance analysis: 
T(demultiplexing) and T(dispatch): both L/F and HS/HA treat each arriving message as an event. Dispatching an event means looking up its event handler in the event-handler registry, and the time this lookup takes grows with the number of currently registered handlers. When the thread pool accepts a user connection, it registers an event handler for that connection, and all requests sent over the connection are processed by the same handler. The handler table is a balanced binary tree, so dispatch time can be taken to grow with the number of concurrent users (see the sketch after these points);
T(processing): the time required to process a message is not affected by the number of concurrent users.
T(thread management): the overhead introduced by multithreading varies only with the number of threads in the pool and is therefore relatively fixed.
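As a sketch of the registry just described, assuming C++ and using std::map, which is commonly implemented as a red-black (balanced binary) tree; the names are illustrative, not Ice's API:

```cpp
#include <functional>
#include <map>

using Handle = int;                                // hypothetical connection id
using EventHandler = std::function<void(const char*)>;

// Balanced-tree registry: find() is O(log n) in the number of
// registered connections, so dispatch time grows with concurrent users.
std::map<Handle, EventHandler> registry;

void register_connection(Handle h, EventHandler eh) {
    registry.emplace(h, std::move(eh));            // one handler per connection
}

void dispatch(Handle h, const char* msg) {
    auto it = registry.find(h);                    // lookup cost grows with users
    if (it != registry.end())
        it->second(msg);                           // every request on this connection
}                                                  // reaches the same handler
```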
The throughput of both L/F and HS/HA rises as the number of concurrent users increases. Once the number of concurrent users reaches a certain level, the CPU becomes the system bottleneck: adding more concurrent users no longer increases the number of requests processed concurrently, but it does increase the demultiplexing and dispatch time, which lowers system throughput.
Thread count for optimal performance:
As the number of threads increases, throughput rises until it peaks, holds steady for a short plateau, and then falls as more threads are added. When requests are compute-intensive, increasing the thread count has little effect on HS/HA throughput, because in an HS/HA pool the thread-management overhead also grows sharply as threads are added; raising the thread count is therefore not an effective way to improve HS/HA performance.
