Reasonable estimation of thread pool size


A commonly cited estimation formula appears in discussions of server performance and IO optimization:
Optimal number of threads = ((thread wait time + thread CPU time) / thread CPU time) * number of CPUs
For example, if the average CPU time per thread is 0.5s, the thread wait time (non-CPU time such as IO) is 1.5s, and there are 8 CPU cores, the formula gives ((0.5 + 1.5) / 0.5) * 8 = 32. The formula can also be rewritten as:
Optimal number of threads = (ratio of thread wait time to thread CPU time + 1) * number of CPUs
From this, a conclusion can be drawn:
The higher the proportion of time a thread spends waiting, the more threads are needed; the higher the proportion of time a thread spends on the CPU, the fewer threads are needed.
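
As a rough illustration, the formula can be turned into a few lines of Java. This is only a sketch: the class and method names are made up for this example, and in practice the wait and CPU times would come from profiling rather than being hard-coded.

```java
// Minimal sketch of the sizing formula above (names are illustrative, not from the article).
public class ThreadPoolSizeEstimator {

    /**
     * Optimal threads = ((wait time + CPU time) / CPU time) * number of CPUs
     *                 = (wait/CPU ratio + 1) * number of CPUs
     */
    public static int estimate(double waitTimeMs, double cpuTimeMs, int cpuCount) {
        return (int) Math.round((waitTimeMs + cpuTimeMs) / cpuTimeMs * cpuCount);
    }

    public static void main(String[] args) {
        // The article's example: 0.5s CPU time, 1.5s wait time, 8 cores -> 32 threads.
        System.out.println(estimate(1500, 500, 8)); // prints 32

        // Using the current machine's core count instead of a hard-coded 8:
        int cpus = Runtime.getRuntime().availableProcessors();
        System.out.println(estimate(1500, 500, cpus));
    }
}
```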


The fastest component of a system is the CPU, so the CPU sets the upper limit on system throughput: increasing CPU processing power raises that limit. However, because of the weakest-link ("short board") effect, real system throughput cannot be calculated from the CPU alone. To increase throughput, you have to work on the system's bottlenecks (such as network latency and IO):

* Maximize the degree of parallelism of the bottleneck operations, for example multi-threaded download (see the sketch after this list)
* Improve the capability of the bottleneck itself, for example by using NIO instead of blocking IO
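
The first point can be illustrated with a short sketch that runs several slow IO operations in parallel so their waits overlap. The download() method and the URLs below are placeholders invented for this example, not part of the original article.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: running several IO-bound "download" tasks in parallel so that the slow
// network waits overlap instead of happening one after another.
public class ParallelDownload {

    static String download(String url) throws InterruptedException {
        Thread.sleep(1000); // stands in for network latency
        return "contents of " + url;
    }

    public static void main(String[] args) throws Exception {
        List<String> urls = List.of("http://a.example", "http://b.example", "http://c.example");

        ExecutorService pool = Executors.newFixedThreadPool(urls.size());
        try {
            List<Callable<String>> tasks = new ArrayList<>();
            for (String url : urls) {
                tasks.add(() -> download(url));
            }
            // All three downloads wait on the "network" at the same time,
            // so the total time is roughly 1s instead of roughly 3s sequentially.
            for (Future<String> result : pool.invokeAll(tasks)) {
                System.out.println(result.get());
            }
        } finally {
            pool.shutdown();
        }
    }
}
```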

The first approach is related to Amdahl's law, which gives the formula for the speedup of a serial system after parallelization:
Speedup = time before optimization / time after optimization
The greater the speedup, the better the effect of parallelizing the system. Amdahl's law also relates the speedup to the serial fraction F (the proportion of the code that executes serially) and the number of CPUs N:
Speedup <= 1 / (F + (1 - F) / N)
When N is large enough, the smaller the serial fraction F, the greater the speedup.
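
A quick numerical check of this formula, with F and N values chosen purely for illustration:

```java
// Sketch: evaluating Amdahl's law, speedup = 1 / (F + (1 - F) / N),
// for a few serial fractions F and CPU counts N.
public class Amdahl {

    static double speedup(double serialFraction, int cpus) {
        return 1.0 / (serialFraction + (1.0 - serialFraction) / cpus);
    }

    public static void main(String[] args) {
        for (double f : new double[]{0.5, 0.1, 0.01}) {
            for (int n : new int[]{8, 64, 1024}) {
                System.out.printf("F=%.2f, N=%d -> speedup %.1f%n", f, n, speedup(f, n));
            }
        }
        // As N grows, the speedup approaches 1/F: with F = 0.1 it can never exceed 10.
    }
}
```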


Is a thread pool always more efficient than a single thread?
Not necessarily. Redis, for example, is single-threaded yet very efficient: basic operations can reach on the order of 100,000 per second. From the threading point of view, part of the reason is that:

* Multithreading incurs thread context-switch overhead that a single thread does not pay
* Multithreading requires locks, which add contention overhead

The deeper reason Redis is fast is that its operations are almost entirely in memory, and in that case a single thread can already use the CPU efficiently. Multithreading is generally suited to scenarios with a considerable proportion of IO and network operations.
In short, you need to combine the system's actual workload (IO-intensive, CPU-intensive, or pure in-memory operations) with the hardware environment (CPU, memory, disk read/write speed, network conditions, and so on), and keep experimenting to find a pool size that is reasonable in practice.
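
A minimal sketch of that advice, assuming the common rules of thumb of roughly N_cpu + 1 threads for CPU-bound work and N_cpu * (1 + wait/compute ratio) for IO-bound work. These defaults and the ratio of 3 are starting points for tuning, not values from the article.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch: sizing two pools differently depending on the workload type.
public class WorkloadSizedPools {

    public static void main(String[] args) {
        int cpus = Runtime.getRuntime().availableProcessors();

        // CPU-bound work: little waiting, so roughly one thread per core (+1 spare).
        ExecutorService cpuBoundPool = Executors.newFixedThreadPool(cpus + 1);

        // IO-bound work: threads spend most of their time waiting, so scale by the
        // wait/compute ratio (here assumed to be 3, i.e. 1.5s wait vs 0.5s CPU time).
        double waitToComputeRatio = 3.0;
        ExecutorService ioBoundPool =
                Executors.newFixedThreadPool((int) (cpus * (1 + waitToComputeRatio)));

        // ... submit tasks, measure throughput, and adjust the sizes based on profiling.
        cpuBoundPool.shutdown();
        ioBoundPool.shutdown();
    }
}
```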

Reference: http://ifeve.com/how-to-calculate-threadpool-size/
