Java Thread Pool Technology

1. How is a thread pool implemented?
Introduction: multithreading lets a single processor unit run many threads, which can significantly reduce the processor's idle time and increase its throughput. But threads are not free. Suppose the time a server spends completing one task breaks down into T1 (thread creation time), T2 (task execution time), and T3 (thread destruction time).
If T1 + T3 is greater than T2, a thread pool can markedly improve server performance.

Thread pool technology focuses on shortening or rescheduling the T1 and T3 overhead to improve server performance: threads are created during server startup or idle periods and destroyed during shutdown, so neither T1 nor T3 is paid while the server is actually handling client requests.
A thread pool not only shifts T1 and T3 out of the critical path, it also dramatically reduces the total number of threads created. For example:
Suppose a server handles 50,000 requests a day and each request is processed on its own thread. Without a thread pool, 50,000 threads are created and destroyed. With a thread pool, the number of threads is generally fixed at a value far smaller than 50,000, and the same threads are reused across requests, so the server never pays the cost of creating 50,000 threads.
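The reuse described above is easy to observe with the JDK's own pool. The sketch below (the class name ReuseDemo and the 5-thread/10,000-task figures are ours, purely for illustration) submits many tasks to a small fixed pool and counts how many distinct worker threads ever ran them:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ReuseDemo {
    // Submits taskCount tasks to a fixed pool of poolSize threads and
    // returns how many distinct worker threads actually executed them.
    static int distinctWorkerThreads(int poolSize, int taskCount) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        Set<String> names = ConcurrentHashMap.newKeySet();
        for (int i = 0; i < taskCount; i++) {
            pool.execute(() -> names.add(Thread.currentThread().getName()));
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return names.size();
    }

    public static void main(String[] args) throws InterruptedException {
        // 10,000 tasks, yet no more than 5 threads are ever created.
        System.out.println(distinctWorkerThreads(5, 10_000));
    }
}
```

However many tasks are submitted, the thread count (and hence the T1/T3 cost) is bounded by the pool size.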

2. ThreadPoolTaskExecutor in Spring
ThreadPoolTaskExecutor is Spring's thread pool abstraction, implemented on top of java.util.concurrent.ThreadPoolExecutor in the JDK. The underlying constructor parameters are:
int corePoolSize: the minimum number of threads the pool keeps alive.
int maximumPoolSize: the maximum number of threads the pool may create.
long keepAliveTime: how long an idle thread beyond the core may live before termination.
TimeUnit unit: the time unit for keepAliveTime (e.g. NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS).
BlockingQueue<Runnable> workQueue: the queue of tasks waiting for execution.
RejectedExecutionHandler handler: invoked to reject a task. This happens in two cases:
First, in the execute method, when addIfUnderMaximumPoolSize(command) returns false, meaning the pool is saturated;
Second, when the execute method finds runState != RUNNING || poolSize == 0, i.e. the pool has been shut down, ensureQueuedTaskHandled(Runnable command) is called, which may in turn call reject. (These internal method names come from the pre-JDK 7 implementation.)
I. Initialization
Method 1. Direct instantiation:
ThreadPoolTaskExecutor poolTaskExecutor = new ThreadPoolTaskExecutor();
// capacity of the buffer queue used by the pool
poolTaskExecutor.setQueueCapacity(200);
// minimum number of threads maintained by the pool
poolTaskExecutor.setCorePoolSize(5);
// maximum number of threads maintained by the pool
poolTaskExecutor.setMaxPoolSize(1000);
// how long (in seconds) the pool keeps an idle thread alive
poolTaskExecutor.setKeepAliveSeconds(30000);
poolTaskExecutor.initialize();
Method 2. configuration file:

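The XML for this method did not survive extraction. A bean definition consistent with the bean name "threadPool" looked up below, and with the setters shown in Method 1, might look like the following (the property values are illustrative, not prescribed by the original):

```xml
<bean id="threadPool"
      class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor">
    <property name="corePoolSize" value="5"/>
    <property name="maxPoolSize" value="1000"/>
    <property name="queueCapacity" value="200"/>
    <property name="keepAliveSeconds" value="30000"/>
</bean>
```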
Obtain in the program:
ApplicationContext ctx = new ClassPathXmlApplicationContext("applicationContext.xml");
ThreadPoolTaskExecutor poolTaskExecutor = (ThreadPoolTaskExecutor) ctx.getBean("threadPool");
II. Using the thread pool to run a task:
Thread udpThread = new Thread(udp); // create a thread (any Runnable will do)
poolTaskExecutor.execute(udpThread); // hand the task to the thread pool
// obtain the number of currently active threads in the pool:
int count = poolTaskExecutor.getActiveCount();
logger.debug("[x]-now threadpool active threads totalNum: " + count);
III. Configuration explained: how ThreadPoolTaskExecutor processes a task
When a task is added via the execute(Runnable) method:
1. If fewer than corePoolSize threads are running, a new thread is created to handle the task, even if existing threads are idle.
2. If at least corePoolSize threads are running but the buffer queue workQueue is not full, the task is placed in the queue.
3. If the queue is full but fewer than maximumPoolSize threads are running, a new thread is created to handle the task.
4. If the queue is full and maximumPoolSize threads are running, the handler policy is applied. In other words, tasks are absorbed in this order of priority: core threads (corePoolSize), then the task queue (workQueue), then extra threads up to maximumPoolSize; when all three are exhausted, the handler deals with the rejected task.
5. If more than corePoolSize threads are running and a thread stays idle longer than keepAliveTime, it is terminated. This is how the pool dynamically shrinks its thread count.
In short: the pool first creates up to corePoolSize threads; as more tasks arrive they are placed in the queue; only when both the core threads and the queue are full are additional threads created; and once the thread count reaches maxPoolSize, Spring throws an org.springframework.core.task.TaskRejectedException.
Note also that if maxPoolSize is set higher than the number of threads the system can support, thread creation may fail with java.lang.OutOfMemoryError: unable to create new native thread.
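The decision sequence above can be reproduced with the JDK's ThreadPoolExecutor directly (Spring's ThreadPoolTaskExecutor delegates to it). This sketch, with hypothetical parameters of our choosing (core 1, max 2, queue capacity 1), blocks every worker and counts how many submissions fit before the default AbortPolicy rejects one:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class SaturationDemo {
    // Returns true if submitting `tasks` long-running jobs saturates a pool
    // with core=1, max=2, queue capacity=1 and triggers the AbortPolicy.
    static boolean saturates(int tasks) throws InterruptedException {
        CountDownLatch release = new CountDownLatch(1);
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 2, 30, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(1),
                new ThreadPoolExecutor.AbortPolicy());
        boolean rejected = false;
        try {
            for (int i = 0; i < tasks; i++) {
                // Each task parks until we release it, so nothing completes.
                pool.execute(() -> {
                    try { release.await(); } catch (InterruptedException ignored) {}
                });
            }
        } catch (RejectedExecutionException e) {
            rejected = true;
        } finally {
            release.countDown();
            pool.shutdown();
            pool.awaitTermination(10, TimeUnit.SECONDS);
        }
        return rejected;
    }

    public static void main(String[] args) throws Exception {
        // Absorption capacity: 1 core thread + 1 queue slot + 1 extra thread = 3.
        System.out.println(saturates(4)); // true
        System.out.println(saturates(3)); // false
    }
}
```

Task 1 takes the core thread, task 2 goes to the queue, task 3 forces a second thread, and task 4 finds queue and pool both full, so it is rejected.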
There are four predefined rejection policies:
(1) ThreadPoolExecutor.AbortPolicy, the default: the rejected task causes a RejectedExecutionException to be thrown at runtime.
(2) ThreadPoolExecutor.CallerRunsPolicy: the thread that called execute() runs the task itself; if the executor has been shut down, the task is discarded.
(3) ThreadPoolExecutor.DiscardPolicy: the task that cannot be executed is silently discarded.
(4) ThreadPoolExecutor.DiscardOldestPolicy: if the executor has not been shut down, the task at the head of the work queue is removed and execution of the new task is retried (repeating this process if it fails again).
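The difference between DiscardPolicy and DiscardOldestPolicy is easiest to see side by side. In this sketch (class name and task labels "A".."D" are ours), a single-thread pool with a two-slot queue is kept busy while four tasks are submitted, and we record which of them eventually run:

```java
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectionDemo {
    // One worker (blocked), queue of 2; submits "A".."D", drains, and reports
    // which tasks actually ran under the given rejection handler.
    static List<String> ranTasks(RejectedExecutionHandler handler) throws InterruptedException {
        CountDownLatch release = new CountDownLatch(1);
        List<String> ran = new CopyOnWriteArrayList<>();
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0, TimeUnit.SECONDS, new ArrayBlockingQueue<>(2), handler);
        // Occupy the single worker so later submissions pile up in the queue.
        pool.execute(() -> {
            try { release.await(); } catch (InterruptedException ignored) {}
        });
        for (String name : new String[] {"A", "B", "C", "D"}) {
            pool.execute(() -> ran.add(name));
        }
        release.countDown();
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return ran;
    }

    public static void main(String[] args) throws Exception {
        // DiscardOldestPolicy: C evicts A, D evicts B -> only C and D run.
        System.out.println(ranTasks(new ThreadPoolExecutor.DiscardOldestPolicy()));
        // DiscardPolicy: C and D are silently dropped -> only A and B run.
        System.out.println(ranTasks(new ThreadPoolExecutor.DiscardPolicy()));
    }
}
```

DiscardPolicy sacrifices the newest work; DiscardOldestPolicy sacrifices the oldest queued work to make room for it.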

3. ThreadPoolExecutor configurations, application scenarios, and parameter design for different workloads
Three common executors: the Executors class provides a set of factory methods for creating commonly used ExecutorService instances, namely FixedThreadPool, SingleThreadExecutor, and CachedThreadPool. All three are created by calling the ThreadPoolExecutor constructor; they differ only in the parameters passed.
FixedThreadPool --- fixed pool size, unbounded task queue
1. An available thread from the pool executes the task; if none is available and the pool holds fewer than nThreads threads, the ThreadFactory creates a new one.
2. Once the pool has nThreads threads, new tasks are put into the (unbounded) queue.
Advantage: every submitted task is eventually executed; new tasks are never rejected.
Disadvantage: the queue has no size limit, so if task execution times grow without bound, queued tasks can accumulate until memory problems occur.
SingleThreadExecutor --- pool size fixed at 1, unbounded task queue
Advantage: suited to scenarios where the logic requires tasks to be processed serially by a single thread; at the same time the unbounded LinkedBlockingQueue guarantees that new tasks can always be enqueued and are never rejected.
Disadvantage: same as FixedThreadPool; if processing waits indefinitely, memory problems may occur.
CachedThreadPool --- effectively unbounded pool size (Integer.MAX_VALUE), direct hand-off queue
The core pool size is 0 and the work queue is a SynchronousQueue, which stores nothing: each submission is handed directly to a thread. The maximum pool size is Integer.MAX_VALUE, which can be considered unlimited, and the keep-alive time is 60 seconds, meaning a thread with no work exits after 60 seconds. A submitted task is therefore assigned to a thread immediately (an idle thread if one exists, otherwise a newly created one), and the pool automatically grows and shrinks with the workload. The risk is that if task execution times grow without bound, far too many threads are created.
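The claim that all three are just ThreadPoolExecutor with different parameters can be checked directly, since newFixedThreadPool and newCachedThreadPool return a ThreadPoolExecutor that can be cast and inspected (class and method names below are ours for illustration):

```java
import java.util.Arrays;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class FactoryDemo {
    // newFixedThreadPool(n): corePoolSize == maximumPoolSize == n,
    // backed by an unbounded LinkedBlockingQueue.
    static int[] fixedPoolParams(int n) {
        ThreadPoolExecutor p = (ThreadPoolExecutor) Executors.newFixedThreadPool(n);
        int[] params = { p.getCorePoolSize(), p.getMaximumPoolSize() };
        p.shutdown();
        return params;
    }

    // newCachedThreadPool(): core 0, max Integer.MAX_VALUE, 60 s keep-alive,
    // backed by a SynchronousQueue (direct hand-off, no storage).
    static long cachedPoolKeepAliveSeconds() {
        ThreadPoolExecutor p = (ThreadPoolExecutor) Executors.newCachedThreadPool();
        long keepAlive = p.getKeepAliveTime(TimeUnit.SECONDS);
        p.shutdown();
        return keepAlive;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(fixedPoolParams(5))); // [5, 5]
        System.out.println(cachedPoolKeepAliveSeconds());        // 60
        // Note: newSingleThreadExecutor() wraps its pool in a delegate,
        // so the same cast would fail for it.
    }
}
```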

Summary
Use FixedThreadPool when every submitted task must be executed.
Use SingleThreadExecutor when tasks must be processed by exactly one thread.
Use CachedThreadPool when each submitted task should be assigned a thread as quickly as possible.
If the business can tolerate a task failing, or a long-running task could hold up other work, use a pool with a bounded thread count and a bounded queue so that overload is contained.
For high concurrency with short tasks, consider newCachedThreadPool; for low concurrency with long tasks, consider newFixedThreadPool or newSingleThreadExecutor (the factories differ only in the parameters they pass). For high concurrency with long tasks, consider a fully asynchronous design instead.
There are three common queuing strategies:
1. Direct hand-off: a good default work queue is SynchronousQueue, which hands tasks straight to threads without otherwise holding them. If no thread is immediately available to run a task, the attempt to enqueue it fails, so a new thread is constructed. This policy avoids lockups when handling sets of requests that might have internal dependencies. Direct hand-offs generally require an unbounded maximumPoolSize (Integer.MAX_VALUE) to avoid rejecting new tasks.
2. Unbounded queue: using an unbounded queue (such as a LinkedBlockingQueue without a capacity) causes new tasks to wait in the queue whenever all corePoolSize threads are busy. No more than corePoolSize threads are ever created (so the value of maximumPoolSize has no effect). This is appropriate when each task is completely independent of the others, so that task executions cannot affect one another.
3. Bounded queue: with a finite maximumPoolSize, a bounded queue (such as an ArrayBlockingQueue) helps prevent resource exhaustion but can be harder to tune and control. Queue size and pool size must be traded off against each other: a large queue with a small pool minimizes CPU usage, OS resources, and context-switching overhead, but can lead to artificially low throughput. If tasks frequently block (for example, if they are I/O bound), the system may be able to schedule time for more threads than you allow. A small queue generally requires a larger pool, which keeps CPUs busier but may incur unacceptable scheduling overhead, which also decreases throughput.
With this much theory in hand, we can look at how corePoolSize, maximumPoolSize, and the choice of BlockingQueue interact in practice.
Example 1: the direct hand-off strategy, i.e. SynchronousQueue.
First, a SynchronousQueue has no internal capacity: it cannot store tasks at all. An insert succeeds only once another thread removes the element. The queue itself creates no threads, core or otherwise, but consider the following scenario.
We construct a ThreadPoolExecutor with the following parameters:
new ThreadPoolExecutor(
        2, 3, 30, TimeUnit.SECONDS,
        new SynchronousQueue<Runnable>(),
        new RecorderThreadFactory("CookieRecorderPool"),
        new ThreadPoolExecutor.CallerRunsPolicy());

Suppose both core threads are busy running tasks.
1. A new task (A) arrives. Since the running thread count has reached corePoolSize, the executor "always prefers adding the request to the queue rather than adding a new thread". But a SynchronousQueue accepts an element only if a consumer is already waiting to take it, and no thread is waiting, so the offer fails.
2. Because "if a request cannot be queued, a new thread is created unless this would exceed maximumPoolSize", a third thread is created to run A.
3. Now suppose none of the three tasks has finished and another task (B) arrives. Queuing fails again, and the thread count has already reached maximumPoolSize, so the rejection policy must be applied.
Therefore a SynchronousQueue usually requires maximumPoolSize to be unbounded, precisely to avoid this situation (if you want to cap the thread count, use a bounded queue instead). The role of SynchronousQueue is clearly stated in the JDK documentation: this policy avoids lockups when handling sets of requests that might have internal dependencies.
What does that mean? If tasks A1 and A2 are internally related and A1 must run first, submit A1 before A2. With a SynchronousQueue, A1 is guaranteed to be handed to a thread before A2 can even be accepted; A2 cannot sit in a queue ahead of, or alongside, an unstarted A1.
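The "zero capacity" behavior that drives all of this is easy to demonstrate in isolation (class name ours):

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.TimeUnit;

public class HandoffDemo {
    public static void main(String[] args) throws InterruptedException {
        SynchronousQueue<String> q = new SynchronousQueue<>();
        // offer() succeeds only if a consumer is already waiting in take();
        // with nobody waiting it fails immediately -- the queue stores nothing.
        System.out.println(q.offer("task")); // false
        System.out.println(q.size());        // 0, always

        // With a waiting consumer, the hand-off succeeds.
        Thread consumer = new Thread(() -> {
            try { System.out.println(q.take()); } catch (InterruptedException ignored) {}
        });
        consumer.start();
        // Retry the timed offer until the consumer is parked in take().
        while (!q.offer("task", 100, TimeUnit.MILLISECONDS)) { /* retry */ }
        consumer.join();
    }
}
```

This failed offer() is exactly what ThreadPoolExecutor interprets as "the request cannot be queued", triggering thread creation.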
Example 2: the unbounded queue strategy, i.e. LinkedBlockingQueue.
Take newFixedThreadPool and apply the rules mentioned above:
If fewer than corePoolSize threads are running, the executor always prefers adding a new thread rather than queuing.
What happens as tasks keep arriving?
If corePoolSize or more threads are running, the executor always prefers queuing the request rather than adding a new thread.
OK, tasks now go into the queue. When would a new thread be added?
Only "if the request cannot be queued". This is the interesting part: unlike a SynchronousQueue, an unbounded queue can always accept another element (short of exhausting memory, which is a separate matter). In other words, the "queue full" branch is never reached, and no thread beyond corePoolSize is ever created. The corePoolSize threads run continuously, taking the next task from the queue as each finishes its current one. The danger is a workload where tasks run long, or arrive far faster than they are processed: the queue grows without bound, and if each task holds significant memory, the process can blow up in short order.
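The never-exceeds-corePoolSize behavior can be verified with getLargestPoolSize() (class name and the 2/100/50 figures are ours for illustration):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class UnboundedQueueDemo {
    // With an unbounded LinkedBlockingQueue, the pool never grows past
    // corePoolSize no matter how large maximumPoolSize is: offer() always
    // succeeds, so the "queue full -> add thread" branch is never reached.
    static int largestPoolSize(int core, int max, int tasks) throws InterruptedException {
        CountDownLatch release = new CountDownLatch(1);
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                core, max, 30, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
        for (int i = 0; i < tasks; i++) {
            // Every task blocks, so the backlog has nowhere to go but the queue.
            pool.execute(() -> {
                try { release.await(); } catch (InterruptedException ignored) {}
            });
        }
        int largest = pool.getLargestPoolSize();
        release.countDown();
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return largest;
    }

    public static void main(String[] args) throws Exception {
        // maximumPoolSize of 100 is effectively ignored: only 2 threads exist.
        System.out.println(largestPoolSize(2, 100, 50)); // 2
    }
}
```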
Example 3: a bounded queue, using ArrayBlockingQueue.
This is the configuration the JDK documentation describes as the hardest to tune. Compared with the previous two, its biggest advantage is that it prevents resource exhaustion.
new ThreadPoolExecutor(
        2, 4, 30, TimeUnit.SECONDS,
        new ArrayBlockingQueue<Runnable>(2),
        new RecorderThreadFactory("CookieRecorderPool"),
        new ThreadPoolExecutor.CallerRunsPolicy());
Assume none of the tasks ever finishes.
Tasks A and B arrive first and run directly on the two core threads. If C and D arrive next, they are placed in the queue. If E and F then arrive, two more threads are added to run them (reaching maximumPoolSize = 4). If yet another task arrives, the queue cannot accept it and the thread count is at its maximum, so the rejection policy handles it.
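The A..F walkthrough can be reproduced and inspected mid-flight (class name ours; parameters match the constructor above):

```java
import java.util.Arrays;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedQueueDemo {
    // Pool (core 2, max 4), ArrayBlockingQueue(2), CallerRunsPolicy.
    // Returns {poolSize, queuedTasks, seventhTaskRanOnCaller (1/0)}.
    static int[] snapshot() throws InterruptedException {
        CountDownLatch release = new CountDownLatch(1);
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 30, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(2),
                new ThreadPoolExecutor.CallerRunsPolicy());
        Runnable blocker = () -> {
            try { release.await(); } catch (InterruptedException ignored) {}
        };
        for (int i = 0; i < 6; i++) pool.execute(blocker); // A..F
        int poolSize = pool.getPoolSize();     // 4: A,B on core threads; E,F on extras
        int queued = pool.getQueue().size();   // 2: C,D waiting
        // A 7th task is rejected; CallerRunsPolicy executes it on *this* thread.
        String[] ranOn = new String[1];
        pool.execute(() -> ranOn[0] = Thread.currentThread().getName());
        int callerRan = Thread.currentThread().getName().equals(ranOn[0]) ? 1 : 0;
        release.countDown();
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return new int[] { poolSize, queued, callerRan };
    }

    public static void main(String[] args) throws Exception {
        System.out.println(Arrays.toString(snapshot())); // [4, 2, 1]
    }
}
```

With CallerRunsPolicy the seventh task is not lost; it simply runs on the submitting thread, which naturally throttles submission.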
Summary:
1. Using ThreadPoolExecutor well takes real care.
2. An unbounded queue can exhaust system resources.
3. A bounded queue may not deliver the performance you want; the thread count and queue size must be tuned together.
4. Threads themselves carry overhead, so the count must be adjusted per application.
keepAliveTime:
The JDK explains it as: when the number of threads is greater than the core, this is the maximum time that excess idle threads will wait for new tasks before terminating.
In other words, the extra workers beyond the core are "borrowed", and, as the saying goes, what is borrowed must be returned. The question is when. If a borrowed worker were returned the instant it finished a task, only to find another task waiting and have to be borrowed again, the constant borrowing and returning would itself wear everyone out.
The sensible strategy: once borrowed, keep the worker around for a while, and return it only after a period in which it has had nothing to do. That period is exactly keepAliveTime, and TimeUnit is the unit in which keepAliveTime is measured.
RejectedExecutionHandler
The other case: even with all the extra workers borrowed, tasks keep arriving and the queue can no longer accept them.
The RejectedExecutionHandler interface provides the opportunity to customize how rejected tasks are handled. ThreadPoolExecutor ships with four policies; their source code is simple enough to quote directly.
CallerRunsPolicy: the thread that called execute() runs the task itself. This policy provides a simple feedback-control mechanism that slows down the rate at which new tasks are submitted.
public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
    if (!e.isShutdown()) {
        r.run();
    }
}
This policy clearly does not want to lose the task; since the pool has no resources left, the thread calling execute() simply runs it directly.
AbortPolicy: the rejected task causes an unchecked RejectedExecutionException to be thrown.
public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
    throw new RejectedExecutionException();
}
This policy throws an exception directly, and the task is dropped.
DiscardPolicy: the task that cannot be executed is silently dropped.
public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
}
This policy has nearly the same effect as AbortPolicy - the task is also dropped - but no exception is thrown.
DiscardOldestPolicy: if the executor has not been shut down, the task at the head of the work queue is removed and the new task is retried (repeating if it fails again).
public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
    if (!e.isShutdown()) {
        e.getQueue().poll();
        e.execute(r);
    }
}
This policy is a little more elaborate: if the pool has not been shut down, it first discards the oldest task buffered in the queue and then tries to run the new task. Use it with care.
Imagine the pool saturated while tasks keep pouring in: each new submission evicts the oldest queued task, which may itself have evicted an even older one; under sustained overload, queued tasks can be dropped one after another without ever executing.
Summary:
The type of keepAliveTime is related to maximumPoolSize and BlockingQueue. If BlockingQueue is unbounded, maximumPoolSize will never be triggered, and keepAliveTime is meaningless.
Conversely, if the number of cores is small, the number of bounded BlockingQueue is small, and The keepAliveTime is set to a small value, if the task is frequent, the system will frequently request to recycle the thread.
