ThreadPoolExecutor usage and thinking (part 1): three implementation differences between thread pool size settings and BlockingQueue choices

Source: Internet
Author: User

ThreadPoolExecutor shows up in all kinds of multi-task work. Let's take a closer look at it.

 

Note:

 

  1. The JDK official documentation (javadoc) is the best and most authoritative reference for learning.
  2. This article is split into parts. This first part covers the meaning of, and differences between, ThreadPoolExecutor's task-related constructor parameters: the pool size parameters corePoolSize and maximumPoolSize, and the choice of BlockingQueue (SynchronousQueue, LinkedBlockingQueue, ArrayBlockingQueue). The middle part will cover topics related to the keepAliveTime parameter, and the final part will introduce some rarely used APIs along with a close relative: ScheduledThreadPoolExecutor.
  3. If you spot an error, please point it out directly.

 

 

Looking at the JDK help documentation, we can see that this class is relatively simple: it extends AbstractExecutorService, which in turn implements the ExecutorService interface.

 

The signature of the full ThreadPoolExecutor constructor is:

 

ThreadPoolExecutor(int corePoolSize,
                   int maximumPoolSize,
                   long keepAliveTime,
                   TimeUnit unit,
                   BlockingQueue<Runnable> workQueue,
                   ThreadFactory threadFactory,
                   RejectedExecutionHandler handler)
 

 

Keep this signature in mind; each parameter is explained below.
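To make the parameter list concrete, here is a minimal sketch that constructs a pool through this full constructor. The specific values below are arbitrary illustrations, not recommendations:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class Main {
    public static void main(String[] args) throws Exception {
        // All seven parameters spelled out, in signature order.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                    // corePoolSize
                4,                                    // maximumPoolSize
                30L, TimeUnit.SECONDS,                // keepAliveTime + unit
                new ArrayBlockingQueue<Runnable>(10), // workQueue
                Executors.defaultThreadFactory(),     // threadFactory
                new ThreadPoolExecutor.AbortPolicy()  // handler (the default)
        );
        int core = pool.getCorePoolSize();
        int max = pool.getMaximumPoolSize();
        System.out.println("core=" + core + " max=" + max);
        pool.shutdown();
    }
}
```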

 

==============================================================

 

There are actually plenty of explanations of the ThreadPoolExecutor constructor on the Internet, most of them quite good, but I'd like to take a different route and start from the Executors class. The names of its factory methods make the characteristics of each pool easy to understand, and the underlying implementation of the Executors class is ThreadPoolExecutor!

 

ThreadPoolExecutor is the underlying implementation of the Executors class.

 

The JDK help documentation says:

"It is strongly recommended that programmers use the more convenient Executors factory methods Executors.newCachedThreadPool() (an unbounded thread pool with automatic thread reclamation), Executors.newFixedThreadPool(int) (a fixed-size thread pool), and Executors.newSingleThreadExecutor() (a single background thread), which preconfigure settings for the most common usage scenarios."

 

From this we can infer that ThreadPoolExecutor and the Executors class are closely related.

 

==============================================================

 

 

OK, let's look at the source code, starting with newFixedThreadPool.

 

ExecutorService newFixedThreadPool(int nThreads): a fixed-size thread pool.

 

As you can see, corePoolSize and maximumPoolSize are the same (in fact, as we will see later, maximumPoolSize is meaningless when an unbounded queue is used). And what do the values of keepAliveTime and unit tell us? That this implementation does not need keep-alive at all! Finally, the BlockingQueue chosen is LinkedBlockingQueue, whose defining characteristic is that it is unbounded.

 

Java code:

public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads,
                                  0L, TimeUnit.MILLISECONDS,
                                  new LinkedBlockingQueue<Runnable>());
}
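As a quick usage sketch (the squaring tasks here are made up purely for illustration), a fixed pool happily accepts more tasks than it has threads and simply works through them:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class Main {
    public static void main(String[] args) throws Exception {
        // 3 threads, but 5 tasks: the extras wait in the unbounded queue.
        ExecutorService pool = Executors.newFixedThreadPool(3);
        List<Future<Integer>> futures = new ArrayList<>();
        for (int i = 0; i < 5; i++) {
            final int n = i;
            futures.add(pool.submit(() -> n * n)); // Callable<Integer>
        }
        int sum = 0;
        for (Future<Integer> f : futures) sum += f.get(); // blocks until each completes
        System.out.println("sum=" + sum); // 0 + 1 + 4 + 9 + 16
        pool.shutdown();
    }
}
```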

 

ExecutorService newSingleThreadExecutor(): a single thread.

 

As you can see, it is much like newFixedThreadPool, with the thread count parameter simply fixed at 1.

 

 

Java code:

public static ExecutorService newSingleThreadExecutor() {
    return new FinalizableDelegatedExecutorService
        (new ThreadPoolExecutor(1, 1,
                                0L, TimeUnit.MILLISECONDS,
                                new LinkedBlockingQueue<Runnable>()));
}

 

 

ExecutorService newCachedThreadPool(): an unbounded thread pool with automatic thread reclamation.

 

This implementation is interesting. First, it is an unbounded thread pool, so we find that maximumPoolSize is huge (Integer.MAX_VALUE). Second, the BlockingQueue chosen is SynchronousQueue, which may be less familiar. Simply put, in this queue every insert operation must wait for the corresponding remove operation by another thread, and vice versa: if you try to add an element, you block until another thread takes one. (Sound familiar? It is the producer-consumer pattern with no buffer at all. ^_^)

Did you notice the "automatic thread reclamation" in the description above? How is that achieved? We won't explain it yet; just notice that in this implementation corePoolSize and maximumPoolSize differ.

 

public static ExecutorService newCachedThreadPool() {
    return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                  60L, TimeUnit.SECONDS,
                                  new SynchronousQueue<Runnable>());
}
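The handoff behavior of SynchronousQueue can be seen in isolation with a small sketch: a plain offer fails when no thread is waiting to take, and a (timed) offer succeeds once a consumer is blocked in take():

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.TimeUnit;

public class Main {
    public static void main(String[] args) throws Exception {
        SynchronousQueue<String> q = new SynchronousQueue<>();

        // No thread is waiting to take, so a non-blocking offer fails:
        boolean noConsumer = q.offer("task");

        // Start a consumer that blocks in take(); now the handoff can complete.
        Thread consumer = new Thread(() -> {
            try {
                System.out.println("consumer got " + q.take());
            } catch (InterruptedException ignored) { }
        });
        consumer.start();
        boolean withConsumer = q.offer("task", 5, TimeUnit.SECONDS);
        consumer.join();
        System.out.println("noConsumer=" + noConsumer + " withConsumer=" + withConsumer);
    }
}
```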

 

==============================================================

 

If you still have a pile of questions at this point, that's only natural (unless you already know all this well).

 

Let's start with the parameter BlockingQueue<Runnable> workQueue. The JDK documentation makes it clear that there are three queuing strategies. Quoting (with slight edits and emphasis added):

 

 

Any BlockingQueue may be used to transfer and hold submitted tasks. The queue interacts with pool sizing as follows:

  • If fewer than corePoolSize threads are running, the Executor always prefers adding a new thread rather than queuing. (What does this mean? If the number of running threads is below corePoolSize, the task is not put into the queue; instead a new worker thread is started to run it directly.)
  • If corePoolSize or more threads are running, the Executor always prefers queuing the request rather than adding a new thread.
  • If the request cannot be queued, a new thread is created, unless creating it would exceed maximumPoolSize, in which case the task is rejected.

Next, you need to know the three kinds of queue.

There are three common queuing policies:

  1. Direct handoffs. The default work queue choice is SynchronousQueue, which hands tasks directly to threads without otherwise holding them. If no thread is immediately available to run a task, the attempt to queue it fails, so a new thread is constructed. This policy avoids lockups when handling sets of requests that might have internal dependencies. Direct handoffs generally require an unbounded maximumPoolSize to avoid rejection of newly submitted tasks; this in turn admits the possibility of unbounded thread growth when commands keep arriving faster, on average, than they can be processed.
  2. Unbounded queues. Using an unbounded queue (for example LinkedBlockingQueue) causes new tasks to wait in the queue whenever all corePoolSize threads are busy. Thus no more than corePoolSize threads are ever created (and the value of maximumPoolSize therefore has no effect). This may be appropriate when each task is completely independent of the others, so tasks cannot affect each other's execution; for example, in a web page server. This style of queuing is useful for smoothing out transient bursts of requests, but it admits the possibility of unbounded queue growth when commands keep arriving faster, on average, than they can be processed.
  3. Bounded queues. A bounded queue (for example ArrayBlockingQueue) helps prevent resource exhaustion when used with a finite maximumPoolSize, but can be more difficult to tune and control. Queue size and pool size may be traded off against each other: large queues with small pools minimize CPU usage, OS resources, and context-switching overhead, but can lead to artificially low throughput. If tasks frequently block (for example, if they are I/O bound), the system may be able to schedule more threads than you would otherwise allow. Small queues generally require larger pool sizes, which keeps CPUs busier but may incur unacceptable scheduling overhead, which also decreases throughput.
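The unbounded-queue rule above can be checked with a short sketch: with a LinkedBlockingQueue, a pool configured as (corePoolSize=2, maximumPoolSize=4) never grows past 2 threads no matter how many tasks are waiting. The latch-blocked tasks are just a device to keep the workers busy:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class Main {
    public static void main(String[] args) throws Exception {
        CountDownLatch gate = new CountDownLatch(1);
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 30L, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>()); // unbounded queue
        for (int i = 0; i < 10; i++) {
            pool.execute(() -> {
                try { gate.await(); } catch (InterruptedException ignored) { }
            });
        }
        int poolSize = pool.getPoolSize();   // only the 2 core threads exist
        int queued = pool.getQueue().size(); // everything else waits in the queue
        System.out.println("poolSize=" + poolSize + " queued=" + queued);
        gate.countDown();
        pool.shutdown();
    }
}
```

Note that maximumPoolSize=4 never comes into play: since the queue always accepts another task, the third rule (grow toward the maximum) is never triggered.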

 

==============================================================

 

At this point we have enough theory in hand. The knobs we can adjust are corePoolSize and maximumPoolSize, together with the choice of BlockingQueue.

 

Example 1: the direct handoff policy, i.e., SynchronousQueue.

 

First, SynchronousQueue is unbounded in the sense that there is no limit on how many tasks it can accept for handoff over time. However, because of how it works, it stores nothing: after an element is inserted, another thread must remove it before the next insert can complete. Leaving core threads and new-thread creation aside for a moment, imagine the following scenario.

 

We construct a ThreadPoolExecutor with the following parameters (RecorderThreadFactory here is a custom ThreadFactory):

new ThreadPoolExecutor(2, 3, 30, TimeUnit.SECONDS,
                       new SynchronousQueue<Runnable>(),
                       new RecorderThreadFactory("CookieRecorderPool"),
                       new ThreadPoolExecutor.CallerRunsPolicy());

 

Suppose the two core threads are both busy running tasks.

 

  1. A new task (A) arrives. Per the rules quoted earlier, "if corePoolSize or more threads are running, the Executor always prefers queuing the request rather than adding a new thread", so the Executor tries to put A into the queue.
  2. But the queue is a SynchronousQueue, and both workers are busy rather than waiting on it, so A cannot actually be enqueued.
  3. Now the third rule applies: "if the request cannot be queued, a new thread is created, unless this would exceed maximumPoolSize, in which case the task is rejected". So a new (third) thread is created to run A.
  4. Suppose none of the three running tasks has finished and yet another task (B) arrives. The handoff fails again, and the thread count has already reached maximumPoolSize, so the rejection policy has to handle B.
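This scenario can be reproduced in code. The sketch below uses the same sizes (2 core, 3 max, SynchronousQueue) but the default AbortPolicy rather than CallerRunsPolicy, so the rejection shows up as a RejectedExecutionException; the latch-blocked tasks stand in for long-running work:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class Main {
    public static void main(String[] args) throws Exception {
        CountDownLatch gate = new CountDownLatch(1);
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 3, 30L, TimeUnit.SECONDS,
                new SynchronousQueue<Runnable>()); // default AbortPolicy
        Runnable blocker = () -> {
            try { gate.await(); } catch (InterruptedException ignored) { }
        };
        pool.execute(blocker); // worker 1 (below corePoolSize)
        pool.execute(blocker); // worker 2 (below corePoolSize)
        pool.execute(blocker); // handoff fails -> worker 3 (up to the max)
        int poolSize = pool.getPoolSize();
        boolean rejected = false;
        try {
            pool.execute(blocker); // handoff fails, pool at max -> rejected
        } catch (RejectedExecutionException e) {
            rejected = true;
        }
        System.out.println("poolSize=" + poolSize + " rejected=" + rejected);
        gate.countDown();
        pool.shutdown();
    }
}
```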
Therefore, SynchronousQueue usually requires maximumPoolSize to be unbounded, which avoids the situation above (if you do want to limit concurrency, use a bounded queue instead). The JDK states the role of SynchronousQueue plainly: this policy avoids lockups when handling sets of requests that might have internal dependencies. What does that mean? If tasks A1 and A2 have an internal dependency and A1 must run first, then submit A1 before A2. With a SynchronousQueue, A1 is guaranteed to be handed to a thread first; until A1 has been handed off, A2 cannot even enter the queue.

Example 2: the unbounded queue policy, i.e., LinkedBlockingQueue. Take newFixedThreadPool as the example. According to the rules quoted earlier, if fewer than corePoolSize threads are running, the Executor always prefers adding a new thread rather than queuing. But what happens once all core threads are busy and tasks keep arriving? The documentation says:

 

If corePoolSize or more threads are running, the Executor always prefers queuing the request rather than adding a new thread.

OK, so now tasks go into the queue. When are new threads added?

 

It also says: if the request cannot be queued, a new thread is created, unless this would exceed maximumPoolSize, in which case the task is rejected.

Here is the interesting part: can a task ever fail to join the queue? Unlike SynchronousQueue, an unbounded queue can always accept another element (barring resource exhaustion, which is a different discussion). In other words, it never triggers the creation of threads beyond corePoolSize! The corePoolSize threads keep running, and whenever one becomes free it takes the next task from the queue. So you must prevent the queue from growing without bound: if tasks run long and are submitted far faster than they are processed, or if each queued task holds a lot of memory, the application will blow up in short order.

 

Think about it.

 

Example 3: the bounded queue policy, i.e., ArrayBlockingQueue.

 

This is the most complex configuration to use well, which is why the JDK does not particularly recommend it. Compared with the policies above, its biggest advantage is preventing resource exhaustion.

 

For example, see the following constructor:

 

new ThreadPoolExecutor(2, 4, 30, TimeUnit.SECONDS,
                       new ArrayBlockingQueue<Runnable>(2),
                       new RecorderThreadFactory("CookieRecorderPool"),
                       new ThreadPoolExecutor.CallerRunsPolicy());

 

 

Assume that none of the submitted tasks ever finishes.

 

The first two tasks, A and B, run directly on the core threads. Next, C and D arrive and are placed in the queue. When E and F arrive, new threads are added to run them, bringing the pool to its maximum of 4. If one more task arrives now, the queue cannot accept it and the thread count has reached maximumPoolSize, so the rejection policy handles it.
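This walkthrough can also be reproduced. The sketch below uses the same sizes (2 core, 4 max, queue capacity 2) but the default AbortPolicy rather than CallerRunsPolicy, so the rejection of the seventh task is observable as an exception; the latch keeps all six tasks running forever, as the walkthrough assumes:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class Main {
    public static void main(String[] args) throws Exception {
        CountDownLatch gate = new CountDownLatch(1);
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 30L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(2)); // default AbortPolicy
        Runnable blocker = () -> {
            try { gate.await(); } catch (InterruptedException ignored) { }
        };
        // A, B go to core threads; C, D fill the queue; E, F force extra threads.
        for (int i = 0; i < 6; i++) pool.execute(blocker);
        int poolSize = pool.getPoolSize();
        int queued = pool.getQueue().size();
        boolean rejected = false;
        try {
            pool.execute(blocker); // G: queue full, pool at max -> rejected
        } catch (RejectedExecutionException e) {
            rejected = true;
        }
        System.out.println("poolSize=" + poolSize
                + " queued=" + queued + " rejected=" + rejected);
        gate.countDown();
        pool.shutdown();
    }
}
```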

 

Summary:

  1. There is real subtlety in configuring ThreadPoolExecutor.
  2. Using an unbounded queue may exhaust system resources.
  3. Using a bounded queue may not meet performance needs well; you have to tune the thread count and the queue size together.

  4. Threads themselves have overhead, so the pool needs to be tuned per application.

Broadly, workloads can be classified as:

  1. Many tasks, each with a short execution time.
  2. Few tasks, each with a long execution time.
  3. Many tasks with long execution times.
  4. Any of the above, plus internal dependencies between tasks.

I hope that after reading this, you can select the appropriate configuration.

 
