ThreadPoolExecutor usage and thoughts (Part 1): thread pool size settings and the differences between the three BlockingQueue implementations


ThreadPoolExecutor comes up in many places at work. Taking advantage of some free time, I studied it and wrote up this summary.

Preliminary notes:

    1. The official JDK documentation (Javadoc) is the best and most authoritative reference for learning this class.
    2. The article is divided into three parts (upper, middle, lower). This upper part mainly covers how ThreadPoolExecutor accepts tasks, the meaning of and difference between the pool size parameters corePoolSize and maximumPoolSize, and the choice of BlockingQueue (SynchronousQueue, LinkedBlockingQueue, ArrayBlockingQueue). The middle part mainly discusses the keepAliveTime-related parameters. The lower part describes some less commonly used APIs of this class and its close relative, ScheduledThreadPoolExecutor.
    3. If you spot any mistakes in my understanding, please point them out directly.

Looking at the JDK documentation, you can see that the class hierarchy is fairly simple: ThreadPoolExecutor extends AbstractExecutorService, and AbstractExecutorService implements the ExecutorService interface.

The signature of the full ThreadPoolExecutor constructor is:

ThreadPoolExecutor(int corePoolSize, int maximumPoolSize, long keepAliveTime, TimeUnit unit, BlockingQueue<Runnable> workQueue, ThreadFactory threadFactory, RejectedExecutionHandler handler)
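To make the parameters easier to keep apart, here is a minimal sketch of my own (not from the original article; the values are arbitrary) with each argument labelled:

Java code

    import java.util.concurrent.*;

    public class PoolConstructionSketch {
        public static void main(String[] args) {
            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                    4,                                     // corePoolSize: threads kept even when idle
                    8,                                     // maximumPoolSize: upper bound on the number of threads
                    60L, TimeUnit.SECONDS,                 // keepAliveTime + unit: how long excess idle threads survive
                    new ArrayBlockingQueue<Runnable>(100), // workQueue: holds tasks before they are executed
                    Executors.defaultThreadFactory(),      // threadFactory: creates the worker threads
                    new ThreadPoolExecutor.AbortPolicy()); // handler: what to do with rejected tasks
            pool.shutdown();
        }
    }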

Just remember it for now; each parameter is explained step by step below.

=============================== Magic Split Line ==================================

There are plenty of explanations of ThreadPoolExecutor's constructor online, and most of them are quite good, but I want to take a different approach and start from the Executors class, because the names of its factory methods make it easy to understand what each pool's characteristics are. And underneath, the Executors factory methods are all implemented with ThreadPoolExecutor.

The JDK help documentation contains this passage:

"It is strongly recommended that programmers use a more convenient Executors factory approach Executors.newCachedThreadPool() (no thread pool, automatic thread recycling), Executors.newFixedThreadPool(int) (fixed-size thread pool), and Executors.newSingleThreadExecutor() (a single background thread), which have predefined settings for most usage scenarios. "

From this alone you can tell that ThreadPoolExecutor and the Executors class are closely related.
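As a quick, hedged illustration of how these factory methods are typically used (the task body here is just a placeholder):

Java code

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class FactoryMethodsSketch {
        public static void main(String[] args) {
            ExecutorService fixed  = Executors.newFixedThreadPool(4);     // at most 4 worker threads
            ExecutorService single = Executors.newSingleThreadExecutor(); // exactly one worker thread
            ExecutorService cached = Executors.newCachedThreadPool();     // grows and shrinks on demand

            for (int i = 0; i < 10; i++) {
                final int id = i;
                fixed.submit(() -> System.out.println("task " + id + " on " + Thread.currentThread().getName()));
            }

            fixed.shutdown();
            single.shutdown();
            cached.shutdown();
        }
    }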

=============================== Magic Split Line ==================================

OK, let's look at the source code then, starting with newFixedThreadPool.

ExecutorService newFixedThreadPool(int nThreads): a fixed-size thread pool.

As you can see, corePoolSize and maximumPoolSize are the same (in fact, as explained later, the maximumPoolSize parameter is meaningless when an unbounded queue is used). And what do the values of keepAliveTime and unit indicate? That this implementation does not want to keep idle threads alive beyond the core! Finally, the chosen BlockingQueue is a LinkedBlockingQueue, whose defining characteristic is that it is unbounded.

Java code

    public static ExecutorService newFixedThreadPool(int nThreads) {
        return new ThreadPoolExecutor(nThreads, nThreads,
                                      0L, TimeUnit.MILLISECONDS,
                                      new LinkedBlockingQueue<Runnable>());
    }

ExecutorService newSingleThreadExecutor(): a single-threaded executor.

As you can see, it looks very much like the fixed thread pool, except that the size argument is simply reduced to 1.

Java code

    public static ExecutorService newSingleThreadExecutor() {
        return new FinalizableDelegatedExecutorService
            (new ThreadPoolExecutor(1, 1,
                                    0L, TimeUnit.MILLISECONDS,
                                    new LinkedBlockingQueue<Runnable>()));
    }

ExecutorService newCachedThreadPool(): an unbounded pool with automatic thread reclamation.

This implementation is interesting. First, the thread pool is unbounded: you can see that maximumPoolSize is set to Integer.MAX_VALUE. Second, the chosen BlockingQueue is a SynchronousQueue, which may be a bit unfamiliar. Simply put: in this queue, each insert operation must wait for a corresponding remove operation by another thread. For example, if I add an element, any further attempt to add will block until another thread takes an element, and vice versa. (What does that remind you of? The producer-consumer pattern with no buffer at all, a direct handoff ^_^)
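To make that handoff behaviour concrete, here is a small sketch of my own (not from the original article) showing that a non-blocking offer() on a SynchronousQueue only succeeds when another thread is already waiting to take:

Java code

    import java.util.concurrent.SynchronousQueue;

    public class SynchronousQueueSketch {
        public static void main(String[] args) throws InterruptedException {
            SynchronousQueue<String> queue = new SynchronousQueue<String>();

            // No consumer is waiting, so a non-blocking offer fails immediately.
            System.out.println("offer with no taker: " + queue.offer("A")); // prints false

            // Start a consumer that blocks in take().
            Thread consumer = new Thread(() -> {
                try {
                    System.out.println("took: " + queue.take());
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            consumer.start();
            Thread.sleep(100); // give the consumer a moment to block in take()

            // Now the handoff succeeds because a taker is waiting.
            System.out.println("offer with taker waiting: " + queue.offer("B")); // prints true
            consumer.join();
        }
    }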

Notice the "automatic thread reclamation" feature mentioned in the description. Why does it happen? I won't go into it yet (it relates to keepAliveTime, covered in the middle part); for now, just notice that corePoolSize and maximumPoolSize are different in this implementation.

Java code

    public static ExecutorService newCachedThreadPool() {
        return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                      60L, TimeUnit.SECONDS,
                                      new SynchronousQueue<Runnable>());
    }

=============================== Magic Split Line ==================================

If you have a pile of questions at this point, that's only natural (unless you already know all of this).

Let's start with the first puzzle: the BlockingQueue<Runnable> workQueue parameter. The JDK documentation makes it clear that there are three general types of queue. The following is quoted from the documentation (lightly edited, with the key points emphasized):

Any BlockingQueue may be used to transfer and hold submitted tasks. The use of this queue interacts with pool sizing:
    • If fewer than corePoolSize threads are running, the Executor always prefers adding a new thread rather than queuing. (What does that mean? If the number of currently running threads is less than corePoolSize, the new task is not put into the queue at all; a new thread is started to run it directly.)
    • If corePoolSize or more threads are running, the Executor always prefers queuing the request rather than adding a new thread.
    • If a request cannot be queued, a new thread is created, unless this would exceed maximumPoolSize, in which case the task is rejected. (A simplified code sketch of this decision flow follows below.)
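These three rules can be summed up in a simplified sketch of the decision flow. To be clear, this is not the actual JDK implementation of execute() (which also deals with races, shutdown state, and worker bookkeeping); the helper names runningThreadCount() and addWorker() are made up for illustration:

Java code

    // Simplified pseudologic only; NOT the real ThreadPoolExecutor.execute() source.
    void execute(Runnable task) {
        if (runningThreadCount() < corePoolSize) {
            addWorker(task);                        // rule 1: below corePoolSize, start a new thread
        } else if (workQueue.offer(task)) {
            // rule 2: at or above corePoolSize, prefer queuing the task
        } else if (runningThreadCount() < maximumPoolSize) {
            addWorker(task);                        // rule 3: queue refused the task, add a thread if allowed
        } else {
            handler.rejectedExecution(task, this);  // otherwise the task is rejected
        }
    }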
Don't worry about concrete examples yet, because first you need to know the three general types of queue. There are three general strategies for queuing:
    1. Direct handoffs. A good default choice for a work queue is a SynchronousQueue, which hands tasks off to threads without otherwise holding them. Here, if no thread is immediately available to run a task, the attempt to queue it fails, so a new thread is constructed. This policy avoids lockups when handling sets of requests that might have internal dependencies. Direct handoffs generally require an unbounded maximumPoolSize to avoid rejecting newly submitted tasks. This in turn admits the possibility of unbounded thread growth when commands continue to arrive, on average, faster than they can be processed.
    2. Unbounded queues. Using an unbounded queue (for example a LinkedBlockingQueue without a predefined capacity) causes new tasks to wait in the queue whenever all corePoolSize threads are busy. Thus, no more than corePoolSize threads will ever be created. (And the value of maximumPoolSize therefore has no effect.) This may be appropriate when each task is completely independent of the others, so tasks cannot affect each other's execution; for example, in a web page server. While this style of queuing can be useful for smoothing out transient bursts of requests, it admits the possibility of unbounded work-queue growth when commands continue to arrive, on average, faster than they can be processed.
    3. Bounded queues. A bounded queue (for example an ArrayBlockingQueue) helps prevent resource exhaustion when used with a finite maximumPoolSize, but can be more difficult to tune and control. Queue size and maximum pool size may be traded off against each other: using large queues and small pools minimizes CPU usage, OS resources, and context-switching overhead, but can lead to artificially low throughput. If tasks frequently block (for example, if they are I/O bound), the system may be able to schedule time for more threads than you otherwise allow. Using small queues generally requires larger pool sizes, which keeps CPUs busier but may encounter unacceptable scheduling overhead, which also decreases throughput.

=============================== Magic Split Line ==================================

With that much theory in hand, we can move on. The knobs we can adjust are the corePoolSize and maximumPoolSize parameters, plus the choice of BlockingQueue.

Example one: using the direct handoff policy, i.e. SynchronousQueue.

First, note that a SynchronousQueue does not really hold tasks at all: because of the nature of this queue, after one element is added, another thread must remove it before a further add can proceed; it is a handoff point rather than a buffer. The thread that takes a task may be a core thread or a newly created one; either way, let's walk through the following scenario.

Suppose we construct the ThreadPoolExecutor with these parameters:

Java code

    new ThreadPoolExecutor(
            2, 3, 30, TimeUnit.SECONDS,
            new SynchronousQueue<Runnable>(),
            new RecorderThreadFactory("CookieRecorderPool"),
            new ThreadPoolExecutor.CallerRunsPolicy());

Suppose the 2 core threads are already busy running tasks.

    1. Another task (A) arrives. According to the rule "if corePoolSize or more threads are running, the Executor always prefers queuing the request rather than adding a new thread", the executor first tries to put A into the queue.
    2. But because a SynchronousQueue is used and no idle thread is waiting to take from it, A cannot actually be queued.
    3. This triggers the rule "if a request cannot be queued, a new thread is created, unless this would exceed maximumPoolSize", so a new (third) thread must be created to run A.
    4. Now suppose these three tasks are still not finished and another task (B) arrives. Queuing fails again, and the thread count has already reached maximumPoolSize, so the rejection policy has to be executed. (A runnable sketch of this scenario follows the list.)
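Here is a runnable sketch of this scenario, assuming long-running sleep tasks in place of real work and the default thread factory in place of the article's RecorderThreadFactory (which is not shown in the article):

Java code

    import java.util.concurrent.SynchronousQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class SynchronousQueuePoolSketch {
        public static void main(String[] args) throws InterruptedException {
            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                    2, 3, 30, TimeUnit.SECONDS,
                    new SynchronousQueue<Runnable>(),
                    new ThreadPoolExecutor.CallerRunsPolicy());

            // Tasks 1-3 each get their own pool thread (2 core + 1 extra).
            // Task 4 cannot be handed off and the pool is at maximumPoolSize,
            // so CallerRunsPolicy runs it on the submitting (main) thread.
            for (int i = 1; i <= 4; i++) {
                final int id = i;
                pool.execute(() -> {
                    System.out.println("task " + id + " on " + Thread.currentThread().getName());
                    try { Thread.sleep(2000); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
                });
            }

            pool.shutdown();
            pool.awaitTermination(10, TimeUnit.SECONDS);
        }
    }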
Therefore, using a SynchronousQueue usually requires maximumPoolSize to be unbounded, so that the situation above can be avoided (if you do want a hard limit, use a bounded queue instead). As for what SynchronousQueue is good for, the JDK states it clearly: "This policy avoids lockups when handling sets of requests that might have internal dependencies." What does that mean? If your tasks A1 and A2 have an internal dependency, with A1 needing to run first, then submit A1 first and A2 second. With a SynchronousQueue we can be sure that A1 is executed first; it is impossible for A1 to still be sitting in a queue while A2 is already running.

Example two: using the unbounded queue policy, i.e. LinkedBlockingQueue.

Take newFixedThreadPool as the example here. According to the rules quoted earlier: if fewer than corePoolSize threads are running, the Executor always prefers adding a new thread rather than queuing. So what happens when tasks keep coming in? The documentation says: if corePoolSize or more threads are running, the Executor always prefers queuing the request rather than adding a new thread.

OK, so now the tasks keep entering the queue. When does a new thread get added?

The documentation says: if a request cannot be queued, a new thread is created, unless this would exceed maximumPoolSize, in which case the task is rejected.

Here's the interesting part: can the queue ever refuse a request? Unlike SynchronousQueue with its special behaviour, an unbounded queue can always accept another element (barring resource exhaustion, of course). In other words, the condition for creating a new thread is never triggered! Only the corePoolSize threads ever run, pulling tasks from the queue as they finish. So you have to guard against a task backlog: if tasks take a long time to execute and are submitted much faster than they can be processed, and submissions keep coming, the queue grows without bound, and if each task holds a fair amount of memory, it blows up very quickly, heh.
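A small sketch of that backlog effect (the task count and sleep time are made-up values, just for illustration): with a fixed pool of 2 threads and slow tasks, everything beyond the first couple of submissions simply accumulates in the unbounded LinkedBlockingQueue.

Java code

    import java.util.concurrent.Executors;
    import java.util.concurrent.ThreadPoolExecutor;

    public class UnboundedQueueSketch {
        public static void main(String[] args) throws InterruptedException {
            // newFixedThreadPool(2) is a ThreadPoolExecutor backed by an unbounded LinkedBlockingQueue.
            ThreadPoolExecutor pool = (ThreadPoolExecutor) Executors.newFixedThreadPool(2);

            for (int i = 0; i < 1000; i++) {
                pool.execute(() -> {
                    try { Thread.sleep(1000); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
                });
            }

            Thread.sleep(200);
            System.out.println("pool size  = " + pool.getPoolSize());      // expected: 2, never more
            System.out.println("queue size = " + pool.getQueue().size());  // roughly 998 tasks waiting

            pool.shutdownNow(); // interrupt the sleeping tasks and discard the backlog
        }
    }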

Worth thinking about, isn't it?

Example three: using the bounded queue policy, i.e. ArrayBlockingQueue.

This is the trickiest to use, which is part of the reason the JDK does not particularly recommend it. Compared with the above, its most important benefit is preventing resource exhaustion.

For example, consider the following constructor call:

Java code

    new ThreadPoolExecutor(
            2, 4, 30, TimeUnit.SECONDS,
            new ArrayBlockingQueue<Runnable>(2),
            new RecorderThreadFactory("CookieRecorderPool"),
            new ThreadPoolExecutor.CallerRunsPolicy());

Assume, for the sake of argument, that none of the tasks ever finishes.

The first two tasks, A and B, are run directly on new core threads. If C and D arrive next, they are put into the queue. If E and F arrive after that, the queue is full, so two more threads are created to run E and F. However, if yet another task arrives, the queue can no longer accept it and the thread count has reached its maximum, so the rejection policy is used to handle it.
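Here is a runnable sketch of that walkthrough; the tasks sleep for a few seconds so that "never done" holds long enough to observe, and the default thread factory again stands in for RecorderThreadFactory:

Java code

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class BoundedQueueSketch {
        public static void main(String[] args) {
            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                    2, 4, 30, TimeUnit.SECONDS,
                    new ArrayBlockingQueue<Runnable>(2),
                    new ThreadPoolExecutor.CallerRunsPolicy());

            // A, B -> core threads; C, D -> queue; E, F -> extra threads (up to the max of 4);
            // G -> queue full and pool at maximumPoolSize, so CallerRunsPolicy runs it on main.
            for (char name = 'A'; name <= 'G'; name++) {
                final char task = name;
                pool.execute(() -> {
                    System.out.println("task " + task + " on " + Thread.currentThread().getName());
                    try { Thread.sleep(5000); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
                });
            }

            pool.shutdown();
        }
    }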

Summary:

    1. Using ThreadPoolExecutor well still takes some skill.
    2. Using an unbounded queue may exhaust system resources.
    3. Using a bounded queue may not satisfy performance on its own; the thread count and queue size have to be tuned together.
    4. Threads themselves have a cost, so the configuration needs to be adjusted for each specific application.

Generally speaking, tasks can be categorized as:

    1. Very numerous, but with very short execution times
    2. Few in number, but with long execution times
    3. Numerous, and with long execution times
    4. Besides the above characteristics, tasks that have internal dependencies on each other
After reading this article, I hope you will be able to choose the right configuration for each of these task types. Article source: http://dongxuan.iteye.com/blog/901689

