Queues in the thread pool


ThreadPoolExecutor in detail

Constructor: ThreadPoolExecutor(int corePoolSize, int maximumPoolSize, long keepAliveTime, TimeUnit unit, BlockingQueue<Runnable> workQueue, ThreadFactory threadFactory, RejectedExecutionHandler handler)

corePoolSize - the number of threads to keep in the pool, even if they are idle.

maximumPoolSize - the maximum number of threads allowed in the pool.

keepAliveTime - when the number of threads is greater than the core, this is the maximum time that excess idle threads will wait for new tasks before terminating.

unit - the time unit of the keepAliveTime argument.

workQueue - the queue used to hold tasks before they are executed. This queue holds only the Runnable tasks submitted by the execute method.

threadFactory - the factory the executor uses when it creates a new thread.

handler - the handler used when execution is blocked because the thread bounds and queue capacity are exceeded.

ThreadPoolExecutor is the underlying implementation behind the Executors factory methods.

Here is the source code of several of those factory methods (in the Executors class):

ExecutorService newFixedThreadPool(int nThreads): a fixed-size thread pool.

public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads,
                                  0L, TimeUnit.MILLISECONDS,
                                  new LinkedBlockingQueue<Runnable>());
}

As you can see, corePoolSize and maximumPoolSize are the same (in fact, the maximumPoolSize parameter is meaningless when an unbounded queue is used), and the keepAliveTime value of 0 indicates that idle threads are never reclaimed. Finally, the BlockingQueue chosen is LinkedBlockingQueue, whose distinguishing characteristic is that it is unbounded.

ExecutorService newSingleThreadExecutor(): a single-thread executor.

public static ExecutorService newSingleThreadExecutor() {
    return new FinalizableDelegatedExecutorService
        (new ThreadPoolExecutor(1, 1,
                                0L, TimeUnit.MILLISECONDS,
                                new LinkedBlockingQueue<Runnable>()));
}

ExecutorService newCachedThreadPool(): an unbounded pool with automatic thread reclamation.

public static ExecutorService newCachedThreadPool() {
    return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                  60L, TimeUnit.SECONDS,
                                  new SynchronousQueue<Runnable>());
}

This is an unbounded thread pool, which is why maximumPoolSize is Integer.MAX_VALUE. The BlockingQueue chosen here is SynchronousQueue, which may seem a little strange; in short, in this queue every insert operation must wait for a corresponding remove operation by another thread.
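That handoff behavior can be seen directly. Below is a minimal sketch (the class and method names are mine, not from the article): a plain offer() fails when nobody is waiting, and succeeds once a consumer is blocked in take().

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.TimeUnit;

public class SynchronousQueueDemo {
    public static String run() throws InterruptedException {
        SynchronousQueue<String> q = new SynchronousQueue<>();

        // With nobody waiting in take(), a plain offer() fails immediately:
        boolean noConsumer = q.offer("task");

        // With a consumer blocked in take(), the handoff succeeds:
        Thread consumer = new Thread(() -> {
            try { q.take(); } catch (InterruptedException ignored) {}
        });
        consumer.start();
        boolean withConsumer = q.offer("task", 5, TimeUnit.SECONDS);
        consumer.join();

        return "noConsumer=" + noConsumer + " withConsumer=" + withConsumer;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());   // noConsumer=false withConsumer=true
    }
}
```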

BlockingQueue explained

Let's start with the BlockingQueue<Runnable> workQueue parameter itself. The JDK documentation makes clear that there are three general types of queue.

Any BlockingQueue may be used to transfer and hold submitted tasks. The queue interacts with pool sizing as follows:

1. If fewer than corePoolSize threads are running, the Executor always prefers adding a new thread rather than queuing. (If fewer than corePoolSize threads are currently running, the task is not added to the queue at all; it starts running on a new thread directly.)

2. If corePoolSize or more threads are running, the Executor always prefers queuing the request rather than adding a new thread.

3. If the request cannot be queued, a new thread is created, unless doing so would exceed maximumPoolSize, in which case the task is rejected.
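The three rules can be observed with a small pool whose workers are deliberately blocked. This is a sketch; the pool parameters (1 core, 2 max, queue capacity 1) are my own choices for illustration:

```java
import java.util.concurrent.*;

public class PoolGrowthDemo {
    public static String run() throws InterruptedException {
        CountDownLatch release = new CountDownLatch(1);
        Runnable blocker = () -> {
            try { release.await(); } catch (InterruptedException ignored) {}
        };
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 2, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<Runnable>(1));

        pool.execute(blocker);   // rule 1: below corePoolSize -> new thread
        pool.execute(blocker);   // rule 2: core busy -> queued
        pool.execute(blocker);   // rule 3: queue full, below max -> second thread
        String state = "poolSize=" + pool.getPoolSize()
                     + " queued=" + pool.getQueue().size();

        boolean rejected = false;
        try {
            pool.execute(blocker);   // queue full AND at maximumPoolSize -> rejected
        } catch (RejectedExecutionException ex) {
            rejected = true;
        }

        release.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return state + " rejected=" + rejected;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());   // poolSize=2 queued=1 rejected=true
    }
}
```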

There are three general strategies for queuing:

  Direct handoffs. A good default choice for a work queue is SynchronousQueue, which hands tasks off to threads without otherwise holding them. Here, if no thread is immediately available to run a task, the attempt to queue it will fail, so a new thread is constructed. This policy avoids lockups when handling sets of requests that might have internal dependencies. Direct handoffs generally require unbounded maximumPoolSizes to avoid rejection of newly submitted tasks. This in turn admits the possibility of unbounded thread growth when commands continue to arrive on average faster than they can be processed.

  Unbounded queues. Using an unbounded queue (for example, a LinkedBlockingQueue without a predefined capacity) causes new tasks to wait in the queue whenever all corePoolSize threads are busy. Thus, no more than corePoolSize threads will ever be created (and the value of maximumPoolSize therefore has no effect). This may be appropriate when each task is completely independent of the others, so that tasks cannot affect each other's execution; for example, in a web page server. This style of queuing can be useful for smoothing out transient bursts of requests, but it admits the possibility of unbounded work queue growth when commands continue to arrive on average faster than they can be processed.

  Bounded queues. A bounded queue (for example, an ArrayBlockingQueue) helps prevent resource exhaustion when used with finite maximumPoolSizes, but can be more difficult to tune and control. Queue sizes and maximum pool sizes may be traded off against each other: using large queues and small pools minimizes CPU usage, OS resources, and context-switching overhead, but can lead to artificially low throughput. If tasks frequently block (for example, if they are I/O bound), the system may be able to schedule time for more threads than you otherwise allow. Using small queues generally requires larger pool sizes, which keeps CPUs busier but may encounter unacceptable scheduling overhead, which also decreases throughput.

Choosing a BlockingQueue

Example one: the direct handoff policy, i.e. SynchronousQueue.

First of all, note that a SynchronousQueue has no internal capacity at all: after inserting one element, you must wait for another thread to remove it before you can insert the next. (The "new thread" in the scenario below does not mean a core thread, but an extra, newly created one.) Let's imagine the following scenario.

Suppose we construct a ThreadPoolExecutor with the following parameters:

new ThreadPoolExecutor(
        2, 3,
        30, TimeUnit.SECONDS,   // the original omits the keepAliveTime value; 30 is illustrative
        new SynchronousQueue<Runnable>(),
        new RecorderThreadFactory("CookieRecorderPool"),
        new ThreadPoolExecutor.CallerRunsPolicy());

Suppose the 2 core threads are already busy running tasks.

    1. A new task (C) arrives. Per rule 2 above ("if corePoolSize or more threads are running, the Executor always prefers queuing the request rather than adding a new thread"), the executor first tries to add C to the queue.
    2. But because the queue is a SynchronousQueue and neither busy core thread is waiting on it, the insert must fail.
    3. This brings rule 3 into play: "if the request cannot be queued, a new thread is created, unless doing so would exceed maximumPoolSize, in which case the task is rejected." So a new (third) thread is created to run C.
    4. So far so good. But if those three tasks are still unfinished and more tasks arrive, none of them can be inserted into the queue, and the number of threads has already reached maximumPoolSize, so the rejection policy has to be applied.

Therefore, using a SynchronousQueue usually requires maximumPoolSize to be unbounded, so that rejection can be avoided (if you want a limit, use a bounded queue instead). As the JDK documentation puts it, SynchronousQueue "avoids lockups when handling sets of requests that might have internal dependencies."

What does that mean? If your tasks A1 and A2 are internally related and A1 needs to run first, submit A1 and then A2. With a SynchronousQueue we can guarantee that A1 is handed to a thread first; A2 cannot even enter the queue before A1 has been handed off.
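The walk-through above can be reproduced with a pool shaped like the example (2 core, 3 max). This is a sketch; the keepAliveTime value and all other details are my own illustrative choices:

```java
import java.util.concurrent.*;

public class DirectHandoffPoolDemo {
    public static String run() throws InterruptedException {
        CountDownLatch release = new CountDownLatch(1);
        Runnable blocker = () -> {
            try { release.await(); } catch (InterruptedException ignored) {}
        };
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 3, 30, TimeUnit.SECONDS,
                new SynchronousQueue<Runnable>());

        pool.execute(blocker);   // core thread 1
        pool.execute(blocker);   // core thread 2
        pool.execute(blocker);   // handoff fails -> extra thread 3
        int size = pool.getPoolSize();

        boolean rejected = false;
        try {
            pool.execute(blocker);   // handoff fails, at maximumPoolSize -> rejected
        } catch (RejectedExecutionException ex) {
            rejected = true;
        }

        release.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return "poolSize=" + size + " rejected=" + rejected;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());   // poolSize=3 rejected=true
    }
}
```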

Example two: the unbounded queue policy, i.e. LinkedBlockingQueue.

Take newFixedThreadPool as the example. According to the rules mentioned earlier:

If fewer than corePoolSize threads are running, the Executor always prefers adding a new thread rather than queuing. So what happens as tasks keep coming in?

If corePoolSize or more threads are running, the Executor always prefers queuing the request rather than adding a new thread. OK, so now tasks go into the queue; when will new threads be added?

If the request cannot be queued, a new thread is created, unless doing so would exceed maximumPoolSize, in which case the task is rejected. Here is the interesting part: can queuing ever fail? Unlike a SynchronousQueue with its special characteristics, an unbounded queue can always accept another task (barring resource exhaustion, of course). In other words, the condition that triggers creating extra threads is never met! The corePoolSize threads simply keep running, taking the next task from the queue whenever they finish one. So you have to guard against runaway task buildup: if tasks take relatively long to execute and are submitted far faster than they are processed, the queue keeps growing and memory is soon exhausted.
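This is easy to verify: with a LinkedBlockingQueue the pool never grows past corePoolSize, regardless of maximumPoolSize. The parameters in this sketch are my own, chosen for illustration:

```java
import java.util.concurrent.*;

public class UnboundedQueueDemo {
    public static String run() throws InterruptedException {
        CountDownLatch release = new CountDownLatch(1);
        Runnable blocker = () -> {
            try { release.await(); } catch (InterruptedException ignored) {}
        };
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 10, 30, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>());

        for (int i = 0; i < 6; i++) {
            pool.execute(blocker);   // 2 run, the other 4 queue; no extra threads
        }
        String state = "poolSize=" + pool.getPoolSize()
                     + " queued=" + pool.getQueue().size();

        release.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return state;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());   // poolSize=2 queued=4
    }
}
```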

Example three: a bounded queue, using ArrayBlockingQueue.

This is the most complex case to use, which is partly why the JDK does not particularly recommend it. Compared with the other strategies, its most important advantage is that it prevents resource exhaustion.

For example, consider the following construction:

new ThreadPoolExecutor(
        2, 4,
        30, TimeUnit.SECONDS,   // the original omits the keepAliveTime value; 30 is illustrative
        new ArrayBlockingQueue<Runnable>(2),
        new RecorderThreadFactory("CookieRecorderPool"),
        new ThreadPoolExecutor.CallerRunsPolicy());

Assume that none of the tasks ever finishes.

Tasks A and B run directly on the two core threads. If C and D then arrive, they are placed in the queue. If E and F arrive next, the queue is full, so two extra threads are created to run them. If yet another task arrives, the queue can accept no more and the thread count has reached its maximum, so the rejection policy is applied.
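Here is a sketch of that scenario. I use the default AbortPolicy instead of the CallerRunsPolicy from the construction above, so that the rejection shows up as an exception rather than blocking the submitting thread; the other values mirror the example:

```java
import java.util.concurrent.*;

public class BoundedQueueDemo {
    public static String run() throws InterruptedException {
        CountDownLatch release = new CountDownLatch(1);
        Runnable blocker = () -> {
            try { release.await(); } catch (InterruptedException ignored) {}
        };
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 30, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(2));

        for (int i = 0; i < 6; i++) {
            pool.execute(blocker);   // A,B run; C,D queue; E,F get extra threads
        }
        String state = "poolSize=" + pool.getPoolSize()
                     + " queued=" + pool.getQueue().size();

        boolean rejected = false;
        try {
            pool.execute(blocker);   // queue full, at maximumPoolSize -> rejected
        } catch (RejectedExecutionException ex) {
            rejected = true;
        }

        release.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return state + " rejected=" + rejected;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());   // poolSize=4 queued=2 rejected=true
    }
}
```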

keepAliveTime

The JDK's explanation: when the number of threads is greater than the core, this is the maximum time that excess idle threads will wait for new tasks before terminating.

A bit of a mouthful, but not hard to understand. Most applications that use a "pool" have a similar parameter to configure; for example, the maxIdle and minIdle parameters of the DBCP database connection pool.

What does it mean? Continuing the earlier analogy: the extra workers the boss took on are only "borrowed". The problem is when to return them. If a borrowed worker were returned the moment a task finished, and more tasks then turned up, wouldn't the boss have to go borrowing all over again? All that back and forth would wear the boss out.

The reasonable strategy: having borrowed the workers, keep them around a while longer. Only after "a certain period" with no work for them are they returned. That "certain period" is exactly what keepAliveTime means, and TimeUnit is the unit in which the keepAliveTime value is measured.
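The effect can be observed directly: after a burst forces the pool above corePoolSize, the extra thread dies once it has been idle for keepAliveTime, while the core thread survives. A sketch with parameters of my own choosing (the polling loop simply waits out the timeout):

```java
import java.util.concurrent.*;

public class KeepAliveDemo {
    public static String run() throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 2, 200, TimeUnit.MILLISECONDS,   // extra threads live 200 ms when idle
                new SynchronousQueue<Runnable>());
        CountDownLatch started = new CountDownLatch(2);
        Runnable brief = () -> {
            started.countDown();
            try { Thread.sleep(50); } catch (InterruptedException ignored) {}
        };

        pool.execute(brief);           // core thread
        pool.execute(brief);           // handoff fails -> "borrowed" second thread
        started.await();
        int during = pool.getPoolSize();   // both threads alive while tasks run

        // Wait for the idle extra thread to time out and terminate:
        for (int i = 0; i < 100 && pool.getPoolSize() > 1; i++) {
            Thread.sleep(50);
        }
        int after = pool.getPoolSize();    // only the core thread remains
        pool.shutdown();
        return "during=" + during + " after=" + after;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());   // during=2 after=1
    }
}
```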

RejectedExecutionHandler

There is one more case: even after the boss has borrowed extra workers, tasks keep arriving and the team still can't keep up, so the whole team has to start turning work away.

The RejectedExecutionHandler interface gives you the chance to customize how rejected tasks are handled. ThreadPoolExecutor already includes 4 policies by default; since the source code is very simple, it is posted directly here.

CallerRunsPolicy: the thread that invoked execute runs the task itself. This policy provides a simple feedback-control mechanism that slows down the rate at which new tasks are submitted.

public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
    if (!e.isShutdown()) {
        r.run();
    }
}

This policy clearly does not want to abandon the task. But since the pool has no resources left, the task is simply executed on the thread that called execute.
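The feedback effect can be demonstrated: when the pool is saturated, the submitting thread ends up running the rejected task itself. A sketch with a deliberately tiny pool (all sizes are mine, for illustration):

```java
import java.util.concurrent.*;

public class CallerRunsDemo {
    public static String run() throws InterruptedException {
        CountDownLatch release = new CountDownLatch(1);
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new SynchronousQueue<Runnable>(),
                new ThreadPoolExecutor.CallerRunsPolicy());

        pool.execute(() -> {             // keeps the single worker busy
            try { release.await(); } catch (InterruptedException ignored) {}
        });

        // The worker is busy and the handoff fails, so CallerRunsPolicy runs
        // this task synchronously on the current (submitting) thread:
        String[] ranOn = new String[1];
        pool.execute(() -> ranOn[0] = Thread.currentThread().getName());
        boolean ranOnCaller = Thread.currentThread().getName().equals(ranOn[0]);

        release.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return "ranOnCaller=" + ranOnCaller;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());   // ranOnCaller=true
    }
}
```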

AbortPolicy: the handler throws a runtime RejectedExecutionException upon rejection.

public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
    throw new RejectedExecutionException();
}

This policy simply throws an exception, abandoning the task.

DiscardPolicy: the task that cannot be executed is silently dropped.

public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
}

This policy abandons the task just like AbortPolicy, except that it does not throw an exception.

DiscardOldestPolicy: if the executor has not been shut down, the task at the head of the work queue is discarded and execution is then retried (repeating this process if it fails again).

public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
    if (!e.isShutdown()) {
        e.getQueue().poll();
        e.execute(r);
    }
}

This policy is a little more complicated: as long as the pool has not been shut down, it first discards the oldest task waiting in the queue and then tries to run the new task. This policy needs to be used with care.

Imagine: if all the threads stay busy, each new task kicks the oldest task out of the queue, and the next new task kicks out the one after it; queued tasks can keep getting evicted before they ever get a chance to run.
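The eviction can be seen in a tiny pool: with one busy worker and a queue of capacity 1, a second submission pushes the first queued task out, and only the newer task ever runs. A sketch (names and sizes are my own illustration):

```java
import java.util.List;
import java.util.concurrent.*;

public class DiscardOldestDemo {
    public static String run() throws InterruptedException {
        CountDownLatch release = new CountDownLatch(1);
        List<String> ran = new CopyOnWriteArrayList<>();
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<Runnable>(1),
                new ThreadPoolExecutor.DiscardOldestPolicy());

        pool.execute(() -> {                // keeps the single worker busy
            try { release.await(); } catch (InterruptedException ignored) {}
        });
        pool.execute(() -> ran.add("old")); // queued
        pool.execute(() -> ran.add("new")); // evicts "old", then queues itself

        release.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return String.join(",", ran);       // only "new" ever runs
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());   // new
    }
}
```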

Summary:

keepAliveTime is related to maximumPoolSize and to the type of BlockingQueue. If the BlockingQueue is unbounded, the maximumPoolSize limit is never reached, and keepAliveTime then naturally has no meaning.

Conversely, if the core size is small, the bounded BlockingQueue is small, and keepAliveTime is short, then under a heavy task load the system will constantly be creating and reclaiming threads.


Reference: https://www.oschina.net/question/565065_86540

