Java concurrent programming: thread pool usage (ThreadPoolExecutor)


Reprint source: http://www.cnblogs.com/dolphin0520/p/3932921.html

In the previous article, we created a new thread whenever we needed to run a task. That is easy to implement, but there is a problem:

If there are many concurrent tasks and each task is short-lived, creating threads frequently greatly reduces the efficiency of the system, because creating and destroying threads both take time.

So, is there a way to make threads reusable, so that a thread that has finished one task is not destroyed but goes on to execute other tasks?

In Java, this can be achieved with a thread pool. Today we will explain the Java thread pool in detail: we start with the core ThreadPoolExecutor class, then analyze how it is implemented, give usage examples, and finally discuss how to configure the size of the thread pool reasonably.

The following is a table of contents outline for this article:

I. The ThreadPoolExecutor class in Java

II. In-depth analysis of the thread pool implementation

III. Usage examples

IV. How to configure the size of the thread pool reasonably

If there are any mistakes, please forgive me; criticism and corrections are welcome.

Please respect the author's work; when reproducing, please cite the original link:

http://www.cnblogs.com/dolphin0520/p/3932921.html

I. The ThreadPoolExecutor class in Java

The java.util.concurrent.ThreadPoolExecutor class is the core class of the thread pool facility, so to understand thread pools in Java thoroughly, you must first understand this class. Let's look at the concrete implementation in the ThreadPoolExecutor source code.

The ThreadPoolExecutor class provides four constructors:

public class ThreadPoolExecutor extends AbstractExecutorService {
    .....
    public ThreadPoolExecutor(int corePoolSize, int maximumPoolSize, long keepAliveTime, TimeUnit unit,
            BlockingQueue<Runnable> workQueue);

    public ThreadPoolExecutor(int corePoolSize, int maximumPoolSize, long keepAliveTime, TimeUnit unit,
            BlockingQueue<Runnable> workQueue, ThreadFactory threadFactory);

    public ThreadPoolExecutor(int corePoolSize, int maximumPoolSize, long keepAliveTime, TimeUnit unit,
            BlockingQueue<Runnable> workQueue, RejectedExecutionHandler handler);

    public ThreadPoolExecutor(int corePoolSize, int maximumPoolSize, long keepAliveTime, TimeUnit unit,
            BlockingQueue<Runnable> workQueue, ThreadFactory threadFactory, RejectedExecutionHandler handler);
    ...
}

As you can see from the code above, ThreadPoolExecutor extends AbstractExecutorService and provides four constructors. In fact, if you look at the source of each constructor, you will find that the first three all delegate to the fourth one to do the initialization.

The following explains the meaning of each parameter in the constructor.

corePoolSize: the size of the core pool. This parameter is closely related to the thread pool implementation principle described later. By default, after a thread pool is created there are no threads in it; the pool waits for tasks to arrive and then creates threads to execute them, unless the prestartAllCoreThreads() or prestartCoreThread() method is called, which, as the names of these two methods suggest, pre-creates corePoolSize threads or one thread before any task arrives. So by default, after the pool is created, the number of threads in it is 0; when a task arrives, a thread is created to execute it, and once the number of threads in the pool reaches corePoolSize, newly arriving tasks are placed in the cache queue.

maximumPoolSize: the maximum number of threads in the pool. This is also a very important parameter; it indicates the maximum number of threads that can be created in the pool.

keepAliveTime: how long a thread is kept alive without executing tasks before it terminates. By default, keepAliveTime only takes effect when the number of threads in the pool is greater than corePoolSize: if a thread has been idle for keepAliveTime, it terminates, until the number of threads in the pool is no longer greater than corePoolSize. However, if the allowCoreThreadTimeOut(boolean) method is called, keepAliveTime also takes effect when the number of threads is not greater than corePoolSize, until the number of threads in the pool drops to 0.

unit: the time unit of the keepAliveTime parameter. There are 7 possible values, corresponding to 7 static constants of the TimeUnit class:

TimeUnit.DAYS;               // days
TimeUnit.HOURS;              // hours
TimeUnit.MINUTES;            // minutes
TimeUnit.SECONDS;            // seconds
TimeUnit.MILLISECONDS;       // milliseconds
TimeUnit.MICROSECONDS;       // microseconds
TimeUnit.NANOSECONDS;        // nanoseconds
workQueue: a blocking queue used to store the tasks waiting to be executed. The choice of this parameter is also important; it has a significant impact on how the thread pool runs. Generally speaking, the blocking queue here has the following choices:

ArrayBlockingQueue;
LinkedBlockingQueue;
SynchronousQueue;

ArrayBlockingQueue and PriorityBlockingQueue are used less often; generally LinkedBlockingQueue and SynchronousQueue are used. The thread pool's queuing strategy is tied to the BlockingQueue chosen.

threadFactory: the thread factory, mainly used to create threads.

handler: the strategy for rejecting tasks when they cannot be handled. It has the following four values (a short construction sketch follows this list):

ThreadPoolExecutor.AbortPolicy: discards the task and throws a RejectedExecutionException.
ThreadPoolExecutor.DiscardPolicy: also discards the task, but does not throw an exception.
ThreadPoolExecutor.DiscardOldestPolicy: discards the task at the head of the queue and then tries to submit the rejected task again (repeating this process).
ThreadPoolExecutor.CallerRunsPolicy: the task is executed by the calling thread itself.
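
To make the parameters concrete, here is a minimal construction sketch; the class name and all parameter values are illustrative choices, not taken from the original article:

import java.util.concurrent.*;

public class PoolConstructionSketch {
    public static void main(String[] args) {
        // Illustrative values: a core pool of 5 threads, at most 10 threads,
        // idle non-core threads terminated after 200 ms, and a bounded cache queue of 5 tasks.
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                5,                                    // corePoolSize
                10,                                   // maximumPoolSize
                200,                                  // keepAliveTime
                TimeUnit.MILLISECONDS,                // unit
                new ArrayBlockingQueue<Runnable>(5),  // workQueue
                Executors.defaultThreadFactory(),     // threadFactory
                new ThreadPoolExecutor.AbortPolicy()  // handler (rejection policy)
        );
        executor.shutdown();
    }
}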

How the configuration of these parameters relates to the behavior of the thread pool will be described in the next section.

From the ThreadPoolExecutor code given above we know that ThreadPoolExecutor extends AbstractExecutorService. Let's take a look at the implementation of AbstractExecutorService:

public abstract class AbstractExecutorService implements ExecutorService {
    protected <T> RunnableFuture<T> newTaskFor(Runnable runnable, T value) { };
    protected <T> RunnableFuture<T> newTaskFor(Callable<T> callable) { };
    public Future<?> submit(Runnable task) { };
    public <T> Future<T> submit(Runnable task, T result) { };
    public <T> Future<T> submit(Callable<T> task) { };
    private <T> T doInvokeAny(Collection<? extends Callable<T>> tasks,
                              boolean timed, long nanos)
        throws InterruptedException, ExecutionException, TimeoutException { };
    public <T> T invokeAny(Collection<? extends Callable<T>> tasks)
        throws InterruptedException, ExecutionException { };
    public <T> T invokeAny(Collection<? extends Callable<T>> tasks,
                           long timeout, TimeUnit unit)
        throws InterruptedException, ExecutionException, TimeoutException { };
    public <T> List<Future<T>> invokeAll(Collection<? extends Callable<T>> tasks)
        throws InterruptedException { };
    public <T> List<Future<T>> invokeAll(Collection<? extends Callable<T>> tasks,
                                         long timeout, TimeUnit unit)
        throws InterruptedException { };
}

AbstractExecutorService is an abstract class that implements the ExecutorService interface.

Next, let's look at the declaration of the ExecutorService interface:

public interface ExecutorService extends Executor {
    void shutdown();
    boolean isShutdown();
    boolean isTerminated();
    boolean awaitTermination(long timeout, TimeUnit unit) throws InterruptedException;
    <T> Future<T> submit(Callable<T> task);
    <T> Future<T> submit(Runnable task, T result);
    Future<?> submit(Runnable task);
    <T> List<Future<T>> invokeAll(Collection<? extends Callable<T>> tasks)
        throws InterruptedException;
    <T> List<Future<T>> invokeAll(Collection<? extends Callable<T>> tasks,
                                  long timeout, TimeUnit unit)
        throws InterruptedException;
    <T> T invokeAny(Collection<? extends Callable<T>> tasks)
        throws InterruptedException, ExecutionException;
    <T> T invokeAny(Collection<? extends Callable<T>> tasks, long timeout, TimeUnit unit)
        throws InterruptedException, ExecutionException, TimeoutException;
}

ExecutorService, in turn, extends the Executor interface. Let's look at the Executor interface:

public interface Executor {
    void execute(Runnable command);
}

At this point, we should understand the relationship between ThreadPoolExecutor, AbstractExecutorService, ExecutorService and Executor.

Executor is the top-level interface. It declares only one method, execute(Runnable), whose return type is void and whose parameter is of type Runnable; as the name suggests, it is used to execute a submitted task.

The ExecutorService interface extends the Executor interface and declares a number of methods: submit, invokeAll, invokeAny, shutdown, and so on.

The abstract class AbstractExecutorService implements the ExecutorService interface and provides implementations for the general-purpose methods declared in ExecutorService (the submit/invokeAll/invokeAny family).

Finally, ThreadPoolExecutor extends the class AbstractExecutorService.
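
Because of this hierarchy, a pool created as a ThreadPoolExecutor can be used through any of its supertypes. A minimal sketch (the pool configuration here is illustrative):

import java.util.concurrent.*;

public class HierarchySketch {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());

        Executor asExecutor = pool;        // exposes only execute(Runnable)
        ExecutorService asService = pool;  // adds submit, invokeAll, invokeAny, shutdown, ...

        asExecutor.execute(new Runnable() {
            public void run() { System.out.println("run via the Executor interface"); }
        });
        asService.shutdown();
    }
}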

There are several very important methods in the ThreadPoolExecutor class:

execute()
submit()
shutdown()
shutdownNow()

The execute() method is actually declared in Executor and implemented in ThreadPoolExecutor. It is the core method of ThreadPoolExecutor: through it, a task is submitted to the thread pool for execution.

The submit() method is declared in ExecutorService, has a concrete implementation in AbstractExecutorService, and is not overridden in ThreadPoolExecutor. It is also used to submit a task to the thread pool, but unlike execute(), it can return the result of the task's execution. If you look at the implementation of submit(), you will find that it actually calls execute() as well; it merely uses a Future to obtain the task's result (Future will be covered in the next article).
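
As a minimal usage sketch of the difference (the task and pool configuration are illustrative):

import java.util.concurrent.*;

public class SubmitSketch {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = new ThreadPoolExecutor(
                2, 2, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>());

        // execute(): fire-and-forget, no result is returned.
        pool.execute(new Runnable() {
            public void run() { System.out.println("executed"); }
        });

        // submit(): returns a Future from which the result can be retrieved.
        Future<Integer> future = pool.submit(new Callable<Integer>() {
            public Integer call() { return 1 + 1; }
        });
        System.out.println("result = " + future.get());   // blocks until the task finishes, prints 2

        pool.shutdown();
    }
}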

shutdown() and shutdownNow() are used to shut down the thread pool.

In addition to these, there are many other methods:

For example getQueue(), getPoolSize(), getActiveCount(), getCompletedTaskCount() and other methods for obtaining properties of the thread pool; interested readers can consult the API documentation.

II. In-depth analysis of the thread pool implementation

In the previous section we introduced ThreadPoolExecutor from a macroscopic point of view. Below we explain in depth how the thread pool is actually implemented, looking at the following aspects:

1. Thread pool state

2. Task execution

3. Thread initialization in the thread pool

4. The task cache queue and the queuing strategy

5. The task rejection strategy

6. Shutting down the thread pool

7. Dynamic adjustment of the thread pool capacity

1. Thread pool state

ThreadPoolExecutor defines a volatile variable for the current state of the pool, along with several static final variables representing each of its possible states:

volatile int runState;
static final int RUNNING    = 0;
static final int SHUTDOWN   = 1;
static final int STOP       = 2;
static final int TERMINATED = 3;

runState represents the state of the current thread pool; it is a volatile variable so that its value is visible across threads;

the static final variables below are the possible values that runState can take.

When the thread pool is created, it is initially in the RUNNING state;

If the shutdown() method is called, the pool enters the SHUTDOWN state; it can no longer accept new tasks, but it waits for all already-submitted tasks to finish;

If the shutdownNow() method is called, the pool enters the STOP state; it can no longer accept new tasks and will attempt to terminate the tasks currently being executed;

When the pool is in the SHUTDOWN or STOP state, all worker threads have been destroyed, and the task cache queue has been emptied or execution has finished, the pool is set to the TERMINATED state.
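
As a rough illustration of these transitions from the outside (the pool configuration is illustrative; runState itself is internal and cannot be read directly):

import java.util.concurrent.*;

public class ShutdownSketch {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(2);   // pool starts in the RUNNING state

        pool.execute(new Runnable() {
            public void run() { System.out.println("task running"); }
        });

        pool.shutdown();        // -> SHUTDOWN: no new tasks accepted, queued tasks still run
        // pool.shutdownNow();  // -> STOP: would also try to interrupt running tasks

        pool.awaitTermination(1, TimeUnit.SECONDS);
        System.out.println("terminated? " + pool.isTerminated());  // true once all tasks have finished
    }
}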

2. Task execution

Before analyzing the whole process a task goes through from being submitted to the thread pool to finally being executed, let's look at some other important member variables of the ThreadPoolExecutor class:

private final BlockingQueue<Runnable> workQueue;              // task cache queue, holds tasks waiting to be executed
private final ReentrantLock mainLock = new ReentrantLock();   // main state lock of the pool; changes to pool state (poolSize, runState, etc.) must hold this lock
private final HashSet<Worker> workers = new HashSet<Worker>();// the set of worker threads

private volatile long keepAliveTime;                          // thread keep-alive time
private volatile boolean allowCoreThreadTimeOut;              // whether core threads are allowed an idle timeout
private volatile int corePoolSize;                            // core pool size (once the number of threads exceeds this, submitted tasks go to the task cache queue)
private volatile int maximumPoolSize;                         // maximum number of threads the pool can tolerate
private volatile int poolSize;                                // current number of threads in the pool
private volatile RejectedExecutionHandler handler;            // task rejection policy
private volatile ThreadFactory threadFactory;                 // thread factory, used to create threads
private int largestPoolSize;                                  // records the largest number of threads that has ever existed in the pool
private long completedTaskCount;                              // records the number of tasks that have been completed

The role of each variable is noted in the comments above; here we focus on three of them: corePoolSize, maximumPoolSize and largestPoolSize.

corePoolSize is translated in many places as "core pool size", but as I understand it, it is effectively the size of the thread pool. Take a simple example:

Suppose there is a factory with 10 workers, and each worker can only do one task at a time.

As long as some of the 10 workers are idle, arriving tasks are assigned to the idle workers;

When all 10 workers have a task to do, new tasks have to queue up and wait;

If the number of new tasks grows much faster than the workers can complete them, the factory supervisor may take remedial action, for example recruiting 4 temporary workers;

Tasks are then also assigned to these 4 temporary workers;

If even 14 workers cannot keep up with the tasks, the factory supervisor may consider no longer accepting new tasks or abandoning some of the earlier ones.

When some of the 14 workers become idle and new tasks arrive slowly, the factory supervisor may consider letting the 4 temporary workers go and keeping only the original 10; after all, extra workers have to be paid.

In this example, corePoolSize is 10 and maximumPoolSize is 14 (10 + 4).

In other words, corePoolSize is the size of the thread pool, and maximumPoolSize, as I see it, is a remedial measure for the thread pool when the task volume suddenly becomes too large.

For ease of understanding, however, corePoolSize is translated as "core pool size" in the rest of this article.

largestPoolSize is just a variable that records the largest number of threads that has ever existed in the pool; it has nothing to do with the pool's capacity.
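
To make the analogy concrete, here is a small sketch (the sizes, the sleep and the class name are illustrative, not from the original article); extra threads beyond the core pool are only created once the cache queue is full:

import java.util.concurrent.*;

public class PoolSizingSketch {
    public static void main(String[] args) {
        // 10 "regular workers", up to 4 extra "temporary workers", a queue holding 5 waiting tasks.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                10, 14, 1, TimeUnit.SECONDS, new ArrayBlockingQueue<Runnable>(5));

        for (int i = 0; i < 19; i++) {   // 10 running + 5 queued + 4 handled by extra threads
            pool.execute(new Runnable() {
                public void run() {
                    try { Thread.sleep(500); } catch (InterruptedException e) { }
                }
            });
            System.out.println("poolSize=" + pool.getPoolSize()
                    + " queued=" + pool.getQueue().size());
        }
        // A 20th task submitted right now would be rejected (AbortPolicy by default).
        pool.shutdown();
    }
}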

Now let's get down to business and see what process a task goes through from submission to final execution.

In the ThreadPoolExecutor class, the core task-submission method is execute(). Although a task can also be submitted through submit(), submit() ultimately calls execute(), so we only need to study the implementation of execute():

public void execute(Runnable command) {
    if (command == null)
        throw new NullPointerException();
    if (poolSize >= corePoolSize || !addIfUnderCorePoolSize(command)) {
        if (runState == RUNNING && workQueue.offer(command)) {
            if (runState != RUNNING || poolSize == 0)
                ensureQueuedTaskHandled(command);
        }
        else if (!addIfUnderMaximumPoolSize(command))
            reject(command); // is shutdown or saturated
    }
}

The code above may not look so easy to understand, so let's explain it sentence by sentence:

First, it checks whether the submitted task command is null; if it is null, a NullPointerException is thrown;

Then comes this line, which needs some care to understand:

if (poolSize >= corePoolSize || !addIfUnderCorePoolSize(command))

Since || short-circuits, the left-hand side is evaluated first: if the current number of threads in the pool is not less than the core pool size, the condition is already true and execution goes directly into the if block that follows.

If the current number of threads in the pool is less than the core pool size, the right-hand side is evaluated, i.e. it executes

addIfUnderCorePoolSize(command)

If addIfUnderCorePoolSize() returns false, execution continues into the if block below; otherwise the whole method simply returns.

So, if addIfUnderCorePoolSize() returns false, the next check is:

if (runState == RUNNING && workQueue.offer(command))

If the current thread pool is in the RUNNING state, the task is put into the task cache queue; if the pool is not in the RUNNING state, or putting the task into the cache queue fails, it executes:

addIfUnderMaximumPoolSize(command)

If addIfUnderMaximumPoolSize also fails, the reject() method is executed to perform task rejection.

Going back to this earlier line:

if (runState == RUNNING && workQueue.offer(command))

If this condition holds, i.e. the current thread pool is in the RUNNING state and the task was placed into the task cache queue successfully, then the following check is made:

if (runState != RUNNING || poolSize == 0)

This is an emergency measure against the case where another thread shuts the pool down (by suddenly calling shutdown or shutdownNow) while the task is being added to the task cache queue. If that happens, it executes:

ensureQueuedTaskHandled(command)

to handle the emergency; as the name suggests, it ensures that the task added to the task cache queue still gets handled.

Next, let's look at the implementation of two key methods, addIfUnderCorePoolSize and addIfUnderMaximumPoolSize:

private boolean addIfUnderCorePoolSize(Runnable firstTask) {
    Thread t = null;
    final ReentrantLock mainLock = this.mainLock;
    mainLock.lock();
    try {
        if (poolSize < corePoolSize && runState == RUNNING)
            t = addThread(firstTask);   // create a thread to execute firstTask
    } finally {
        mainLock.unlock();
    }
    if (t == null)
        return false;
    t.start();
    return true;
}

This is the concrete implementation of addIfUnderCorePoolSize; as its name suggests, it is executed when the pool is below the core pool size. Looking at the implementation: it first acquires the lock, because this part involves changes to the thread pool state, and then uses an if statement to check whether the current number of threads in the pool is less than the core pool size. Some readers may wonder: didn't the execute() method already check this, since addIfUnderCorePoolSize is only called when the number of threads is less than the core pool size? Why check again here? The reason is simple: the earlier check was made without holding the lock, so execute() may have observed poolSize < corePoolSize, but after that check other threads may have submitted tasks to the pool, so poolSize may no longer be less than corePoolSize; therefore the check must be repeated here. It also checks that the thread pool state is RUNNING, again because shutdown or shutdownNow may have been called from another thread in the meantime. Then comes:

t = addThread(firstTask);

This method is also critical: its parameter is the submitted task and its return type is Thread. The return value t is then checked against null; null means thread creation failed (that is, poolSize >= corePoolSize or runState is not RUNNING); otherwise t.start() is called to start the thread.

Let's take a look at the implementation of the addThread method:

private Thread addThread(Runnable firstTask) {
    Worker w = new Worker(firstTask);
    Thread t = threadFactory.newThread(w);  // create a thread that executes the task
    if (t != null) {
        w.thread = t;            // assign the reference of the newly created thread to w's member variable
        workers.add(w);
        int nt = ++poolSize;     // current thread count plus 1
        if (nt > largestPoolSize)
            largestPoolSize = nt;
    }
    return t;
}

In the addThread method, a Worker object is first created from the submitted task, then the thread factory threadFactory is asked to create a new thread t, and the reference to that thread is assigned to the Worker object's member variable thread. The Worker object is then added to the working set via workers.add(w).

Now let's look at the implementation of the Worker class:

private final class Worker implements Runnable {
    private final ReentrantLock runLock = new ReentrantLock();
    private Runnable firstTask;
    volatile long completedTasks;
    Thread thread;

    Worker(Runnable firstTask) {
        this.firstTask = firstTask;
    }

    boolean isActive() {
        return runLock.isLocked();
    }

    void interruptIfIdle() {
        final ReentrantLock runLock = this.runLock;
        if (runLock.tryLock()) {
            try {
                if (thread != Thread.currentThread())
                    thread.interrupt();
            } finally {
                runLock.unlock();
            }
        }
    }

    void interruptNow() {
        thread.interrupt();
    }

    private void runTask(Runnable task) {
        final ReentrantLock runLock = this.runLock;
        runLock.lock();
        try {
            if (runState < STOP && Thread.interrupted() && runState >= STOP)
                thread.interrupt();
            boolean ran = false;
            beforeExecute(thread, task);   // beforeExecute is a ThreadPoolExecutor method with an empty default
                                           // implementation; users can override it (and afterExecute below) to
                                           // gather statistics such as how long a task takes to execute
            try {
                task.run();
                ran = true;
                afterExecute(task, null);
                ++completedTasks;
            } catch (RuntimeException ex) {
                if (!ran)
                    afterExecute(task, ex);
                throw ex;
            }
        } finally {
            runLock.unlock();
        }
    }

    public void run() {
        try {
            Runnable task = firstTask;
            firstTask = null;
            while (task != null || (task = getTask()) != null) {
                runTask(task);
                task = null;
            }
        } finally {
            workerDone(this);   // clean up when there are no more tasks in the task queue
        }
    }
}

It actually implements the Runnable interface, so Thread t = threadFactory.newThread(w); above has essentially the same effect as the following line:

Thread t = new Thread(w);

In effect, a Runnable task is passed in and that Runnable is executed in thread t.
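
As an aside, the default thread factory does little more than wrap the Runnable in a new Thread. A minimal custom factory (the class name and thread-name prefix here are hypothetical) that does the same while giving threads readable names could look like this:

import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

// Roughly equivalent in spirit to "new Thread(w)", but with numbered thread names.
class NamedThreadFactory implements ThreadFactory {
    private final AtomicInteger count = new AtomicInteger(1);

    public Thread newThread(Runnable r) {
        Thread t = new Thread(r, "pool-worker-" + count.getAndIncrement());
        t.setDaemon(false);   // keep worker threads non-daemon, as the default factory does
        return t;
    }
}

Such a factory would be passed in through the threadFactory constructor parameter described earlier.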

Since Worker implements the Runnable interface, its core method is naturally the run() method:

public void run() {
    try {
        Runnable task = firstTask;
        firstTask = null;
        while (task != null || (task = getTask()) != null) {
            runTask(task);
            task = null;
        }
    } finally {
        workerDone(this);
    }
}

As can be seen from the implementation of run(), it first executes the task firstTask passed in through the constructor. After runTask() has executed firstTask, the while loop keeps fetching new tasks via getTask() and executing them. And where are they fetched from? Naturally from the task cache queue. getTask is a method of the ThreadPoolExecutor class, not of the Worker class; here is its implementation:

Runnable getTask() {
    for (;;) {
        try {
            int state = runState;
            if (state > SHUTDOWN)
                return null;
            Runnable r;
            if (state == SHUTDOWN)  // help drain queue
                r = workQueue.poll();
            else if (poolSize > corePoolSize || allowCoreThreadTimeOut)
                // if the number of threads is greater than the core pool size, or idle timeout is allowed
                // for core pool threads, fetch the task with poll(); if no task arrives within keepAliveTime,
                // null is returned
                r = workQueue.poll(keepAliveTime, TimeUnit.NANOSECONDS);
            else
                r = workQueue.take();
            if (r != null)
                return r;
            if (workerCanExit()) {   // no task was fetched (r is null), so check whether the current worker can exit
                if (runState >= SHUTDOWN)   // wake up others
                    interruptIdleWorkers();
                return null;   // the idle worker exits
            }
            // else retry
        } catch (InterruptedException ie) {
            // on interruption, re-check runState
        }
    }
}

In getTask, the current thread pool state is checked first; if runState is greater than SHUTDOWN (i.e. STOP or TERMINATED), null is returned directly.

If runState is SHUTDOWN or RUNNING, a task is fetched from the task cache queue.

If the number of threads in the current pool is greater than the core pool size corePoolSize, or idle timeout is allowed for threads in the core pool, then poll(time, timeUnit) is used to fetch the task; this call waits a certain amount of time and returns null if no task could be fetched.
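
The difference between the two queue calls is what makes idle threads time out. A tiny stand-alone sketch (the queue and timeout values are illustrative):

import java.util.concurrent.*;

public class PollVsTakeSketch {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Runnable> queue = new LinkedBlockingQueue<Runnable>();

        // poll(timeout, unit): waits at most the given time, then gives up and returns null.
        // This is what lets an idle (non-core) worker thread terminate.
        Runnable r = queue.poll(200, TimeUnit.MILLISECONDS);
        System.out.println("poll returned: " + r);   // prints null, nothing was queued

        // take(): would block indefinitely until a task appears, so a core worker
        // (without allowCoreThreadTimeOut) never times out.
        // Runnable blocking = queue.take();
    }
}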

It then checks whether the fetched task r is null; if it is, the workerCanExit() method is called to decide whether the current worker may exit. Let's look at the implementation of workerCanExit():

private boolean workerCanExit() {
    final ReentrantLock mainLock = this.mainLock;
    mainLock.lock();
    boolean canExit;
    // the worker can exit if runState is greater than or equal to STOP, or the task cache queue is empty,
    // or idle timeout is allowed for core pool threads and the number of threads in the pool is greater than 1
    try {
        canExit = runState >= STOP ||
            workQueue.isEmpty() ||
            (allowCoreThreadTimeOut &&
             poolSize > Math.max(1, corePoolSize));
    } finally {
        mainLock.unlock();
    }
    return canExit;
}

That is, if the thread pool is in the STOP state (or beyond), or the task queue is empty, or idle timeout is allowed for core pool threads and the number of threads is greater than 1, the worker is allowed to exit. If the worker is allowed to exit, interruptIdleWorkers() is called to interrupt the idle workers; let's look at its implementation:

void interruptIdleWorkers() {
    final ReentrantLock mainLock = this.mainLock;
    mainLock.lock();
    try {
        for (Worker w : workers)  // actually calls each Worker's interruptIfIdle() method
            w.interruptIfIdle();
    } finally {
        mainLock.unlock();
    }
}

As you can see from the implementation, it actually calls each Worker's interruptIfIdle() method. In the Worker's interruptIfIdle() method:

void interruptIfIdle() {
    final ReentrantLock runLock = this.runLock;
    if (runLock.tryLock()) {
        try {
            if (thread != Thread.currentThread())
                thread.interrupt();
        } finally {
            runLock.unlock();
        }
    }
}
