Java Multithreading -- JUC Package Source Analysis -- ThreadPoolExecutor Source Analysis (Thread Pool)

In the JUC package, the thread pool is itself made up of many components; it can be seen as an integrated application of the techniques analyzed in earlier articles. Starting with this article, we will pull that knowledge together and analyze the components of the thread pool one by one:
- Executor/Executors
- Introduction to using ThreadPoolExecutor
- ThreadPoolExecutor implementation principles
- ThreadPoolExecutor interruption and graceful shutdown: shutdown() + awaitTermination()
- A misunderstanding about shutdown

Executor/Executors

Executor is the most basic interface of the thread pool framework:

    public interface Executor {
        void execute(Runnable command);
    }
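Before moving on to Executors, it may help to see how small this contract is. The following is a minimal sketch of our own (the class name ThreadPerTaskExecutor is not part of the code analyzed in this article) that satisfies Executor by starting a new thread for every submitted task:

    import java.util.concurrent.Executor;

    // A trivial Executor: no pooling, no queue, just one new thread per submitted task.
    class ThreadPerTaskExecutor implements Executor {
        public void execute(Runnable command) {
            new Thread(command).start();
        }
    }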

Executors, on the other hand, is a utility class for the thread pool framework; it makes it easy to create thread pools with different policies:

    // Single-threaded pool: corePoolSize = maxPoolSize = 1, with a LinkedBlockingQueue as the queue
    public static ExecutorService newSingleThreadExecutor() {
        return new FinalizableDelegatedExecutorService
            (new ThreadPoolExecutor(1, 1,
                                    0L, TimeUnit.MILLISECONDS,
                                    new LinkedBlockingQueue<Runnable>()));
    }

    // Fixed-size pool: corePoolSize = maxPoolSize = nThreads, with a LinkedBlockingQueue as the queue
    public static ExecutorService newFixedThreadPool(int nThreads) {
        return new ThreadPoolExecutor(nThreads, nThreads,
                                      0L, TimeUnit.MILLISECONDS,
                                      new LinkedBlockingQueue<Runnable>());
    }

    // CachedThreadPool:
    // 1. corePoolSize = 0, the queue is a SynchronousQueue, maxPoolSize = Integer.MAX_VALUE,
    //    which means that, when no thread is free, each new task gets a new thread.
    // 2. SynchronousQueue will be analyzed separately in a later article. It is a special queue
    //    with no capacity of its own: a put blocks until another thread takes the element out.
    // 3. As the constructor arguments show, an idle thread that goes unused for 60s is reclaimed.
    public static ExecutorService newCachedThreadPool() {
        return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                      60L, TimeUnit.SECONDS,
                                      new SynchronousQueue<Runnable>());
    }

    // Single-threaded pool with periodic scheduling support
    public static ScheduledExecutorService newSingleThreadScheduledExecutor() {
        return new DelegatedScheduledExecutorService
            (new ScheduledThreadPoolExecutor(1));
    }

    // Multi-threaded pool with periodic scheduling support
    public static ScheduledExecutorService newScheduledThreadPool(int corePoolSize) {
        return new ScheduledThreadPoolExecutor(corePoolSize);
    }

    public ScheduledThreadPoolExecutor(int corePoolSize) {
        super(corePoolSize, Integer.MAX_VALUE, 0, TimeUnit.NANOSECONDS,
              new DelayedWorkQueue());
    }
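To make the factory methods concrete, here is a small usage sketch of our own (the class name FixedPoolDemo and the task bodies are ours, not taken from the JDK): create a fixed pool, submit a few tasks, then shut it down:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class FixedPoolDemo {
        public static void main(String[] args) {
            ExecutorService pool = Executors.newFixedThreadPool(4);
            for (int i = 0; i < 10; i++) {
                final int id = i;
                pool.execute(() -> System.out.println(
                        Thread.currentThread().getName() + " runs task " + id));
            }
            pool.shutdown();   // stop accepting new tasks; queued tasks still run
        }
    }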

As can be seen above, all of the factory methods in Executors are built on just two classes, ThreadPoolExecutor and ScheduledThreadPoolExecutor, which are analyzed in detail below.

ThreadPoolExecutor

A detailed explanation of the ThreadPoolExecutor constructor

The following is the fullest ThreadPoolExecutor constructor. Once the meaning of each parameter is clear, the different thread pool policies, and the various factory methods in the Executors utility class, become easy to understand.

    public ThreadPoolExecutor(int corePoolSize,
                              int maximumPoolSize,
                              long keepAliveTime,
                              TimeUnit unit,
                              BlockingQueue<Runnable> workQueue,
                              ThreadFactory threadFactory,
                              RejectedExecutionHandler handler) {
        if (corePoolSize < 0 ||
            maximumPoolSize <= 0 ||
            maximumPoolSize < corePoolSize ||
            keepAliveTime < 0)
            throw new IllegalArgumentException();
        if (workQueue == null || threadFactory == null || handler == null)
            throw new NullPointerException();
        this.corePoolSize = corePoolSize;
        this.maximumPoolSize = maximumPoolSize;
        this.workQueue = workQueue;
        this.keepAliveTime = unit.toNanos(keepAliveTime);
        this.threadFactory = threadFactory;
        this.handler = handler;
    }

corePoolSize: the number of threads the pool always maintains
maximumPoolSize: once the corePoolSize threads are busy and the queue is full, the pool grows up to this value
keepAliveTime/TimeUnit: an idle thread beyond corePoolSize is destroyed after this much idle time, shrinking the total thread count back to corePoolSize
BlockingQueue: the type of queue the pool uses
ThreadFactory: the thread creation factory; a default is provided and it can be customized
RejectedExecutionHandler: the rejection policy applied once corePoolSize is busy, the queue is full, and maximumPoolSize has been reached (a construction sketch using all seven parameters follows below)
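As an illustration of all seven parameters, here is a construction sketch of our own (the sizes, queue capacity, and class name CustomPoolConfig are arbitrary choices, not values recommended by this article):

    import java.util.concurrent.*;

    public class CustomPoolConfig {
        public static void main(String[] args) {
            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                    2,                                      // corePoolSize: threads always kept
                    4,                                      // maximumPoolSize: upper bound when the queue is full
                    60L, TimeUnit.SECONDS,                  // keepAliveTime/unit for threads beyond core
                    new ArrayBlockingQueue<Runnable>(100),  // bounded work queue
                    Executors.defaultThreadFactory(),       // thread creation factory (the default one here)
                    new ThreadPoolExecutor.AbortPolicy());  // rejection policy (also the default)
            pool.execute(() -> System.out.println("running on " + Thread.currentThread().getName()));
            pool.shutdown();
        }
    }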

The thread pool's task-processing flow

From the constructor parameters above, every submitted task goes through the following flow:
Step 1: check whether the current thread count is below corePoolSize. If so, create a new thread to execute the task; otherwise go to Step 2.
Step 2: check whether the queue is full. If not, enqueue the task; if it is full, go to Step 3.
Step 3: check whether the current thread count is below maximumPoolSize. If so, create a new thread to execute the task; otherwise go to Step 4.
Step 4: reject the task according to the rejection policy.

To sum up: first check corePoolSize, then the BlockingQueue, then maximumPoolSize, and finally fall back to the rejection policy. A small demo of this order follows.
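Here is a demo of our own that exercises that order (the pool sizes, queue capacity, class name SubmissionOrderDemo, and the CountDownLatch trick are our choices): with corePoolSize = 1, a queue of capacity 1, and maximumPoolSize = 2, the fourth submission is rejected:

    import java.util.concurrent.*;

    public class SubmissionOrderDemo {
        public static void main(String[] args) {
            CountDownLatch release = new CountDownLatch(1);
            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                    1, 2, 0L, TimeUnit.SECONDS,
                    new ArrayBlockingQueue<Runnable>(1));
            Runnable blocker = () -> {
                try { release.await(); } catch (InterruptedException ignored) { }
            };
            pool.execute(blocker);     // task 1: below corePoolSize -> new core thread
            pool.execute(blocker);     // task 2: core thread busy -> goes into the queue
            pool.execute(blocker);     // task 3: queue full -> new thread up to maximumPoolSize
            try {
                pool.execute(blocker); // task 4: queue and maximumPoolSize both full -> rejected
            } catch (RejectedExecutionException expected) {
                System.out.println("task 4 rejected, as expected");
            }
            release.countDown();
            pool.shutdown();
        }
    }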

The four rejection policies of the thread pool

ThreadPoolExecutor defines four nested classes implementing four different rejection policies. The default is AbortPolicy.

    // Policy 1: the caller runs the task directly in its own thread; the pool does not handle it
    public static class CallerRunsPolicy implements RejectedExecutionHandler {
        public CallerRunsPolicy() { }
        public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
            if (!e.isShutdown()) {
                r.run();
            }
        }
    }

    // Policy 2: the pool throws an exception directly
    public static class AbortPolicy implements RejectedExecutionHandler {
        public AbortPolicy() { }
        public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
            throw new RejectedExecutionException("Task " + r.toString() +
                                                 " rejected from " +
                                                 e.toString());
        }
    }

    // Policy 3: the pool silently discards the task, as if nothing had happened
    public static class DiscardPolicy implements RejectedExecutionHandler {
        public DiscardPolicy() { }
        public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
        }
    }

    // Policy 4: remove the oldest task in the queue, then resubmit this task
    public static class DiscardOldestPolicy implements RejectedExecutionHandler {
        public DiscardOldestPolicy() { }
        public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
            if (!e.isShutdown()) {
                e.getQueue().poll();
                e.execute(r);
            }
        }
    }
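As a usage sketch (the class name CallerRunsDemo and the sizes are our own), a non-default policy is simply passed as the handler argument of the constructor; with CallerRunsPolicy, an overflow task runs on the submitting thread instead of being rejected:

    import java.util.concurrent.*;

    public class CallerRunsDemo {
        public static void main(String[] args) {
            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                    1, 1, 0L, TimeUnit.SECONDS,
                    new ArrayBlockingQueue<Runnable>(1),
                    new ThreadPoolExecutor.CallerRunsPolicy());  // replaces the default AbortPolicy
            for (int i = 0; i < 5; i++) {
                pool.execute(() -> System.out.println(
                        "running on " + Thread.currentThread().getName()));
            }
            // If both the queue and the single worker are busy, the overflow task prints "main",
            // meaning the submitting thread executed it itself.
            pool.shutdown();
        }
    }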
ThreadPoolExecutor implementation principles

It is well known that the basic structure of a thread pool is a queue plus a set of worker threads: tasks are continuously placed into the queue, and the worker threads continuously take them out. In the concrete implementation, however, there are different strategies:

Strategy 1: blocking queue vs. non-blocking queue.
ThreadPoolExecutor uses a blocking queue, declared through the BlockingQueue interface:

    private final BlockingQueue<Runnable> workQueue;

This means that the worker threads do not need their own wait/notify mechanism: they simply take from the queue and execute whatever they get; if there is nothing to take, they block automatically.
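A minimal sketch of that idea, written by us for illustration (this is not the JDK code, which is analyzed below): the worker's loop is nothing more than take() and run():

    import java.util.concurrent.BlockingQueue;

    // A toy worker loop: take() blocks when the queue is empty, so no explicit wait/notify is needed.
    class ToyWorker implements Runnable {
        private final BlockingQueue<Runnable> queue;
        ToyWorker(BlockingQueue<Runnable> queue) { this.queue = queue; }
        public void run() {
            try {
                while (true) {
                    Runnable task = queue.take();  // blocks until a task is available
                    task.run();
                }
            } catch (InterruptedException e) {
                // interrupted: exit the loop
            }
        }
    }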

There are also implementations that use a non-blocking queue, such as the thread pool inside Tomcat 6 (its source will be analyzed in detail later). There, when there are no requests to process, the worker blocks through its own mechanism, and when a new request arrives the worker is notified.

Strategy 2: does a new request go into the queue first, or to a new thread first?
ThreadPoolExecutor prefers to create a new thread, and only once the thread count reaches corePoolSize does it consider putting the request into the queue.

Strategy 3: unbounded queue vs. bounded queue.
With an unbounded queue, the maximumPoolSize logic will never be executed. This is already reflected above in the Executors factory newFixedThreadPool, which uses an unbounded LinkedBlockingQueue. A short comparison sketch follows.
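The following sketch is ours (sizes, sleep times, and the class name BoundedVsUnbounded are arbitrary) and makes the point: with an unbounded LinkedBlockingQueue the pool never grows past corePoolSize, while with a bounded ArrayBlockingQueue it does:

    import java.util.concurrent.*;

    public class BoundedVsUnbounded {
        public static void main(String[] args) throws InterruptedException {
            Runnable sleepy = () -> {
                try { Thread.sleep(1000); } catch (InterruptedException ignored) { }
            };

            // Unbounded queue: the pool never grows past corePoolSize, so maximumPoolSize is irrelevant.
            ThreadPoolExecutor unbounded = new ThreadPoolExecutor(
                    2, 10, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
            // Bounded queue: once 2 workers are busy and 2 tasks are queued, extra threads are created.
            ThreadPoolExecutor bounded = new ThreadPoolExecutor(
                    2, 10, 60L, TimeUnit.SECONDS, new ArrayBlockingQueue<Runnable>(2));

            for (int i = 0; i < 8; i++) {
                unbounded.execute(sleepy);
                bounded.execute(sleepy);
            }
            Thread.sleep(100);  // give the pools a moment to start their threads
            System.out.println("unbounded pool size = " + unbounded.getPoolSize()); // expected: 2
            System.out.println("bounded pool size   = " + bounded.getPoolSize());   // expected: > 2
            unbounded.shutdown();
            bounded.shutdown();
        }
    }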

Beyond these strategies, there are many implementation details; the source analysis below goes through the code itself.

    // Core structure: a BlockingQueue + a set of worker threads + a lock
    // (the lock guards mutually exclusive access to workers and the various thread counts)
    public class ThreadPoolExecutor extends AbstractExecutorService {
        ...
        private final BlockingQueue<Runnable> workQueue;

        private final ReentrantLock mainLock = new ReentrantLock();

        private final HashSet<Worker> workers = new HashSet<Worker>();

        public void execute(Runnable command) {
            if (command == null)
                throw new NullPointerException();
            if (poolSize >= corePoolSize || !addIfUnderCorePoolSize(command)) {  // the corePoolSize check
                if (runState == RUNNING && workQueue.offer(command)) {           // put into the queue
                    if (runState != RUNNING || poolSize == 0)
                        ensureQueuedTaskHandled(command);                        // double-check after enqueuing
                }
                else if (!addIfUnderMaximumPoolSize(command))                    // the maximumPoolSize check
                    reject(command);                                             // beyond maximumPoolSize: reject the request
            }
        }

        // poolSize < corePoolSize: create a new thread directly and add it to the HashSet
        private boolean addIfUnderCorePoolSize(Runnable firstTask) {
            Thread t = null;
            final ReentrantLock mainLock = this.mainLock;
            mainLock.lock();
            try {
                if (poolSize < corePoolSize && runState == RUNNING)
                    t = addThread(firstTask);
            } finally {
                mainLock.unlock();
            }
            return t != null;
        }

        // Queue full and poolSize < maximumPoolSize: create another new thread and add it to the HashSet
        private boolean addIfUnderMaximumPoolSize(Runnable firstTask) {
            Thread t = null;
            final ReentrantLock mainLock = this.mainLock;
            mainLock.lock();
            try {
                if (poolSize < maximumPoolSize && runState == RUNNING)
                    t = addThread(firstTask);
            } finally {
                mainLock.unlock();
            }
            return t != null;
        }

Worker's implementation

    private final class Worker implements Runnable {
        ...
        private Runnable firstTask;

        // firstTask exists because a Worker can be created with a task already assigned to it,
        // or with no task at all, in which case it goes straight to the BlockingQueue to fetch one.
        Worker(Runnable firstTask) {
            this.firstTask = firstTask;
        }

        // One dead loop that keeps taking tasks from the BlockingQueue and executing them;
        // if nothing can be taken, it blocks inside getTask().
        public void run() {
            try {
                hasRun = true;
                Runnable task = firstTask;
                firstTask = null;
                while (task != null || (task = getTask()) != null) {
                    runTask(task);
                    task = null;
                }
            } finally {
                workerDone(this);   // the worker thread exits
            }
        }
        ...
    }

    // getTask has one key point: while poolSize <= corePoolSize, the worker blocks indefinitely,
    // so the thread always exists and never exits. Once poolSize > corePoolSize (or core threads
    // are allowed to time out), the thread only blocks for keepAliveTime; if that time passes and
    // the queue is still empty, with no new requests, the thread exits and dies, and poolSize
    // is decremented.
    Runnable getTask() {
        for (;;) {
            try {
                int state = runState;
                if (state > SHUTDOWN)
                    return null;
                Runnable r;
                if (state == SHUTDOWN)
                    r = workQueue.poll();   // poll is a non-blocking call: it returns null directly
                else if (poolSize > corePoolSize || allowCoreThreadTimeOut)
                    r = workQueue.poll(keepAliveTime, TimeUnit.NANOSECONDS);  // wait one timeout, e.g. the 60s from the constructor
                else
                    r = workQueue.take();   // take is a blocking call: it blocks until a task arrives
                if (r != null)
                    return r;
                if (workerCanExit()) {
                    if (runState >= SHUTDOWN)
                        interruptIdleWorkers();
                    return null;
                }
                // Else retry
            } catch (InterruptedException ie) {
                // On interruption, re-check runState
            }
        }
    }
Interruption and graceful shutdown

The thread pool state transition diagram

    volatile int runState;
    static final int RUNNING    = 0;
    static final int SHUTDOWN   = 1;
    static final int STOP       = 2;
    static final int TERMINATED = 3;

The pool starts in the RUNNING state. Calling shutdown() switches it to the SHUTDOWN state; calling shutdownNow() switches it to the STOP state.

What's the difference between shutdown and shutdownNow?

shutdown(): does not clear the tasks already in the queue; it waits for all of them to complete, and it only interrupts threads that are currently idle (via interruptIfIdle, analyzed below).

shutdownNow(): clears all the tasks in the queue and sends an interrupt signal to every worker thread.

When the queue is empty and the worker set is also empty, the thread pool enters the TERMINATED state. The usual graceful-shutdown pattern in application code builds on these transitions, as sketched below.
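The following sketch is ours (the 60-second grace period and the class name GracefulShutdown are arbitrary): shutdown() is combined with awaitTermination(), with shutdownNow() as a fallback:

    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.TimeUnit;

    public class GracefulShutdown {
        static void shutdownGracefully(ExecutorService pool) {
            pool.shutdown();                  // stop accepting new tasks; queued tasks keep running
            try {
                if (!pool.awaitTermination(60, TimeUnit.SECONDS)) {
                    List<Runnable> dropped = pool.shutdownNow();  // interrupt workers, drain the queue
                    System.out.println(dropped.size() + " queued tasks were never started");
                }
            } catch (InterruptedException ie) {
                pool.shutdownNow();
                Thread.currentThread().interrupt();  // preserve the caller's interrupt status
            }
        }
    }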

shutdown/shutdownNow source analysis

    public void shutdown() {
        // Permission check: verify that the current caller has permission to shut down the
        // thread pool; if not, an exception is thrown.
        SecurityManager security = System.getSecurityManager();
        if (security != null)
            security.checkPermission(shutdownPerm);

        final ReentrantLock mainLock = this.mainLock;
        mainLock.lock();
        try {
            if (security != null) {   // permission check on each worker thread
                for (Worker w : workers)
                    security.checkAccess(w.thread);
            }

            int state = runState;
            if (state < SHUTDOWN)
                runState = SHUTDOWN;  // switch from RUNNING to SHUTDOWN; you cannot switch back
                                      // from STOP or TERMINATED to SHUTDOWN

            try {
                for (Worker w : workers) {
                    w.interruptIfIdle();   // traverse all worker threads and signal them
                }
            } catch (SecurityException se) {
                runState = state;
                throw se;
            }

            tryTerminate();   // attempt to terminate the thread pool
        } finally {
            mainLock.unlock();
        }
    }

    public List<Runnable> shutdownNow() {
        SecurityManager security = System.getSecurityManager();
        if (security != null)
            security.checkPermission(shutdownPerm);

        final ReentrantLock mainLock = this.mainLock;
        mainLock.lock();
        try {
            if (security != null) {
                for (Worker w : workers)
                    security.checkAccess(w.thread);
            }

            int state = runState;
            if (state < STOP)
                runState = STOP;   // switch to the STOP state

            try {
                for (Worker w : workers) {
                    w.interruptNow();   // traverse all worker threads and send an interrupt,
                                        // whether or not a task is currently being executed
                }
            } catch (SecurityException se) {   // try to back out
                runState = state;
                // tryTerminate() here would be a no-op
                throw se;
            }

            List<Runnable> tasks = drainQueue();   // drain the queued requests
            tryTerminate();                        // attempt to terminate the thread pool
            return tasks;
        } finally {
            mainLock.unlock();
        }
    }

From the above, the difference between shutdown and shutdownNow comes down to three points:
(1) one switches to the SHUTDOWN state, the other to the STOP state;
(2) when traversing the worker threads, one calls interruptIfIdle, the other calls interruptNow;
(3) shutdownNow also drains the tasks remaining in the queue.

What's the difference between interruptIfIdle and interruptNow?

    private final class Worker implements Runnable {
        ...

        private final ReentrantLock runLock = new ReentrantLock();

        void interruptIfIdle() {
            final ReentrantLock runLock = this.runLock;
            if (runLock.tryLock()) {
                try {
                    if (hasRun && thread != Thread.currentThread())
                        thread.interrupt();
                } finally {
                    runLock.unlock();
                }
            }
        }

        void interruptNow() {
            if (hasRun)
                thread.interrupt();
        }

        public void run() {
            try {
                hasRun = true;
                Runnable task = firstTask;
                firstTask = null;
                while (task != null || (task = getTask()) != null) {
                    runTask(task);   // getTask() also contains logic that responds to interrupts
                    task = null;
                }
            } finally {
                workerDone(this);
            }
        }

        // Each time a task is taken from the queue, the lock is acquired before it is executed.
        private void runTask(Runnable task) {
            final ReentrantLock runLock = this.runLock;
            runLock.lock();
            try {
                if ((runState >= STOP ||
                    (Thread.interrupted() && runState >= STOP)) &&
                    hasRun)
                    thread.interrupt();

                boolean ran = false;
                beforeExecute(thread, task);
                try {
                    task.run();
                    ran = true;
                    afterExecute(task, null);
                    ++completedTasks;
                } catch (RuntimeException ex) {
                    if (!ran)
                        afterExecute(task, ex);
                    throw ex;
                }
            } finally {
                runLock.unlock();
            }
        }
    }

As you can see, the key difference between interruptIfIdle and interruptNow is that the former tries to acquire the worker's runLock first. If the thread being interrupted is currently executing runTask, the lock is unavailable, tryLock fails, and that worker is simply not interrupted: shutdown() leaves threads that are busy running a task alone and lets them finish.
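A task that wants to stop promptly when shutdownNow() is called therefore has to be interrupt-aware. The following sketch is ours (the busy loop and the class name InterruptAwareTask are only for illustration):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class InterruptAwareTask {
        public static void main(String[] args) throws InterruptedException {
            ExecutorService pool = Executors.newFixedThreadPool(1);
            pool.execute(() -> {
                // Cooperate with shutdownNow(): re-check the interrupt flag between work chunks.
                while (!Thread.currentThread().isInterrupted()) {
                    // ... do one bounded chunk of work here ...
                }
                System.out.println("worker saw the interrupt and stopped");
            });
            Thread.sleep(100);
            pool.shutdownNow();   // sends the interrupt; with plain shutdown() the loop would keep running
            pool.awaitTermination(5, TimeUnit.SECONDS);
        }
    }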

A misunderstanding about shutdown

Given the analysis above, does shutdown block until every request in the queue has been executed? In other words, when shutdown returns, have all the queued requests necessarily been completed?

Not necessarily. The thread pool does not necessarily shut down the moment shutdown returns. Why?

Take another look at the getTask function:

    Runnable getTask() {
        for (;;) {
            try {
                int state = runState;
                if (state > SHUTDOWN)
                    return null;
                Runnable r;
                if (state == SHUTDOWN)
                    // If the pool is in the SHUTDOWN state, do not block: return directly whether
                    // or not a task was obtained.
                    // Key point: in the SHUTDOWN state the loop keeps going until every task in
                    // the queue has been drained.
                    r = workQueue.poll();
                else if (poolSize > corePoolSize || allowCoreThreadTimeOut)
                    r = workQueue.poll(keepAliveTime, TimeUnit.NANOSECONDS);
                else
                    r = workQueue.take();
                // Case 1: another thread sets the interrupt flag first, then the current thread calls take().
                // Case 2: the current thread is already blocked in take(), then another thread sets the flag.
                // In both cases an InterruptedException is thrown and we fall into the catch block below.
                if (r != null)
                    return r;
                if (workerCanExit()) {
                    if (runState >= SHUTDOWN)   // wake up the others
                        interruptIdleWorkers();
                    return null;
                }
                // Else retry
            } catch (InterruptedException ie) {
                // Blocked and then interrupted: do nothing here, just loop around and re-check runState
            }
        }
    }

    public void run() {
        try {
            hasRun = true;
            Runnable task = firstTask;
            firstTask = null;
            while (task != null || (task = getTask()) != null) {
                runTask(task);   // getTask() also contains logic that responds to interrupts
                task = null;
            }
        } finally {
            workerDone(this);
        }
    }

    // Each time a task is taken from the queue, the lock is acquired before it is executed.
    private void runTask(Runnable task) {
        final ReentrantLock runLock = this.runLock;
        runLock.lock();
        try {
            if ((runState >= STOP ||
                (Thread.interrupted() && runState >= STOP)) &&
                hasRun)
                thread.interrupt();

            boolean ran = false;
            beforeExecute(thread, task);
            try {
                task.run();
                ran = true;
                afterExecute(task, null);
                ++completedTasks;
            } catch (RuntimeException ex) {
                if (!ran)
                    afterExecute(task, ex);
                throw ex;
            }
        } finally {
            runLock.unlock();
        }
    }
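To see the misunderstanding concretely, here is a demo of our own (pool size, sleep times, and the class name ShutdownReturnsEarly are arbitrary): shutdown() returns immediately while the queued tasks are still being drained, and only awaitTermination() actually waits:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class ShutdownReturnsEarly {
        public static void main(String[] args) throws InterruptedException {
            ExecutorService pool = Executors.newFixedThreadPool(1);
            for (int i = 0; i < 5; i++) {
                pool.execute(() -> {
                    try { Thread.sleep(200); } catch (InterruptedException ignored) { }
                });
            }
            long t0 = System.nanoTime();
            pool.shutdown();   // returns at once; the queued tasks are still being executed
            System.out.printf("shutdown() returned after %d ms, terminated = %b%n",
                    TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - t0), pool.isTerminated());
            pool.awaitTermination(5, TimeUnit.SECONDS);   // this is what actually waits
            System.out.println("terminated = " + pool.isTerminated());
        }
    }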