Thread pool
There has always been this question: when we use threads directly, we just call new Thread() and put whatever work we want into its run() method, and after that there is not much to manage. So how does a thread pool keep its core threads from being released, so that they are always ready to receive and process tasks?
Thread
The Thread class, for our purposes, has two main methods. Let's first look at the documentation of the start() method:
/**
 * Causes this thread to begin execution; the Java Virtual Machine
 * calls the <code>run</code> method of this thread.
 * <p>
 * The result is that two threads are running concurrently: the
 * current thread (which returns from the call to the
 * <code>start</code> method) and the other thread (which executes its
 * <code>run</code> method).
 * <p>
 * It is never legal to start a thread more than once.
 * In particular, a thread may not be restarted once it has completed
 * execution.
 *
 * @exception IllegalThreadStateException if the thread was already
 *            started.
 * @see #run()
 * @see #stop()
 */
public synchronized void start() {
    if (threadStatus != 0)
        throw new IllegalThreadStateException();

    /* Notify the group that this thread is about to be started
     * so that it can be added to the group's list of threads
     * and the group's unstarted count can be decremented. */
    group.add(this);

    started = false;
    try {
        nativeCreate(this, stackSize, daemon);
        started = true;
    } finally {
        try {
            if (!started) {
                group.threadStartFailed(this);
            }
        } catch (Throwable ignore) {
            /* do nothing. If start0 threw a Throwable then
               it will be passed up the call stack */
        }
    }
}
From this documentation, the start() method ultimately hands the run() method to the VM to execute: in general we call start() from some thread, and the run() work is carried out by the VM on the newly created thread.
It also tells us that it is illegal to start a thread again: once a thread has completed execution, it may not be restarted.
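For example, here is a minimal sketch (my own illustration, not from the sources above) of what happens when start() is called twice on the same Thread:

public class StartTwiceDemo {
    public static void main(String[] args) {
        Thread t = new Thread(() -> System.out.println("task runs once"));
        t.start();   // OK: the VM will invoke run() on the new thread
        t.start();   // throws IllegalThreadStateException, because threadStatus is no longer 0
    }
}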
So how does the thread pool get around this? Why is it able to execute task after task, over and over?
--------------------------------------------------------------------------------
With these questions in mind, let's see how the thread pool is implemented.
Thread pool
There are several common ways to create thread pools:
1. newFixedThreadPool()
2. newSingleThreadExecutor()
3. newCachedThreadPool()
4. newScheduledThreadPool()
I won't describe in detail the thread pool instances these four methods create, such as how many threads they create and how threads are recycled, because all four of them ultimately call a common constructor:
public ThreadPoolExecutor(int corePoolSize,
                          int maximumPoolSize,
                          long keepAliveTime,
                          TimeUnit unit,
                          BlockingQueue<Runnable> workQueue) {
    this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
         Executors.defaultThreadFactory(), defaultHandler);
}
Specifically, it is the different values passed for these parameters that give the four thread pools their different behavior:
1. corePoolSize is the number of core threads; when the current number of threads is greater than the core count, the thread pool will reclaim the extra threads.
2. maximumPoolSize is the maximum number of threads in the pool; when there are more tasks than the core threads can handle, additional threads are created, but never more than this number.
3. keepAliveTime is how long an idle extra thread survives after finishing its task before being reclaimed; unit controls its time unit.
4. workQueue is very important: this work queue stores all the Runnable tasks waiting to be executed. The sketch after the quoted documentation below shows how the factory methods fill in these parameters.
@param workQueue the queue to use for holding tasks before they are executed. This queue will hold only the {@code Runnable} tasks submitted by the {@code execute} method.
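As a reference, here is roughly how two of the factory methods fill in these parameters (simplified from the JDK's Executors source; exact details may vary between versions):

// newFixedThreadPool: corePoolSize equals maximumPoolSize, idle threads never time out,
// and an unbounded LinkedBlockingQueue buffers any extra tasks.
public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads,
                                  0L, TimeUnit.MILLISECONDS,
                                  new LinkedBlockingQueue<Runnable>());
}

// newCachedThreadPool: no core threads, an effectively unlimited maximum, a 60-second
// keep-alive, and a SynchronousQueue that hands each task straight to a thread.
public static ExecutorService newCachedThreadPool() {
    return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                  60L, TimeUnit.SECONDS,
                                  new SynchronousQueue<Runnable>());
}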
In everyday use we submit work to the pool by calling execute(Runnable) directly, so let's follow that call and see what this method actually does:
public void execute(Runnable command) {
    if (command == null)
        throw new NullPointerException();
    /*
     * Proceed in 3 steps:
     *
     * 1. If fewer than corePoolSize threads are running, try to
     * start a new thread with the given command as its first
     * task.  The call to addWorker atomically checks runState and
     * workerCount, and so prevents false alarms that would add
     * threads when it shouldn't, by returning false.
     *
     * 2. If a task can be successfully queued, then we still need
     * to double-check whether we should have added a thread
     * (because existing ones died since last checking) or that
     * the pool shut down since entry into this method. So we
     * recheck state and if necessary roll back the enqueuing if
     * stopped, or start a new thread if there are none.
     *
     * 3. If we cannot queue the task, then we try to add a new
     * thread.  If it fails, we know we are shut down or saturated
     * and so reject the task.
     */
Combining this with the comment above: the first if checks the current number of worker threads, and if it is below corePoolSize a new worker is created immediately for this task. The second if handles the case where the task could not be handed directly to a new core worker: the task is offered to workQueue to wait for execution, and then double-checked. The inner if/else checks the state of the pool itself: if the pool is being shut down and cleaned up, we must not accept the task, so it is removed from the queue and rejected here.
    int c = ctl.get();
    if (workerCountOf(c) < corePoolSize) {
        if (addWorker(command, true))
            return;
        c = ctl.get();
    }
    if (isRunning(c) && workQueue.offer(command)) {
        int recheck = ctl.get();
        if (! isRunning(recheck) && remove(command))
            reject(command);
        else if (workerCountOf(recheck) == 0)
            addWorker(null, false);
    }
    else if (!addWorker(command, false))
        reject(command);
}
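Before going deeper, here is a minimal usage sketch of this entry point (the parameter values are arbitrary and only for illustration; java.util.concurrent imports are assumed):

// A small pool: 2 core threads, at most 4 threads, 30 s keep-alive for extra threads,
// and a bounded queue holding up to 8 pending tasks.
ThreadPoolExecutor pool = new ThreadPoolExecutor(
        2, 4, 30L, TimeUnit.SECONDS, new ArrayBlockingQueue<Runnable>(8));

for (int i = 0; i < 10; i++) {
    final int id = i;
    pool.execute(() -> System.out.println("task " + id + " on " + Thread.currentThread().getName()));
}
pool.shutdown();   // stop accepting new tasks; queued and running ones still finish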
So the real work happens in the addWorker() method. Let's follow it in:
private boolean addWorker(Runnable firstTask, boolean core) {
    // ... omitted code
    try {
        w = new Worker(firstTask);   // 1. key point
        final Thread t = w.thread;
        if (t != null) {
            final ReentrantLock mainLock = this.mainLock;
            mainLock.lock();
            try {
                // Recheck while holding lock.
                // Back out on ThreadFactory failure or if
                // shut down before lock acquired.
                int rs = runStateOf(ctl.get());

                if (rs < SHUTDOWN ||
                    (rs == SHUTDOWN && firstTask == null)) {
                    if (t.isAlive()) // precheck that t is startable
                        throw new IllegalThreadStateException();
                    workers.add(w);
                    int s = workers.size();
                    if (s > largestPoolSize)
                        largestPoolSize = s;
                    workerAdded = true;
                }
            } finally {
                mainLock.unlock();
            }
            if (workerAdded) {
                t.start();   // 2. key point
                workerStarted = true;
            }
        }
    } finally {
        if (! workerStarted)
            addWorkerFailed(w);
    }
    return workerStarted;
}
Looking at the key parts: what matters most is firstTask, the Runnable we have been tracking all along. It is passed to new Worker(), and this Worker is a wrapper class that carries the task we actually want to execute. After a series of checks, t.start() is called; this t is the Thread held inside the Worker wrapper, so the whole flow now moves into the Worker itself.
private final class Worker
    extends AbstractQueuedSynchronizer
    implements Runnable
{
    /**
     * This class will never be serialized, but we provide a
     * serialVersionUID to suppress a javac warning.
     */
    private static final long serialVersionUID = 6138294804551838833L;

    /** Thread this worker is running in. Null if factory fails. */
    final Thread thread;
    /** Initial task to run. Possibly null. */
    Runnable firstTask;

    /**
     * Creates with given first task and thread from ThreadFactory.
     * @param firstTask the first task (null if none)
     */
    Worker(Runnable firstTask) {
        setState(-1); // inhibit interrupts until runWorker
        this.firstTask = firstTask;
        this.thread = getThreadFactory().newThread(this);
    }

    /** Delegates main run loop to outer runWorker. */
    public void run() {
        runWorker(this);
    }

    // ... omitted code
}
This Worker wrapper class has two important fields. thread is exactly the object whose start() was called in the method above, and it is built by handing the Worker itself, as a Runnable, to the thread factory. So when we call thread.start(), what actually runs is the Worker's run() method, which simply delegates to runWorker(). The small sketch below illustrates this wrapping.
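A minimal sketch of the same Thread-wraps-Runnable pattern (my own illustration, not the pool's code):

Runnable worker = () -> System.out.println("this stands in for runWorker(this)");
Thread t = new Thread(worker);   // the Worker hands itself to the ThreadFactory in much the same way
t.start();                       // start() makes the new thread invoke worker.run()

With that picture in mind, let's trace runWorker() and see what it actually does: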
final void runWorker(Worker w) {
    Thread wt = Thread.currentThread();
    Runnable task = w.firstTask;
    w.firstTask = null;
    w.unlock(); // allow interrupts
    boolean completedAbruptly = true;
    try {
        while (task != null || (task = getTask()) != null) {
            w.lock();
            // If pool is stopping, ensure thread is interrupted;
            // if not, ensure thread is not interrupted.  This
            // requires a recheck in second case to deal with
            // shutdownNow race while clearing interrupt
            if ((runStateAtLeast(ctl.get(), STOP) ||
                 (Thread.interrupted() &&
                  runStateAtLeast(ctl.get(), STOP))) &&
                !wt.isInterrupted())
                wt.interrupt();
            try {
                beforeExecute(wt, task);
                Throwable thrown = null;
                try {
                    task.run();
                } catch (RuntimeException x) {
                    thrown = x; throw x;
                } catch (Error x) {
                    thrown = x; throw x;
                } catch (Throwable x) {
                    thrown = x; throw new Error(x);
                } finally {
                    afterExecute(task, thrown);
                }
            } finally {
                task = null;
                w.completedTasks++;
                w.unlock();
            }
        }
        completedAbruptly = false;
    } finally {
        processWorkerExit(w, completedAbruptly);
    }
}
This method is fairly easy to follow:
1. There is a big loop whose condition is task != null || (task = getTask()) != null. task is naturally the task we are going to execute; when task is null and getTask() cannot fetch one either, the while loop ends. Inside the loop body, the essential call is task.run().
2. Here we can already make a guess: it must be that this loop never exits, which keeps the thread running continuously, so that when new tasks arrive from outside, the loop body can obtain them via getTask() and execute them.
3. Below is the code inside getTask(), to verify this guess:
private Runnable getTask() {
    boolean timedOut = false; // Did the last poll() time out?

    for (;;) {
        int c = ctl.get();
        int rs = runStateOf(c);

        // Check if queue empty only if necessary.
        if (rs >= SHUTDOWN && (rs >= STOP || workQueue.isEmpty())) {
            decrementWorkerCount();
            return null;
        }

        int wc = workerCountOf(c);

        // Are workers subject to culling?
        boolean timed = allowCoreThreadTimeOut || wc > corePoolSize;

        if ((wc > maximumPoolSize || (timed && timedOut))
            && (wc > 1 || workQueue.isEmpty())) {
            if (compareAndDecrementWorkerCount(c))
                return null;
            continue;
        }

        try {
            Runnable r = timed ?
                workQueue.poll(keepAliveTime, TimeUnit.NANOSECONDS) :
                workQueue.take();
            if (r != null)
                return r;
            timedOut = true;
        } catch (InterruptedException retry) {
            timedOut = false;
        }
    }
}
Sure enough, inside there is another endless loop. The key lines are:

    Runnable r = timed ?
        workQueue.poll(keepAliveTime, TimeUnit.NANOSECONDS) :
        workQueue.take();

The worker keeps going back to the work queue for tasks. A core thread stays blocked in workQueue.take() until it obtains a Runnable and returns it. A non-core thread calls workQueue.poll(keepAliveTime, TimeUnit.NANOSECONDS); if the timeout expires without a task, then on the next pass through the loop the compareAndDecrementWorkerCount branch makes getTask() return null, the loop condition reached from the Worker's run() method becomes false, the task loop ends, and the thread is then reclaimed by the system.
From the code above, the role of getTask() is:
If the current number of active threads is greater than the number of core threads: when fetching a task from the queue, if the queue is empty the thread waits at most keepAliveTime; if it still gets nothing, getTask() returns null, which means the while loop in runWorker() exits and the corresponding thread is about to be destroyed, i.e. the pool has one thread fewer. So as long as the number of threads in the pool is greater than the core count, these extra threads are destroyed one by one.
If the current number of active threads is less than or equal to the number of core threads: the thread fetches from the queue in the same way, but when the queue is empty it blocks in take() until a task can be taken, so the thread is blocked rather than destroyed just because the queue happens to be empty. This guarantees that the pool always has that many live threads ready to handle tasks at any moment, which is exactly how reuse is achieved. The sketch below shows the same idea in miniature.
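Here is a small self-contained sketch (my own illustration under simplified assumptions, not the pool's code) of that pattern: one loop blocks forever on take(), the other gives up after a poll() timeout:

BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();

// "Core" behavior: block in take() until a task arrives, so the thread never exits.
Runnable coreLoop = () -> {
    try {
        while (true) {
            Runnable task = queue.take();   // blocks while the queue is empty
            task.run();
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
};

// "Non-core" behavior: wait at most the keep-alive time; if nothing arrives, fall out of
// the loop and let the thread die, which is how idle extra threads get reclaimed.
Runnable nonCoreLoop = () -> {
    try {
        Runnable task;
        while ((task = queue.poll(30, TimeUnit.SECONDS)) != null) {
            task.run();
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
};

new Thread(coreLoop).start();
new Thread(nonCoreLoop).start();
queue.offer(() -> System.out.println("handled by whichever worker gets it first"));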
Summary
Through the analysis above, we should now have a fairly clear answer to the question of how the core threads of a thread pool are reused, and a better understanding of how the pool works:
When a new task arrives, the pool first checks whether the current number of threads has reached the number of core threads. If it has not, a new thread is created directly to execute the task. If it has, the pool checks whether the work queue is full: if not, the new task is put into the queue; if the queue is full, another thread is created to execute the task. If the number of threads has already reached the specified maximum, the task is rejected according to the configured policy (see the small sketch after this summary).
When the tasks in the queue have all been executed and the number of threads in the pool is greater than the number of core threads, the extra threads are destroyed until the thread count equals the core count. At that point the remaining threads are not destroyed; they sit blocked, waiting for new tasks to arrive.
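The rejection branch described above can be observed with a small sketch like the following (the numbers are arbitrary; the default AbortPolicy throws RejectedExecutionException):

// core = 1, max = 2, queue capacity = 1, so at most 3 long-running tasks are accepted.
ThreadPoolExecutor tiny = new ThreadPoolExecutor(
        1, 2, 0L, TimeUnit.SECONDS, new ArrayBlockingQueue<Runnable>(1));

Runnable slow = () -> {
    try { Thread.sleep(10_000); } catch (InterruptedException ignored) { }
};

tiny.execute(slow);       // runs on the core thread
tiny.execute(slow);       // sits in the queue
tiny.execute(slow);       // queue is full, so a second (non-core) thread is created
try {
    tiny.execute(slow);   // no room anywhere: rejected by the default AbortPolicy
} catch (RejectedExecutionException e) {
    System.out.println("task rejected: " + e);
}
tiny.shutdownNow();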
Attention:
The "Core thread", "Non-core thread" in this article is a virtual concept, which is a virtual concept for the convenience of description, in which no thread is marked as "core thread" or "non-core thread", and all threads are the same, except when the thread pool has more threads than the specified number of core threads. Will destroy the extra threads, leaving only the specified number of threads in the pool. The threads that are destroyed are random, possibly the first created thread, or the last thread created, or the thread created at other times. At first I thought that some threads would be labeled "core threads", while others were "non-core threads", destroying only those "non-core threads" when destroying redundant threads, and "core threads" not being destroyed. This understanding is wrong.
There is also one important interface worth understanding: BlockingQueue. It defines a set of enqueue and dequeue operations that can block, which plays a big role here.
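For reference, a minimal sketch of the blocking and timed operations that getTask() relies on (using LinkedBlockingQueue as one possible implementation):

static void blockingQueueDemo() throws InterruptedException {
    BlockingQueue<String> q = new LinkedBlockingQueue<>(2);

    q.put("a");                                   // blocks if the queue is full
    boolean accepted = q.offer("b");              // returns false instead of blocking when full

    String first = q.take();                      // blocks until an element is available
    String second = q.poll(5, TimeUnit.SECONDS);  // waits up to 5 s, returns null on timeout
}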
Conclusion
To sum it up in one sentence: a thread pool keeps a collection of Worker objects, each wrapping a thread, and inside each worker a conditional endless loop keeps running, which is how the pool can keep accepting tasks. That is why the thread pool can keep its threads from being released and run whatever tasks come in at any time.