Let's start with the overall flow of ThreadPoolExecutor by looking at its execute method:
public void execute(Runnable command) {
    if (command == null)
        throw new NullPointerException();
    // If poolSize >= corePoolSize, skip thread creation; otherwise try to
    // start a new core thread for this task.
    if (poolSize >= corePoolSize || !addIfUnderCorePoolSize(command)) {
        // The pool is still running, so try to put the task on the work queue.
        if (runState == RUNNING && workQueue.offer(command)) {
            // Re-check the state in case the pool was shut down (or all
            // threads died) while the task was being queued.
            if (runState != RUNNING || poolSize == 0)
                ensureQueuedTaskHandled(command);
        }
        // The queue is full: start a new thread as long as poolSize is still
        // below maximumPoolSize.
        else if (!addIfUnderMaximumPoolSize(command))
            // No thread could be added either: hand the task to the
            // RejectedExecutionHandler (the pool is shut down or saturated).
            reject(command);
    }
}
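To see this three-step flow from the outside (core threads first, then the queue, then extra threads up to the maximum, and finally rejection), here is a minimal sketch; the pool sizes, the one-second sleep, and the class name ExecuteFlowDemo are invented purely for illustration:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ExecuteFlowDemo {
    public static void main(String[] args) throws InterruptedException {
        // 2 core threads, at most 4 threads, and a bounded queue of 2 slots.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 30L, TimeUnit.SECONDS, new ArrayBlockingQueue<Runnable>(2));

        // Tasks 1-2 start core threads, 3-4 are queued, 5-6 start extra
        // threads, and task 7 is rejected by the default AbortPolicy.
        for (int i = 1; i <= 7; i++) {
            try {
                pool.execute(new Runnable() {
                    public void run() {
                        try {
                            Thread.sleep(1000); // keep the worker busy
                        } catch (InterruptedException ignored) { }
                    }
                });
                System.out.println("task " + i + " accepted, poolSize=" + pool.getPoolSize()
                        + ", queued=" + pool.getQueue().size());
            } catch (RejectedExecutionException e) {
                System.out.println("task " + i + " rejected");
            }
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}

Run this way, the seventh submission falls through to the reject(command) branch and throws RejectedExecutionException, while tasks 3 and 4 wait in the queue until a core thread fetches them.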
Next, let's look at the component that actually runs the tasks, the Worker:
private final class Worker implements Runnable {
    /**
     * Runs a single task between before/after methods.
     */
    private void runTask(Runnable task) {
        final ReentrantLock runLock = this.runLock;
        runLock.lock();
        try {
            /*
             * If pool is stopping ensure thread is interrupted;
             * if not, ensure thread is not interrupted. This requires
             * a double-check of state in case the interrupt was
             * cleared concurrently with a shutdownNow -- if so,
             * the interrupt is re-enabled.
             */
            // If the pool is being stopped, make sure this worker thread
            // carries the interrupt flag.
            if ((runState >= STOP ||
                (Thread.interrupted() && runState >= STOP)) &&
                hasRun)
                thread.interrupt();
            /*
             * Track execution state to ensure that afterExecute
             * is called only if task completed or threw
             * exception. Otherwise, the caught runtime exception
             * will have been thrown by afterExecute itself, in
             * which case we don't want to call it again.
             */
            boolean ran = false;
            beforeExecute(thread, task);
            try {
                task.run(); // run the task itself
                ran = true;
                afterExecute(task, null);
                ++completedTasks;
            } catch (RuntimeException ex) {
                if (!ran)
                    afterExecute(task, ex);
                throw ex;
            }
        } finally {
            runLock.unlock();
        }
    }

    /**
     * Main run loop
     */
    public void run() {
        try {
            hasRun = true;
            Runnable task = firstTask;
            firstTask = null;
            // Keep running as long as there is a task to execute, either the
            // initial one or one fetched from the work queue.
            while (task != null || (task = getTask()) != null) {
                runTask(task);
                task = null;
            }
        } finally {
            // No more tasks: remove this worker; if poolSize drops to 0,
            // try to terminate the pool.
            workerDone(this);
        }
    }
}

/* Utilities for worker thread control */

/**
 * Gets the next task for a worker thread to run. The general
 * approach is similar to execute() in that worker threads trying
 * to get a task to run do so on the basis of prevailing state
 * accessed outside of locks. This may cause them to choose the
 * "wrong" action, such as trying to exit because no tasks
 * appear to be available, or entering a take when the pool is in
 * the process of being shut down. These potential problems are
 * countered by (1) rechecking pool state (in workerCanExit)
 * before giving up, and (2) interrupting other workers upon
 * shutdown, so they can recheck state. All other user-based state
 * changes (to allowCoreThreadTimeOut etc.) are OK even when
 * performed asynchronously wrt getTask.
 *
 * @return the task
 */
Runnable getTask() {
    for (;;) {
        try {
            int state = runState;
            if (state > SHUTDOWN)
                return null;
            Runnable r;
            if (state == SHUTDOWN)  // Help drain queue
                r = workQueue.poll();
            // When the pool holds more than corePoolSize threads (or core
            // threads may time out), wait at most keepAliveTime for a task.
            else if (poolSize > corePoolSize || allowCoreThreadTimeOut)
                r = workQueue.poll(keepAliveTime, TimeUnit.NANOSECONDS);
            else
                // Otherwise block until a task becomes available.
                r = workQueue.take();
            if (r != null)
                return r;
            // No task was obtained: check whether this worker may exit
            // (pool shut down, queue empty, or more threads than needed).
            if (workerCanExit()) {
                if (runState >= SHUTDOWN) // Wake up others
                    interruptIdleWorkers();
                return null;
            }
            // Else retry
        } catch (InterruptedException ie) {
            // On interruption, re-check runState
        }
    }
}

/**
 * Performs bookkeeping for an exiting worker thread.
 * @param w the worker
 */
// Record the worker's completed task count and remove it from the worker
// set; if poolSize drops to 0, try to terminate the pool.
void workerDone(Worker w) {
    final ReentrantLock mainLock = this.mainLock;
    mainLock.lock();
    try {
        completedTaskCount += w.completedTasks;
        workers.remove(w);
        if (--poolSize == 0)
            tryTerminate();
    } finally {
        mainLock.unlock();
    }
}
The code above revolves around four key constructor parameters, summarized below (a constructor sketch follows the list).
- corePoolSize: core thread count
The number of threads kept in the pool permanently, even when they are idle.
- maximumPoolSize: maximum number of threads
The upper bound on the pool size. When the core threads are all busy and the work queue is full, new threads are started to handle additional tasks, up to this limit.
- keepAliveTime: idle thread timeout
The maximum time an idle thread waits for a task from the queue. If no task can be obtained within this threshold, the thread is destroyed, provided that the current number of threads is greater than corePoolSize.
- workQueue: task queue
The work queue is the buffer for submitted tasks: when a task is submitted to the thread pool and cannot be run immediately, it is pushed onto this queue. It is therefore best to give the queue an upper bound so it cannot grow without limit.
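Putting the four parameters together, a typical bounded configuration might look like the following sketch; the concrete values (4 and 8 threads, 60 seconds, 100 queue slots) and the class name PoolConfig are arbitrary examples, not recommendations:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolConfig {
    public static ThreadPoolExecutor newBoundedPool() {
        // 4 core threads, at most 8 threads, idle extra threads reclaimed
        // after 60 seconds, and a bounded queue of 100 waiting tasks.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4,                                      // corePoolSize
                8,                                      // maximumPoolSize
                60L, TimeUnit.SECONDS,                  // keepAliveTime
                new ArrayBlockingQueue<Runnable>(100)); // bounded workQueue
        // Optionally let the core threads time out too, which getTask()
        // above checks via allowCoreThreadTimeOut.
        pool.allowCoreThreadTimeOut(true);
        return pool;
    }
}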
Several standard pools built on ThreadPoolExecutor (the Executors factory methods)
public static ExecutorService newCachedThreadPool() {
    return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                  60L, TimeUnit.SECONDS,
                                  new SynchronousQueue<Runnable>());
}
- newCachedThreadPool: the number of threads is not fixed.
Advantage: every new task can be picked up immediately by a reused or newly created thread. Disadvantage: it is only suitable for tasks that complete quickly; long-running tasks can make the number of threads grow without bound and exhaust system resources, as the sketch below illustrates.
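The disadvantage follows from the SynchronousQueue: a task submitted while all existing threads are busy always starts a new thread. A rough sketch (the task count, sleep time, and class name CachedPoolGrowth are made up):

import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;

public class CachedPoolGrowth {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = (ThreadPoolExecutor) Executors.newCachedThreadPool();
        // 100 long-running tasks: each submission finds every existing
        // thread busy, so each one creates a brand-new thread.
        for (int i = 0; i < 100; i++) {
            pool.execute(new Runnable() {
                public void run() {
                    try {
                        Thread.sleep(10000); // a "slow" task
                    } catch (InterruptedException ignored) { }
                }
            });
        }
        System.out.println("threads created: " + pool.getPoolSize()); // roughly 100
        pool.shutdownNow();
    }
}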
public static ExecutorService newSingleThreadExecutor() {
    return new FinalizableDelegatedExecutorService
        (new ThreadPoolExecutor(1, 1,
                                0L, TimeUnit.MILLISECONDS,
                                new LinkedBlockingQueue<Runnable>()));
}
Advantage: tasks are executed one at a time, so they never compete with each other for system resources. Disadvantage: on a multi-core machine, a single thread cannot make full use of the available CPUs. The short sketch below shows the strictly sequential behaviour.
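A quick sketch of that sequential behaviour (the task bodies and the class name SingleThreadDemo are invented): every task runs on the same worker thread, in submission order.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SingleThreadDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        for (int i = 0; i < 5; i++) {
            final int id = i;
            pool.execute(new Runnable() {
                public void run() {
                    // Always the same thread name; tasks print in order 0..4.
                    System.out.println(Thread.currentThread().getName() + " -> task " + id);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}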
public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads,
                                  0L, TimeUnit.MILLISECONDS,
                                  new LinkedBlockingQueue<Runnable>());
}
Advantage: the number of threads is fixed, so there is no overhead from repeatedly creating and destroying threads. Disadvantage: the queue is unbounded, and once the threads have been created they are never reclaimed; the sketch below shows the extra tasks simply piling up in the queue.
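To illustrate the unbounded queue, the sketch below submits more tasks than there are threads; the extras wait in the LinkedBlockingQueue instead of triggering new threads or rejection (the sizes and the class name FixedPoolQueueDemo are arbitrary):

import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class FixedPoolQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = (ThreadPoolExecutor) Executors.newFixedThreadPool(2);
        // 10 tasks but only 2 threads: 2 run immediately, 8 wait in the queue.
        for (int i = 0; i < 10; i++) {
            pool.execute(new Runnable() {
                public void run() {
                    try {
                        Thread.sleep(1000);
                    } catch (InterruptedException ignored) { }
                }
            });
        }
        System.out.println("running threads: " + pool.getPoolSize()
                + ", queued tasks: " + pool.getQueue().size()); // typically 2 and 8
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}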
Principles and usage of ThreadPoolExecutor