Learn Java thread pool from use to principle

Source: Silencedut, http://www.codeceo.com/article/java-threadpool-learn.html

Technical background of the thread pool

In object-oriented programming, creating and destroying objects takes time, because creating an object consumes memory and other resources. This is even more true in Java, where the virtual machine tracks every object so that it can be garbage collected once it is no longer in use.

So one way to improve the efficiency of a service process is to minimize the number of objects created and destroyed, especially for resource-intensive objects. How to reuse existing objects to provide the service is a key problem to solve; in fact, this is the reason "pooled resource" techniques exist.

For example, many common components in Android rely on the concept of a "pool", such as the various image loading libraries and network request libraries. Even Android's messaging mechanism uses a Message pool when Message.obtain() is called, so the concept is important. The thread pooling technique described in this article follows the same idea.

Benefits of the thread pool:

    • Reuses threads in the pool, reducing the performance overhead of creating and destroying thread objects;
    • Effectively controls the maximum number of concurrent threads, improving system resource utilization while avoiding excessive resource contention and blocking;
    • Makes multi-threading simple to manage, so that threads are easy and efficient to use.
Thread pool framework: Executor

Thread pools in Java are implemented through the Executor framework, which includes the classes and interfaces Executor, Executors, ExecutorService, ThreadPoolExecutor, Callable, Future, FutureTask, and so on.

Executor: the root interface of all thread pools; it has only one method.

public interface Executor {
    void execute(Runnable command);
}

ExecutorService: extends Executor with additional behavior; it is the interface most directly implemented by the thread pool implementation classes.

Executors: provides a series of factory methods for creating thread pools; the returned thread pools all implement the ExecutorService interface.

ThreadPoolExecutor: the concrete implementation class of the thread pool; the various thread pools commonly used are all based on this class.
Its constructor is as follows:

public ThreadPoolExecutor(int corePoolSize,
                          int maximumPoolSize,
                          long keepAliveTime,
                          TimeUnit unit,
                          BlockingQueue<Runnable> workQueue) {
    this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
         Executors.defaultThreadFactory(), defaultHandler);
}
    • corePoolSize: the number of core threads in the pool. The number of running threads never exceeds corePoolSize, and by default core threads stay alive even when idle. If allowCoreThreadTimeOut is set to true, the number of core threads is effectively 0 and keepAliveTime controls the timeout for all threads.
    • maximumPoolSize: the maximum number of threads allowed by the pool;
    • keepAliveTime: the timeout after which an idle thread is terminated;
    • unit: an enumeration representing the time unit of keepAliveTime;
    • workQueue: the BlockingQueue<Runnable> that holds waiting tasks.
    • BlockingQueue: the blocking queue is a tool under java.util.concurrent mainly used to control thread synchronization. If a BlockingQueue is empty, an attempt to take from it blocks and waits until something is put into the queue and the waiting thread is woken up. Similarly, if the BlockingQueue is full, any attempt to put something into it blocks and waits until space becomes available and the thread is woken up to continue.
      Blocking queues are often used in producer-consumer scenarios, where the producer is the thread that adds elements to the queue and the consumer is the thread that takes elements from it: the blocking queue is the container in which the producer stores elements and from which the consumer takes them. Concrete implementation classes include LinkedBlockingQueue, ArrayBlockingQueue, and so on. Internally, blocking and wake-up are generally implemented with Lock and Condition; a minimal producer-consumer sketch follows this list.
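
Here is that minimal sketch of the blocking behavior, using ArrayBlockingQueue; the queue capacity and item count are arbitrary values chosen for illustration:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BlockingQueueDemo {
    public static void main(String[] args) {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(2); // bounded queue

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    queue.put(i);                 // blocks while the queue is full
                    System.out.println("produced " + i);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    Integer value = queue.take(); // blocks while the queue is empty
                    System.out.println("consumed " + value);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}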

The thread pool works as follows:

    1. When a thread pool has just been created, there are no threads in it. The task queue is passed in as a parameter. However, even if the queue already contains tasks, the pool does not execute them immediately.
    2. When the execute() method is called to add a task, the thread pool makes the following judgment:
      • if the number of running threads is less than corePoolSize, create a thread to run the task immediately;
      • if the number of running threads is greater than or equal to corePoolSize, put the task into the queue;
      • if the queue is full and the number of running threads is less than maximumPoolSize, create a non-core thread to run the task immediately;
      • if the queue is full and the number of running threads is greater than or equal to maximumPoolSize, the thread pool throws a RejectedExecutionException.
    3. When a thread finishes a task, it takes the next task from the queue and executes it.
    4. When a thread has had nothing to do for longer than a certain time (keepAliveTime), the thread pool checks whether the number of currently running threads is greater than corePoolSize; if so, the thread is stopped. So once all tasks have been completed, the pool eventually shrinks back to corePoolSize threads. (A small sketch illustrating these rules follows this list.)
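
To make these rules concrete, here is a small sketch; the pool sizes, queue capacity, task duration and task count are arbitrary values chosen only for illustration. With corePoolSize = 1, maximumPoolSize = 2 and a queue of capacity 1, a fourth task submitted while the others are still running is typically rejected:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolRulesDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1,                                   // corePoolSize
                2,                                   // maximumPoolSize
                30, TimeUnit.SECONDS,                // keepAliveTime for non-core threads
                new ArrayBlockingQueue<>(1));        // bounded work queue

        Runnable longTask = () -> {
            try { Thread.sleep(1000); } catch (InterruptedException e) { }
        };

        pool.execute(longTask); // 1st task: a new core thread is created and runs it
        pool.execute(longTask); // 2nd task: the core thread is busy, so it goes into the queue
        pool.execute(longTask); // 3rd task: the queue is full, so a non-core thread is created
        try {
            pool.execute(longTask); // 4th task: queue full and maximumPoolSize reached
        } catch (RejectedExecutionException e) {
            System.out.println("task rejected: " + e);
        }
        pool.shutdown();
    }
}
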
Creation and use of thread pools

Thread pools are generally created through the static factory methods of the utility class Executors. Here are a few common thread pools.

SingleThreadExecutor: a single background thread (its buffer queue is unbounded).

public static ExecutorService newSingleThreadExecutor() {
    return new FinalizableDelegatedExecutorService(
            new ThreadPoolExecutor(1, 1,
                                   0L, TimeUnit.MILLISECONDS,
                                   new LinkedBlockingQueue<Runnable>()));
}

Creates a thread pool with a single thread. This pool has only one core thread working, which is equivalent to executing all tasks serially on a single thread. If this unique thread ends because of an exception, a new thread replaces it. This pool guarantees that all tasks are executed in the order in which they were submitted.
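
A brief usage sketch of that ordering guarantee (the task count is an arbitrary example value):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SingleThreadDemo {
    public static void main(String[] args) {
        ExecutorService single = Executors.newSingleThreadExecutor();
        for (int i = 0; i < 5; i++) {
            final int index = i;
            // tasks run one after another, in submission order, on the same thread
            single.execute(() -> System.out.println(
                    "task " + index + " on " + Thread.currentThread().getName()));
        }
        single.shutdown(); // previously submitted tasks still run to completion
    }
}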

FixedThreadPool: a fixed-size thread pool with only core threads (its buffer queue is unbounded).

public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads,
                                  0L, TimeUnit.MILLISECONDS,
                                  new LinkedBlockingQueue<Runnable>());
}

Creates a fixed-size thread pool. Each time a task is submitted a thread is created, until the number of threads reaches the pool's maximum size. Once the maximum size is reached it stays constant; if a thread ends because of an execution exception, the pool replenishes it with a new thread.

CachedThreadPool: an unbounded thread pool with automatic thread reclamation.

public static ExecutorService newCachedThreadPool() {
    return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                  60L, TimeUnit.SECONDS,
                                  new SynchronousQueue<Runnable>());
}

If the size of the pool exceeds the number of threads needed to process the tasks, partially idle threads (those that have not run a task for 60 seconds) are reclaimed; when the number of tasks increases, the pool can intelligently add new threads to handle them. This pool does not limit its own size; the size depends entirely on the maximum number of threads the operating system (or JVM) can create. SynchronousQueue is a blocking queue with no internal capacity: each insert must wait for a corresponding take (a direct hand-off).
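
A small sketch of that reclamation-and-reuse behavior, assuming short tasks with gaps between them (the sleep duration and loop count are arbitrary choices; the exact thread names printed depend on timing, but an idle thread is usually reused):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CachedPoolDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService cached = Executors.newCachedThreadPool();
        for (int i = 0; i < 3; i++) {
            cached.execute(() ->
                    System.out.println("running on " + Thread.currentThread().getName()));
            Thread.sleep(200); // the previous task has likely finished, so its idle thread is reused
        }
        cached.shutdown();
    }
}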

ScheduledThreadPool: a fixed number of core threads with an unbounded maximum size. This pool supports scheduled and periodic task execution.

public static ScheduledExecutorService newScheduledThreadPool(int corePoolSize) {
    return new ScheduledThreadPoolExecutor(corePoolSize);
}

public ScheduledThreadPoolExecutor(int corePoolSize) {
    super(corePoolSize, Integer.MAX_VALUE,
          DEFAULT_KEEPALIVE_MILLIS, MILLISECONDS,
          new DelayedWorkQueue());
}

Creates a thread pool that executes tasks periodically. When idle, non-core threads are reclaimed after DEFAULT_KEEPALIVE_MILLIS.
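
A minimal sketch of delayed and periodic scheduling (the pool size, delays and period are arbitrary example values):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ScheduledPoolDemo {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);

        // run once after a 1-second delay
        scheduler.schedule(() -> System.out.println("delayed task"), 1, TimeUnit.SECONDS);

        // run every 2 seconds after an initial 1-second delay
        scheduler.scheduleAtFixedRate(
                () -> System.out.println("periodic task"), 1, 2, TimeUnit.SECONDS);

        // the scheduler keeps running until shutdown() is called
    }
}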

The two most commonly used methods for submitting tasks to the thread pool are:

Execute:

executorService.execute(runnable);

Submit:

Future<?> submit(Runnable task);
<T> Future<T> submit(Runnable task, T result);
<T> Future<T> submit(Callable<T> task);

The implementation of submit(Callable<T> callable) is shown below; submit(Runnable runnable) is similar.

public <T> Future<T> submit(Callable<T> task) {
    if (task == null) throw new NullPointerException();
    RunnableFuture<T> ftask = newTaskFor(task);
    execute(ftask);
    return ftask;
}

You can see that submit starts a task that has a return result and returns a FutureTask object, so the result can be obtained through the get() method. submit ultimately also calls execute(Runnable runnable); it simply wraps the Callable or Runnable object into a FutureTask object, and because FutureTask is itself a Runnable, it can be run by execute(). For how Callable and Runnable objects are wrapped into FutureTask objects, see "Callable and Future, FutureTask use".
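
A short usage sketch of submit() with a Callable, where the result is retrieved through Future.get(); the computation and pool size are arbitrary example values:

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SubmitDemo {
    public static void main(String[] args) throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        Callable<Integer> sum = () -> {
            int total = 0;
            for (int i = 1; i <= 100; i++) total += i;
            return total;
        };

        Future<Integer> future = pool.submit(sum);       // internally wrapped in a FutureTask
        System.out.println("result = " + future.get()); // get() blocks until the task completes
        pool.shutdown();
    }
}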

The principle of thread pool implementation

If this article only covered how to use thread pools it would not have much value beyond familiarity with the Executor-related APIs. The thread pool implementation does not use the synchronized keyword; instead it uses volatile, Lock and blocking queues, the atomic classes, FutureTask and so on, because they perform better. Understanding this process is a good way to learn the ideas of concurrency control in the source code.

The advantages of the thread pool mentioned at the outset can be summarized in the following three points:

    1. Thread reuse
    2. Controlling the maximum number of concurrent threads
    3. Managing threads
1. Thread Reuse Process

To understand the principle of thread reuse, first understand the thread life cycle.

During its life cycle, a thread goes through five states: New, Runnable (ready), Running, Blocked, and Dead.

A new thread is created with new, which simply initializes some thread information such as the thread name, id and thread group; at this point it can be regarded as just an ordinary object. After the thread's start() is called, the Java virtual machine creates a method call stack and a program counter for it and sets hasBeenStarted to true; calling start() again after that throws an exception.

A thread in this state has not started running yet; it only means the thread is ready to run. When the thread actually starts running depends on the scheduling of the thread scheduler in the JVM. When the thread acquires the CPU, its run() method is invoked; do not call a thread's run() method yourself. After that, the thread switches between ready, running and blocked according to CPU scheduling, until the run() method ends or the thread is stopped in some other way and enters the dead state.

So the principle of thread reuse is to keep a thread alive (ready, running, or blocked). Next, let's look at how ThreadPoolExecutor implements thread reuse.

ThreadPoolExecutor mainly uses the Worker class to control thread reuse. Look at the simplified Worker code below; it is easy to understand:

private final class Worker implements Runnable {
    final Thread thread;
    Runnable firstTask;

    Worker(Runnable firstTask) {
        this.firstTask = firstTask;
        this.thread = getThreadFactory().newThread(this);
    }

    public void run() {
        runWorker(this);
    }
}

final void runWorker(Worker w) {
    Runnable task = w.firstTask;
    w.firstTask = null;
    // keep taking tasks from the queue and running them
    while (task != null || (task = getTask()) != null) {
        task.run();
        task = null;
    }
}

The Worker is a Runnable and holds a Thread, which is the thread to be started. A new Thread object is created when a new Worker object is created, and the Worker itself is passed as the parameter to that Thread. So when the Thread's start() method is called, what actually runs is the Worker's run() method, which goes to runWorker(), where there is a while loop that keeps executing the Runnable objects obtained from getTask(). How does getTask() obtain Runnable objects?

Again the simplified code:

private Runnable getTask() {
    if (/* some special cases, e.g. the pool is shutting down */) {
        return null;
    }
    // take() blocks until a task becomes available in the queue
    Runnable r = workQueue.take();
    return r;
}

This workQueue is the BlockingQueue that holds tasks and was passed in when the ThreadPoolExecutor was initialized; the queue contains the Runnable tasks to be executed. Because BlockingQueue is a blocking queue, if it is empty, workQueue.take() enters a waiting state until a new object is added to the queue and the blocked thread is woken up. So in general a worker thread's run() method never ends; it keeps executing Runnable tasks from the workQueue, which achieves thread reuse.

2. Controlling the Maximum Number of Concurrent Threads

When is a Runnable put into the workQueue? When is a Worker created, and when does the Thread inside the Worker call start() to start a new thread that executes the Worker's run() method? From the analysis above, runWorker() in a Worker executes tasks one after another, serially, so where does the concurrency come from?

It is easy to guess that some of the work above happens in execute(Runnable runnable). Let's see how it is done in execute().

Execute

Simplified code

public void execute(Runnable command) {
    if (command == null)
        throw new NullPointerException();
    int c = ctl.get();
    // current number of threads < corePoolSize: start a new core thread directly
    if (workerCountOf(c) < corePoolSize) {
        if (addWorker(command, true))
            return;
        c = ctl.get();
    }
    // number of active threads >= corePoolSize
    // runState is RUNNING && the queue is not full: enqueue the task
    if (isRunning(c) && workQueue.offer(command)) {
        int recheck = ctl.get();
        // re-check the running state;
        // if no longer running, remove the task from the workQueue and reject it
        if (!isRunning(recheck) && remove(command))
            reject(command);
    }
    // reject the task using the pool's saturation policy; two cases:
    // 1. not in the RUNNING state, so new tasks are rejected
    // 2. the queue is full and starting a new thread failed (workerCount > maximumPoolSize)
    else if (!addWorker(command, false))
        reject(command);
}

addWorker:

Simplified code

private boolean addWorker(Runnable firstTask, boolean core) {
    // ... checks on the pool state and worker count are omitted ...
    Worker w = new Worker(firstTask);
    final Thread t = w.thread;
    t.start();
    return true;
}

Following the code, we can see the task-adding logic mentioned earlier in the thread pool workflow:

    • If the number of running threads is less than corePoolSize, create a thread to run the task immediately;
    • If the number of running threads is greater than or equal to corePoolSize, put the task into the queue;
    • If the queue is full and the number of running threads is less than maximumPoolSize, create a non-core thread to run the task immediately;
    • If the queue is full and the number of running threads is greater than or equal to maximumPoolSize, the thread pool throws a RejectedExecutionException.

This is why AsyncTask in Android throws a RejectedExecutionException when the maximum number of tasks is exceeded during parallel execution; see "AsyncTask source code interpretation" and "The dark side of AsyncTask", based on the latest version.

If addWorker successfully creates a new worker, the new thread is started via start(), and firstTask is the first task executed in that worker's run().

Although each worker processes its tasks serially, when more than one worker is created they process tasks in parallel, because they share a common workQueue.

Therefore, the maximum concurrency is controlled by corePoolSize and maximumPoolSize together; the explanation above describes the approximate process.

If you do Android development and are familiar with the Handler mechanism, this process may look familiar: parts of it resemble how Handler, Looper and Message are used. Handler.send(Message) corresponds to execute(Runnable); the Message queue maintained in Looper corresponds to the BlockingQueue, except that Looper has to synchronize its queue itself; and the loop() function in Looper, which keeps taking Messages from the Message queue, corresponds to runWorker() in the Worker taking tasks from the BlockingQueue.

3. Managing Threads

The thread pool makes it easy to manage thread reuse, control the number of concurrent threads, and handle thread destruction. Thread reuse and concurrency control were covered above, and thread management is interspersed in them, so it is easy to understand.

ThreadPoolExecutor has an AtomicInteger variable named ctl. Two things are packed into this variable:

    • the number of worker threads (workerCount)
    • the run state of the thread pool (runState)

The low 29 bits store the worker count and the high 3 bits store the runState; the different values are extracted with bitwise operations.

private final AtomicInteger ctl = new AtomicInteger(ctlOf(RUNNING, 0));

// get the run state of the thread pool
private static int runStateOf(int c)     { return c & ~CAPACITY; }
// get the number of workers
private static int workerCountOf(int c)  { return c & CAPACITY; }
// determine whether the thread pool is running
private static boolean isRunning(int c)  { return c < SHUTDOWN; }
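
For context, the related constants in the OpenJDK source show how the 29/3 bit split is expressed (exact formatting may differ slightly between JDK versions):

private static final int COUNT_BITS = Integer.SIZE - 3;        // 29
private static final int CAPACITY   = (1 << COUNT_BITS) - 1;   // mask for the low 29 bits

// runState is stored in the high-order 3 bits
private static final int RUNNING    = -1 << COUNT_BITS;
private static final int SHUTDOWN   =  0 << COUNT_BITS;
private static final int STOP       =  1 << COUNT_BITS;
private static final int TIDYING    =  2 << COUNT_BITS;
private static final int TERMINATED =  3 << COUNT_BITS;

// pack runState and workerCount into a single int
private static int ctlOf(int rs, int wc) { return rs | wc; }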

The shutdown process of the thread pool is analyzed mainly through shutdown() and shutdownNow(). First, the thread pool has five states that control task addition and execution; the following three are the main ones:

    • RUNNING state: the thread pool operates normally, accepting new tasks and processing tasks in the queue;
    • SHUTDOWN state: no longer accepts new tasks, but still executes tasks in the queue;
    • STOP state: no longer accepts new tasks and does not process tasks in the queue.

shutdown() sets runState to SHUTDOWN and terminates all idle threads; threads that are still working are unaffected, so the tasks waiting in the queue will still be executed. shutdownNow() sets runState to STOP; unlike shutdown(), it attempts to terminate all threads, so the tasks in the queue are not executed.
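
A hedged sketch of a typical graceful-shutdown pattern that combines the two methods (the pool size, task durations and timeout are arbitrary example values):

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ShutdownDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        for (int i = 0; i < 5; i++) {
            pool.execute(() -> {
                try { Thread.sleep(500); } catch (InterruptedException e) { }
            });
        }

        pool.shutdown();                        // stop accepting new tasks; queued tasks still run
        if (!pool.awaitTermination(1, TimeUnit.SECONDS)) {
            List<Runnable> notRun = pool.shutdownNow(); // interrupt workers and drain the queue
            System.out.println(notRun.size() + " queued tasks were never executed");
        }
    }
}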

Summary

By analyzing the ThreadPoolExecutor source code, we have gained an overall understanding of how a thread pool is created and how tasks are added and executed. Once you are familiar with these processes, using the thread pool becomes much easier.

The lessons about concurrency control and the producer-consumer model of task processing learned here can be a great help in understanding or solving other related problems in the future, such as the Handler mechanism in Android: the Message queue in Looper could just as well be handled with a BlockingQueue. That is one takeaway from reading the source code.
