Java Thread Pools Explained, with Example Code

Source: Internet
Author: User

The technical background of the thread pool

In object-oriented programming, creating and destroying objects is time-consuming, because creating an object consumes memory and possibly other resources. This is especially true in Java, where the virtual machine tracks every object so that it can be garbage collected once it is no longer referenced.

So one way to improve the efficiency of a service program is to minimize the number of objects created and destroyed, especially resource-intensive objects. How to reuse existing objects to provide service is a key problem, and it is the reason various "resource pooling" techniques have arisen.

For example, many common components in Android rely on the concept of a "pool": the various image-loading libraries, network-request libraries, and even Android's message-passing mechanism, where Message.obtain() draws from a pool of Message objects. The concept is therefore very important, and this article introduces thread pooling, which follows the same idea.

Advantages of the thread pool:

1. Reuses the threads in the pool, reducing the performance overhead of creating and destroying thread objects;

2. Effectively controls the maximum number of concurrent threads, improving the utilization of system resources while avoiding excessive resource contention and congestion;

3. Makes multithreading simple to manage, so threads can be used simply and efficiently.

The Thread Pool Framework: Executor

Thread pools in Java are implemented through the Executor framework, which includes the Executor, Executors, ExecutorService, ThreadPoolExecutor, Callable, Future, and FutureTask classes and interfaces.

Executor: the interface all thread pools implement; it has only one method.

public interface Executor {
    void execute(Runnable command);
}
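To make the contract concrete, here is a minimal sketch (not from the original article; the class names are illustrative): two trivial Executor implementations, one running the command on the caller's thread and one spawning a thread per task.

```java
import java.util.concurrent.Executor;

// Illustrative Executor implementations: the interface only promises that
// the command will run, not how or on which thread.
public class ExecutorDemo {
    // Runs each task synchronously on the caller's thread.
    static class DirectExecutor implements Executor {
        public void execute(Runnable command) {
            command.run();
        }
    }

    // Runs each task on a freshly created thread.
    static class ThreadPerTaskExecutor implements Executor {
        public void execute(Runnable command) {
            new Thread(command).start();
        }
    }

    // Executes one task with DirectExecutor and returns the value it produced.
    static int runDirect() {
        int[] box = new int[1];
        new DirectExecutor().execute(() -> box[0] = 42);
        return box[0]; // DirectExecutor ran the task before returning
    }

    public static void main(String[] args) {
        System.out.println(runDirect()); // 42
    }
}
```

Both variants satisfy the interface above; a real thread pool is just a more careful implementation of that same single method.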

ExecutorService: adds behavior to Executor; it is the interface most directly implemented by the thread pool implementation classes.

Executors: provides a series of factory methods for creating thread pools; the thread pools returned all implement the ExecutorService interface.

ThreadPoolExecutor: the concrete thread pool implementation class; the commonly used thread pools are all built on this class. One of its constructors is as follows:

public ThreadPoolExecutor(int corePoolSize,
                          int maximumPoolSize,
                          long keepAliveTime,
                          TimeUnit unit,
                          BlockingQueue<Runnable> workQueue) {
    this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
         Executors.defaultThreadFactory(), defaultHandler);
}

corePoolSize: the number of core threads in the pool. By default, core threads stay alive even when idle. If allowCoreThreadTimeOut is set to true, core threads are also subject to timeouts (so the pool can shrink to zero threads), and keepAliveTime then controls the timeout for all threads.

maximumPoolSize: the maximum number of threads the pool allows;

keepAliveTime: the timeout after which an idle thread is terminated;

unit: an enumeration (TimeUnit) giving the units of keepAliveTime;

workQueue: the BlockingQueue<Runnable> that holds the waiting tasks.

BlockingQueue: a blocking queue is a tool under java.util.concurrent used primarily to control thread synchronization. If the BlockingQueue is empty, an operation that takes from it blocks and waits until the queue becomes non-empty and the waiting thread is woken. Similarly, if the BlockingQueue is full, any attempt to store into it blocks and waits until space becomes available. Blocking queues are commonly used in producer/consumer scenarios: producers are the threads that add elements to the queue, and consumers are the threads that take elements from it; the queue is the container the producer stores elements into and the consumer takes them from. Concrete implementations include LinkedBlockingQueue, ArrayBlockingQueue, and so on. Internally, blocking and waking are generally implemented with Lock and Condition (explicit locks).
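A small runnable sketch of the producer/consumer pattern just described, using ArrayBlockingQueue (the class name is real JDK API; the capacity and values here are chosen for illustration):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Producer/consumer over a bounded blocking queue:
// put() blocks while the queue is full; take() blocks while it is empty.
public class ProducerConsumerDemo {
    static int consumeAll() {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(2); // capacity 2
        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= 5; i++) {
                    queue.put(i); // blocks when the queue already holds 2 items
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        int sum = 0;
        try {
            for (int i = 0; i < 5; i++) {
                sum += queue.take(); // blocks until the producer adds an element
            }
            producer.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return sum; // 1+2+3+4+5
    }

    public static void main(String[] args) {
        System.out.println(consumeAll()); // 15
    }
}
```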

The thread pool works as follows:

When a thread pool is newly created, there are no threads inside it. The task queue is passed in as a parameter, but even if the queue already contains tasks, the pool will not execute them immediately.

When the execute() method is invoked to add a task, the thread pool makes the following judgments:

If the number of running threads is less than corePoolSize, create a thread to run the task immediately;

If the number of running threads is greater than or equal to corePoolSize, put the task in the queue;

If the queue is full and the number of running threads is less than maximumPoolSize, create a non-core thread to run the task immediately;

If the queue is full and the number of running threads is greater than or equal to maximumPoolSize, the thread pool throws a RejectedExecutionException.

When a thread completes a task, it takes the next task from the queue to execute.

When a thread has been idle for more than a certain time (keepAliveTime), the pool checks whether the number of currently running threads is greater than corePoolSize, and if so, the thread is stopped. So once all of the pool's tasks are complete, it eventually shrinks back to corePoolSize threads.
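The four cases above can be observed directly with a deliberately tiny, hand-built pool (sizes chosen here purely for illustration): 1 core thread, 2 threads maximum, and a queue of capacity 1. With every worker blocked on a latch, the fourth execute() must be rejected.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Demonstrates all four execute() decisions on a tiny pool.
public class PoolWorkflowDemo {
    static boolean fourthTaskRejected() {
        CountDownLatch release = new CountDownLatch(1);
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 2, 0L, TimeUnit.MILLISECONDS, new ArrayBlockingQueue<>(1));
        Runnable blocker = () -> {
            try { release.await(); } catch (InterruptedException ignored) { }
        };
        boolean rejected = false;
        pool.execute(blocker); // 1st: running < corePoolSize -> new core thread
        pool.execute(blocker); // 2nd: core busy -> placed in the queue
        pool.execute(blocker); // 3rd: queue full, running < max -> non-core thread
        try {
            pool.execute(blocker); // 4th: queue full, running >= max -> rejected
        } catch (RejectedExecutionException e) {
            rejected = true;
        }
        release.countDown(); // unblock the workers so the pool can terminate
        pool.shutdown();
        try {
            pool.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return rejected;
    }

    public static void main(String[] args) {
        System.out.println(fourthTaskRejected()); // true
    }
}
```

Note that the first task never enters the queue: it is handed directly to the new worker, which is why the queue holds exactly one task when the third execute() arrives.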

Creation and use of thread pools

Thread pools are created through the static factory methods of the Executors utility class. The following are several commonly used thread pools.

SingleThreadExecutor: a single background thread (its buffer queue is unbounded).

public static ExecutorService newSingleThreadExecutor() {
    return new FinalizableDelegatedExecutorService(
        new ThreadPoolExecutor(1, 1,
            0L, TimeUnit.MILLISECONDS,
            new LinkedBlockingQueue<Runnable>()));
}

Creates a single-threaded pool. The pool has only one core thread working, which is equivalent to executing all tasks serially on a single thread. If this one thread ends because of an exception, a new thread replaces it. This pool guarantees that all tasks are executed in the order in which they were submitted.
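A quick check of that ordering guarantee (a sketch; the task bodies are arbitrary): tasks submitted to a single-thread pool append to a buffer strictly in submission order.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// The single-thread pool runs tasks strictly in submission order.
public class SingleThreadDemo {
    static String runInOrder() {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        StringBuilder order = new StringBuilder(); // touched by one thread only
        for (int i = 1; i <= 5; i++) {
            final int n = i;
            pool.execute(() -> order.append(n));
        }
        pool.shutdown();
        try {
            pool.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return order.toString(); // always "12345"
    }

    public static void main(String[] args) {
        System.out.println(runInOrder()); // 12345
    }
}
```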

FixedThreadPool: a fixed-size thread pool with only core threads (its buffer queue is unbounded).

public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads,
        0L, TimeUnit.MILLISECONDS,
        new LinkedBlockingQueue<Runnable>());
}
Creates a fixed-size thread pool. A thread is created each time a task is submitted, until the number of threads reaches the pool's maximum size. The pool's size then remains constant; if a thread ends because of an execution exception, the pool replaces it with a new one.
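For example (the sizes here are illustrative), a fixed pool of 4 threads can split 100 small tasks among its workers; an AtomicInteger keeps the shared sum thread-safe:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// A fixed pool of 4 threads processes 100 small tasks in parallel.
public class FixedPoolDemo {
    static int sumWithFixedPool() {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        AtomicInteger sum = new AtomicInteger();
        for (int i = 1; i <= 100; i++) {
            final int n = i;
            pool.execute(() -> sum.addAndGet(n)); // atomic, so no lost updates
        }
        pool.shutdown();
        try {
            pool.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return sum.get(); // 1 + 2 + ... + 100
    }

    public static void main(String[] args) {
        System.out.println(sumWithFixedPool()); // 5050
    }
}
```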

CachedThreadPool: an unbounded pool with automatic thread reclamation.

public static ExecutorService newCachedThreadPool() {
    return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
        60L, TimeUnit.SECONDS,
        new SynchronousQueue<Runnable>());
}

If the pool has more threads than are needed to process the tasks, idle threads (those with no task for 60 seconds) are reclaimed; when the number of tasks increases, the pool adds new threads to handle them. The pool does not limit its own size; the maximum number of threads depends entirely on how many the operating system (or JVM) can create. SynchronousQueue is a blocking queue with no internal capacity: each insert must wait for a corresponding take.

ScheduledThreadPool: a pool with a fixed number of core threads and an unbounded overall size. This pool supports tasks that need timed and periodic execution.

public static ScheduledExecutorService newScheduledThreadPool(int corePoolSize) {
    // simplified: delegates to ThreadPoolExecutor with
    // (corePoolSize, Integer.MAX_VALUE, DEFAULT_KEEPALIVE_MILLIS,
    //  MILLISECONDS, new DelayedWorkQueue())
    return new ScheduledThreadPoolExecutor(corePoolSize);
}

Creates a pool that executes tasks periodically. Idle non-core threads are reclaimed after DEFAULT_KEEPALIVE_MILLIS.
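A minimal sketch of delayed execution with the scheduled pool (the delay and values are chosen for illustration):

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// schedule() runs a Callable after the given delay; the ScheduledFuture
// carries its result.
public class ScheduledDemo {
    static int runDelayed() {
        ScheduledExecutorService pool = Executors.newScheduledThreadPool(1);
        try {
            ScheduledFuture<Integer> future =
                    pool.schedule(() -> 6 * 7, 50, TimeUnit.MILLISECONDS);
            return future.get(); // blocks until the delayed task has run
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(runDelayed()); // 42
    }
}
```

For periodic work, scheduleAtFixedRate() and scheduleWithFixedDelay() follow the same pattern with a repeat interval.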

There are two ways to submit tasks to the thread pool, both very commonly used:

Execute

executorService.execute(Runnable runnable);

Submit

Future<?> task = executorService.submit(Runnable runnable);

Future<T> task = executorService.submit(Runnable runnable, T result);

Future<T> task = executorService.submit(Callable<T> callable);

The implementations of submit(Callable callable) and submit(Runnable runnable) follow the same pattern:

public <T> Future<T> submit(Callable<T> task) {
    if (task == null) throw new NullPointerException();
    FutureTask<T> ftask = newTaskFor(task);
    execute(ftask);
    return ftask;
}

As you can see, submit is for tasks that return a result: a FutureTask object is returned, so the result can later be obtained through its get() method. submit ultimately also calls execute(Runnable runnable); it simply wraps the Callable or Runnable object in a FutureTask. Because FutureTask is itself a Runnable, it can be run by execute(). For how Callable and Runnable objects are wrapped into FutureTask objects, see the material on Callable, Future, and FutureTask.
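Putting submit() and Future.get() together (a sketch; the Callable body is arbitrary):

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// submit(Callable) wraps the task in a FutureTask; get() blocks until done.
public class SubmitDemo {
    static int submitAndGet() {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        try {
            Future<Integer> future = pool.submit(() -> {
                int sum = 0;
                for (int i = 1; i <= 10; i++) sum += i;
                return sum;
            });
            return future.get(); // blocks until the Callable returns
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(submitAndGet()); // 55
    }
}
```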

Principles of the Thread Pool Implementation

If this post only covered usage, it would not be of much value beyond walking through the Executor-related API. The thread pool implementation does not use the synchronized keyword; instead it relies on volatile, Lock and synchronous (blocking) queues, the atomic classes, FutureTask, and so on, because these perform better. Understanding the implementation is a good way to learn the concurrency-control ideas in the source code.

The advantages of the thread pool mentioned at the outset can be summed up in the following three points:

Thread Reuse

Controlling the maximum number of concurrent numbers

Manage Threads

1. Thread Reuse Process

To understand the thread reuse principle, one should first understand the thread lifecycle.

In its life cycle, a thread passes through five states: New, Runnable, Running, Blocked, and Dead.

Thread creates a new thread via new, which just initializes some thread information, such as its name, ID, and thread group; at this point it can be considered an ordinary object. After the thread's start() is invoked, the Java virtual machine creates a method-call stack and a program counter for it, and the thread's started flag becomes true; calling start() again afterwards throws an exception.

A thread in this state has not started running; it only means the thread is eligible to run. When the thread actually starts running depends on the thread scheduler in the JVM. When the thread acquires the CPU, its run() method is invoked. Do not call a thread's run() method yourself. Afterwards, the thread switches between Runnable, Running, and Blocked according to CPU scheduling, until the run() method ends or the thread is stopped in some other way and enters the Dead state.

So the principle of thread reuse is to keep the thread alive (Runnable, Running, or Blocked). Let's look at how ThreadPoolExecutor achieves this.

The inner Worker class in ThreadPoolExecutor controls the reuse of threads. Here is simplified Worker code to make it easier to understand:

private final class Worker implements Runnable {
    final Thread thread;
    Runnable firstTask;

    Worker(Runnable firstTask) {
        this.firstTask = firstTask;
        this.thread = getThreadFactory().newThread(this);
    }

    public void run() {
        runWorker(this);
    }
}

// in ThreadPoolExecutor:
final void runWorker(Worker w) {
    Runnable task = w.firstTask;
    w.firstTask = null;
    while (task != null || (task = getTask()) != null) {
        task.run();
        task = null;
    }
}

A Worker is a Runnable that holds a Thread; that Thread is the thread to be started. When a Worker object is created, a new Thread object is created at the same time, with the Worker itself passed as the parameter. So when the Worker's thread has start() invoked, it is actually the Worker's run() method that runs, which then calls runWorker(). There, a while loop continually fetches Runnable objects from getTask() and executes them in sequence. How does getTask() obtain Runnable objects?

Again in simplified code:

private Runnable getTask() {
    if (/* some special case */) {
        return null;
    }
    Runnable r = workQueue.take();
    return r;
}

This workQueue is the BlockingQueue of tasks that was passed in when the ThreadPoolExecutor was initialized; it holds the Runnable tasks waiting to be executed. Because BlockingQueue is a blocking queue, if it is empty, blockingQueue.take() blocks and waits until a new object is added to the queue and the blocked thread is woken. So in general a worker thread's run() method never ends; it keeps executing Runnables taken from the workQueue. This achieves thread reuse.
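The reuse loop can be reproduced in miniature (a standalone sketch, not ThreadPoolExecutor code; the poison-pill shutdown is an assumption of this sketch): one long-lived thread repeatedly takes Runnables from a BlockingQueue, just like runWorker() and getTask().

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// A stripped-down "worker": one thread loops, taking Runnables from a
// BlockingQueue and executing them, until it receives a sentinel task.
public class TinyWorkerDemo {
    static final Runnable POISON = () -> { }; // sentinel that ends the loop

    static int runThreeTasks() {
        BlockingQueue<Runnable> workQueue = new LinkedBlockingQueue<>();
        int[] counter = {0};
        Thread worker = new Thread(() -> {
            try {
                Runnable task;
                while ((task = workQueue.take()) != POISON) { // blocks when empty
                    task.run(); // the same thread is reused for every task
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start();
        try {
            for (int i = 0; i < 3; i++) {
                workQueue.put(() -> counter[0]++);
            }
            workQueue.put(POISON);
            worker.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return counter[0];
    }

    public static void main(String[] args) {
        System.out.println(runThreeTasks()); // 3
    }
}
```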

2. Controlling the Maximum Concurrency

When are Runnables put into the workQueue? When are Workers created, and when does the Thread in a Worker call start() to open a new thread and execute the Worker's run() method? From the analysis above, the tasks in runWorker() are executed one after another, serially, so where does the concurrency come from?

It is natural to suspect that all of the above happens in execute(Runnable runnable). Let's see how it is done in execute.

Execute

Simplified code

public void execute(Runnable command) {
    if (command == null)
        throw new NullPointerException();
    int c = ctl.get();
    // current number of threads < corePoolSize
    if (workerCountOf(c) < corePoolSize) {
        // start a new core thread for this task directly
        if (addWorker(command, true))
            return;
        c = ctl.get();
    }
    // number of active threads >= corePoolSize;
    // runState is RUNNING && the queue is not full
    if (isRunning(c) && workQueue.offer(command)) {
        int recheck = ctl.get();
        // re-check the running state; if the pool is no longer running,
        // remove the task from the workQueue and reject it
        if (!isRunning(recheck) && remove(command))
            reject(command); // the rejection policy specified for this pool
    } else if (!addWorker(command, false)) {
        // reached in two cases:
        // 1. the pool is not in the RUNNING state, so the new task is rejected
        // 2. the queue is full and starting a new thread failed
        //    (workerCount >= maximumPoolSize)
        reject(command);
    }
}

Addworker:

Simplified code

private boolean addWorker(Runnable firstTask, boolean core) {
    int wc = workerCountOf(ctl.get());
    if (wc >= (core ? corePoolSize : maximumPoolSize)) {
        return false;
    }
    Worker w = new Worker(firstTask);
    final Thread t = w.thread;
    t.start();
    return true;
}

Following the code, we can see the task-addition logic of the thread pool workflow mentioned above:

* If the number of running threads is less than corePoolSize, create a thread to run the task immediately;
* If the number of running threads is greater than or equal to corePoolSize, put the task in the queue;
* If the queue is full and the number of running threads is less than maximumPoolSize, create a non-core thread to run the task immediately;
* If the queue is full and the number of running threads is greater than or equal to maximumPoolSize, the thread pool throws a RejectedExecutionException.

This is why Android's AsyncTask throws a RejectedExecutionException when executing in parallel beyond its maximum number of tasks; for details, see the latest AsyncTask source code and discussions of AsyncTask's dark side.

If addWorker successfully creates a new thread, that thread is started via start(), and the Worker's firstTask is the first task executed in its run().

Although each Worker processes its tasks serially, when more than one Worker is created they all share the one workQueue, so tasks are processed in parallel.

Therefore, the maximum concurrency is controlled by corePoolSize and maximumPoolSize. Together with the workflow described above, this process should now be easy to understand.

If you do Android development and are familiar with the Handler mechanism, this process may look familiar: parts of it resemble how Handler, Looper, and Message work together. Handler.sendMessage(Message) is analogous to execute(Runnable); the Message queue maintained in Looper is analogous to the BlockingQueue, except that it must do its own synchronization; and Looper's loop() continually taking Messages from the queue is the same idea as runWorker() continually taking Runnables from the BlockingQueue.

3. Managing Threads

The thread pool manages the reuse of threads, the control of concurrency, and the destruction of threads. Reuse and concurrency control were covered above, and the thread-management logic is interspersed within them, so it too is easy to understand.

ThreadPoolExecutor has an AtomicInteger field named ctl. This one variable stores two pieces of information:

the run state of the pool and the total number of worker threads. The low 29 bits store the thread count and the high 3 bits store the runState; bitwise operations extract the different values.

private final AtomicInteger ctl = new AtomicInteger(ctlOf(RUNNING, 0));

// get the run state of the pool
private static int runStateOf(int c) {
    return c & ~CAPACITY;
}

// get the number of workers
private static int workerCountOf(int c) {
    return c & CAPACITY;
}

// determine whether the pool is running
private static boolean isRunning(int c) {
    return c < SHUTDOWN;
}

The shutdown process of the thread pool is analyzed here mainly through shutdown() and shutdownNow(). First, the thread pool has five states that control task addition and execution. The three main ones are:

RUNNING: the pool operates normally, accepting new tasks and processing the tasks in the queue;

SHUTDOWN: no new tasks are accepted, but the tasks in the queue are still executed;

STOP: no new tasks are accepted, and the tasks in the queue are not processed.

The shutdown() method sets the runState to SHUTDOWN and terminates all idle threads, while threads that are still working are unaffected, so the tasks waiting in the queue are executed.

The shutdownNow() method sets the runState to STOP. Unlike shutdown(), this method attempts to terminate all threads, so the tasks in the queue are not executed.
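The difference is directly observable (a sketch; the latch is used only to keep the first task busy): with a single-thread pool running one blocked task and holding two queued tasks, shutdownNow() returns the two unexecuted tasks.

```java
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// shutdownNow() interrupts workers and returns the tasks still in the queue.
public class ShutdownDemo {
    static int unexecutedCount() {
        CountDownLatch release = new CountDownLatch(1);
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Runnable blocker = () -> {
            try { release.await(); } catch (InterruptedException ignored) { }
        };
        pool.execute(blocker);   // handed straight to the single worker
        pool.execute(() -> { }); // waits in the queue
        pool.execute(() -> { }); // waits in the queue
        List<Runnable> unexecuted = pool.shutdownNow(); // interrupt + drain queue
        release.countDown();
        try {
            pool.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return unexecuted.size();
    }

    public static void main(String[] args) {
        System.out.println(unexecutedCount()); // 2
    }
}
```

With shutdown() instead, the returned-task list would not exist and both queued tasks would still run before termination.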

Summary

By analyzing the ThreadPoolExecutor source code, we have seen the overall process of creating a thread pool, adding tasks, and executing them. Once these processes are familiar, using thread pools becomes much easier.

The concurrency control and the producer/consumer style of task processing seen here can also help in understanding or solving other related problems later. For example, in Android's Handler mechanism, the message queue in Looper could equally well be handled with a BlockingQueue; that insight is one of the rewards of reading the source.

The above is a summary of Java thread pool material; further information will be added over time. Thank you for supporting this site!
