Learning the Java Thread Pool: From Usage to Principles

Source: Internet
Author: User

Source: SilenceDut's technical blog, http://www.codeceo.com/article/java-threadpool-learn.html

In object-oriented programming, creating and destroying objects takes a lot of time, because creating an object requires acquiring memory and other resources. This is even more true in Java, where the virtual machine tracks every object so that it can be garbage-collected after it is no longer used.

Therefore, one way to improve the efficiency of a server program is to minimize the number of objects created and destroyed, especially resource-heavy objects. How to reuse existing objects to provide service is a key problem to solve, and it is the reason "pooled resource" techniques exist.

For example, many common components in Android rely on the concept of a "pool": the various image-loading and network-request libraries, and even the Android message-passing mechanism, where Message.obtain() returns a Message from the message pool. This concept is therefore very important, and the thread pool technique introduced in this article follows the same idea.

Advantages of Thread Pool:

  • Reuses threads in the pool, reducing the performance overhead of creating and destroying thread objects;
  • Effectively controls the maximum number of concurrent threads, improving system resource utilization and avoiding excessive resource contention and congestion;
  • Makes threads easy to manage, keeping the use of multithreading simple and efficient.
The Thread Pool Framework: Executor

Thread pools in Java are implemented through the Executor framework, which includes the classes and interfaces Executor, Executors, ExecutorService, ThreadPoolExecutor, Callable, Future, and FutureTask.

Executor: the root interface of all thread pools. It has only one method.

public interface Executor {
    void execute(Runnable command);
}

ExecutorService: extends Executor with additional methods; it is the interface most directly implemented by the thread pool classes.

Executors: provides a series of static factory methods for creating thread pools; the returned pools all implement the ExecutorService interface.

ThreadPoolExecutor: the concrete implementation class of the thread pool. The various thread pools are generally built on this class.
Its constructor is as follows:

public ThreadPoolExecutor(int corePoolSize,
                          int maximumPoolSize,
                          long keepAliveTime,
                          TimeUnit unit,
                          BlockingQueue<Runnable> workQueue) {
    this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
         Executors.defaultThreadFactory(), defaultHandler);
}
  • corePoolSize: the number of core threads in the pool. By default, core threads stay alive even when idle. If allowCoreThreadTimeOut is set to true, core threads may also time out and terminate, in which case keepAliveTime controls the timeout of all threads.
  • maximumPoolSize: the maximum number of threads the pool allows;
  • keepAliveTime: the timeout after which idle threads are terminated;
  • unit: an enum indicating the time unit of keepAliveTime;
  • workQueue: the BlockingQueue<Runnable> that holds the submitted tasks.
  • BlockingQueue: the main tool in java.util.concurrent for controlling thread synchronization. If a BlockingQueue is empty, any operation that takes from it blocks and waits until an element is added and the queue wakes it up. Similarly, if the BlockingQueue is full, any operation that tries to insert blocks and waits until space becomes available.
    Blocking queues are often used in producer-consumer scenarios: producers add elements to the queue, while consumers take elements from it; the blocking queue is the container where producers store elements and from which consumers take them. Concrete implementations include LinkedBlockingQueue and ArrayBlockingQueue. Internally, they generally use Lock and Condition to implement blocking and wakeup.
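As a minimal, self-contained sketch of the producer-consumer pattern described above (the class and method names here are illustrative, not from the source):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ProducerConsumerDemo {
    // take() blocks while the queue is empty; put() blocks while it is full.
    static List<Integer> runOnce() {
        BlockingQueue<Integer> queue = new LinkedBlockingQueue<>(2); // small bound, for demonstration
        List<Integer> consumed = new ArrayList<>();

        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= 5; i++) queue.put(i);   // blocks when 2 elements are queued
            } catch (InterruptedException ignored) { }
        });
        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) consumed.add(queue.take()); // blocks until an element arrives
            } catch (InterruptedException ignored) { }
        });

        producer.start();
        consumer.start();
        try {
            producer.join();
            consumer.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return consumed;
    }

    public static void main(String[] args) {
        System.out.println(runOnce()); // [1, 2, 3, 4, 5]
    }
}
```

With one producer and one consumer, the FIFO queue delivers the elements in exactly the order they were put in, even though the two threads run concurrently.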

The thread pool process is as follows:

Create and use a thread pool

Thread pools are created through the static factory methods of the Executors utility class. Several common thread pools are described below.

SingleThreadExecutor: A single backend thread (whose Buffer Queue is unbounded)

public static ExecutorService newSingleThreadExecutor() {
    return new FinalizableDelegatedExecutorService(
        new ThreadPoolExecutor(1, 1,
                               0L, TimeUnit.MILLISECONDS,
                               new LinkedBlockingQueue<Runnable>()));
}

Creates a single-threaded thread pool. This thread pool only has one Core Thread working, which is equivalent to a single thread serial execution of all tasks. If this unique thread ends due to an exception, a new thread will replace it. This thread pool ensures that all tasks are executed in the order they are submitted.
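The serial, submission-order guarantee can be checked with a short sketch (the class name is illustrative):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SingleThreadDemo {
    // Tasks submitted to a single-thread pool run one at a time, in submission order.
    static List<Integer> runTasks() {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        List<Integer> order = new CopyOnWriteArrayList<>();
        for (int i = 0; i < 5; i++) {
            final int id = i;
            pool.execute(() -> order.add(id));
        }
        pool.shutdown(); // stop accepting new tasks; queued ones still run
        try {
            pool.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return order;
    }

    public static void main(String[] args) {
        System.out.println(runTasks()); // [0, 1, 2, 3, 4]
    }
}
```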

FixedThreadPool: a fixed-size thread pool containing only core threads (its buffer queue is unbounded).

public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads,
                                  0L, TimeUnit.MILLISECONDS,
                                  new LinkedBlockingQueue<Runnable>());
}

Create a fixed thread pool. Each time a task is submitted, a thread is created until the thread reaches the maximum size of the thread pool. The size of the thread pool remains unchanged once it reaches the maximum value. If a thread ends due to an execution exception, the thread pool will add a new thread.

CachedThreadPool: Unbounded thread pool, which can be automatically recycled.

public static ExecutorService newCachedThreadPool() {
    return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                  60L, TimeUnit.SECONDS,
                                  new SynchronousQueue<Runnable>());
}

If the pool has more threads than are needed to process the tasks, idle threads (idle for 60 seconds) are reclaimed. When the number of tasks increases, the pool intelligently adds new threads to process them. The pool does not limit its own size; the limit depends entirely on the maximum number of threads the operating system (or JVM) can create. SynchronousQueue is a blocking queue with no internal capacity: each insert must wait for a corresponding take.

ScheduledThreadPool: a thread pool with a fixed number of core threads and an unbounded total size. It supports timed and periodic task execution.

public static ScheduledExecutorService newScheduledThreadPool(int corePoolSize) {
    return new ScheduledThreadPoolExecutor(corePoolSize);
}

// ScheduledThreadPoolExecutor's constructor delegates to ThreadPoolExecutor:
public ScheduledThreadPoolExecutor(int corePoolSize) {
    super(corePoolSize, Integer.MAX_VALUE,
          DEFAULT_KEEPALIVE_MILLIS, MILLISECONDS,
          new DelayedWorkQueue());
}

Creates a thread pool for periodic task execution. Idle non-core threads are reclaimed after DEFAULT_KEEPALIVE_MILLIS.
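A brief usage sketch of a scheduled pool: a one-shot task with a fixed delay, retrieved through the returned ScheduledFuture (the delay value and class name are arbitrary choices for illustration):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class ScheduledDemo {
    // Schedules a one-shot task with a 100 ms delay and waits for its result.
    static String runDelayed() {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
        ScheduledFuture<String> future =
                scheduler.schedule(() -> "done", 100, TimeUnit.MILLISECONDS);
        try {
            return future.get(); // blocks until the delayed task has run
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            scheduler.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(runDelayed()); // done
    }
}
```

For recurring work, the same interface offers scheduleAtFixedRate and scheduleWithFixedDelay.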

The most common methods for submitting tasks in a thread pool are as follows:

Execute:

executorService.execute(Runnable runnable);

Submit:

Future<?> task = executorService.submit(Runnable runnable);
Future<T> task = executorService.submit(Runnable runnable, T result);
Future<T> task = executorService.submit(Callable<T> callable);

The implementation of submit(Callable<T> callable) is the same as that of submit(Runnable runnable):

public <T> Future<T> submit(Callable<T> task) {
    if (task == null) throw new NullPointerException();
    FutureTask<T> ftask = newTaskFor(task);
    execute(ftask);
    return ftask;
}

As you can see, submit starts a task that returns a result: a FutureTask object is returned, and the result can be obtained through its get() method. submit ultimately calls execute(Runnable runnable); it merely wraps the Callable or Runnable in a FutureTask. Because FutureTask is itself a Runnable, it can be passed to execute. For details on how Callable and Runnable objects are wrapped into FutureTask objects, see the documentation for Callable, Future, and FutureTask.
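A minimal sketch of submitting a Callable and retrieving the result via get() (names are illustrative):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SubmitDemo {
    // submit(Callable) wraps the task in a FutureTask; get() blocks for the result.
    static int sumInBackground() {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Callable<Integer> task = () -> 1 + 2 + 3;
        Future<Integer> future = pool.submit(task);
        try {
            return future.get(); // waits until the Callable has finished
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(sumInBackground()); // 6
    }
}
```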

Principle of thread pool implementation

If this post only covered thread pool usage, it would add little value beyond walking through the Executor-related APIs. The thread pool implementation does not use the synchronized keyword; instead it uses volatile, Lock, synchronous (blocking) queues, the Atomic classes, and FutureTask, because they perform better. Understanding this process is a good way to learn the ideas behind the concurrency control in the source code.

The advantage of the thread pool mentioned in the beginning is that it can be summarized into the following three points:

1. The thread reuse process

To understand the principle of thread reuse, first understand the thread lifecycle.

In the life cycle of a thread, it goes through 5 states: New, ready, Running, Blocked, and Dead.

A Thread is created via new Thread(...), which initializes some thread information such as the thread name, id, and thread group; at this point it can be regarded as an ordinary object. After Thread.start() is called, the Java virtual machine creates a method call stack and a program counter for it and sets hasBeenStarted to true, so calling start() again afterwards throws an exception.

A thread in this state has not started running; it only means the thread is runnable. When it actually starts running depends on the scheduling of the JVM's thread scheduler. When the thread gets the CPU, its run() method is invoked (do not call Thread's run() method yourself). It then switches between ready, running, and blocked according to CPU scheduling until the run() method finishes or the thread is otherwise stopped, at which point it enters the dead state.
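The two endpoints of this lifecycle, new and dead, can be observed directly through Thread.getState(), whose Thread.State values (NEW, RUNNABLE, BLOCKED, WAITING, TIMED_WAITING, TERMINATED) map roughly onto the five states above. A small sketch (the class name is illustrative):

```java
public class LifecycleDemo {
    // Observes a thread's state before start() and after its run() finishes.
    static Thread.State[] observe() {
        Thread t = new Thread(() -> { /* trivial task */ });
        Thread.State beforeStart = t.getState(); // NEW: still an ordinary object
        t.start();
        try {
            t.join(); // wait for run() to finish
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        Thread.State afterEnd = t.getState();    // TERMINATED: the "dead" state
        return new Thread.State[] { beforeStart, afterEnd };
    }

    public static void main(String[] args) {
        for (Thread.State s : observe()) System.out.println(s);
    }
}
```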

So the principle of thread reuse must be to keep the thread alive (ready, running, or blocked). Next, let's look at how ThreadPoolExecutor achieves this.

The Worker class in ThreadPoolExecutor controls the reuse of threads. Here is simplified Worker code for ease of understanding:

private final class Worker implements Runnable {
    final Thread thread;
    Runnable firstTask;

    Worker(Runnable firstTask) {
        this.firstTask = firstTask;
        this.thread = getThreadFactory().newThread(this);
    }

    public void run() {
        runWorker(this);
    }

    final void runWorker(Worker w) {
        Runnable task = w.firstTask;
        w.firstTask = null;
        while (task != null || (task = getTask()) != null) {
            task.run();
            task = null;
        }
    }
}

A Worker is a Runnable and holds a Thread; this is the thread that will be started. When a Worker object is created, a new Thread object is created with the Worker itself passed in as the parameter, so when the Thread's start() method is called, it is actually the Worker's run() method that runs, which leads to runWorker(). There, a while loop keeps getting Runnable objects from getTask() and executing them in sequence. How does getTask() obtain the Runnable objects?

Simplified code:

private Runnable getTask() {
    if (/* in some special cases */) {
        return null;
    }
    Runnable r = workQueue.take();
    return r;
}

This workQueue is the BlockingQueue that was passed in when the ThreadPoolExecutor was initialized; the tasks in this queue are the Runnables waiting to be executed. Because BlockingQueue is a blocking queue, if the queue is empty, BlockingQueue.take() enters the waiting state until a new object is added and the blocked thread is woken up. In general, the thread's run() method therefore never ends; it keeps executing Runnable tasks taken from the workQueue, which achieves thread reuse.
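Thread reuse is easy to observe from outside: on a single-thread pool, many tasks all report the same thread name, showing that one worker's run() keeps looping over tasks rather than new threads being created (a sketch; the class name is illustrative):

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ReuseDemo {
    // Counts how many distinct threads execute 100 tasks on a single-thread pool.
    static int distinctThreads() {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Set<String> names = Collections.synchronizedSet(new HashSet<>());
        for (int i = 0; i < 100; i++) {
            pool.execute(() -> names.add(Thread.currentThread().getName()));
        }
        pool.shutdown();
        try {
            pool.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return names.size(); // 1: every task ran on the same reused worker thread
    }

    public static void main(String[] args) {
        System.out.println(distinctThreads()); // 1
    }
}
```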

2. Controlling the maximum number of concurrent threads

When is a Runnable put into the workQueue? When is a Worker created? When does the Thread in a Worker call start() to open a new thread that executes the Worker's run() method? The analysis above shows that runWorker() in a single Worker executes tasks serially, one by one, so where does the concurrency come from?

It is natural to guess that some of this happens in execute(Runnable runnable). Let's see how execute works.

Execute:

Simplified code

public void execute(Runnable command) {
    if (command == null)
        throw new NullPointerException();
    int c = ctl.get();
    // current number of threads < corePoolSize
    if (workerCountOf(c) < corePoolSize) {
        // start a new core thread directly
        if (addWorker(command, true))
            return;
        c = ctl.get();
    }
    // number of active threads >= corePoolSize
    // runState is RUNNING && queue is not full
    if (isRunning(c) && workQueue.offer(command)) {
        int recheck = ctl.get();
        // re-check the RUNNING status; if no longer running, remove and reject the task
        if (!isRunning(recheck) && remove(command))
            reject(command);
    }
    // reject the task using the pool's configured policy; two cases:
    // 1. a new task arrives while the pool is not in the RUNNING state
    // 2. the queue is full and no new thread can be started (workerCount >= maximumPoolSize)
    else if (!addWorker(command, false))
        reject(command);
}

AddWorker:

Simplified code

private boolean addWorker(Runnable firstTask, boolean core) {
    int c = ctl.get();
    int wc = workerCountOf(c);
    if (wc >= (core ? corePoolSize : maximumPoolSize)) {
        return false;
    }
    Worker w = new Worker(firstTask);
    final Thread t = w.thread;
    t.start();
    return true;
}

Based on the code, let's take a look at the process of adding tasks to the thread pool mentioned above:

  • If the number of running threads is less than corePoolSize, a new thread is created immediately to run the task.
  • If the number of running threads is greater than or equal to corePoolSize, the task is put into the queue.
  • If the queue is full and the number of running threads is less than maximumPoolSize, a non-core thread is created immediately to run the task.
  • If the queue is full and the number of running threads is greater than or equal to maximumPoolSize, the thread pool throws a RejectedExecutionException.
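These rules can be demonstrated with a deliberately tiny pool: one core thread, one maximum thread, and a one-slot queue, so the third submission must be rejected (the sizes and class name here are chosen only for illustration):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectionDemo {
    // A pool with 1 core thread, 1 max thread and a 1-slot queue
    // can only hold 2 tasks at once; the 3rd submission is rejected.
    static boolean thirdTaskRejected() {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(1));
        Runnable sleeper = () -> {
            try { Thread.sleep(1000); } catch (InterruptedException ignored) { }
        };
        boolean rejected = false;
        try {
            pool.execute(sleeper); // runs on the core thread
            pool.execute(sleeper); // waits in the queue
            pool.execute(sleeper); // queue full, workerCount == maximumPoolSize -> rejected
        } catch (RejectedExecutionException e) {
            rejected = true;       // the default AbortPolicy throws
        } finally {
            pool.shutdownNow();
        }
        return rejected;
    }

    public static void main(String[] args) {
        System.out.println(thirdTaskRejected()); // true
    }
}
```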

This is why Android's AsyncTask throws a RejectedExecutionException when, executing in parallel, it exceeds the maximum number of tasks. For details, see the source-code analyses of AsyncTask based on the latest version and "the dark side of AsyncTask".

If a new thread is successfully created through addWorker, start() starts it, and its firstTask is executed as the first task in the Worker's run().

Although each Worker executes its tasks serially, when multiple Workers are created and share one workQueue, the tasks are processed in parallel.

Therefore, the maximum number of concurrent threads is controlled through corePoolSize and maximumPoolSize, and the overall process can be expressed as a flow diagram.

Combining the description above with that diagram makes the process easy to understand.

If you do Android development and are familiar with the Handler principle, this may look familiar: parts of the process are similar to how Handler, Looper, and Message are used. Handler.send(Message) is equivalent to execute(Runnable); the Message queue maintained in Looper is equivalent to the BlockingQueue, except that you have to synchronize that queue yourself; and Looper's loop() function taking Messages from the Message queue corresponds to the Worker's runWorker() taking Runnables from the BlockingQueue.

3. Managing threads

The thread pool makes it easy to manage thread reuse, control the number of concurrent threads, and handle the destruction process. Thread reuse and concurrency control were discussed above; thread management is interspersed through both and is also easy to understand.

In ThreadPoolExecutor there is an AtomicInteger variable named ctl. This single variable stores two pieces of information:

  • the number of all worker threads
  • the run state of the pool (runState)

The low 29 bits store the thread count and the high 3 bits store the runState; the different values are extracted through bit operations.

private final AtomicInteger ctl = new AtomicInteger(ctlOf(RUNNING, 0));

// obtain the run state of the pool
private static int runStateOf(int c)    { return c & ~CAPACITY; }
// obtain the number of Workers
private static int workerCountOf(int c) { return c & CAPACITY; }
// determine whether the pool is running
private static boolean isRunning(int c) { return c < SHUTDOWN; }
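The packing scheme can be reproduced as a standalone sketch using the same constants and bit operations the JDK defines (the class name is illustrative):

```java
public class CtlDemo {
    // The same packing scheme as ThreadPoolExecutor's ctl field:
    // high 3 bits hold runState, low 29 bits hold the worker count.
    static final int COUNT_BITS = Integer.SIZE - 3;          // 29
    static final int CAPACITY   = (1 << COUNT_BITS) - 1;     // maximum worker count
    static final int RUNNING    = -1 << COUNT_BITS;
    static final int SHUTDOWN   =  0 << COUNT_BITS;

    static int ctlOf(int rs, int wc)    { return rs | wc; }
    static int runStateOf(int c)        { return c & ~CAPACITY; }
    static int workerCountOf(int c)     { return c & CAPACITY; }
    static boolean isRunning(int c)     { return c < SHUTDOWN; }

    public static void main(String[] args) {
        int c = ctlOf(RUNNING, 5);
        System.out.println(runStateOf(c) == RUNNING);   // true
        System.out.println(workerCountOf(c));           // 5
        System.out.println(isRunning(c));               // true
    }
}
```

Because RUNNING is the only negative state value, `c < SHUTDOWN` (i.e. `c < 0`) is enough to test whether the pool is running.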

shutdown() and shutdownNow() can be analyzed to understand the shutdown process of the thread pool. The pool has five states that control task addition and execution; this article mainly introduces the following three:

  • RUNNING status: the thread pool runs normally. It can accept new tasks and process tasks in the queue;
  • SHUTDOWN status: new tasks are no longer accepted, but tasks in the queue are executed;
  • STOP status: new tasks are no longer accepted and tasks in the queue are not processed.

shutdown() sets the runState to SHUTDOWN and terminates all idle threads, but threads that are still working are not affected, so tasks waiting in the queue will still be executed. shutdownNow() sets the runState to STOP; unlike shutdown(), it interrupts all threads, so tasks in the queue will not be executed.
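The difference is visible in shutdownNow()'s return value, which is the list of tasks that were still waiting in the queue and never ran (a sketch; the class name and sleep duration are illustrative):

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ShutdownDemo {
    // shutdownNow() interrupts the workers, drains the queue,
    // and returns the tasks that never ran.
    static int unexecutedAfterShutdownNow() {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Runnable sleeper = () -> {
            try { Thread.sleep(1000); } catch (InterruptedException ignored) { }
        };
        pool.execute(sleeper);   // starts running on the single worker
        pool.execute(() -> { }); // waits in the queue
        pool.execute(() -> { }); // waits in the queue
        List<Runnable> pending = pool.shutdownNow();
        return pending.size();   // 2: the queued tasks were never executed
    }

    public static void main(String[] args) {
        System.out.println(unexecutedAfterShutdownNow()); // 2
    }
}
```

With shutdown() instead, both queued tasks would still run before the pool terminated.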

Summary

By analyzing the ThreadPoolExecutor source code, we have learned how a thread pool is created and how tasks are added and executed. Being familiar with these processes makes it much easier to use the thread pool well.

The concurrency control and the producer-consumer model of task processing seen here will help greatly in understanding or solving related problems in the future, such as the Handler mechanism in Android, where the Message queue in Looper could just as well be handled with a BlockingQueue. This is the reward of reading source code.
