Principles and Usage of the Java Thread Pool

Source: Internet
Author: User
Tags: keep-alive


Multithreading is an important technique in both Java and Android development. Suppose, for example, that every item in a ListView has a download button. Starting a brand-new Thread for every download is unreasonable; it wastes memory and hurts performance. If instead a few threads could be kept around, so that when one download finishes the next one starts on the same thread rather than on a freshly created one, memory usage would drop and the program would perform much better. Java provides exactly that solution: the thread pool.

1. Introduction

Threads play an extremely important role in Java. Before JDK 1.5, support for working with thread pools was very rudimentary. The situation improved greatly in JDK 1.5, which added the java.util.concurrent package. This package is all about threads and thread pools, and it is a great help when dealing with threading problems during development.

2. Role of the Thread Pool

A thread pool limits the number of threads executing in the system. (If a separate thread were started for each of 100 tasks, memory consumption would be very high, and time-slice switching between threads also takes time; the wasted system resources hurt overall performance.)
Depending on the system environment, the number of threads can be set automatically or manually to get the best result: less wasted system resources and less congestion. The thread pool controls the number of threads, and further tasks wait in a queue. When a task finishes, the next task is taken from the front of the queue; if nothing is waiting, the worker thread in the pool simply waits. When a new task arrives, it starts running immediately if an idle worker thread is available; otherwise it enters the waiting queue.

3. Why Use a Thread Pool?

1. It reduces the number of threads created and destroyed; each worker thread can be reused to execute many tasks.

 

2. The number of worker threads can be adjusted to match the system's capacity, preventing the server from being overwhelmed by excessive memory consumption (each thread needs roughly 1 MB of memory; the more threads you open, the more memory is consumed, until the machine finally crashes).

4. How a Thread Pool Works

Let's walk through the example below to see how a thread pool works.

ThreadPool.java is the thread pool class, i.e. the thread manager: it creates threads, executes tasks, destroys threads, and reports basic thread pool information.

 

package cn.kge.com.thread;

import java.util.LinkedList;
import java.util.List;

/**
 * Thread pool class, i.e. the thread manager: creates threads, executes tasks,
 * destroys threads, and reports basic thread pool information.
 */
public class ThreadPool {
    // Default number of worker threads in the pool
    private static int worker_num = 5;
    // The worker threads
    private WorkThread[] workThreads;
    // Number of tasks already taken for execution
    private static volatile int finished_task = 0;
    // Task queue, used as a buffer; List is not thread-safe, so access is synchronized
    private List<Runnable> taskQueue = new LinkedList<Runnable>();

    private static ThreadPool threadPool;

    // Create a thread pool with the default number of worker threads
    private ThreadPool() {
        this(5);
    }

    // Create a thread pool; worker_num is the number of worker threads in the pool
    private ThreadPool(int worker_num) {
        ThreadPool.worker_num = worker_num;
        workThreads = new WorkThread[worker_num];
        for (int i = 0; i < worker_num; i++) {
            workThreads[i] = new WorkThread();
            workThreads[i].start(); // start the worker thread
        }
    }

    // Obtain a thread pool with the default number of worker threads
    public static ThreadPool getThreadPool() {
        return getThreadPool(ThreadPool.worker_num);
    }

    // Singleton: obtain a thread pool with the given number of worker threads;
    // if worker_num1 <= 0, the default number of worker threads is used
    public static ThreadPool getThreadPool(int worker_num1) {
        if (worker_num1 <= 0)
            worker_num1 = ThreadPool.worker_num;
        if (threadPool == null)
            threadPool = new ThreadPool(worker_num1);
        return threadPool;
    }

    // Execute a task. In fact this only adds the task to the task queue;
    // when it actually runs is decided by the pool itself.
    public void execute(Runnable task) {
        synchronized (taskQueue) {
            taskQueue.add(task);
            taskQueue.notify();
        }
    }

    // Execute a batch of tasks (array). Again, they are only added to the task queue.
    public void execute(Runnable[] task) {
        synchronized (taskQueue) {
            for (Runnable t : task)
                taskQueue.add(t);
            taskQueue.notify();
        }
    }

    // Execute a batch of tasks (list). Again, they are only added to the task queue.
    public void execute(List<Runnable> task) {
        synchronized (taskQueue) {
            for (Runnable t : task)
                taskQueue.add(t);
            taskQueue.notify();
        }
    }

    // Destroy the thread pool. This method makes sure all tasks are finished
    // before the worker threads are destroyed.
    public void destroy() {
        while (!taskQueue.isEmpty()) { // tasks remain: sleep a little and check again
            try {
                Thread.sleep(10);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        // Stop the worker threads and release them
        for (int i = 0; i < worker_num; i++) {
            workThreads[i].stopWorker();
            workThreads[i] = null;
        }
        threadPool = null;
        taskQueue.clear(); // clear the task queue
    }

    // Return the number of worker threads
    public int getWorkThreadNumber() {
        return worker_num;
    }

    // Return the number of "finished" tasks. This only counts tasks taken off the
    // queue; a task may not actually have completed yet.
    public int getFinishedTasknumber() {
        return finished_task;
    }

    // Return the length of the task queue, i.e. the number of tasks not yet processed
    public int getWaitTasknumber() {
        return taskQueue.size();
    }

    // Override toString to report pool information: worker thread count,
    // finished task count and waiting task count
    @Override
    public String toString() {
        return "WorkThread number:" + worker_num
                + " finished task number:" + finished_task
                + " wait task number:" + getWaitTasknumber();
    }

    /**
     * Inner class: the worker thread.
     */
    private class WorkThread extends Thread {
        // Whether this worker is still active; used to end the worker thread
        private boolean isRunning = true;

        /*
         * The key part: if the task queue is not empty, take a task and run it;
         * if the task queue is empty, wait.
         */
        @Override
        public void run() {
            Runnable r = null;
            while (isRunning) { // once the worker is stopped, run() ends naturally
                synchronized (taskQueue) {
                    while (isRunning && taskQueue.isEmpty()) { // the queue is empty
                        try {
                            taskQueue.wait(20);
                        } catch (InterruptedException e) {
                            e.printStackTrace();
                        }
                    }
                    if (!taskQueue.isEmpty())
                        r = taskQueue.remove(0); // take a task
                }
                if (r != null) {
                    r.run(); // execute the task
                    finished_task++;
                }
                r = null;
            }
        }

        // Stop working, so that run() ends naturally
        public void stopWorker() {
            isRunning = false;
        }
    }
}

Test.java is the test class:

 

 

package cn.kge.com.thread;

public class Test {
    public static void main(String[] args) {
        // Create a thread pool with three worker threads
        ThreadPool t = ThreadPool.getThreadPool(3);
        t.execute(new Runnable[] { new Task(), new Task(), new Task() });
        t.execute(new Runnable[] { new Task(), new Task(), new Task() });
        System.out.println(t);
        t.destroy(); // waits for all tasks to finish before destroying the pool
        System.out.println(t);
    }

    // Task class
    static class Task implements Runnable {
        private static volatile int i = 1;

        @Override
        public void run() { // execute the task
            System.out.println("task " + (i++) + " finished");
        }
    }
}

In fact, you do not have to implement a thread pool yourself; Java already provides one. Now let's look at the thread pool that ships with the JDK.

 

The top-level interface for thread pools in Java is Executor, but strictly speaking Executor is not a thread pool; it is only a tool for executing tasks. The real thread pool interface is ExecutorService.

Now let's take a look at several important classes.


Configuring a thread pool is complicated, especially if you are not clear about how it works internally, and it is easy to end up with a configuration that is far from optimal. For this reason, the Executors class provides several static factory methods that produce the most commonly used thread pools.

 

1. newSingleThreadExecutor

Creates a single-threaded pool. Only one thread ever works in this pool, which is equivalent to executing all tasks serially on a single thread. If that single thread dies because of an exception, a new thread replaces it. This pool guarantees that tasks are executed in the order they were submitted.

2. newFixedThreadPool

Creates a fixed-size thread pool. A thread is created for each submitted task until the pool reaches its maximum size, after which the pool size stays constant. If a thread ends because of an execution exception, the pool adds a new thread to replace it.

3. newCachedThreadPool

Creates a cacheable thread pool. If the pool grows larger than needed for the current tasks, idle threads (those that have not executed a task for 60 seconds) are reclaimed. When the number of tasks increases, the pool adds new threads as needed. This pool places no limit on its own size; the effective limit is simply the largest number of threads the operating system (or JVM) can create.

4. newScheduledThreadPool

Creates a thread pool of effectively unlimited size. This pool supports scheduled (delayed) and periodic task execution.
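Before moving on to ThreadPoolExecutor itself, here is a minimal sketch of how these four factory methods are used. The class name, task bodies and sizes below are invented purely for illustration:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ExecutorsDemo {
    public static void main(String[] args) {
        ExecutorService single = Executors.newSingleThreadExecutor(); // one worker, tasks run in order
        ExecutorService fixed = Executors.newFixedThreadPool(3);      // at most 3 workers
        ExecutorService cached = Executors.newCachedThreadPool();     // grows on demand, idle threads reclaimed after 60s
        ScheduledExecutorService scheduled = Executors.newScheduledThreadPool(2); // delayed/periodic tasks

        for (int i = 0; i < 5; i++) {
            final int id = i;
            Runnable task = () -> System.out.println(
                    Thread.currentThread().getName() + " runs task " + id);
            single.execute(task);
            fixed.execute(task);
            cached.execute(task);
        }
        // Run one task after a 1-second delay
        scheduled.schedule(() -> System.out.println("delayed task"), 1, TimeUnit.SECONDS);

        single.shutdown();
        fixed.shutdown();
        cached.shutdown();
        scheduled.shutdown();
    }
}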

 

5. ThreadPoolExecutor

The complete constructor signature of ThreadPoolExecutor is:

ThreadPoolExecutor(int corePoolSize, int maximumPoolSize, long keepAliveTime, TimeUnit unit, BlockingQueue<Runnable> workQueue, ThreadFactory threadFactory, RejectedExecutionHandler handler)

corePoolSize - the number of threads to keep in the pool, even if they are idle.

maximumPoolSize - the maximum number of threads allowed in the pool.

keepAliveTime - when the number of threads is greater than the core size, this is the maximum time that excess idle threads will wait for new tasks before terminating.

unit - the time unit of the keepAliveTime argument.

workQueue - the queue used to hold tasks before they are executed. This queue holds only the Runnable tasks submitted via the execute method.

threadFactory - the factory the executor uses when it creates a new thread.

handler - the handler used when execution is blocked because the thread bounds and queue capacities have been reached.

ThreadPoolExecutor is the class underlying the factory methods in Executors.
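To make the parameters concrete, here is a minimal sketch of calling this constructor directly; the pool sizes, queue capacity, thread-name prefix and the choice of AbortPolicy are arbitrary values picked for the illustration:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CustomPoolDemo {
    public static void main(String[] args) {
        // A simple factory that just gives the worker threads readable names
        ThreadFactory namedFactory = new ThreadFactory() {
            private int count = 0;
            @Override
            public synchronized Thread newThread(Runnable r) {
                return new Thread(r, "worker-" + (count++));
            }
        };

        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                    // corePoolSize
                4,                                    // maximumPoolSize
                30L, TimeUnit.SECONDS,                // keepAliveTime and its unit
                new ArrayBlockingQueue<Runnable>(10), // workQueue (bounded, capacity 10)
                namedFactory,                         // threadFactory
                new ThreadPoolExecutor.AbortPolicy()  // handler: reject by throwing an exception
        );

        for (int i = 0; i < 8; i++) {
            pool.execute(() -> System.out.println(
                    Thread.currentThread().getName() + " is working"));
        }
        pool.shutdown();
    }
}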

The JDK documentation offers this advice:

"It is strongly recommended that programmers use it more conveniently.ExecutorsFactory methodExecutors.newCachedThreadPool()(The unbounded thread pool can be used for automatic thread recovery ),Executors.newFixedThreadPool(int)(Fixed size thread pool)Executors.newSingleThreadExecutor()(Single background thread)

They all have predefined settings for most application scenarios ."

Let's look at the source code of several of these factory methods.

ExecutorService newFixedThreadPool(int nThreads): a fixed-size thread pool.

As you can see, corePoolSize and maximumPoolSize are the same (in fact, as we will see later, maximumPoolSize is meaningless when an unbounded queue is used). And what do the keepAliveTime and unit values tell us? That this implementation does not need keep-alive at all. Finally, the BlockingQueue chosen is a LinkedBlockingQueue, whose defining characteristic is that it is unbounded.

public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads,
                                  0L, TimeUnit.MILLISECONDS,
                                  new LinkedBlockingQueue<Runnable>());
}

ExecutorService newSingleThreadExecutor(): a single-threaded pool.

public static ExecutorService newSingleThreadExecutor() {
    return new FinalizableDelegatedExecutorService
        (new ThreadPoolExecutor(1, 1,
                                0L, TimeUnit.MILLISECONDS,
                                new LinkedBlockingQueue<Runnable>()));
}
 

ExecutorService newCachedThreadPool(): an unbounded thread pool with automatic thread reclamation.

This implementation is interesting. First, it is an effectively unbounded pool: maximumPoolSize is Integer.MAX_VALUE. Second, the BlockingQueue chosen is a SynchronousQueue, which may be less familiar. Put simply, in this queue every insert operation must wait for a corresponding remove operation by another thread.

public static ExecutorService newCachedThreadPool() {
    return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                  60L, TimeUnit.SECONDS,
                                  new SynchronousQueue<Runnable>());
}
 

Let's start with the BlockingQueue<Runnable> parameter, workQueue. The JDK makes it clear that there are three types of queue.

Any BlockingQueue may be used to transfer and hold submitted tasks. The use of this queue interacts with the pool size as follows:

If fewer than corePoolSize threads are running, the Executor always prefers adding a new thread rather than queuing. (In other words, as long as the number of running threads is below corePoolSize, the task is not stored in the queue at all; a new thread is created and runs it directly.)

If corePoolSize or more threads are running, the Executor always prefers queuing a request rather than adding a new thread.

If a request cannot be queued, a new thread is created, unless this would exceed maximumPoolSize, in which case the task is rejected.

The three types of queue.

 

There are three common queuing policies:

Direct submission. The default work queue choice is a SynchronousQueue, which hands tasks off directly to threads without otherwise holding them. If no thread is immediately available to run a task, the attempt to queue it fails, so a new thread is constructed. This policy avoids lockups when handling sets of requests that might have internal dependencies. Direct handoffs generally require an unbounded maximumPoolSize to avoid rejecting new tasks; this in turn admits the possibility of unbounded thread growth when commands keep arriving faster on average than they can be processed.

Unbounded queues. Using an unbounded queue (for example, a LinkedBlockingQueue without a predefined capacity) causes new tasks to wait in the queue whenever all corePoolSize threads are busy. Thus no more than corePoolSize threads are ever created (and the value of maximumPoolSize therefore has no effect). This may be appropriate when each task is completely independent of the others, so tasks cannot affect each other's execution; for example, in a web page server. This style of queuing can be useful for smoothing out transient bursts of requests, but it admits the possibility of unbounded work-queue growth when commands keep arriving faster on average than they can be processed.

Bounded queues. A bounded queue (for example, an ArrayBlockingQueue) helps prevent resource exhaustion when used with a finite maximumPoolSize, but it can be more difficult to tune and control. Queue size and maximum pool size may be traded off against each other: using large queues and small pools minimizes CPU usage, operating system resources, and context-switching overhead, but can lead to artificially low throughput. If tasks frequently block (for example, if they are I/O bound), the system may be able to schedule time for more threads than you would otherwise allow. Using small queues generally requires larger pool sizes, which keeps CPUs busier but may encounter unacceptable scheduling overhead, which also decreases throughput.

Select BlockingQueue.

Example 1: the direct submission policy, i.e. SynchronousQueue.

First, note how a SynchronousQueue behaves: it imposes no capacity limit in the usual sense, yet it does not actually hold tasks either. After an element is inserted, another thread must remove it before the next insert can proceed. So a submitted task ends up in the hands of either a core thread or a newly created thread.
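This hand-off behaviour is easy to see with SynchronousQueue on its own. A small standalone sketch (not part of the article's pool code): offer() fails when no thread is waiting to take, while put() blocks until a taker arrives.

import java.util.concurrent.SynchronousQueue;

public class SynchronousQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        final SynchronousQueue<String> queue = new SynchronousQueue<String>();

        // No thread is waiting to take, so offer() fails immediately (prints false)
        System.out.println("offer without a taker: " + queue.offer("task-1"));

        // Start a consumer, then put(): the hand-off succeeds once the taker arrives
        Thread consumer = new Thread(() -> {
            try {
                System.out.println("taken: " + queue.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();
        queue.put("task-2"); // blocks until the consumer calls take()
        consumer.join();
    }
}

ThreadPoolExecutor relies on exactly this offer() behaviour: if no idle worker is waiting on the queue, the offer fails and the pool falls back to creating a thread (or rejecting the task). With that in mind, imagine the following scenario.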

We construct a ThreadPoolExecutor with the following parameters:

new ThreadPoolExecutor(
        2, 3, 30, TimeUnit.SECONDS,
        new SynchronousQueue<Runnable>(),
        new RecorderThreadFactory("CookieRecorderPool"),
        new ThreadPoolExecutor.CallerRunsPolicy());

Suppose the two core threads are currently busy running tasks.

Now a task (A) arrives. By the rule quoted above, "if corePoolSize or more threads are running, the Executor always prefers queuing a request rather than adding a new thread", so the pool first tries to put A into the queue. Then another task (B) arrives while the two core threads are still busy. Again the pool first tries to queue it, but because the queue is a SynchronousQueue, the insert cannot succeed. That triggers the next rule: "if a request cannot be queued, a new thread is created, unless this would exceed maximumPoolSize, in which case the task is rejected." So a new thread is created to run the task; fine, three tasks are now running. But if none of them has finished and two more tasks arrive in a row, the first of them cannot be inserted into the queue either, and the thread count has already reached maximumPoolSize, so there is no choice but to apply the rejection policy.

Therefore, SynchronousQueue usually calls for an effectively unbounded maximumPoolSize, so that the situation above is avoided (if you do want to limit the pool, use a bounded queue instead). The role of SynchronousQueue is stated clearly in the JDK: this policy avoids lockups when handling sets of requests that might have internal dependencies.

What does that mean? If tasks A1 and A2 are internally related and A1 must run first, then submit A1 before A2. With a SynchronousQueue we can be sure that A1 is handed to a thread and executed first; A2 cannot even enter the queue before A1 has been taken for execution.

Example 2: the unbounded-queue policy, i.e. a LinkedBlockingQueue.

Take newFixedThreadPool as the example and walk through the rules mentioned above:

If fewer than corePoolSize threads are running, the Executor always prefers adding a new thread rather than queuing. So far so good; but what happens when tasks keep arriving?

If corePoolSize or more threads are running, the Executor always prefers queuing a request rather than adding a new thread. Fine, so now tasks go into the queue. When will new threads be added?

If a request cannot be queued, a new thread is created, unless this would exceed maximumPoolSize, in which case the task is rejected. Here it gets interesting: can a task ever fail to join the queue? Unlike a SynchronousQueue, an unbounded queue can always accept another element (leaving resource exhaustion aside, of course). In other words, that condition is never met and no thread beyond the core is ever created: exactly corePoolSize threads keep running, and whenever one of them becomes free it takes the next task from the queue. So you must guard against the queue growing without bound: if the tasks run for a relatively long time and are submitted much faster than they can be processed, the queue keeps growing and will very quickly blow up memory.
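That behaviour is easy to observe. A rough sketch (the task count, class name and sleep time are arbitrary): newFixedThreadPool(2) is backed by an unbounded LinkedBlockingQueue, as shown in its source above, so submitting slow tasks faster than they finish only grows the queue, and the pool never exceeds corePoolSize.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;

public class UnboundedQueueDemo {
    public static void main(String[] args) {
        ExecutorService service = Executors.newFixedThreadPool(2);
        ThreadPoolExecutor pool = (ThreadPoolExecutor) service; // to read the pool statistics

        for (int i = 0; i < 100; i++) {
            pool.execute(() -> {
                try {
                    Thread.sleep(1000); // each task is deliberately slow
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        System.out.println("pool size:  " + pool.getPoolSize());     // 2, never more than corePoolSize
        System.out.println("queue size: " + pool.getQueue().size()); // roughly 98, shrinking slowly
        pool.shutdown();
    }
}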

Example 3: a bounded queue, using ArrayBlockingQueue.

This is the most complex case to use, which is why the JDK does not particularly recommend it. Compared with the options above, its biggest advantage is that it prevents resource exhaustion.

      For example, see the following constructor:

new ThreadPoolExecutor(
        2, 4, 30, TimeUnit.SECONDS,
        new ArrayBlockingQueue<Runnable>(2),
        new RecorderThreadFactory("CookieRecorderPool"),
        new ThreadPoolExecutor.CallerRunsPolicy());

Assume that none of the tasks can ever finish.

The first two tasks, A and B, run directly. Next, C and D arrive and are put into the queue. When E and F then arrive, new threads are added to run them. But if yet another task arrives, the queue can accept no more and the thread count has reached its maximum, so the rejection policy is used to handle it.
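The walk-through above can be reproduced with a sketch like the one below. RecorderThreadFactory is not shown in the article, so the default thread factory is used here, and AbortPolicy replaces CallerRunsPolicy so that the rejection is visible as an exception; the long sleep is just a stand-in for a task that never finishes.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedQueueDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 30, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(2),
                new ThreadPoolExecutor.AbortPolicy());

        Runnable slowTask = () -> {
            try {
                Thread.sleep(60000); // pretend the task never finishes
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };

        // A and B run on the 2 core threads; C and D wait in the queue;
        // E and F force 2 extra threads, up to the maximum of 4.
        for (int i = 0; i < 6; i++) {
            pool.execute(slowTask);
        }
        try {
            pool.execute(slowTask); // 7th task: queue full and max threads reached
        } catch (RejectedExecutionException e) {
            System.out.println("task rejected: " + e);
        }
        pool.shutdownNow();
    }
}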

keepAliveTime

The JDK explains it as: when the number of threads is greater than the core size, this is the maximum time that excess idle threads will wait for new tasks before terminating.

This can be a bit hard to grasp at first, but most applications that use some kind of "pool" have a similar parameter to configure; for example, the maxIdle and minIdle parameters of the DBCP database connection pool.

What does it mean? The extra workers sent to help out were, in effect, "borrowed". As the saying goes, what is borrowed must be returned; the question is when. If a borrowed worker were sent back the moment he finished one task, only to find that more work is already waiting, wouldn't he have to be borrowed all over again? The boss would surely be driven mad.

       

The reasonable strategy: once a worker has been borrowed, keep him around for a while; if after some period of time you find he is no longer needed, then return him. That "period of time" is exactly what keepAliveTime means, and TimeUnit is the unit in which the keepAliveTime value is measured.
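A rough sketch of the effect (the pool sizes, the 2-second keepAliveTime and the sleeps are arbitrary values for the illustration): a burst of tasks pushes the pool above its core size, and once the burst is over the extra threads are "returned" after keepAliveTime of idleness.

import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class KeepAliveDemo {
    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 3,                          // 1 core thread, up to 3 in total
                2, TimeUnit.SECONDS,           // keepAliveTime for the extra threads
                new SynchronousQueue<Runnable>());

        // A burst of 3 concurrent tasks forces the pool up to 3 threads
        for (int i = 0; i < 3; i++) {
            pool.execute(() -> {
                try {
                    Thread.sleep(500);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        System.out.println("during the burst: " + pool.getPoolSize()); // 3

        Thread.sleep(4000); // wait well past keepAliveTime
        System.out.println("after idling:     " + pool.getPoolSize()); // back to 1 (the core thread)
        pool.shutdown();
    }
}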

       

      RejectedExecutionHandler

There is one more situation: even after extra workers have been borrowed, tasks keep pouring in and the team still cannot keep up, so at some point new tasks simply have to be refused.

The RejectedExecutionHandler interface gives you the chance to customize how tasks are rejected. ThreadPoolExecutor ships with 4 policies by default; since their source code is very short, it is quoted directly below.

CallerRunsPolicy: the thread that called execute runs the task itself. This policy provides a simple feedback-control mechanism that slows down the rate at which new tasks are submitted.

public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
    if (!e.isShutdown()) {
        r.run();
    }
}

Clearly this policy does not want to discard the task; since the pool has no spare resources, the task is simply executed on the thread that called execute.

AbortPolicy: the handler throws a runtime RejectedExecutionException when a task is rejected.

public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
    throw new RejectedExecutionException();
}

This policy throws an exception outright and discards the task.

DiscardPolicy: tasks that cannot be executed are simply dropped.

public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
}

This policy is much like AbortPolicy: it also discards the task, but it does not throw an exception.

DiscardOldestPolicy: if the executor has not been shut down, the task at the head of the work queue is discarded and execution is retried (and if that fails again, the process repeats).

public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
    if (!e.isShutdown()) {
        e.getQueue().poll();
        e.execute(r);
    }
}

This policy is a little more involved: if the pool has not been shut down, it first discards the oldest task waiting in the queue and then tries to run the new task again. This policy needs to be used with some care.

Imagine it: if all the other threads are still busy, a new task kicks out the oldest queued task, then the next new task kicks out the next-oldest one, and so on.
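If none of the four built-in policies fits, the RejectedExecutionHandler interface can simply be implemented directly. A minimal sketch (the class name and log message are invented; the fallback of running the task in the caller's thread mirrors CallerRunsPolicy):

import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;

// Logs the rejection, then falls back to running the task in the submitting thread
public class LoggingCallerRunsHandler implements RejectedExecutionHandler {
    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
        System.err.println("Pool saturated (" + e.getPoolSize() + " threads, "
                + e.getQueue().size() + " queued); running task in the caller's thread");
        if (!e.isShutdown()) {
            r.run();
        }
    }
}

Such a handler is installed either through the constructor's handler argument or later via setRejectedExecutionHandler().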

        Summary:

Whether keepAliveTime matters depends on maximumPoolSize and the BlockingQueue: with an unbounded BlockingQueue, the pool never grows past corePoolSize, so maximumPoolSize is never reached and keepAliveTime is meaningless.

Conversely, if the core size is small, the bounded BlockingQueue is small, and keepAliveTime is set to a small value, then under a frequent task load the system will be constantly creating and reclaiming threads.

         

