A detailed explanation of Java's built-in thread pools and queues


One: A brief introduction

Threads are of paramount importance in Java. Thread pool support in JDK 1.4 was very rudimentary; the situation changed a great deal with JDK 1.5, which added the java.util.concurrent package, focused on Java threads and the use of thread pools. It provides a great deal of help with threading problems in development.

Two: Thread pool

The role of the thread pool:

A thread pool limits the number of threads executing in the system.
Depending on the system's environment, the number of threads can be set automatically or manually to achieve the best performance: too few threads wastes system resources, while too many causes congestion and lowers efficiency. With a thread pool, the number of threads is controlled and other tasks wait. When a task completes, the first task in the queue is taken and executed. If there are no tasks waiting in the queue, the pool's threads sit idle. When a new task needs to run, it starts immediately if there is an idle worker thread in the pool; otherwise it enters the wait queue.

Why use a thread pool:

1. It reduces the number of times threads are created and destroyed; each worker thread can be reused to perform multiple tasks.

2. You can adjust the number of threads in the pool according to what the system can bear, preventing the server from crashing because too much memory is consumed (each thread needs roughly 1 MB of memory; the more threads are opened, the more memory is consumed, until the system finally crashes).

The top-level interface of the thread pool in Java is Executor, but strictly speaking Executor is not a thread pool, only a tool for executing tasks. The real thread pool interface is ExecutorService.

Some of the more important classes:

ExecutorService

A true thread pool interface.

ScheduledExecutorService

Similar to Timer/TimerTask; it solves problems that require repeated execution of tasks.

ThreadPoolExecutor

The default implementation of ExecutorService.

ScheduledThreadPoolExecutor

An implementation of ScheduledExecutorService that inherits from ThreadPoolExecutor; this class implements periodic task scheduling.

Configuring a thread pool is relatively complex, especially when the principles behind thread pools are not well understood; a hand-configured pool is likely to be far from optimal. For this reason the Executors class provides static factory methods that produce some commonly used thread pools.

1. newSingleThreadExecutor

Creates a single-threaded thread pool. This pool has only one thread working at a time, which is equivalent to executing all tasks serially on a single thread. If this unique thread terminates because of an exception, a new thread replaces it. This pool guarantees that tasks are executed in the order in which they were submitted.

newSingleThreadExecutor example:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadHandler {
    public static void main(String[] args) {
        // Create a single-threaded pool
        ExecutorService pool = Executors.newSingleThreadExecutor();
        for (int i = 0; i < 20; i++) {
            pool.execute(myRunnable);
        }
        pool.shutdown(); // let the JVM exit once the queued tasks finish
    }

    static Runnable myRunnable = new Runnable() {
        @Override
        public void run() {
            try {
                Thread.sleep(1000);
                System.out.println(Thread.currentThread().getName() + " is executing...");
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    };
}

The execution results show that only one thread is used from start to finish.

2. newFixedThreadPool

Creates a fixed-size thread pool. Each time a task is submitted a thread is created, until the pool reaches its maximum size. Once the maximum size is reached, the pool size stays constant; if a thread terminates because of an exception, the pool is replenished with a new thread.

newFixedThreadPool example:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadHandler {
    public static void main(String[] args) {
        // Create a thread pool with a fixed size of 10
        ExecutorService pool = Executors.newFixedThreadPool(10);
        for (int i = 0; i < 20; i++) {
            pool.execute(myRunnable);
        }
        pool.shutdown(); // let the JVM exit once the queued tasks finish
    }

    static Runnable myRunnable = new Runnable() {
        @Override
        public void run() {
            try {
                Thread.sleep(1000);
                System.out.println(Thread.currentThread().getName() + " is executing...");
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    };
}

The execution results show that the total number of threads never exceeds 10.

3. newCachedThreadPool

Creates a cacheable thread pool. If the pool's size exceeds the number of threads needed to process the current tasks, partially idle threads (idle for 60 seconds) are reclaimed; when the number of tasks increases, the pool intelligently adds new threads to handle them. This pool places no limit on its own size; the pool size depends entirely on the maximum number of threads the operating system (or JVM) can create.
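The article gives no code for this factory, so here is a minimal sketch in the same style as the examples above (the class name and task body are illustrative, not from the original):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CachedPoolSketch {
    public static void main(String[] args) {
        // Create a cacheable thread pool: threads are added on demand
        // and reclaimed after 60 seconds of idleness.
        ExecutorService pool = Executors.newCachedThreadPool();
        for (int i = 0; i < 20; i++) {
            pool.execute(new Runnable() {
                public void run() {
                    try {
                        Thread.sleep(1000);
                        System.out.println(Thread.currentThread().getName() + " is executing...");
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }
            });
        }
        pool.shutdown();
    }
}

Because all 20 tasks arrive while every existing thread is still busy sleeping, the pool typically grows to 20 threads here, in contrast to the fixed pool above.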

4. newScheduledThreadPool

Creates a thread pool of effectively unlimited size. This pool supports scheduling tasks to run after a delay or to execute periodically.

newScheduledThreadPool example:

import java.util.Date;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ThreadHandler {
    public static void main(String[] args) {
        ScheduledExecutorService ses = Executors.newScheduledThreadPool(10);
        // initial delay 1000 ms, then run every 3000 ms
        ses.scheduleAtFixedRate(myRunnable1, 1000, 3000, TimeUnit.MILLISECONDS);
        ses.scheduleAtFixedRate(myRunnable2, 1000, 3000, TimeUnit.MILLISECONDS);
    }

    static Runnable myRunnable1 = new Runnable() {
        public void run() {
            System.out.println("----------------------");
        }
    };

    static Runnable myRunnable2 = new Runnable() {
        public void run() {
            System.out.println(new Date(2014, 9, 9).toString()); // deprecated Date constructor, kept from the original
        }
    };
}


Three: ThreadPoolExecutor in detail

The signature of ThreadPoolExecutor's complete constructor is: ThreadPoolExecutor(int corePoolSize, int maximumPoolSize, long keepAliveTime, TimeUnit unit, BlockingQueue<Runnable> workQueue, ThreadFactory threadFactory, RejectedExecutionHandler handler).

corePoolSize - the number of threads kept in the pool, including idle threads.

maximumPoolSize - the maximum number of threads allowed in the pool.

keepAliveTime - when the number of threads is greater than the core size, this is the maximum time an excess idle thread will wait for a new task before terminating.

unit - the time unit of the keepAliveTime parameter.

workQueue - the queue used to hold tasks before they are executed. This queue holds only the Runnable tasks submitted through the execute method.

threadFactory - the factory the executor uses when it creates a new thread.

handler - the handler used when execution is blocked because the thread bounds and queue capacity have been exceeded.
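To make the seven parameters concrete, here is a minimal sketch of calling this constructor directly (the sizes, queue capacity, and policy choice are illustrative, not taken from the original):

import java.util.concurrent.*;

public class PoolConfigSketch {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                     // corePoolSize
                4,                                     // maximumPoolSize
                30L, TimeUnit.SECONDS,                 // keepAliveTime and its unit
                new ArrayBlockingQueue<Runnable>(10),  // workQueue
                Executors.defaultThreadFactory(),      // threadFactory
                new ThreadPoolExecutor.AbortPolicy()); // handler
        pool.execute(new Runnable() {
            public void run() {
                System.out.println("running on " + Thread.currentThread().getName());
            }
        });
        pool.shutdown();
    }
}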

ThreadPoolExecutor is the underlying implementation used by the Executors class.

In the JDK help documentation, there is a passage:

"It is strongly recommended that programmers use a more convenient Executors factory approach Executors.newCachedThreadPool() (no thread pool, automatic thread recycling), Executors.newFixedThreadPool(int) (fixed-size thread pool) Executors.newSingleThreadExecutor() (a single background thread)

They all pre-defined settings for most usage scenarios. ”

Here is the source code of a few of these factory methods:

ExecutorService newFixedThreadPool(int nThreads): fixed-size thread pool.

As you can see, corePoolSize and maximumPoolSize are the same (in fact, the maximumPoolSize parameter is meaningless when an unbounded queue is used). What do the values of keepAliveTime and unit mean? They mean this implementation does not want to keep excess threads alive at all! Finally, the chosen BlockingQueue is a LinkedBlockingQueue, whose defining characteristic is that it is unbounded.

public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads,
                                  0L, TimeUnit.MILLISECONDS,
                                  new LinkedBlockingQueue<Runnable>());
}

ExecutorService newCachedThreadPool(): unbounded pool, with automatic thread reclamation.

This implementation is interesting. First, it is an unbounded thread pool, so we find maximumPoolSize set as large as it can be (Integer.MAX_VALUE). Second, for the BlockingQueue a SynchronousQueue is chosen. This BlockingQueue may seem a little strange; simply put, in this queue every insert operation must wait for a corresponding remove operation by another thread.

public static ExecutorService newCachedThreadPool() {
    return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                  60L, TimeUnit.SECONDS,
                                  new SynchronousQueue<Runnable>());
}

Let's start with the first parameter of interest, BlockingQueue<Runnable> workQueue. The JDK makes clear that there are three types of queue.

Any BlockingQueue may be used to transfer and hold submitted tasks. The use of this queue interacts with pool sizing:

If fewer than corePoolSize threads are running, the Executor always prefers adding a new thread rather than queuing. (If the number of currently running threads is less than corePoolSize, the task is not put into the queue at all; a thread is started for it directly.)

If corePoolSize or more threads are running, the Executor always prefers queuing the request rather than adding a new thread.

If the request cannot be queued, a new thread is created, unless this would exceed maximumPoolSize, in which case the task is rejected.

Now for the three types of queue.

There are three general strategies for queuing:

Direct handoff. A good default choice for a work queue is a SynchronousQueue, which hands tasks off to threads without otherwise holding them. Here, if no thread is immediately available to run a task, an attempt to queue the task fails, so a new thread is constructed. This policy avoids lockups when handling sets of requests that might have internal dependencies. Direct handoffs generally require unbounded maximumPoolSizes to avoid rejection of newly submitted tasks. This in turn admits the possibility of unbounded thread growth when commands continue to arrive on average faster than they can be processed.

Unbounded queues. Using an unbounded queue (for example, a LinkedBlockingQueue without a predefined capacity) causes new tasks to wait in the queue whenever all corePoolSize threads are busy. Thus, no more than corePoolSize threads are ever created (and the value of maximumPoolSize therefore has no effect). This may be appropriate when each task is completely independent of the others, so tasks cannot affect each other's execution, for example in a web page server. This style of queuing can be useful for smoothing out transient bursts of requests, but it admits the possibility of unbounded work queue growth when commands continue to arrive on average faster than they can be processed.

Bounded queues. A bounded queue (for example, an ArrayBlockingQueue) helps prevent resource exhaustion when used with a finite maximumPoolSize, but it can be more difficult to tune and control. Queue size and maximum pool size may need to be traded off against each other: using large queues and small pools minimizes CPU usage, operating system resources, and context-switching overhead, but can lead to artificially low throughput. If tasks frequently block (for example, if they are I/O bound), the system may be able to schedule more threads than you otherwise allow. Using small queues generally requires larger pool sizes, which keeps CPUs busier but may incur unacceptable scheduling overhead, which also reduces throughput.
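As a small illustration of the direct-handoff strategy described above (a sketch not present in the original; the class name is illustrative), offering an element to a SynchronousQueue fails unless some other thread is already waiting to take from it, which is exactly why the pool falls back to creating a new thread:

import java.util.concurrent.SynchronousQueue;

public class HandoffSketch {
    public static void main(String[] args) {
        SynchronousQueue<Runnable> queue = new SynchronousQueue<Runnable>();
        Runnable task = new Runnable() {
            public void run() { }
        };
        // No thread is waiting in take(), so the offer is refused immediately
        // and nothing is ever stored in the queue.
        System.out.println(queue.offer(task)); // prints: false
    }
}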

Choosing a BlockingQueue.

Example one: using the direct-handoff policy, i.e. SynchronousQueue.

First of all, a SynchronousQueue does not actually store tasks: because of the nature of the queue itself, after offering an element you must wait for another thread to take it before the insertion can succeed, and the thread that takes it is either a core thread or a newly created one. Let's walk through the following scenario.

We construct the ThreadPoolExecutor with these parameters:

new ThreadPoolExecutor(
        2, 3,
        30L, TimeUnit.SECONDS,   // the keepAliveTime value is missing in the original; 30 seconds is assumed here
        new SynchronousQueue<Runnable>(),
        new RecorderThreadFactory("CookieRecorderPool"),
        new ThreadPoolExecutor.CallerRunsPolicy());

Now suppose the two core threads are already busy running tasks.

    1. A new task (A) arrives. According to the rule "if corePoolSize or more threads are running, the Executor always prefers queuing the request rather than adding a new thread", the pool first tries to put A into the queue.
    2. But because a SynchronousQueue is being used, and both core threads are still busy so nobody is waiting to take from it, A cannot actually be inserted.
    3. This triggers the rule "if the request cannot be queued, a new thread is created, unless this would exceed maximumPoolSize, in which case the task is rejected", so a new thread has to be created to run A.
    4. That works for the moment, but if these three tasks are all still unfinished and two more tasks arrive in a row, neither of them can be inserted into the queue either, and the number of threads has already reached maximumPoolSize, so the rejection policy has to be executed.

Therefore, when using a SynchronousQueue you usually need maximumPoolSize to be unbounded so that the situation above can be avoided (if you want to limit the pool, use a bounded queue instead). The JDK puts the reason for using a SynchronousQueue well: this policy avoids lockups when handling sets of requests that may have internal dependencies.

What does that mean? If your tasks A1 and A2 are internally related and A1 needs to run first, then submit A1 and then submit A2. With a SynchronousQueue we can guarantee that A1 is executed first: before A1 has been handed to a thread, A2 cannot be added to the queue.

Example two: using the unbounded-queue policy, i.e. LinkedBlockingQueue

This time take newFixedThreadPool and apply the rules mentioned earlier:

If fewer than corePoolSize threads are running, the Executor always prefers adding a new thread rather than queuing. So what happens when tasks keep arriving?

If corePoolSize or more threads are running, the Executor always prefers queuing the request rather than adding a new thread. Fine, so now the tasks go into the queue; when will a new thread be added?

If the request cannot be queued, a new thread is created, unless this would exceed maximumPoolSize, in which case the task is rejected. Here it gets interesting: can a request ever fail to be queued? Unlike a SynchronousQueue, an unbounded queue can always accept another element (barring resource exhaustion, of course). In other words, the condition for creating a new thread is never triggered! The corePoolSize threads just keep running, taking the next task from the queue whenever they finish one. So you have to guard against a flood of tasks: if each task takes relatively long to execute, and tasks are added far faster than they are processed, the queue keeps growing and memory is soon exhausted.
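A minimal sketch of this behaviour (the class name and numbers are illustrative, not from the original): even with maximumPoolSize set to 10, a pool backed by an unbounded LinkedBlockingQueue never grows past its core size.

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class UnboundedQueueSketch {
    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 10, 60L, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>()); // unbounded work queue
        for (int i = 0; i < 100; i++) {
            pool.execute(new Runnable() {
                public void run() {
                    try { Thread.sleep(500); } catch (InterruptedException e) { }
                }
            });
        }
        Thread.sleep(1000);
        // All 100 tasks were accepted, but only the 2 core threads ever run.
        System.out.println("pool size: " + pool.getPoolSize());      // expected: 2
        System.out.println("queue size: " + pool.getQueue().size()); // tasks still waiting
        pool.shutdown();
    }
}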

Example three: bounded queues, using an ArrayBlockingQueue.

This is the most complex case to use, so there is some reason the JDK does not particularly recommend it. Compared with the cases above, its most important feature is that it prevents resource exhaustion.

For example, consider the following constructor call:

new ThreadPoolExecutor(
        2, 3,
        30L, TimeUnit.SECONDS,   // the keepAliveTime value is missing in the original; 30 seconds is assumed here
        new ArrayBlockingQueue<Runnable>(2),
        new RecorderThreadFactory("CookieRecorderPool"),
        new ThreadPoolExecutor.CallerRunsPolicy());

Assume that none of the tasks ever finishes.

At first, tasks A and B run directly on the core threads. Then, if C and D arrive, they are placed in the queue. If E arrives next, the queue is already full, so an extra thread is added to run it. If yet another task arrives after that, the queue can no longer accept it and the number of threads has reached the maximum, so the rejection policy is used to handle it.
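The walkthrough above can be reproduced with a small sketch (the class name and task bodies are illustrative, and the default thread factory is used instead of the original's RecorderThreadFactory): with core size 2, maximum size 3 and a queue of capacity 2, tasks A and B take the core threads, C and D wait in the queue, E gets the one extra thread, and F triggers the rejection policy, here CallerRunsPolicy, so it runs on the main thread.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedQueueSketch {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 3, 30L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(2),
                Executors.defaultThreadFactory(),
                new ThreadPoolExecutor.CallerRunsPolicy());
        for (char c = 'A'; c <= 'F'; c++) {
            final char name = c;
            pool.execute(new Runnable() {
                public void run() {
                    System.out.println(name + " running on " + Thread.currentThread().getName());
                    try { Thread.sleep(5000); } catch (InterruptedException e) { }
                }
            });
        }
        pool.shutdown();
    }
}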

keepAliveTime

The explanation in the JDK: when the number of threads is greater than the core size, this is the maximum time that excess idle threads will wait for new tasks before terminating.

A bit of a mouthful, but in fact it is not hard to understand. Most applications that use some kind of "pool" have similar parameters to configure, such as maxIdle and minIdle in the DBCP database connection pool.

What does it mean? Continuing the analogy of a boss who borrows extra workers when there is too much work: the extra workers are only ever "borrowed", and as the saying goes, what is borrowed must be returned. The question is when. If a borrowed worker were returned the instant he finished a task, only to be borrowed again as soon as more work showed up, all that back and forth would exhaust the boss.

The reasonable strategy: since the workers have been borrowed anyway, keep them around a while longer. Only after they have gone unused for "a certain period" are they returned. That "certain period" is exactly what keepAliveTime means, and TimeUnit is the unit in which the keepAliveTime value is measured.
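Here is a minimal sketch of keepAliveTime at work (the class name and numbers are illustrative, not from the original): two extra threads are "borrowed" for a burst of four tasks and are returned after five seconds of idleness.

import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class KeepAliveSketch {
    public static void main(String[] args) throws InterruptedException {
        // core 2, max 4: threads above the core size live at most 5 idle seconds
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 5L, TimeUnit.SECONDS,
                new SynchronousQueue<Runnable>());
        for (int i = 0; i < 4; i++) {
            pool.execute(new Runnable() {
                public void run() {
                    try { Thread.sleep(1000); } catch (InterruptedException e) { }
                }
            });
        }
        Thread.sleep(2000);
        System.out.println("just after the burst: " + pool.getPoolSize()); // 4
        Thread.sleep(7000);
        System.out.println("after keepAliveTime:  " + pool.getPoolSize()); // back to 2
        pool.shutdown();
    }
}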

RejectedExecutionHandler

In another situation, even after the boss has borrowed all the workers he can, tasks keep pouring in and the team still cannot keep up, so the whole team has to start turning work away.

The RejectedExecutionHandler interface gives you a chance to customize how rejected tasks are handled. ThreadPoolExecutor already includes four policies by default; since their source code is very simple, it is posted directly here.

CallerRunsPolicy: the thread that called execute runs the task itself. This policy provides a simple feedback-control mechanism that slows down the rate at which new tasks are submitted.

public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
    if (!e.isShutdown()) {
        r.run();
    }
}

This policy obviously does not want to abandon the task. But since the pool has no resources left, the task is simply executed on the thread that called execute.

AbortPolicy: on rejection the handler throws a runtime RejectedExecutionException.

public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
    throw new RejectedExecutionException();
}

This policy simply throws an exception and discards the task.

DiscardPolicy: a task that cannot be executed is silently dropped.

public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
}

This policy discards the task just as AbortPolicy does, except that it does not throw an exception.

DiscardOldestPolicy: if the executor has not been shut down, the task at the head of the work queue is removed and execution is retried (and if that fails again, the process repeats).

public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
    if (!e.isShutdown()) {
        e.getQueue().poll();
        e.execute(r);
    }
}

This policy is a little more involved: as long as the pool has not been shut down, it first discards the oldest task cached in the queue and then tries again to run the new task. This policy needs to be used with care.

Imagine it: if the other threads are all still running, a new task kicks out an old one and is cached in the queue, and the next task to arrive kicks out the oldest task in the queue again.
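If none of the four built-in policies fits, you can also implement RejectedExecutionHandler yourself. Below is a minimal sketch (the class name and logging behaviour are illustrative, not from the original) that simply records the rejection instead of dropping the task silently or throwing:

import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;

public class LoggingRejectionHandler implements RejectedExecutionHandler {
    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
        // Record what was rejected and the state of the pool at that moment.
        System.err.println("Rejected " + r + ": poolSize=" + e.getPoolSize()
                + ", active=" + e.getActiveCount()
                + ", queued=" + e.getQueue().size());
    }
}

It is passed as the last constructor argument, in the same position where CallerRunsPolicy appears in the examples above.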

To summarize:

keepAliveTime is related to maximumPoolSize and to the type of BlockingQueue. If the BlockingQueue is unbounded, then maximumPoolSize is never triggered, and keepAliveTime naturally has no meaning.

Conversely, if the core size is small, the bounded BlockingQueue is small, and keepAliveTime is short, then under a frequent task load the system will constantly be creating and reclaiming threads.

public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads,
                                  0L, TimeUnit.MILLISECONDS,
                                  new LinkedBlockingQueue<Runnable>());
}


Original link: http://blog.csdn.net/sd0902/article/details/8395677
