Applying the thread pool: ThreadPoolExecutor

Source: Internet
Author: User
Tags: connection pooling

The size of the thread pool

The size of the thread pool is the first consideration when configuring or tuning an application's thread pool.

A reasonable pool size depends on the types of tasks that will be submitted and the characteristics of the system the pool is deployed on. When tuning a thread pool, avoid the extremes of a pool that is too large or too small:

  • Too large: threads contend for scarce CPU and memory resources, which can drive memory usage up and even exhaust resources.
  • Too small: processor cores sit idle while work remains, costing throughput.
It is difficult to calculate an exact pool size, so in practice we estimate one. For compute-intensive tasks, a system with N processor cores makes good use of a pool of N+1 threads (the extra thread fills in when one thread pauses, for example on a fault). For tasks that include I/O or other blocking operations, not every thread is runnable at all times, so a larger pool is needed; here you should also estimate the ratio of the time a task spends waiting to the time it spends computing.
In Java, you can get the number of available processor cores with the following code:

int cpuNum = Runtime.getRuntime().availableProcessors();
Of course, the number of cores is not the only factor that affects pool size. If each thread in the pool uses another pooled resource (such as a database connection pool), the size of that resource pool must also be taken into account when choosing the thread pool size.
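The wait-to-compute ratio mentioned above can be turned into a rough sizing heuristic (the well-known formula from "Java Concurrency in Practice": threads = cores × target utilization × (1 + wait/compute)). The class and method names below are illustrative, not a standard API:

```java
public class PoolSizer {
    // Sizing heuristic: threads = cores * targetUtilization * (1 + waitTime/computeTime).
    // For pure compute (ratio 0, utilization 1.0) this reduces to one thread per core.
    static int poolSize(int cpus, double targetUtilization, double waitToComputeRatio) {
        return (int) (cpus * targetUtilization * (1 + waitToComputeRatio));
    }

    public static void main(String[] args) {
        int cpus = Runtime.getRuntime().availableProcessors();
        // e.g. tasks that wait 5x as long as they compute, at 75% target CPU use
        System.out.println(poolSize(cpus, 0.75, 5.0));
    }
}
```

For example, on an 8-core machine with tasks that spend five times as long waiting as computing, this suggests a pool of 36 threads.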

Configuring ThreadPoolExecutor

We usually build a thread pool with the factory methods of the utility class Executors (e.g. newCachedThreadPool, newFixedThreadPool, newScheduledThreadPool). Looking at the source of Executors, each of these methods simply instantiates a ThreadPoolExecutor with different constructor arguments. ThreadPoolExecutor is a concrete implementation of the abstract class AbstractExecutorService and provides several overloaded constructors for building different kinds of pools. From the ThreadPoolExecutor constructor we can read off the parameters that configure a pool:
  • corePoolSize: the number of core threads kept in the pool, even when they are idle.
  • maximumPoolSize: the maximum number of threads the pool may hold.
  • keepAliveTime: the maximum time an idle thread survives once the thread count exceeds the core size.
  • unit: the TimeUnit for keepAliveTime.
  • workQueue: the queue that holds tasks waiting to be executed.
  • handler: the policy applied to newly arriving tasks when both the pool and the work queue are full, also called the saturation policy.

The core pool size (corePoolSize), maximum pool size (maximumPoolSize), and keep-alive time (keepAliveTime) together govern the creation and destruction of threads.
When a ThreadPoolExecutor is first created, the core threads are not started immediately; they are created as tasks are submitted, unless you call prestartAllCoreThreads. Once the number of threads reaches corePoolSize, newly submitted tasks are placed in the workQueue instead of spawning new threads. When the workQueue fills up, new threads are created, but the total never exceeds maximumPoolSize. When the current thread count exceeds corePoolSize, an idle thread is destroyed once it has been idle longer than keepAliveTime; while the count is at or below corePoolSize, keepAliveTime has no effect. By tuning the core size and keep-alive time you encourage the pool to return the resources held by idle threads so they can be put to other use, but you must weigh this against the system's task volume and frequency, since frequently creating and destroying threads carries significant overhead.
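Putting the parameters together, here is a minimal sketch of constructing a pool directly with the full ThreadPoolExecutor constructor (the sizes and queue capacity are arbitrary example values):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CustomPoolDemo {
    public static void main(String[] args) throws Exception {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                              // corePoolSize
                4,                              // maximumPoolSize
                60L, TimeUnit.SECONDS,          // keepAliveTime and its unit
                new ArrayBlockingQueue<>(10),   // bounded work queue
                new ThreadPoolExecutor.AbortPolicy()); // saturation policy (handler)
        pool.submit(() -> System.out.println("task executed"));
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```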
1. How Executors builds a fixed-size thread pool via newFixedThreadPool:
public static ExecutorService newFixedThreadPool(int nThreads, ThreadFactory threadFactory) {
    return new ThreadPoolExecutor(nThreads, nThreads,
                                  0L, TimeUnit.MILLISECONDS,
                                  new LinkedBlockingQueue<Runnable>(),
                                  threadFactory);
}

You can see that it directly creates a ThreadPoolExecutor instance. In a FixedThreadPool the core pool size equals the maximum pool size, and keepAliveTime is set to 0, meaning idle threads never time out. It uses a LinkedBlockingQueue as its work queue, which is unbounded: if more than nThreads tasks are running, subsequent tasks wait in the LinkedBlockingQueue until a thread becomes available. If tasks arrive faster than threads free up, the waiting queue can swell until memory runs out.
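The unbounded growth is easy to observe: block a fixed pool's only thread and every subsequent submission accumulates in the queue. A small sketch (the latch is just scaffolding to keep the worker busy):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class UnboundedQueueDemo {
    public static void main(String[] args) throws Exception {
        ThreadPoolExecutor pool = (ThreadPoolExecutor) Executors.newFixedThreadPool(1);
        CountDownLatch block = new CountDownLatch(1);
        pool.submit(() -> { block.await(); return null; }); // occupy the only thread
        for (int i = 0; i < 1000; i++) {
            pool.submit(() -> {});                          // these all pile up in the queue
        }
        System.out.println("queued tasks: " + pool.getQueue().size());
        block.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Nothing stops the loop from queuing 1000 (or 10 million) tasks; with a bounded queue the pool would push back instead.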
2. How Executors builds a cached thread pool via newCachedThreadPool:
public static ExecutorService newCachedThreadPool(ThreadFactory threadFactory) {
    return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                  60L, TimeUnit.SECONDS,
                                  new SynchronousQueue<Runnable>(),
                                  threadFactory);
}

A CachedThreadPool has a core pool size of 0 and a maximum pool size of Integer.MAX_VALUE, i.e. effectively unbounded, with a keep-alive time of 60 seconds. It uses a SynchronousQueue as its work queue. This queue is not really a queue: it has no internal capacity to store elements, but is a mechanism for handing work directly from one thread to another. To put a task into a SynchronousQueue, another thread must simultaneously be taking from it. As a result, each time we submit a task to a CachedThreadPool, the ThreadPoolExecutor either hands it to a waiting idle thread or creates a new thread to take and execute it.
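The "handoff, not storage" behavior can be seen with SynchronousQueue directly: offer() fails unless a consumer is already waiting to take:

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.TimeUnit;

public class HandoffDemo {
    public static void main(String[] args) throws Exception {
        SynchronousQueue<String> queue = new SynchronousQueue<>();
        // No consumer is waiting, so the offer fails: the queue stores nothing.
        System.out.println(queue.offer("task"));
        // With a consumer blocked in take(), the handoff succeeds.
        Thread consumer = new Thread(() -> {
            try {
                System.out.println("took: " + queue.take());
            } catch (InterruptedException ignored) {}
        });
        consumer.start();
        System.out.println(queue.offer("task", 1, TimeUnit.SECONDS));
        consumer.join();
    }
}
```

This is why ThreadPoolExecutor, on a failed offer, falls back to creating a new thread: in a cached pool the queue never absorbs work on its own.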
Guided by the two pools created above, we can also use ThreadPoolExecutor directly to build a custom thread pool, but several issues deserve attention:
    • Work queue (workQueue) selection
    • Saturation policy (handler)
    • Thread factory customization (threadFactory)
First, work queue selection. In the two examples above, FixedThreadPool chooses LinkedBlockingQueue as its work queue while CachedThreadPool chooses SynchronousQueue; different queue types give the pool different behavior. We use a thread pool rather than a thread per task (thread-per-task) partly for ease of thread management and partly to avoid the performance problems of creating threads without restraint. But with a thread pool, if new requests arrive faster than the pool can process them, they wait in the queue, and if they arrive too fast there is still a risk of exhausting resources. ThreadPoolExecutor lets you supply a BlockingQueue to hold tasks awaiting execution. There are three kinds of queue to choose from:
    • Unbounded queue
    • Bounded queue
    • Synchronous handoff queue
Common implementations of the BlockingQueue interface include LinkedBlockingQueue, ArrayBlockingQueue, PriorityBlockingQueue, and SynchronousQueue.
For example, newFixedThreadPool and newSingleThreadExecutor choose an unbounded LinkedBlockingQueue, allowing the number of queued tasks to grow without limit, which is unsafe to a degree. A more prudent resource-management strategy is to use a bounded queue, such as an ArrayBlockingQueue, a bounded LinkedBlockingQueue, or a PriorityBlockingQueue, to prevent tasks from piling up too fast and draining resources. But bounded queues raise a new problem: what happens when the bound is reached and tasks keep arriving? That is exactly what the saturation policy handles. Finally, there is the synchronous handoff queue, such as the SynchronousQueue used by CachedThreadPool above; it suits very large or unbounded pools, effectively bypasses queuing by handing tasks directly to an executing thread, and tends to be more efficient.
Using a FIFO queue such as LinkedBlockingQueue or ArrayBlockingQueue means tasks execute in arrival order, a fair execution strategy. If you need to control execution order, you can use the priority-ordered blocking queue PriorityBlockingQueue. Note: a bounded pool or bounded queue is justified only when tasks are independent of one another. If tasks depend on each other, a bounded pool or queue can cause thread-starvation deadlock. For example, when thread A in the pool executes Task 1, which depends on Task 2, but Task 2 sits in the queue because no thread is free, the executing thread waits on a queued task while the queued task waits for an executing thread to finish. Using an unbounded pool avoids this class of problem.
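The starvation deadlock above can be reproduced with a single-thread pool. In this sketch the inner get() uses a timeout so the demo terminates instead of hanging forever, which is what would happen with a plain get():

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class StarvationDemo {
    public static void main(String[] args) throws Exception {
        // The outer task occupies the pool's only worker while it waits
        // for the inner task, which can never start: a circular wait.
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<String> outer = pool.submit(() -> {
            Future<String> inner = pool.submit(() -> "inner result");
            return inner.get(1, TimeUnit.SECONDS); // would block forever without the timeout
        });
        try {
            outer.get();
        } catch (ExecutionException e) {
            System.out.println("deadlock detected: " + e.getCause().getClass().getSimpleName());
        }
        pool.shutdownNow();
    }
}
```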
Second, the saturation policy: the strategy for handling subsequent tasks once the pool is saturated. When a bounded queue fills (both the pool and the waiting queue are full), the saturation policy takes effect. When building a pool with ThreadPoolExecutor we pass the saturation policy in as the handler parameter described above, of type RejectedExecutionHandler. The JDK class library ships several implementations of the RejectedExecutionHandler interface as static nested classes of ThreadPoolExecutor:
    • AbortPolicy: throws a RejectedExecutionException directly.
    • CallerRunsPolicy: runs the task in the caller's thread.
    • DiscardPolicy: silently discards the newly submitted task.
    • DiscardOldestPolicy: discards the oldest queued task, then retries the submission.
The default "abort" policy throws an unchecked exception for the caller to catch and handle with its own logic. The "discard" policy silently drops the arriving task. The "discard oldest" policy drops tasks selectively: it discards the oldest (longest-waiting) task, which would otherwise be executed next, and then retries submitting the new task. The "caller runs" policy neither discards the task nor throws an exception; it pushes the task back to the caller to relieve the load, so the most recently submitted task runs in the calling thread rather than in the pool.

With an unbounded queue we can instead use blocking to control the rate at which tasks are submitted to the pool, for example by sizing a semaphore relative to the pool to throttle the task injection rate.

A note on the "caller runs" policy: suppose the main thread keeps submitting tasks to the pool. Once saturation is reached, a newly submitted task is pushed back to the main thread to execute, so the main thread spends time running that task instead of submitting more, and the pool gets time to work through its waiting tasks. In a web service, while the main thread runs the pushed-back task it stops accepting new web requests, so new requests cannot reach the application and wait at the TCP layer, where the TCP protocol decides how to handle them. In this way the load shifts gradually from the application's thread pool to the main thread and then to the TCP layer, letting the server degrade gracefully under heavy load.
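The semaphore-throttling idea can be sketched as a wrapper that blocks submitters once a bound is reached (the same shape as the BoundedExecutor example in "Java Concurrency in Practice"; the class name and bound value here are illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class BoundedExecutor {
    private final ExecutorService pool;
    private final Semaphore permits;

    // bound = pool size plus however many queued tasks you are willing to tolerate.
    public BoundedExecutor(ExecutorService pool, int bound) {
        this.pool = pool;
        this.permits = new Semaphore(bound);
    }

    // Blocks the submitting thread when the bound is reached, throttling the injection rate.
    public void submit(Runnable task) throws InterruptedException {
        permits.acquire();
        try {
            pool.execute(() -> {
                try {
                    task.run();
                } finally {
                    permits.release();
                }
            });
        } catch (RejectedExecutionException e) {
            permits.release(); // the task never ran, so return its permit
            throw e;
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        BoundedExecutor bounded = new BoundedExecutor(pool, 4);
        for (int i = 0; i < 10; i++) {
            int id = i;
            bounded.submit(() -> System.out.println("task " + id));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```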
Third, thread factory customization: new threads in the pool are created by a thread factory. The ThreadFactory interface has a single method, newThread, which creates a new thread; the default factory creates new non-daemon threads. You can customize thread creation by implementing the ThreadFactory interface.
Extending ThreadPoolExecutor

ThreadPoolExecutor was designed to be extensible. It provides several hook methods for subclasses to override: beforeExecute, afterExecute, and terminated. The threads that execute tasks call these methods, and they can be used to add logging, timing, monitoring, or statistics gathering. Much like implementing cross-cutting logic with AOP, ThreadPoolExecutor lets us fill in the aspects it has already set up for us. The following code shows a custom thread pool that adds logging and timing statistics around task execution by overriding beforeExecute, afterExecute, and terminated.
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;
import java.util.logging.Logger;

public class TimingThreadPool extends ThreadPoolExecutor {
    private final ThreadLocal<Long> startTime = new ThreadLocal<Long>();
    private final Logger log = Logger.getLogger("TimingThreadPool");
    private final AtomicLong numTasks = new AtomicLong();
    private final AtomicLong totalTime = new AtomicLong();

    public TimingThreadPool(int corePoolSize, int maximumPoolSize,
                            long keepAliveTime, TimeUnit unit,
                            BlockingQueue<Runnable> workQueue) {
        super(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue);
    }

    @Override
    protected void beforeExecute(Thread t, Runnable r) {
        super.beforeExecute(t, r);
        log.fine(String.format("Thread %s: start %s", t, r));
        startTime.set(System.nanoTime());
    }

    @Override
    protected void afterExecute(Runnable r, Throwable t) {
        try {
            long taskTime = System.nanoTime() - startTime.get();
            numTasks.incrementAndGet();
            totalTime.addAndGet(taskTime);
            log.fine(String.format("Thread %s: end %s, time=%dns", t, r, taskTime));
        } finally {
            super.afterExecute(r, t);
        }
    }

    @Override
    protected void terminated() {
        try {
            log.info(String.format("Terminated: avg time=%dns",
                    totalTime.get() / numTasks.get()));
        } finally {
            super.terminated();
        }
    }

    public Logger getLog() {
        return log;
    }
}

In this way, timing and log information about the pool's task execution is written to the Logger.

