ThreadPoolExecutor Thread Pool

Source: Internet
Author: User
Tags: connection pooling, thread class

One: Class inheritance structure

(figure: inheritance relationship)

Two: Constructors

(figure: constructor signatures)

(1) The size of the thread pool may be limited not only by any explicit bound you set, but also implicitly by constraints on other resources, such as a JDBC connection pool.

(2) Long-running tasks.

If tasks block for too long, the thread pool's responsiveness degrades even when there is no deadlock. A long-running task not only ties up a worker thread but also lengthens the effective service time of everything queued behind it. If, in the steady state, the pool has far fewer threads than there are long-running tasks, eventually every thread may end up running a long task, hurting overall responsiveness.

One technique that can mitigate the impact of long-running tasks is to limit the time a task waits for a resource, rather than waiting indefinitely. Most blocking methods in the platform class library come in both timed and untimed versions, for example Thread.join, BlockingQueue.put, and CountDownLatch.await. If a wait times out, you can mark the task as failed and then abort it or put it back on the queue for later execution. If the thread pool is constantly filled with blocked tasks, that may also indicate that the pool is too small.
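As a minimal sketch of the timed-wait idea above, the following uses the timed `poll` counterpart of `BlockingQueue.take` to give up after a deadline instead of waiting forever (the 100 ms deadline and the queue itself are illustrative choices, not from the original text):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class TimedWaitDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> results = new ArrayBlockingQueue<>(1);

        // Timed version of a blocking take: give up after 100 ms
        // instead of waiting indefinitely for a result that may never come.
        String r = results.poll(100, TimeUnit.MILLISECONDS);
        if (r == null) {
            // Treat the timeout as a failed task; we could retry or re-queue it.
            System.out.println("timed out waiting for result");
        } else {
            System.out.println("got: " + r);
        }
    }
}
```

The same pattern applies to the timed forms of `offer`, `Future.get`, and `CountDownLatch.await`.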

(3) Setting the size of the thread pool

(3.1) The ideal size of a thread pool depends on the types of tasks being submitted and the characteristics of the deployment system. Pool sizes should rarely be hard-coded; instead they should be provided through some configuration mechanism or computed dynamically from Runtime.getRuntime().availableProcessors().

(3.2) Sizing a thread pool is not difficult as long as you avoid the extremes of "too big" and "too small". If the pool is too big, a large number of threads compete for relatively scarce CPU and memory resources, which not only raises memory usage but may exhaust other resources as well. If it is too small, processors sit idle even though there is work to do, reducing throughput.

(3.3) To size a thread pool properly, you need to analyze the computing environment, the resource budget, and the nature of the tasks. How many CPUs does the deployment system have? How much memory? Are the tasks compute-intensive, I/O-intensive, or both? Do they require a scarce resource such as a JDBC connection? If you have different categories of tasks with very different behaviors, consider using multiple thread pools so that each can be tuned to its own workload.

(3.4) For compute-intensive tasks, on a system with N_cpu processors, a thread pool of N_cpu + 1 threads usually achieves optimal utilization. (The "extra" thread ensures that CPU clock cycles are not wasted when a compute-bound thread occasionally pauses due to a page fault or some other reason.)

(3.5) For tasks that include I/O or other blocking operations, you must estimate the ratio of wait time to compute time. The estimate need not be precise and can be obtained through profiling or monitoring tools. Alternatively, you can size the pool empirically: run the application with various pool sizes under a benchmark load and observe the level of CPU utilization. Given the following definitions:

N_cpu = number of CPUs

U_cpu = target CPU utilization, 0 <= U_cpu <= 1

W/C = ratio of wait time to compute time

To keep the processors at the desired utilization, the optimal pool size is:

N_threads = N_cpu * U_cpu * (1 + W/C)

You can obtain the number of CPUs through Runtime:

    int N_CPUS = Runtime.getRuntime().availableProcessors();
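Putting the formula and the Runtime call together, a small sketch of the sizing computation might look like this (the method name and the sample utilization and wait/compute values are illustrative, not from the original text):

```java
public class PoolSizing {
    // N_threads = N_cpu * U_cpu * (1 + W/C), rounded up to a whole thread.
    static int optimalPoolSize(double targetUtilization, double waitToComputeRatio) {
        int nCpus = Runtime.getRuntime().availableProcessors();
        return (int) Math.ceil(nCpus * targetUtilization * (1 + waitToComputeRatio));
    }

    public static void main(String[] args) {
        // e.g. a 50% target utilization and tasks that wait 9x as long as they compute
        System.out.println(optimalPoolSize(0.5, 9.0));
    }
}
```

The printed size depends on the machine's processor count, which is exactly why the text recommends computing it at runtime rather than hard-coding it.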

(4) Constructor parameters

    • corePoolSize

The base (target) size of the thread pool. Core threads are kept alive by default and are not reclaimed by keepAliveTime even when idle, unless allowCoreThreadTimeOut is set to true.

    • maximumPoolSize

The maximum pool size: an upper bound on the number of threads that can be active at once.

    • keepAliveTime

If a thread remains idle longer than the keep-alive time, it becomes a candidate for reclamation and is terminated if the current pool size exceeds the core size.

Analysis: the core size (corePoolSize), maximum size (maximumPoolSize), and keep-alive time together govern thread creation and teardown. By tuning the core size and keep-alive time, you can encourage the pool to reclaim the resources held by idle threads, making them available for other work.
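A minimal sketch of constructing a pool with these three parameters (the specific values of 2 core threads, 4 maximum, 60-second keep-alive, and a queue of 100 are illustrative assumptions, not from the original text):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolConfigDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                      // corePoolSize
                4,                                      // maximumPoolSize
                60L, TimeUnit.SECONDS,                  // keepAliveTime for idle non-core threads
                new ArrayBlockingQueue<Runnable>(100)); // bounded work queue

        // Opt in to letting even core threads time out when idle:
        pool.allowCoreThreadTimeOut(true);

        System.out.println(pool.getCorePoolSize() + "/" + pool.getMaximumPoolSize());
        pool.shutdown();
    }
}
```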

Three: Basic implementations

(1) newCachedThreadPool()

    public static ExecutorService newCachedThreadPool() {
        return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                      60L, TimeUnit.SECONDS,
                                      new SynchronousQueue<Runnable>());
    }

The core size is zero and the maximum is Integer.MAX_VALUE, so the pool can grow without bound and shrinks automatically as demand falls. The effectively unbounded maximum size is a drawback in some situations.

(2) newFixedThreadPool(int nThreads)

    public static ExecutorService newFixedThreadPool(int nThreads) {
        return new ThreadPoolExecutor(nThreads, nThreads,
                                      0L, TimeUnit.MILLISECONDS,
                                      new LinkedBlockingQueue<Runnable>());
    }

The drawback is that LinkedBlockingQueue is an unbounded queue, so under some workloads a very large backlog of queued tasks can build up.

(3) newScheduledThreadPool(int corePoolSize)

    /**
     * Creates a thread pool that can schedule commands to run after a
     * given delay, or to execute periodically.
     *
     * @param corePoolSize the number of threads to keep in the pool,
     *        even if they are idle
     * @return a newly created scheduled thread pool
     * @throws IllegalArgumentException if {@code corePoolSize < 0}
     */
    public static ScheduledExecutorService newScheduledThreadPool(int corePoolSize) {
        return new ScheduledThreadPoolExecutor(corePoolSize);
    }

    /**
     * Creates a new {@code ScheduledThreadPoolExecutor} with the
     * given core pool size.
     *
     * @param corePoolSize the number of threads to keep in the pool, even
     *        if they are idle, unless {@code allowCoreThreadTimeOut} is set
     * @throws IllegalArgumentException if {@code corePoolSize < 0}
     */
    public ScheduledThreadPoolExecutor(int corePoolSize) {
        super(corePoolSize, Integer.MAX_VALUE, 0, NANOSECONDS,
              new DelayedWorkQueue());
    }

(4) newSingleThreadExecutor()

    public static ExecutorService newSingleThreadExecutor() {
        return new FinalizableDelegatedExecutorService
            (new ThreadPoolExecutor(1, 1,
                                    0L, TimeUnit.MILLISECONDS,
                                    new LinkedBlockingQueue<Runnable>()));
    }

Summary: all of these are static factory methods on the Executors class, such as Executors.newCachedThreadPool(), and each is implemented on top of ThreadPoolExecutor. ThreadPoolExecutor offers several constructors; it is a flexible, robust thread pool that supports a wide range of customization.

Four: Managing queued tasks

(1) Single-threaded executors are a notable exception: they guarantee that no tasks execute concurrently, which enables thread confinement as a thread-safety technique.

(2) Creating threads without bound leads to instability. You can address this with a fixed-size thread pool rather than creating a new thread for every incoming request. But that solution is incomplete: under heavy load the application can still exhaust resources, just with lower probability. If new requests arrive faster than the pool can process them, they pile up. In a thread pool, these requests wait in a queue of Runnables managed by the Executor rather than competing for CPU time as threads; representing a waiting task with a Runnable and a queue node is, of course, far cheaper than representing it with a thread. Still, if clients keep submitting requests faster than the server can service them, resources can eventually be exhausted.

(3) ThreadPoolExecutor lets you supply a BlockingQueue to hold tasks awaiting execution. There are three basic approaches to task queuing: an unbounded queue, a bounded queue, and synchronous handoff. The choice of queue interacts with other configuration parameters such as the pool size.

(4) newFixedThreadPool and newSingleThreadExecutor use an unbounded LinkedBlockingQueue by default. If all worker threads are busy, tasks wait in the queue; if tasks keep arriving faster than the pool can process them, the queue grows without bound.

(5) A more prudent resource-management strategy is to use a bounded queue, such as an ArrayBlockingQueue, a bounded LinkedBlockingQueue, or a PriorityBlockingQueue. Bounded queues help prevent resource exhaustion, but they introduce a new question: what happens to new tasks when the queue fills up? (This is what saturation policies address.) With a bounded work queue, the queue size and the pool size must be tuned together. A small pool with a large queue reduces memory and CPU usage and cuts down on context switching, at the potential cost of throughput.

(6) For very large or unbounded pools, you can bypass queuing entirely by using a SynchronousQueue, handing tasks directly from producers to worker threads. A SynchronousQueue is not really a queue at all but a handoff mechanism between threads: to put an element into a SynchronousQueue, another thread must already be waiting to accept it.
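The handoff behavior can be seen directly with a small sketch: an `offer` with no waiting consumer fails immediately, while an `offer` with a consumer standing by succeeds (the string payload and one-second timeout are illustrative choices):

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.TimeUnit;

public class HandoffDemo {
    public static void main(String[] args) throws InterruptedException {
        SynchronousQueue<String> q = new SynchronousQueue<>();

        // No thread is waiting to receive, so the handoff fails immediately:
        System.out.println("offer with no taker: " + q.offer("task"));

        // With a consumer standing by, the handoff succeeds:
        Thread taker = new Thread(() -> {
            try {
                System.out.println("took: " + q.take());
            } catch (InterruptedException ignored) {}
        });
        taker.start();
        System.out.println("offer with taker: " + q.offer("task", 1, TimeUnit.SECONDS));
        taker.join();
    }
}
```

This is why newCachedThreadPool pairs a SynchronousQueue with an unbounded maximum size: there must always be a thread available to accept the handoff.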

(7) With a FIFO (first-in, first-out) queue such as LinkedBlockingQueue or ArrayBlockingQueue, tasks are executed in the order they arrive. For more control over execution order, you can instead use a PriorityBlockingQueue, which schedules tasks by priority.

(8) Bounding the thread pool or the work queue is appropriate only when tasks are independent of one another. If tasks depend on each other, a bounded pool or queue can cause thread-starvation deadlock; in that case, use an unbounded pool such as newCachedThreadPool.
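A sketch of the thread-starvation problem described above, using a single-thread executor as the extreme case of a bounded pool; the 200 ms timeout is an assumption added so the demo fails fast instead of hanging forever:

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class StarvationDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService single = Executors.newSingleThreadExecutor();
        Future<String> outer = single.submit(() -> {
            // The subtask can never start: the only worker thread is busy
            // running this task, which in turn is waiting on the subtask.
            Future<String> inner = single.submit(() -> "inner result");
            return inner.get(200, TimeUnit.MILLISECONDS); // times out instead of deadlocking
        });
        try {
            System.out.println(outer.get());
        } catch (ExecutionException e) {
            System.out.println("starved: " + e.getCause().getClass().getSimpleName());
        } finally {
            single.shutdownNow();
        }
    }
}
```

Without the timeout, the outer task would wait forever on the inner one, which is exactly the deadlock a larger or unbounded pool avoids.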

Five: Saturation policies

(1) When a bounded queue fills up, the saturation policy takes effect. The saturation policy for a ThreadPoolExecutor can be changed by calling setRejectedExecutionHandler. (The saturation policy is also applied when a task is submitted to an Executor that has been shut down.) The JDK provides several RejectedExecutionHandler implementations, each embodying a different saturation policy: AbortPolicy, CallerRunsPolicy, DiscardPolicy, and DiscardOldestPolicy.

(2) The abort policy is the default; it throws the unchecked RejectedExecutionException, which the caller can catch and handle as needed.

(3) The discard policy silently drops a newly submitted task that cannot be placed on the queue for execution.

(4) The discard-oldest policy drops the task that would otherwise be executed next, then retries submitting the new task. (If the work queue is a priority queue, this discards the highest-priority element, so combining the discard-oldest policy with a priority queue is a bad idea.)

(5) The caller-runs policy implements a form of throttling that neither discards tasks nor throws an exception, but instead pushes some of the work back to the caller, slowing the flow of new tasks. It executes a newly submitted task not in a pool thread, but in the thread that calls execute. With a bounded queue and the caller-runs policy, once all pool threads are occupied and the work queue is full, the next task runs in the main thread during its call to execute. Since that task takes some time, the main thread cannot submit anything else for a while, giving the worker threads time to catch up on the tasks in progress. During this period the main thread is not calling accept either, so incoming requests queue up in the TCP layer instead of in the application. Under sustained overload, the TCP layer eventually finds its own queue full and begins dropping requests as well. Overload thus propagates outward, from the pool threads to the work queue to the application to the TCP layer and finally to the client, letting the server degrade gracefully under high load.

You can set the saturation policy in the following ways:

    executor.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
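A small sketch of saturation in action, using deliberately tiny illustrative bounds (one worker, a queue of one) so that the default abort policy triggers on the third submission:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class SaturationDemo {
    public static void main(String[] args) throws InterruptedException {
        // One worker thread, a queue of capacity one, default AbortPolicy.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<Runnable>(1));

        CountDownLatch release = new CountDownLatch(1);
        pool.execute(() -> {                 // occupies the only worker
            try { release.await(); } catch (InterruptedException ignored) {}
        });
        pool.execute(() -> {});              // fills the queue

        try {
            pool.execute(() -> {});          // nowhere to go: rejected
        } catch (RejectedExecutionException e) {
            System.out.println("rejected by AbortPolicy");
        }

        release.countDown();
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.SECONDS);
    }
}
```

Installing `new ThreadPoolExecutor.CallerRunsPolicy()` via setRejectedExecutionHandler before the third submission would instead run that task on the submitting thread, as described above.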

Six: Thread factories

(1) Whenever the pool needs to create a thread, it does so through a thread factory. The default thread factory creates a new non-daemon thread with no special configuration; by supplying your own thread factory, you can customize the threads the pool creates. ThreadFactory defines a single method, newThread, which is called whenever the pool needs a new thread.

(2) There are many reasons to use a custom thread factory. You might want to install an UncaughtExceptionHandler on pool threads, or instantiate a custom Thread subclass that logs debugging information. You might want to modify thread priority (usually not a good idea) or daemon status (also not a good idea). Or you may simply want to give threads more meaningful names to make thread dumps and error logs easier to interpret.

    public interface ThreadFactory {
        /**
         * Constructs a new {@code Thread}. Implementations may also initialize
         * priority, name, daemon status, {@code ThreadGroup}, etc.
         *
         * @param r a runnable to be executed by the new thread instance
         * @return the constructed thread, or {@code null} if the request to
         *         create a thread is rejected
         */
        Thread newThread(Runnable r);
    }

By implementing this interface, you can supply the pool with your own thread factory.
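A minimal sketch of such a factory, giving pool threads meaningful names as suggested above; the class name `NamedThreadFactory` and the "billing" pool name are illustrative placeholders:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class NamedThreadFactory implements ThreadFactory {
    private final String poolName;
    private final AtomicInteger counter = new AtomicInteger(1);

    public NamedThreadFactory(String poolName) { this.poolName = poolName; }

    @Override
    public Thread newThread(Runnable r) {
        // Name each thread after its pool, e.g. "billing-worker-1".
        Thread t = new Thread(r, poolName + "-worker-" + counter.getAndIncrement());
        t.setDaemon(false); // explicit non-daemon, like the default factory
        return t;
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool =
                Executors.newFixedThreadPool(2, new NamedThreadFactory("billing"));
        pool.execute(() -> System.out.println(Thread.currentThread().getName()));
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.SECONDS);
    }
}
```

All of the Executors factory methods have overloads that accept a ThreadFactory, as shown in main above.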

Seven: Customizing ThreadPoolExecutor after construction

(1) Most of the options passed to the ThreadPoolExecutor constructor (such as core size, maximum size, keep-alive time, thread factory, and rejected-execution handler) can still be changed after construction through setters. If the Executor was created by one of the factory methods in Executors (other than newSingleThreadExecutor), you can cast the result to ThreadPoolExecutor to reach those setters, as follows:

    ExecutorService exec = Executors.newCachedThreadPool();
    if (exec instanceof ThreadPoolExecutor)
        ((ThreadPoolExecutor) exec).setCorePoolSize(10);
    else
        throw new AssertionError("Oops, bad assumption");

Eight: Extending ThreadPoolExecutor

(1) ThreadPoolExecutor is designed for extension, providing several hooks that subclasses can override: beforeExecute, afterExecute, and terminated. These can be used to extend the behavior of ThreadPoolExecutor.

(2) beforeExecute and afterExecute are called in the thread that runs the task, and can be used to add logging, timing, monitoring, or statistics gathering. afterExecute is called whether the task returns normally from run or throws an exception. (If the task completes with an Error, afterExecute is not called.) If beforeExecute throws a RuntimeException, the task is not executed and afterExecute is not called either.

(3) terminated is called when the pool finishes shutting down, after all tasks have completed and all worker threads have exited. It can be used to release resources allocated by the Executor during its lifetime, as well as to send notifications, write log entries, or collect finalization statistics.
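The three hooks can be combined into a timing pool, a sketch in the spirit of the uses listed above; the class name `TimingThreadPool` and the fixed-size configuration are illustrative assumptions:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class TimingThreadPool extends ThreadPoolExecutor {
    private final ThreadLocal<Long> startTime = new ThreadLocal<>();
    private final AtomicLong numTasks = new AtomicLong();
    private final AtomicLong totalTime = new AtomicLong();

    public TimingThreadPool(int size) {
        super(size, size, 0L, TimeUnit.MILLISECONDS,
              new LinkedBlockingQueue<Runnable>());
    }

    @Override protected void beforeExecute(Thread t, Runnable r) {
        super.beforeExecute(t, r);
        startTime.set(System.nanoTime()); // runs in the worker thread, per task
    }

    @Override protected void afterExecute(Runnable r, Throwable t) {
        try {
            long elapsed = System.nanoTime() - startTime.get();
            numTasks.incrementAndGet();
            totalTime.addAndGet(elapsed);
        } finally {
            super.afterExecute(r, t);
        }
    }

    @Override protected void terminated() {
        try {
            // Summary statistic emitted once, after all workers have exited.
            System.out.printf("avg time per task: %dns%n",
                              totalTime.get() / Math.max(1, numTasks.get()));
        } finally {
            super.terminated();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        TimingThreadPool pool = new TimingThreadPool(2);
        for (int i = 0; i < 4; i++) {
            pool.execute(() -> { /* simulated work */ });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.SECONDS);
    }
}
```

Note the calls to the super hooks: skipping them would silently disable any behavior a further superclass adds.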

