Thread pool principle explained, with a Java code example

Source: Internet
Author: User
Why use a thread pool

For server-side programs, clients typically submit short tasks (short execution time, relatively simple work) and expect the server to process them and return results quickly. Creating a new thread for every incoming task is an acceptable choice in the prototype phase, but it is a poor choice when thousands of tasks are submitted to the server, because it would mean creating tens of thousands of threads. Doing so forces the operating system to perform frequent thread context switches and needlessly increases the load on the system; moreover, the creation and destruction of threads consume system resources, so resources are wasted. A thread pool solves this problem well.
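
To make the contrast concrete, here is a minimal sketch (not from the original article) comparing the thread-per-task approach with a fixed-size pool from the standard java.util.concurrent library; the task body and the pool size of 5 are hypothetical placeholders.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadPerTaskVsPool {
    public static void main(String[] args) {
        Runnable task = () -> System.out.println("handled by " + Thread.currentThread().getName());

        // Thread-per-task: every request pays the cost of creating and destroying a thread
        for (int i = 0; i < 1000; i++) {
            new Thread(task).start();
        }

        // Thread pool: a fixed number of threads are reused for all requests
        ExecutorService pool = Executors.newFixedThreadPool(5); // pool size is an arbitrary choice
        for (int i = 0; i < 1000; i++) {
            pool.execute(task);
        }
        pool.shutdown();
    }
}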

How the thread pool works

A thread pool creates a predetermined number of threads when the system starts. While no task requests arrive, these threads sit in an idle queue: they are all asleep, already started, consuming no CPU and only a small amount of memory. When a request arrives, the pool assigns an idle thread to it, hands the request to that thread, and processes it. When all the pre-created threads are busy and are no longer enough, the thread pool can create a certain number of new threads to handle more requests. When the system is relatively idle, it can also remove some of the threads that have stayed inactive.
The advantage of this is that, on the one hand, the system resource overhead of frequently creating and destroying threads is eliminated; on the other hand, a flood of submitted tasks can be degraded gracefully.

Code example

public interface ThreadPool<Job extends Runnable> {
    // Execute a job; the job must implement Runnable
    void execute(Job job);
    // Shut down the thread pool
    void shutdown();
    // Add worker threads
    void addWorkers(int num);
    // Remove worker threads
    void removeWorker(int num);
    // Get the number of jobs waiting to be executed
    int getJobSize();
}

The client can submit a Job to the thread pool for execution through the execute(Job) method, and the client itself does not have to wait for the Job to finish. Besides the execute(Job) method, the thread pool interface provides methods to increase or decrease the number of worker threads and to shut down the pool. A worker thread here is a thread that repeatedly executes Jobs; every Job submitted by a client enters a work queue and waits for a worker thread to process it.
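
As an illustration, here is a minimal sketch (not part of the original article) of how a client might use this interface, assuming the DefaultThreadPool implementation shown next; the number of workers and the job body are hypothetical.

package com.thread; // assumed to live alongside ThreadPool and DefaultThreadPool

public class ThreadPoolClient {
    public static void main(String[] args) {
        // 4 workers is an arbitrary choice; DefaultThreadPool is the implementation below
        ThreadPool<Runnable> pool = new DefaultThreadPool<Runnable>(4);
        for (int i = 0; i < 20; i++) {
            final int id = i;
            // execute(Job) returns immediately; a worker thread runs the job later
            pool.execute(() -> System.out.println("job " + id + " on " + Thread.currentThread().getName()));
        }
        // Ask the workers to stop (see shutdown() in the implementation below)
        pool.shutdown();
    }
}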

package com.thread;

import java.util.ArrayList;
import java.util.Collections;
import java.util.LinkedList;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

public class DefaultThreadPool<Job extends Runnable> implements ThreadPool<Job> {
    // Maximum, default, and minimum number of worker threads in the pool
    private static final int MAX_WORKER_NUMBERS = 10;
    private static final int DEFAULT_WORKER_NUMBERS = 5;
    private static final int MIN_WORKER_NUMBERS = 1;
    // Work queue into which submitted jobs are inserted
    private final LinkedList<Job> jobs = new LinkedList<Job>();
    // Worker list
    private final List<Worker> workers = Collections.synchronizedList(new ArrayList<Worker>());
    // Current number of worker threads
    private int workerNum = DEFAULT_WORKER_NUMBERS;
    // Thread number generator
    private AtomicLong threadNum = new AtomicLong();

    public DefaultThreadPool() {
        initializeWorkers(DEFAULT_WORKER_NUMBERS);
    }

    public DefaultThreadPool(int num) {
        workerNum = num > MAX_WORKER_NUMBERS ? MAX_WORKER_NUMBERS
                : num < MIN_WORKER_NUMBERS ? MIN_WORKER_NUMBERS : num;
        initializeWorkers(workerNum);
    }

    // Initialize a given number of worker threads
    private void initializeWorkers(int num) {
        for (int i = 0; i < num; i++) {
            Worker worker = new Worker();
            workers.add(worker);
            Thread thread = new Thread(worker, "ThreadPool-Worker-" + threadNum.incrementAndGet());
            thread.start();
        }
    }

    @Override
    public void execute(Job job) {
        if (job != null) {
            // Add the job to the work queue and wake up one waiting worker
            synchronized (jobs) {
                jobs.addLast(job);
                jobs.notify();
            }
        }
    }

    @Override
    public void shutdown() {
        for (Worker worker : workers) {
            worker.shutdown();
        }
    }

    @Override
    public void addWorkers(int num) {
        synchronized (jobs) {
            // The total number of workers must not exceed the maximum
            if (num + this.workerNum > MAX_WORKER_NUMBERS) {
                num = MAX_WORKER_NUMBERS - this.workerNum;
            }
            initializeWorkers(num);
            this.workerNum += num;
        }
    }

    @Override
    public void removeWorker(int num) {
        synchronized (jobs) {
            if (num > this.workerNum) {
                throw new IllegalArgumentException("beyond workerNum");
            }
            int count = 0;
            while (count < num) {
                Worker worker = workers.get(count);
                if (workers.remove(worker)) {
                    worker.shutdown();
                }
                count++;
            }
            this.workerNum -= count;
        }
    }

    @Override
    public int getJobSize() {
        return jobs == null ? 0 : jobs.size();
    }

    // Worker thread: repeatedly takes a job from the queue and runs it
    class Worker implements Runnable {
        // Whether this worker should keep running
        private volatile boolean running = true;

        @Override
        public void run() {
            while (running) {
                Job job = null;
                synchronized (jobs) {
                    // If the work queue is empty, wait for a job to arrive
                    while (jobs.isEmpty()) {
                        try {
                            jobs.wait();
                        } catch (InterruptedException e) {
                            // The worker thread was interrupted from outside; exit
                            Thread.currentThread().interrupt();
                            return;
                        }
                    }
                    // Take a job from the queue
                    job = jobs.removeFirst();
                }
                if (job != null) {
                    try {
                        job.run();
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            }
        }

        public void shutdown() {
            running = false;
        }
    }
}

As you can see from this implementation, when a client calls the execute(Job) method, a Job is continually added to the work queue jobs, and each worker thread repeatedly takes a Job out of jobs and executes it; when jobs is empty, the worker thread enters the wait state. When a Job is added, the notify() method is invoked on the work queue rather than notifyAll(), because it is certain that a worker thread will be woken up, and notify() costs less than notifyAll() (it avoids moving every thread in the wait queue to the blocked queue). As you can see, the essence of a thread pool is a thread-safe work queue connecting worker threads and client threads: a client thread returns as soon as it has placed a task in the work queue, while worker threads keep removing work from the queue and executing it. When the queue is empty, all worker threads wait on the work queue; when a client submits a task, it notifies one arbitrary worker thread, and as large numbers of tasks are submitted, more worker threads are woken up.
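
The same thread-safe work queue idea can be expressed with a standard blocking queue, which encapsulates the wait()/notify() handshake. The following is a minimal sketch of that alternative formulation (it is not the article's implementation); the class name and the unbounded queue are assumptions.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class BlockingQueueWorker implements Runnable {
    // The work queue connecting client threads (producers) and worker threads (consumers)
    private final BlockingQueue<Runnable> jobs = new LinkedBlockingQueue<Runnable>();

    // Called by client threads; put() blocks only if the queue is bounded and full
    public void submit(Runnable job) throws InterruptedException {
        jobs.put(job);
    }

    @Override
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                // take() waits until a job is available, replacing explicit wait()/notify()
                Runnable job = jobs.take();
                job.run();
            }
        } catch (InterruptedException e) {
            // Restore the interrupt status and let the worker exit
            Thread.currentThread().interrupt();
        }
    }
}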

Risks of thread pool usage

Applications built with thread pools are susceptible to all the concurrency risks that any other multithreaded application faces, such as synchronization errors and deadlocks, and they are also vulnerable to a few risks specific to thread pools, such as pool-related deadlock, resource exhaustion, and thread leaks.

Deadlock

Any multithreaded application carries a risk of deadlock. We say a set of processes or threads is deadlocked when each of them is waiting for an event that only another member of the set can cause. The simplest deadlock scenario is this: thread A holds an exclusive lock on object X and waits for the lock on object Y, while thread B holds an exclusive lock on object Y and waits for the lock on object X. Unless there is some way to break out of waiting for the lock (Java's built-in locking does not support this), the deadlocked threads will wait forever. Although deadlock is a risk in any multithreaded program, thread pools introduce another form of it, in which all pool threads are executing tasks that are blocked waiting for another task sitting in the queue, but that task cannot run because no thread is free. This can happen when a thread pool is used to run simulations involving many interacting objects: the simulated objects send queries to one another, each query executes as a queued task, and the querying object waits synchronously for the response.
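
A minimal sketch of this pool-induced deadlock, using a single-thread executor from java.util.concurrent as an illustrative assumption (this code is not from the original article): the only pool thread blocks on the result of a task that is stuck behind it in the queue.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PoolDeadlockSketch {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<String> outer = pool.submit(() -> {
            // The inner task is queued, but the only pool thread is busy running this task
            Future<String> inner = pool.submit(() -> "inner result");
            // Waiting here blocks the only thread forever: pool-induced deadlock
            return inner.get();
        });
        System.out.println(outer.get()); // never reached
    }
}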

Insufficient resources

One advantage of thread pools is that they generally perform well relative to the alternative scheduling mechanisms we have discussed. But this is true only if the thread pool size is tuned properly. Threads consume a lot of resources, including memory and other system resources. Beyond the memory required by the Thread object itself, each thread needs two execution call stacks, which can be large. In addition, the JVM may create a native thread for each Java thread, consuming extra system resources. Finally, although the scheduling overhead of switching between threads is small, with many threads the context switching can still seriously hurt program performance.

If the thread pool is too large, the resources consumed by all those threads can significantly affect system performance. Time is wasted switching between threads, and using more threads than you actually need can cause resource starvation, because pool threads are consuming resources that other tasks could use more productively. Besides the resources used by the threads themselves, the work done while servicing a request may require additional resources, such as JDBC connections, sockets, or files. These are also limited resources, and too many concurrent requests can cause failures, such as being unable to allocate a JDBC connection.

Concurrency errors

The thread pool, like other queuing mechanisms, relies on the wait() and notify() methods, and both are difficult to use correctly. If they are coded incorrectly, notifications may be lost, leaving threads idle even though there is work in the queue. Great care is needed when using these methods; even experts make mistakes with them. It is best to use an existing implementation that is already known to work, such as the util.concurrent package.

Thread leaks

A serious risk in all kinds of thread pools is the thread leak, which happens when a thread is removed from the pool to perform a task but does not return to the pool after the task completes. One way a thread leak occurs is when a task throws a RuntimeException or an Error. If the pool class does not catch them, the thread simply exits and the size of the thread pool is permanently reduced by one. When this happens often enough, the thread pool eventually becomes empty, and the system stalls because no threads are available to handle tasks.

Some tasks may wait forever for a resource or for input from a user that is never guaranteed to become available (the user may have gone home). Such tasks stall permanently, and these stalled tasks cause the same problem as thread leaks: if a thread is permanently occupied by such a task, it has effectively been removed from the pool. Tasks like these should either be given their own threads or be made to wait only for a limited time.
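
One way to bound the wait, sketched below under the assumption that the task runs on a standard ExecutorService (this code is illustrative, not from the original article): use Future.get with a timeout and cancel the task if it does not finish in time; the timeout value is arbitrary.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class BoundedWaitSketch {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Future<String> result = pool.submit(() -> {
            Thread.sleep(10_000); // stands in for waiting on a slow resource or user input
            return "done";
        });
        try {
            // Wait at most 2 seconds instead of forever
            System.out.println(result.get(2, TimeUnit.SECONDS));
        } catch (TimeoutException e) {
            result.cancel(true); // interrupt the task so the pool thread is not leaked
            System.out.println("gave up waiting");
        } finally {
            pool.shutdown();
        }
    }
}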

Request overload

It is possible for requests alone to overwhelm the server. In that case we may not want to queue every incoming request onto the work queue, because the tasks waiting to execute could consume too many system resources and cause resource starvation. What to do in this situation is up to you: in some cases you can simply drop the request and rely on a higher-level protocol to retry it later, or you can refuse the request with a response indicating that the server is temporarily busy.
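
As one illustration (not from the original article) of refusing requests when busy, the standard ThreadPoolExecutor can be configured with a bounded work queue and a rejection policy; the pool and queue sizes below are hypothetical.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedQueueSketch {
    public static void main(String[] args) {
        // 4 core threads, at most 8 threads, and at most 100 queued requests (arbitrary numbers)
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4, 8, 60, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(100),
                new ThreadPoolExecutor.AbortPolicy()); // reject when queue and pool are full

        try {
            pool.execute(() -> System.out.println("handling request"));
        } catch (RejectedExecutionException e) {
            // Tell the client the server is temporarily busy
            System.out.println("server busy, try again later");
        } finally {
            pool.shutdown();
        }
    }
}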

Thread pool usage guidelines

1. Do not queue tasks that synchronously wait for the results of other tasks. This can lead to the form of deadlock described above, in which all threads are occupied by tasks that are waiting for the results of queued tasks, and those queued tasks cannot execute because all the threads are busy.
2. Be careful when using pooled threads for long-running operations. If the program must wait for a resource such as I/O to complete, specify a maximum wait time and then fail the task or re-queue it for later execution. This guarantees that some progress is eventually made by freeing the thread for a task that might complete successfully.
3. Understand your tasks. To size the thread pool effectively, you need to understand the tasks being queued and what they do. Are they CPU-bound? Are they I/O-bound? The answer affects how you tune the application. If you have different task classes with very different characteristics, it can make sense to set up separate work queues for the different classes so that each pool can be tuned accordingly, as in the sizing sketch below.
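
A minimal sizing sketch (illustrative only, not from the original article), using the common rule of thumb of roughly one thread per CPU for CPU-bound work and more threads for I/O-bound work; the I/O multiplier of 4 is an assumption.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolSizingSketch {
    public static void main(String[] args) {
        int cpus = Runtime.getRuntime().availableProcessors();

        // CPU-bound work: roughly one thread per available processor
        ExecutorService cpuBoundPool = Executors.newFixedThreadPool(cpus);

        // I/O-bound work: threads spend most of their time waiting, so more of
        // them can be useful; the factor of 4 is an arbitrary illustrative choice
        ExecutorService ioBoundPool = Executors.newFixedThreadPool(cpus * 4);

        cpuBoundPool.shutdown();
        ioBoundPool.shutdown();
    }
}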

Usage scenarios for thread pools

1. Multiple timed tasks, with no ordering dependencies between the tasks (see the sketch after this list)
2. Concurrent testing
3. Logging (assuming an SSD; an HDD has only one head, so writing to disk becomes the performance bottleneck)
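
For the timed-task scenario, here is a minimal sketch using the standard ScheduledExecutorService (an assumption for illustration; it is not the pool implemented in this article), with arbitrary periods.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class TimedTasksSketch {
    public static void main(String[] args) {
        // Two worker threads shared by several independent timed tasks
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);
        scheduler.scheduleAtFixedRate(
                () -> System.out.println("heartbeat " + System.currentTimeMillis()),
                0, 5, TimeUnit.SECONDS); // initial delay 0, repeat every 5 seconds
        scheduler.scheduleAtFixedRate(
                () -> System.out.println("cleanup " + System.currentTimeMillis()),
                1, 60, TimeUnit.SECONDS);
    }
}
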
Follow-up articles will demonstrate several common thread pools.
