In Layman's Java Concurrency (28): Thread Pool Part 1: Introduction


Starting with this section, we formally enter the thread pool part of the series. The series has dragged on for quite a while, so the later chapters will move faster; some may even be semi-finished or simplified drafts, to be supplemented and polished later.

The thread pool is a very important part of the concurrency package, and it also makes use of many of the package's other important components.

This chapter describes part of the thread pool API. The thread pool in its broadest sense also includes pieces such as Thread/Runnable, Timer/TimerTask, and so on. Only the main and more advanced APIs, along with the architecture and underlying principles, are described here.

Most concurrent applications are organized around the execution of tasks, where a task is an abstract, discrete unit of work. Dividing the work of an application into tasks simplifies program management; it also marks natural boundaries between different activities, which makes it easier for the program to recover when an error occurs; and this division provides a natural structure for parallelizing work, which helps improve the program's concurrency. [1]

An important prerequisite for executing tasks concurrently is splitting them. Splitting a large process or task into small units of work, which may or may not be related to one another, lets the program take full advantage of the CPU's ability to run work concurrently and thereby improves concurrency (performance, response time, throughput, and so on).

Task splitting means determining the boundary of each unit of work. Ideally, independent units of work yield the maximum throughput, because they do not depend on the state, results, or resources of other units. Splitting a task into independent units of work is therefore advantageous for improving the program's concurrency.
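As a minimal sketch of this idea, each independent unit of work can be expressed as its own Runnable and handed to a small pool of worker threads; the work items and the pool size below are made-up illustration values:

    import java.util.Arrays;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class IndependentUnits {
        public static void main(String[] args) {
            // Hypothetical input: each string stands for one independent unit of work.
            List<String> workItems = Arrays.asList("item-1", "item-2", "item-3");

            ExecutorService pool = Executors.newFixedThreadPool(2);
            for (String item : workItems) {
                // Each task reads only its own input, so the units can run in parallel
                // without sharing state or results with one another.
                pool.execute(() -> System.out.println(
                        Thread.currentThread().getName() + " processed " + item));
            }
            pool.shutdown();
        }
    }

Because the units share nothing, no extra coordination is needed; the next paragraph covers the harder case where units do depend on one another.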

Units of work that have dependencies or contend for resources bring in task scheduling and load balancing. Units that are tied to the state, results, or other resources of another unit need an overall dispatcher to coordinate resources and execution order; likewise, when resources are limited, a large number of tasks also needs a dispatcher to coordinate the units of work. This raises the question of the task execution policy.

The execution policy for a task covers the "4W3H" questions (a sketch mapping them onto ThreadPoolExecutor's constructor follows the list):

    • In what (what) thread is the task executed?
    • In what (what) order are tasks executed (FIFO/LIFO/priority, etc.)?
    • How many (how many) tasks may execute concurrently?
    • How many (how many) tasks are allowed to wait in the execution queue?
    • When the system is overloaded, which (which) task should be discarded, and how should the application be notified of this?
    • What actions should be taken before and after a task is executed?
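In java.util.concurrent these questions map quite directly onto the constructor parameters of ThreadPoolExecutor. The numbers, queue, and handler below are illustrative assumptions only, a minimal sketch rather than a recommended configuration:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class ExecutionPolicySketch {
        public static void main(String[] args) {
            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                    2,                                    // core pool size: how many tasks normally run concurrently
                    4,                                    // maximum pool size: upper bound on concurrent tasks
                    60, TimeUnit.SECONDS,                 // how long surplus idle threads are kept alive
                    new ArrayBlockingQueue<Runnable>(10), // how many tasks may wait, and in what order (FIFO here)
                    r -> new Thread(r, "pool-worker"),    // ThreadFactory: in which threads the tasks run
                    new ThreadPoolExecutor.CallerRunsPolicy()); // what to do with tasks when pool and queue are full

            pool.execute(() -> System.out.println(
                    "executed by " + Thread.currentThread().getName()));
            pool.shutdown();
        }
    }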

The following chapters will describe in detail how these policies are implemented. For now, let's briefly answer how the requirements above can be met; a short code sketch tying several of the pieces together follows the list.

    1. First, it is clear that the only class a user can call to start a thread in Java is Thread. Runnable, Timer/TimerTask, and so on all rely on Thread to run, so the thread pool likewise relies on Thread to start its multiple threads.
    2. By default, the Runnable interface offers no way to obtain a result after execution, so the thread pool defines a Callable interface to handle execution results.
    3. To retrieve the result of an asynchronous task (blocking until it is ready if necessary), Future helps the calling thread get the execution result.
    4. Executor solves the entry problem of submitting tasks to the thread pool, while ScheduledExecutorService solves the problem of invoking tasks repeatedly on a schedule.
    5. CompletionService solves the problem of retrieving results in the order in which tasks complete, which in some cases increases the concurrency of task execution; the calling thread no longer has to wait on long-running tasks.
    6. Obviously the number of threads is limited and should not be too large, so a suitable task queue is essential, and a bounded BlockingQueue solves exactly this problem.
    7. A fixed task capacity means that some strategy is needed to handle new tasks once the capacity is full, and RejectedExecutionHandler solves exactly this problem.
    8. Blocking for a bounded amount of time implies a timeout, so TimeoutException describes this situation, and TimeUnit is an enumeration of time units that makes it convenient to express timeouts.
    9. Given all of the problems above, configuring a suitable thread pool by hand is complicated, so the default thread pool configurations provided by Executors reduce this work.
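The sketch below ties several of these pieces together under assumed values (pool size, sleep time, and timeout are arbitrary): a Callable produces a result, the ExecutorService accepts the task, and Future plus TimeUnit/TimeoutException let the caller wait for the result with a bound:

    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutionException;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.TimeoutException;

    public class CallableFutureSketch {
        public static void main(String[] args) throws InterruptedException, ExecutionException {
            // One of the ready-made pool configurations from Executors (point 9).
            ExecutorService pool = Executors.newFixedThreadPool(2);

            // Unlike Runnable, a Callable returns a result (point 2).
            Callable<Integer> task = () -> {
                TimeUnit.MILLISECONDS.sleep(100); // simulated work
                return 42;
            };

            // Submitting through the ExecutorService is the entry point for tasks (point 4).
            Future<Integer> future = pool.submit(task);
            try {
                // The Future hands the result back to the calling thread,
                // here with a bounded wait (points 3 and 8).
                Integer result = future.get(1, TimeUnit.SECONDS);
                System.out.println("result = " + result);
            } catch (TimeoutException e) {
                future.cancel(true); // the task took too long; cancel it
            }
            pool.shutdown();
        }
    }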

That, roughly, is the basic strategy of the thread pool. Starting with the next section, we look at the principles and execution mechanisms of the thread pool.

[1] Java Concurrency in Practice
