java.util.concurrent package source code reading (15), thread pool series: ScheduledThreadPoolExecutor, part 2



This article focuses on DelayedWorkQueue.

In ScheduledThreadPoolExecutor, a DelayedWorkQueue stores the tasks waiting to be executed. Because these tasks are delayed, and each removal takes the task at the head of the queue, the DelayedWorkQueue must order tasks by delay, from shortest to longest.

DelayedWorkQueue is implemented as a binary min-heap stored in an array.
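As a minimal illustration of that heap ordering, the sketch below uses java.util.PriorityQueue (which is also an array-backed binary heap) rather than the DelayedWorkQueue code itself; the class and method names are illustrative. Trigger times inserted in any order always come back out shortest first:

```java
import java.util.PriorityQueue;

public class HeapOrderDemo {
    // Insert the given trigger times into a min-heap, then drain it and
    // return the removal order as a comma-separated string.
    static String drainOrder(long... delays) {
        PriorityQueue<Long> heap = new PriorityQueue<>();
        for (long d : delays)
            heap.add(d);                     // each add sifts the element up
        StringBuilder out = new StringBuilder();
        while (!heap.isEmpty()) {            // poll always removes the root,
            if (out.length() > 0)            // i.e. the smallest trigger time
                out.append(',');
            out.append(heap.poll());
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // Inserted out of order; removed shortest delay first.
        System.out.println(drainOrder(500, 100, 300));   // 100,300,500
    }
}
```

Note that only the head is guaranteed to be the minimum; the rest of the array is merely heap-ordered, not fully sorted.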

As in other BlockingQueue implementations, the offer method is essentially the logic for adding an element to the heap:

    public boolean offer(Runnable x) {
        if (x == null)
            throw new NullPointerException();
        RunnableScheduledFuture e = (RunnableScheduledFuture) x;
        final ReentrantLock lock = this.lock;
        lock.lock();
        try {
            int i = size;
            // Elements are stored in an array, so when the heap grows past
            // the array's capacity, the array must be enlarged
            if (i >= queue.length)
                grow();
            size = i + 1;
            // If the queue was originally empty
            if (i == 0) {
                queue[0] = e;
                // This index is the heapIndex used by RunnableScheduledFuture
                setIndex(e, 0);
            } else {
                // Add the element to the heap
                siftUp(i, e);
            }
            // If the new element became the head of the heap, threads may be
            // waiting for an element, so notify them via the Condition
            if (queue[0] == e) {
                // The new head invalidates the current leader's wait time, so
                // clear the leader and wake the first waiting thread
                leader = null;
                available.signal();
            }
        } finally {
            lock.unlock();
        }
        return true;
    }

Now let's look at siftUp. Readers familiar with heap implementations will recognize this as the standard algorithm for inserting an element into an existing heap:

    private void siftUp(int k, RunnableScheduledFuture key) {
        while (k > 0) {
            int parent = (k - 1) >>> 1;
            RunnableScheduledFuture e = queue[parent];
            if (key.compareTo(e) >= 0)
                break;
            queue[k] = e;
            setIndex(e, k);
            k = parent;
        }
        queue[k] = key;
        setIndex(key, k);
    }
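The same algorithm can be sketched standalone, using primitive trigger times in place of RunnableScheduledFuture elements (the class and variable names here are illustrative, not JDK code); the parent index math (k - 1) >>> 1 is identical:

```java
public class SiftUpDemo {
    // Simplified sift-up over an array of trigger times: walk from slot k
    // toward the root, pulling down every parent larger than key.
    static void siftUp(long[] queue, int k, long key) {
        while (k > 0) {
            int parent = (k - 1) >>> 1;   // index of the parent node
            long e = queue[parent];
            if (key >= e)                 // heap property already holds: stop
                break;
            queue[k] = e;                 // pull the larger parent down
            k = parent;
        }
        queue[k] = key;                   // place the new element in its slot
    }

    public static void main(String[] args) {
        long[] queue = {100, 300, 500, 0};   // heap of size 3 plus one free slot
        siftUp(queue, 3, 50);                // insert a task due sooner than all others
        System.out.println(queue[0]);        // 50: the new minimum is now the root
    }
}
```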

Then let's look at poll:

    public RunnableScheduledFuture poll() {
        final ReentrantLock lock = this.lock;
        lock.lock();
        try {
            // Even after a task is obtained, the caller would still need to
            // wait out its delay, and that waiting is the queue's job;
            // therefore poll can only return a task whose execution time
            // has already been reached
            RunnableScheduledFuture first = queue[0];
            if (first == null || first.getDelay(TimeUnit.NANOSECONDS) > 0)
                return null;
            else
                return finishPoll(first);
        } finally {
            lock.unlock();
        }
    }
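This "return the head only if its delay has expired" contract can be observed with java.util.concurrent.DelayQueue, which follows the same rule. The Task class below is an illustrative minimal Delayed implementation, not part of the JDK:

```java
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

public class PollDemo {
    // Minimal Delayed task: due delayMillis from construction time.
    static class Task implements Delayed {
        final long triggerNanos;
        Task(long delayMillis) {
            triggerNanos = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(delayMillis);
        }
        public long getDelay(TimeUnit unit) {
            return unit.convert(triggerNanos - System.nanoTime(), TimeUnit.NANOSECONDS);
        }
        public int compareTo(Delayed other) {
            return Long.compare(getDelay(TimeUnit.NANOSECONDS),
                                other.getDelay(TimeUnit.NANOSECONDS));
        }
    }

    public static void main(String[] args) {
        DelayQueue<Task> q = new DelayQueue<>();
        q.add(new Task(60_000));        // head task is due far in the future
        System.out.println(q.poll());   // null: the head has not expired yet
        System.out.println(q.size());   // 1: the task stays in the queue
    }
}
```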

Because the poll method only returns tasks whose execution time has already arrived, it tells us little about how the queue implements delayed execution. For that, we should focus on the take method:

    public RunnableScheduledFuture take() throws InterruptedException {
        final ReentrantLock lock = this.lock;
        lock.lockInterruptibly();
        try {
            for (;;) {
                // Try to obtain the first element; if the queue is empty,
                // wait on the Condition
                RunnableScheduledFuture first = queue[0];
                if (first == null)
                    available.await();
                else {
                    // Get the task's remaining delay
                    long delay = first.getDelay(TimeUnit.NANOSECONDS);
                    // If the task no longer has to wait, remove it from the
                    // heap and return it to the thread immediately
                    if (delay <= 0)
                        return finishPoll(first);
                    // If the task still has to wait and a leader thread
                    // already exists (the leader has claimed the head task,
                    // whose delay is necessarily the shortest, and is waiting
                    // for its execution time), this thread must keep waiting
                    else if (leader != null)
                        available.await();
                    else {
                        // Otherwise this thread becomes the leader and waits
                        // on the Condition until the delay elapses, or until
                        // it is signalled or interrupted
                        Thread thisThread = Thread.currentThread();
                        leader = thisThread;
                        try {
                            available.awaitNanos(delay);
                        } finally {
                            // Reset the leader for the next loop iteration
                            if (leader == thisThread)
                                leader = null;
                        }
                    }
                }
            }
        } finally {
            // Signalling when the queue is non-empty is easy to understand;
            // the leader == null condition exists because, when a leader is
            // present, it is already waiting for the execution time point:
            // its awaitNanos will return on its own, so no signal is needed
            if (leader == null && queue[0] != null)
                available.signal();
            lock.unlock();
        }
    }

The heart of the take method is the leader thread. Because tasks carry a delay, a thread must keep waiting even after it has obtained the head task; the leader is the thread that has claimed the head task and is waiting out its delay, so it will be the first to execute a task.
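That blocking behavior, waiting out the remaining delay before returning the task, can be timed with java.util.concurrent.DelayQueue, which uses this same leader pattern internally. The Task class and timedTake helper below are illustrative, not JDK API:

```java
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

public class TakeDemo {
    // Minimal Delayed task: due delayMillis from construction time.
    static class Task implements Delayed {
        final long triggerNanos;
        Task(long delayMillis) {
            triggerNanos = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(delayMillis);
        }
        public long getDelay(TimeUnit unit) {
            return unit.convert(triggerNanos - System.nanoTime(), TimeUnit.NANOSECONDS);
        }
        public int compareTo(Delayed other) {
            return Long.compare(getDelay(TimeUnit.NANOSECONDS),
                                other.getDelay(TimeUnit.NANOSECONDS));
        }
    }

    // Enqueue one task and take it; returns how long the caller actually
    // blocked, in milliseconds.
    static long timedTake(long delayMillis) throws InterruptedException {
        DelayQueue<Task> q = new DelayQueue<>();
        long start = System.nanoTime();
        q.add(new Task(delayMillis));
        q.take();                        // blocks (as leader) until the delay is up
        return TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
    }

    public static void main(String[] args) throws InterruptedException {
        // take cannot return before the head task's delay has elapsed
        System.out.println(timedTake(200) >= 200);   // true
    }
}
```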

Because a thread still has to wait out the delay after obtaining a task, the timed-wait version of poll is somewhat interesting:

    public RunnableScheduledFuture poll(long timeout, TimeUnit unit)
            throws InterruptedException {
        long nanos = unit.toNanos(timeout);
        final ReentrantLock lock = this.lock;
        lock.lockInterruptibly();
        try {
            for (;;) {
                RunnableScheduledFuture first = queue[0];
                // If the task queue is empty
                if (first == null) {
                    // nanos can be <= 0 for two reasons:
                    // 1. the caller passed a non-positive timeout
                    // 2. the wait has timed out
                    if (nanos <= 0)
                        return null;
                    else
                        // Wait for a while; awaitNanos returns the
                        // remaining wait time
                        nanos = available.awaitNanos(nanos);
                } else {
                    long delay = first.getDelay(TimeUnit.NANOSECONDS);
                    if (delay <= 0)
                        return finishPoll(first);
                    if (nanos <= 0)
                        return null;
                    // When a leader thread exists, wait for the full nanos
                    // even if nanos is greater than delay; there is no risk
                    // of sleeping past the task's execution time, because
                    // the leader will signal this thread once the time
                    // arrives
                    if (nanos < delay || leader != null)
                        nanos = available.awaitNanos(nanos);
                    else {
                        Thread thisThread = Thread.currentThread();
                        leader = thisThread;
                        try {
                            long timeLeft = available.awaitNanos(delay);
                            // Deduct the time spent waiting from the
                            // remaining timeout
                            nanos -= delay - timeLeft;
                        } finally {
                            if (leader == thisThread)
                                leader = null;
                        }
                    }
                }
            }
        } finally {
            if (leader == null && queue[0] != null)
                available.signal();
            lock.unlock();
        }
    }
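The net effect of the timed poll is: if the head task does not come due within the timeout, the caller gets null after waiting out the timeout. This can be observed with java.util.concurrent.DelayQueue, which implements the same contract; the Task class and pollTooEarly helper are illustrative names, not JDK API:

```java
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

public class TimedPollDemo {
    // Minimal Delayed task: due delayMillis from construction time.
    static class Task implements Delayed {
        final long triggerNanos;
        Task(long delayMillis) {
            triggerNanos = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(delayMillis);
        }
        public long getDelay(TimeUnit unit) {
            return unit.convert(triggerNanos - System.nanoTime(), TimeUnit.NANOSECONDS);
        }
        public int compareTo(Delayed other) {
            return Long.compare(getDelay(TimeUnit.NANOSECONDS),
                                other.getDelay(TimeUnit.NANOSECONDS));
        }
    }

    // Poll with a timeout shorter than the head task's delay: the timeout
    // expires first, so poll returns null and the task stays queued.
    static Task pollTooEarly(long taskDelayMillis, long timeoutMillis)
            throws InterruptedException {
        DelayQueue<Task> q = new DelayQueue<>();
        q.add(new Task(taskDelayMillis));
        return q.poll(timeoutMillis, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(pollTooEarly(60_000, 100));   // null: timed out first
    }
}
```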

After working through the code above, we now clearly understand how DelayedWorkQueue implements delayed execution:

1. Tasks are stored in the heap ordered by delay, from shortest to longest;

2. A leader thread waits until the scheduled time arrives before taking the head task for execution.
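Both points can be seen end to end from the user's side of ScheduledThreadPoolExecutor: tasks submitted out of order still run shortest delay first. The class and method names below are illustrative:

```java
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ScheduleOrderDemo {
    // Schedule two tasks in the "wrong" order and record the order in which
    // they actually run.
    static String runOrder() throws InterruptedException {
        ScheduledThreadPoolExecutor pool = new ScheduledThreadPoolExecutor(1);
        StringBuffer order = new StringBuffer();   // StringBuffer is thread-safe
        // Submitted B first, but its delay is longer
        pool.schedule(() -> { order.append("B"); }, 200, TimeUnit.MILLISECONDS);
        pool.schedule(() -> { order.append("A"); }, 100, TimeUnit.MILLISECONDS);
        pool.shutdown();   // by default, already-scheduled delayed tasks still run
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return order.toString();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runOrder());   // AB: the shorter delay ran first
    }
}
```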

 



