In Layman's Java Concurrency (10): Lock Mechanism Part 5: Latches (CountDownLatch) [Repost]


This section describes several useful tools related to locks.

Latches (Latch)

Latch: a synchronization construct that delays the progress of threads until the latch reaches its terminal state. In plain terms, a latch behaves like a door: while the door is closed, every arriving thread is blocked; once the door opens, all threads pass through. The door only moves one way, from closed to open, and once it is open its state can never change back. In other words, a latch is one-shot, which makes it a natural way to guarantee that a specific set of activities has completed before anything waiting on the latch is allowed to proceed.

CountDownLatch is the latch implementation provided in JDK 5+, and it allows one or more threads to wait for a set of events to occur. A CountDownLatch is initialized with a positive counter; the countDown() method decrements the counter, and the await() method waits for the counter to reach 0. All threads blocked in await() remain blocked until the counter reaches 0, the waiting thread is interrupted, or the wait times out.

The CountDownLatch API is as follows.

    • public void await() throws InterruptedException
    • public boolean await(long timeout, TimeUnit unit) throws InterruptedException
    • public void countDown()
    • public long getCount()

getCount() returns the current count and is typically used only for debugging and testing.
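As a quick illustration of this API (a minimal sketch of my own, not from the original article), the following program starts a worker thread and blocks the main thread until the worker calls countDown():

import java.util.concurrent.CountDownLatch;

public class LatchQuickStart {

    public static void main(String[] args) throws InterruptedException {
        final CountDownLatch done = new CountDownLatch(1);

        new Thread(new Runnable() {
            public void run() {
                // ... do some work ...
                done.countDown();   // open the latch
            }
        }).start();

        done.await();                                        // blocks until the count reaches 0
        System.out.println("count = " + done.getCount());   // prints 0
    }
}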

The following example demonstrates two common uses of a latch.

package xylz.study.concurrency.lock;

import java.util.concurrent.CountDownLatch;

public class PerformanceTestTool {

    public long timecost(final int times, final Runnable task) throws InterruptedException {
        if (times <= 0) throw new IllegalArgumentException();
        final CountDownLatch startLatch = new CountDownLatch(1);
        final CountDownLatch overLatch = new CountDownLatch(times);
        for (int i = 0; i < times; i++) {
            new Thread(new Runnable() {
                public void run() {
                    try {
                        // wait for the start signal so all workers begin together
                        startLatch.await();
                        task.run();
                    } catch (InterruptedException ex) {
                        Thread.currentThread().interrupt();
                    } finally {
                        // signal that this worker has finished
                        overLatch.countDown();
                    }
                }
            }).start();
        }
        long start = System.nanoTime();
        startLatch.countDown();   // open the start gate
        overLatch.await();        // wait until every worker has finished
        return System.nanoTime() - start;
    }

}

The example above uses two latches. The first latch (startLatch) ensures that no worker thread begins the task before all preparation is complete; once preparation is finished, the main thread calls startLatch.countDown() to open the gate and all worker threads begin execution at the same time. The second latch (overLatch) ensures that the main thread does not continue until every task has completed, so the main thread is guaranteed to wait for all worker threads before it computes the result. The second latch's counter is initialized to N; each worker decrements it by one when its task finishes, and once all tasks are done the counter reaches 0, at which point the main thread blocked on overLatch.await() receives the signal and continues.
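For example, the tool could be driven like this (a sketch of my own; the class name PerformanceTestToolDemo and the workload are assumptions, and the demo is expected to live in the same package as PerformanceTestTool):

public class PerformanceTestToolDemo {

    public static void main(String[] args) throws InterruptedException {
        PerformanceTestTool tool = new PerformanceTestTool();
        // measure how long it takes 100 threads to each sleep for 10 ms
        long nanos = tool.timecost(100, new Runnable() {
            public void run() {
                try {
                    Thread.sleep(10);
                } catch (InterruptedException ex) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        System.out.println("total cost: " + nanos + " ns");
    }
}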

According to the happens-before rules discussed earlier, a latch has the following property:

Memory consistency effects: actions in a thread prior to calling countDown() happen-before actions following a successful return from a corresponding await() in another thread.
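A small sketch of this guarantee (my own illustration, assuming a worker thread and a plain, non-volatile field): the write to result before countDown() is guaranteed to be visible to the main thread after await() returns.

import java.util.concurrent.CountDownLatch;

public class HappensBeforeDemo {

    static int result;                                    // deliberately not volatile
    static final CountDownLatch latch = new CountDownLatch(1);

    public static void main(String[] args) throws InterruptedException {
        new Thread(new Runnable() {
            public void run() {
                result = 42;          // happens-before the countDown() below
                latch.countDown();
            }
        }).start();

        latch.await();                // after a successful return ...
        System.out.println(result);   // ... reading 42 here is guaranteed
    }
}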

In the example above, the second latch effectively splits one job into N parts: each part completes independently, and the main thread waits for all of them before it continues. The thread pool framework discussed later relies on the same idea; in fact, FutureTask can be viewed as a latch, and a later chapter analyzes FutureTask in detail.

In the same exploratory spirit, it is worth "spying" on how CountDownLatch actually implements await() and countDown().

Start with the await() method. Internally it simply delegates to AQS's acquireSharedInterruptibly(1).

public final void acquireSharedInterruptibly(int arg) throws InterruptedException {
    if (Thread.interrupted())
        throw new InterruptedException();
    if (tryAcquireShared(arg) < 0)
        doAcquireSharedInterruptibly(arg);
}

Everything discussed so far used exclusive locks (lock, mutex); here a different kind of lock appears: the shared lock.

With a shared lock, all threads contending for the lock share the same resource: once any one thread observes that the resource is available, every waiting thread acquires it as well. In practice a shared lock is usually just a flag; all threads wait for the flag to be satisfied, and once it is, all of them get the "lock" together. The CountDownLatch latch is implemented on top of this shared-lock support.
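To make the shared behavior concrete, here is a small sketch (my own, not from the article): several threads block on the same latch, and a single countDown() releases all of them at once, because the shared acquisition propagates along the wait queue.

import java.util.concurrent.CountDownLatch;

public class SharedReleaseDemo {

    public static void main(String[] args) throws InterruptedException {
        final CountDownLatch gate = new CountDownLatch(1);

        for (int i = 0; i < 3; i++) {
            final int id = i;
            new Thread(new Runnable() {
                public void run() {
                    try {
                        gate.await();   // all three threads block here
                        System.out.println("thread " + id + " released");
                    } catch (InterruptedException ex) {
                        Thread.currentThread().interrupt();
                    }
                }
            }).start();
        }

        Thread.sleep(100);   // give the workers time to reach await()
        gate.countDown();    // one call releases every waiting thread
    }
}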

The latch's implementation of tryAcquireShared on AQS is the following code (java.util.concurrent.CountDownLatch.Sync.tryAcquireShared):

public int tryAcquireShared(int acquires) {
    return getState() == 0 ? 1 : -1;
}

With this logic, the first await() on a latch always sees tryAcquireShared return -1, because the latch's state is initialized to the count value, and the count is always > 0 before enough countDown() calls have been made.

private void doAcquireSharedInterruptibly(int arg)
    throws InterruptedException {
    final Node node = addWaiter(Node.SHARED);
    try {
        for (;;) {
            final Node p = node.predecessor();
            if (p == head) {
                int r = tryAcquireShared(arg);
                if (r >= 0) {
                    setHeadAndPropagate(node, r);
                    p.next = null; // help GC
                    return;
                }
            }
            if (shouldParkAfterFailedAcquire(p, node) &&
                parkAndCheckInterrupt())
                break;
        }
    } catch (RuntimeException ex) {
        cancelAcquire(node);
        throw ex;
    }
    // Arrive here only if interrupted
    cancelAcquire(node);
    throw new InterruptedException();
}

The logic above shows how await() queues and suspends all waiting threads until they are woken up, the condition is met, or they are interrupted. The whole process is:

      1. Add the current thread's node to the AQS CLH queue in shared mode (related concepts are covered in earlier parts). Proceed to step 2.
      2. Check the current node's predecessor: if it is the head node and the latch count is 0, set the current node as the new head, wake up the successor node, and return (ending the block). Otherwise proceed to step 3.
      3. Check whether the thread should block; if so, park it until it is unparked (a minimal park/unpark sketch follows this list). Then repeat step 2.
      4. If step 2 or 3 throws an exception, cancel the acquisition and rethrow it (ending the block).
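Step 3 relies on LockSupport.park()/unpark(), the same primitive AQS uses to suspend and resume queued threads. A minimal sketch of that primitive on its own (my own illustration, outside of AQS):

import java.util.concurrent.locks.LockSupport;

public class ParkUnparkDemo {

    public static void main(String[] args) {
        final Thread main = Thread.currentThread();

        new Thread(new Runnable() {
            public void run() {
                try {
                    Thread.sleep(100);
                } catch (InterruptedException ex) {
                    Thread.currentThread().interrupt();
                }
                LockSupport.unpark(main);   // wake the parked thread
            }
        }).start();

        System.out.println("parking main thread ...");
        LockSupport.park();                 // suspend until unpark (or interrupt)
        System.out.println("main thread resumed");
    }
}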

One detail worth explaining is setHeadAndPropagate, which sets the new head node and wakes up the successor. Since tryAcquireShared always returns 1 or -1, and setHeadAndPropagate is only entered when propagate >= 0, propagate == 1 here, so the wake-up always proceeds. The successor wake-up itself is the familiar unparkSuccessor operation.

private void setHeadAndPropagate(Node node, int propagate) {
    setHead(node);
    if (propagate > 0 && node.waitStatus != 0) {
        Node s = node.next;
        if (s == null || s.isShared())
            unparkSuccessor(node);
    }
}

From all of the logic above, countDown() should wake up the head node (the longest-waiting node) once the condition is met (count reaches 0), and the head node then wakes the remaining nodes in the list (if any) in FIFO order.

The countDown() code of CountDownLatch confirms this: it directly calls AQS's releaseShared(1), which, as discussed in earlier parts, releases the shared lock and unparks the head of the queue.

In tryReleaseShared, a CAS loop is used to decrement the count (by 1 each time).

public boolean tryReleaseShared(int releases) {
    for (;;) {
        int c = getState();
        if (c == 0)
            return false;
        int nextc = c - 1;
        if (compareAndSetState(c, nextc))
            return nextc == 0;
    }
}
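The same retry-until-CAS-succeeds pattern can be reproduced with an AtomicInteger (a sketch of the pattern of my own, not the JDK code; the class name CasCountDown is an assumption):

import java.util.concurrent.atomic.AtomicInteger;

public class CasCountDown {

    private final AtomicInteger count;

    public CasCountDown(int initial) {
        count = new AtomicInteger(initial);
    }

    // Decrement once; returns true if this call brought the count to zero.
    public boolean countDown() {
        for (;;) {
            int c = count.get();
            if (c == 0)
                return false;            // already open, nothing to do
            int next = c - 1;
            if (count.compareAndSet(c, next))
                return next == 0;        // the caller that hits 0 would signal waiters
        }
    }
}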

That is the whole of CountDownLatch. With the earlier material on atomic operations and on the AQS principle and implementation in hand, analyzing CountDownLatch is relatively easy.
