Talk about high concurrency (29): Parsing java.util.concurrent components (11), another look at ReentrantReadWriteLock, the reentrant read-write lock


The previous post, Talk about high concurrency (28): Parsing java.util.concurrent components (10), Understanding ReentrantReadWriteLock, covered the basics and main methods of the reentrant read-write lock and showed how to implement lock downgrading. A few questions were left unclear there, and this article fills them in:

1. When a lock is released, which has priority: a thread waiting for the write lock or a thread waiting for the read lock?

2. Are read threads allowed to jump the queue (barge)?

3. Are write threads allowed to jump the queue? Read-write locks are typically used in read-heavy, write-light scenarios, so if write threads are given no priority they may starve.


Regarding whether the write lock or the read lock is acquired first after a lock is released, there are two cases:

1. After the lock is released, the thread requesting the write lock is not in the AQS queue.

2. After the lock is released, the thread requesting the write lock is already in the AQS queue.


In the first case, with the non-fair implementation, a thread acquiring the write lock competes for the lock directly, ignoring whatever thread is at the front of the AQS queue. A thread acquiring the read lock only checks whether the first queued thread is waiting for the write lock (that is, the successor of the head node is in exclusive mode); if not, it does not need to defer to threads that queued earlier for the read lock.

static final class NonfairSync extends Sync {
    private static final long serialVersionUID = -8159625535654395037L;
    final boolean writerShouldBlock() {
        return false; // writers can always barge
    }
    final boolean readerShouldBlock() {
        // block only if the first queued thread appears to be a waiting writer
        return apparentlyFirstQueuedIsExclusive();
    }
}

With the fair lock, a thread acquiring either the read lock or the write lock checks whether any thread that arrived earlier is already waiting, and if so, it joins the AQS queue.

static final class FairSync extends Sync {
    private static final long serialVersionUID = -2274990926593161451L;
    final boolean writerShouldBlock() {
        return hasQueuedPredecessors();
    }
    final boolean readerShouldBlock() {
        return hasQueuedPredecessors();
    }
}
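At the API level, choosing between these two Sync implementations is just a constructor argument. The following is a minimal sketch (the class name FairnessChoice is only for illustration) showing that the default constructor selects the non-fair policy, while passing true selects the fair one:

import java.util.concurrent.locks.ReentrantReadWriteLock;

public class FairnessChoice {
    public static void main(String[] args) {
        ReentrantReadWriteLock nonFair = new ReentrantReadWriteLock();     // default: NonfairSync
        ReentrantReadWriteLock fair    = new ReentrantReadWriteLock(true); // FairSync: defer to queued threads

        nonFair.readLock().lock();   // may acquire immediately even if other threads are queued
        nonFair.readLock().unlock();

        fair.writeLock().lock();     // blocks if any thread is already waiting in the AQS queue
        fair.writeLock().unlock();
    }
}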

In the second case, once the thread that wants the write lock is already waiting in the AQS queue, first-come-first-served fairness effectively applies, because the AQS queue is a FIFO queue. The order in which threads acquire the lock therefore depends on their positions in the AQS synchronization queue.

The following diagram illustrates the waiting thread nodes in the AQS queue:

1. The head node is always the thread that currently holds the lock.

2. After a non-head node loses the competition for the lock, the acquire method keeps polling; between spins the thread is parked in the AQS queue and blocks, waiting to be woken.

It is important to understand that AQS's release does not hand the lock directly to the successor node; it merely wakes the successor via unparkSuccessor(). The lock is actually acquired back in the acquire method: the thread woken by release continues polling the state, and if its predecessor is the head and tryAcquire succeeds, it acquires the lock.

public final boolean release(int arg) {
    if (tryRelease(arg)) {
        Node h = head;
        if (h != null && h.waitStatus != 0)
            unparkSuccessor(h);
        return true;
    }
    return false;
}

final boolean acquireQueued(final Node node, int arg) {
    boolean failed = true;
    try {
        boolean interrupted = false;
        for (;;) {
            final Node p = node.predecessor();
            if (p == head && tryAcquire(arg)) {
                setHead(node);
                p.next = null; // help GC
                failed = false;
                return interrupted;
            }
            if (shouldParkAfterFailedAcquire(p, node) &&
                parkAndCheckInterrupt())
                interrupted = true;
        }
    } finally {
        if (failed)
            cancelAcquire(node);
    }
}
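The release-then-wake behavior can be observed from outside the lock. The following is a rough sketch (the class name QueueDemo and the sleep timings are only assumptions for the demo): two threads park behind a held write lock, the queue is observed with getQueueLength(), and after the lock is released each queued thread is woken in turn and re-runs the acquire logic itself.

import java.util.concurrent.locks.ReentrantReadWriteLock;

public class QueueDemo {
    public static void main(String[] args) throws InterruptedException {
        ReentrantReadWriteLock rw = new ReentrantReadWriteLock();

        rw.writeLock().lock();                       // main holds the write lock

        Runnable waiter = () -> {
            rw.writeLock().lock();                   // parks in the AQS queue until woken
            try {
                System.out.println(Thread.currentThread().getName() + " acquired the write lock");
            } finally {
                rw.writeLock().unlock();
            }
        };
        Thread t1 = new Thread(waiter, "waiter-1");
        Thread t2 = new Thread(waiter, "waiter-2");
        t1.start();
        t2.start();

        Thread.sleep(100);                           // give them time to enqueue
        System.out.println("queued threads: " + rw.getQueueLength()); // an estimate, typically 2 here

        rw.writeLock().unlock();                     // wakes the successor; it re-tries the acquire itself
        t1.join();
        t2.join();
    }
}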


3. In the diagram, after the head there are 3 threads waiting to acquire the read lock, followed by 1 thread waiting to acquire the write lock.

So how do the nodes in the AQS queue acquire the lock?

What happens is that the first read-lock node acquires the lock, and after acquiring it, it attempts a release in shared mode (propagation); if that succeeds, the next read-lock node is also woken by unparkSuccessor() and acquires the lock in turn.

If the wake-up cannot be performed right away, the head's waitStatus is marked PROPAGATE, so that a later release will again try to wake the next read-lock node.

If the successor node is waiting for the write lock (exclusive mode), it is not woken by this propagation.

private void doAcquireShared(int arg) {
    final Node node = addWaiter(Node.SHARED);
    boolean failed = true;
    try {
        boolean interrupted = false;
        for (;;) {
            final Node p = node.predecessor();
            if (p == head) {
                int r = tryAcquireShared(arg);
                if (r >= 0) {
                    setHeadAndPropagate(node, r);
                    p.next = null; // help GC
                    if (interrupted)
                        selfInterrupt();
                    failed = false;
                    return;
                }
            }
            if (shouldParkAfterFailedAcquire(p, node) &&
                parkAndCheckInterrupt())
                interrupted = true;
        }
    } finally {
        if (failed)
            cancelAcquire(node);
    }
}

private void setHeadAndPropagate(Node node, int propagate) {
    Node h = head; // Record old head for check below
    setHead(node);
    if (propagate > 0 || h == null || h.waitStatus < 0) {
        Node s = node.next;
        if (s == null || s.isShared())
            doReleaseShared();
    }
}

private void doReleaseShared() {
    for (;;) {
        Node h = head;
        if (h != null && h != tail) {
            int ws = h.waitStatus;
            if (ws == Node.SIGNAL) {
                if (!compareAndSetWaitStatus(h, Node.SIGNAL, 0))
                    continue;            // loop to recheck cases
                unparkSuccessor(h);
            }
            else if (ws == 0 &&
                     !compareAndSetWaitStatus(h, 0, Node.PROPAGATE))
                continue;                // loop on failed CAS
        }
        if (h == head)                   // loop if head changed
            break;
    }
}

The FIFO ordering of the AQS queue guarantees that, in a read-heavy, write-light workload, a thread waiting for the write lock does not starve.
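A small runnable sketch of this behavior (the class name ReadWriteDemo and the sleep timings are only assumptions for the demo; the default non-fair mode is used): two reader threads hold the read lock at the same time, and a writer that arrives while they hold it blocks and then acquires the write lock once both read locks are released, rather than starving.

import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteDemo {
    public static void main(String[] args) throws InterruptedException {
        ReentrantReadWriteLock rw = new ReentrantReadWriteLock();

        Runnable reader = () -> {
            rw.readLock().lock();                    // several readers may hold this concurrently
            try {
                System.out.println(Thread.currentThread().getName() + " holds the read lock");
                Thread.sleep(200);                   // simulate some read work
            } catch (InterruptedException ignored) {
            } finally {
                rw.readLock().unlock();
            }
        };

        Thread r1 = new Thread(reader, "reader-1");
        Thread r2 = new Thread(reader, "reader-2");
        Thread writer = new Thread(() -> {
            rw.writeLock().lock();                   // blocks until all read locks are released
            try {
                System.out.println("writer holds the write lock");
            } finally {
                rw.writeLock().unlock();
            }
        }, "writer");

        r1.start();
        r2.start();
        Thread.sleep(50);                            // typically lets the readers get in first
        writer.start();

        r1.join();
        r2.join();
        writer.join();
    }
}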


On the question of whether the read lock can jump the queue: the non-fair Sync makes barging possible, but only if the barging acquire succeeds; if it fails, the thread has to enter the AQS queue, so the write lock is not starved.


As for whether the write lock can jump the queue: just like the read lock, the non-fair Sync provides the possibility of barging; if tryAcquire fails, the thread enters the AQS queue and waits.


Finally, why do Semaphore and ReentrantLock implement fairness and non-fairness inside their tryAcquireXX methods, while ReentrantReadWriteLock abstracts out separate readerShouldBlock and writerShouldBlock methods to handle fairness?

abstract boolean readerShouldBlock();
abstract boolean writerShouldBlock();

The reason is that Semaphore supports only shared mode, so it only needs to implement tryAcquireShared in NonfairSync and FairSync to provide the fair and non-fair variants.

ReentrantLock supports only exclusive mode, so it only needs to implement tryAcquire in NonfairSync and FairSync to provide fairness and non-fairness.
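At the API level both classes expose this choice the same way, as a constructor flag, even though internally the fairness check sits inside their tryAcquire/tryAcquireShared implementations. A minimal sketch (class name FairVariants is only for illustration):

import java.util.concurrent.Semaphore;
import java.util.concurrent.locks.ReentrantLock;

public class FairVariants {
    public static void main(String[] args) throws InterruptedException {
        Semaphore fairSemaphore = new Semaphore(3, true);   // fair variant, shared mode only
        ReentrantLock fairLock  = new ReentrantLock(true);  // fair variant, exclusive mode only

        fairSemaphore.acquire();      // shared acquire, honors queue order
        fairSemaphore.release();

        fairLock.lock();              // exclusive acquire, honors queue order
        fairLock.unlock();
    }
}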


ReentrantReadWriteLock, however, has to support both shared and exclusive modes as well as both fair and non-fair policies. It therefore uses tryAcquire and tryAcquireShared in the base class Sync to distinguish exclusive mode from shared mode,

and implements fairness and non-fairness in the readerShouldBlock and writerShouldBlock methods of NonfairSync and FairSync.
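To illustrate the design idea only (a hypothetical sketch, not the JDK implementation), one can build a simple, non-reentrant exclusive lock on AbstractQueuedSynchronizer in which the state logic lives in a single tryAcquire and the fairness policy is isolated in a separate shouldBlock() hook, mirroring how ReentrantReadWriteLock isolates readerShouldBlock()/writerShouldBlock():

import java.util.concurrent.locks.AbstractQueuedSynchronizer;

public class SimpleLock {

    private abstract static class Sync extends AbstractQueuedSynchronizer {
        // The fairness policy lives here, not inside tryAcquire.
        abstract boolean shouldBlock();

        @Override
        protected boolean tryAcquire(int acquires) {
            // Shared state-transition logic: 0 = free, 1 = held (non-reentrant sketch).
            if (!shouldBlock() && compareAndSetState(0, 1)) {
                setExclusiveOwnerThread(Thread.currentThread());
                return true;
            }
            return false;
        }

        @Override
        protected boolean tryRelease(int releases) {
            if (Thread.currentThread() != getExclusiveOwnerThread())
                throw new IllegalMonitorStateException();
            setExclusiveOwnerThread(null);
            setState(0);
            return true;
        }
    }

    private static final class NonfairSync extends Sync {
        boolean shouldBlock() { return false; }                    // always allow barging
    }

    private static final class FairSync extends Sync {
        boolean shouldBlock() { return hasQueuedPredecessors(); }  // defer to the AQS queue
    }

    private final Sync sync;

    public SimpleLock(boolean fair) {
        this.sync = fair ? new FairSync() : new NonfairSync();
    }

    public void lock()   { sync.acquire(1); }
    public void unlock() { sync.release(1); }
}

The point of the sketch is the split: one tryAcquire/tryRelease pair handles the state, and the fair/non-fair variants differ only in the shouldBlock() hook.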





