Java Concurrency Programming Summary 3 -- AQS, ReentrantLock, ReentrantReadWriteLock


This article summarizes the locks covered in Chapter 5 of The Art of Java Concurrency Programming.

I. AQS

AbstractQueuedSynchronizer (AQS), the queued synchronizer, is the underlying framework used to build locks and other synchronizers. The class mainly consists of:

1. A mode, either shared or exclusive.

2. A volatile int state, which represents the state of the lock.

3. A FIFO doubly linked queue, which holds the threads waiting to acquire the lock.

Part of the AQS code, with comments, is shown below:

public abstract class AbstractQueuedSynchronizer
        extends AbstractOwnableSynchronizer
        implements java.io.Serializable {

    static final class Node {
        /** Shared mode: the lock can be acquired by multiple threads at once, e.g. the read lock of a read-write lock. */
        static final Node SHARED = new Node();
        /** Exclusive mode: only one thread may hold the lock at a time, e.g. the write lock of a read-write lock. */
        static final Node EXCLUSIVE = null;

        volatile Node prev;
        volatile Node next;
        volatile Thread thread;
    }

    /*
     * AQS internally maintains a FIFO doubly linked queue that manages the synchronization state.
     * When the current thread fails to acquire the synchronization state, the synchronizer wraps
     * the thread and its wait status into a Node and appends it to the synchronization queue;
     * when the state is released, the thread of the first node is woken up and tries to acquire
     * the state again.
     */
    private transient volatile Node head;
    private transient volatile Node tail;

    /**
     * state is used mainly to decide whether the lock is already held. In ReentrantLock,
     * state == 0 means the lock is free and state > 0 means it is held. Subclasses can give
     * state their own meaning by overriding tryAcquire(int acquires), etc.
     */
    private volatile int state;
}

A few words about the doubly linked queue: from the source code, the queue looks like this:

Node3 (tail), Node2, Node1, head

Note: head is a dummy node (that is, a node that carries no thread information), so head.next (Node1 here) is actually the first usable node in the queue.

AQS is designed around the template method pattern: users implement different kinds of locks by extending AQS and overriding the designated methods, namely tryAcquire(int arg), tryRelease(int arg), tryAcquireShared(int arg), tryReleaseShared(int arg), and isHeldExclusively().
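
To see the template method pattern in action, here is a minimal sketch of a non-reentrant exclusive lock built on AQS (similar in spirit to the Mutex example in the AQS Javadoc; the class and method names below are my own):

import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// A minimal exclusive lock: state 0 means free, 1 means held.
public class Mutex {
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int acquires) {
            // CAS state from 0 to 1; the winner records itself as the owner thread.
            if (compareAndSetState(0, 1)) {
                setExclusiveOwnerThread(Thread.currentThread());
                return true;
            }
            return false;
        }

        @Override
        protected boolean tryRelease(int releases) {
            if (getState() == 0)
                throw new IllegalMonitorStateException();
            setExclusiveOwnerThread(null);
            setState(0);
            return true;
        }

        @Override
        protected boolean isHeldExclusively() {
            return getState() == 1;
        }
    }

    private final Sync sync = new Sync();

    public void lock()        { sync.acquire(1); }   // enqueues and blocks if tryAcquire fails
    public void unlock()      { sync.release(1); }   // wakes the first waiting node if tryRelease succeeds
    public boolean isLocked() { return sync.isHeldExclusively(); }
}

AQS supplies the queuing, blocking, and waking; the subclass only decides what the state value means.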

II. Learning AQS through ReentrantLock

1. Acquiring the fair lock

/** Sync object for fair locks. */
static final class FairSync extends Sync {
    private static final long serialVersionUID = -3000897897090466540L;

    final void lock() {
        acquire(1);
    }

    /*
     * (Defined in AQS.) First try tryAcquire(arg); if it returns true the lock has been acquired.
     * Otherwise acquireQueued(addWaiter(Node.EXCLUSIVE), arg) wraps the current thread in a Node,
     * appends it to the tail of the synchronization queue, and blocks the thread.
     */
    public final void acquire(int arg) {
        if (!tryAcquire(arg) && acquireQueued(addWaiter(Node.EXCLUSIVE), arg))
            selfInterrupt();
    }

    /*
     * Read state: 0 means the lock is free and can be acquired, but only if hasQueuedPredecessors()
     * confirms the current thread is the first usable node in the FIFO queue; the CAS then acquires
     * the lock, which guarantees that the longest-waiting thread gets the lock first.
     */
    protected final boolean tryAcquire(int acquires) {
        final Thread current = Thread.currentThread();
        int c = getState();
        if (c == 0) {
            if (!hasQueuedPredecessors() && compareAndSetState(0, acquires)) {
                setExclusiveOwnerThread(current);
                return true;
            }
        }
        else if (current == getExclusiveOwnerThread()) {
            int nextc = c + acquires;
            if (nextc < 0)
                throw new Error("Maximum lock count exceeded");
            setState(nextc);
            return true;
        }
        return false;
    }
}

2. Releasing the fair lock

Releasing the lock updates the state value and then wakes the thread of the first usable node in the synchronization queue.
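
For reference, the release path looks roughly like this (recalled from the JDK 8 source of AQS.release and ReentrantLock's tryRelease, lightly reformatted; the comments are mine, so treat it as a paraphrase rather than a verbatim copy):

// In AQS: the template method; tryRelease(arg) is supplied by the subclass (ReentrantLock.Sync here).
public final boolean release(int arg) {
    if (tryRelease(arg)) {
        Node h = head;
        if (h != null && h.waitStatus != 0)
            unparkSuccessor(h);          // wake the thread of the first usable node (head.next)
        return true;
    }
    return false;
}

// In ReentrantLock.Sync: decrement state; the lock only becomes free when state drops back to 0.
protected final boolean tryRelease(int releases) {
    int c = getState() - releases;
    if (Thread.currentThread() != getExclusiveOwnerThread())
        throw new IllegalMonitorStateException();
    boolean free = false;
    if (c == 0) {
        free = true;
        setExclusiveOwnerThread(null);
    }
    setState(c);
    return free;
}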

III. Fair locks and non-fair locks

By default ReentrantLock is a non-fair lock. The main reason: compared with a fair lock, it avoids a great deal of thread switching and therefore performs much better.

Let's look at an example of a non-fair lock:

import java.util.concurrent.locks.ReentrantLock;

public class AQS2 {
    private ReentrantLock lock = new ReentrantLock(false);   // false = non-fair lock (also the default)
    private Thread[] threads = new Thread[3];

    public AQS2() {
        for (int i = 0; i < 3; i++) {
            threads[i] = new Thread(new Runnable() {
                public void run() {
                    for (int j = 0; j < 2; j++) {             // each thread acquires and releases the lock twice
                        try {
                            lock.lock();
                            Thread.sleep(100);
                            System.out.println(Thread.currentThread().getName());
                        } catch (InterruptedException e) {
                            e.printStackTrace();
                        } finally {
                            lock.unlock();
                        }
                    }
                }
            });
        }
    }

    public void startThreads() {
        for (Thread thread : threads) {
            thread.start();
        }
    }

    public static void main(String[] args) {
        AQS2 aqs2 = new AQS2();
        aqs2.startThreads();
    }
}

The output of one run (each thread acquires and releases the lock twice) was not what I expected at first. I had assumed the following:

Thread-0 acquires the lock first and sleeps for 100 ms, so the synchronization queue of threads waiting for the lock becomes:

Thread-2, Thread-1, Thread-0, Thread-2, Thread-1, head (tail first, head last, as above), and so on.

The actual output, however, showed Thread-0 acquiring the lock the second time as well. But releasing the lock (release(int arg)) always starts from the first usable node of the synchronization queue, which should have taken Thread-1 off the queue, so that model is clearly wrong.

Only later, after reading the code and comparing the non-fair lock with the fair lock, did I finally understand what was going on.

The biggest difference when acquiring a non-fair lock is that a thread may ignore the synchronization queue and jump the line. Once the jump succeeds and the lock is acquired, the thread of course never enters the queue at all. So the synchronization queue for the program above is actually:

head, Thread-1, Thread-2.

The non-fair lock's source code differs in two main places:

static final class NonfairSync extends Sync {
    private static final long serialVersionUID = 7316153563782823691L;

    // Difference 1: try to grab the lock directly with a CAS before falling back to acquire(1)
    final void lock() {
        if (compareAndSetState(0, 1))
            setExclusiveOwnerThread(Thread.currentThread());
        else
            acquire(1);
    }

    protected final boolean tryAcquire(int acquires) {
        return nonfairTryAcquire(acquires);
    }

    final boolean nonfairTryAcquire(int acquires) {
        final Thread current = Thread.currentThread();
        int c = getState();
        if (c == 0) {
            // Difference 2: no hasQueuedPredecessors() check -- CAS for the lock straight away
            if (compareAndSetState(0, acquires)) {
                setExclusiveOwnerThread(current);
                return true;
            }
        }
        else if (current == getExclusiveOwnerThread()) {
            int nextc = c + acquires;
            if (nextc < 0)   // overflow
                throw new Error("Maximum lock count exceeded");
            setState(nextc);
            return true;
        }
        return false;
    }
}

When Thread-0 releases the lock for the first time, it immediately tries to acquire it again via lock.lock(). The lock() method of the non-fair lock attempts to grab the lock directly, ignoring the synchronization queue, so with high probability it acquires the lock again; if that CAS fails, nonfairTryAcquire(int acquires) is executed. The biggest difference between that method and tryAcquire(int acquires) is the missing hasQueuedPredecessors() check: it does not care whether the current thread is the first usable node of the synchronization queue, or even whether it is in the queue at all, it simply tries to acquire the lock.
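
For contrast, here is a sketch of the same experiment with a fair lock (the class name FairAQS2 is my own; essentially only the constructor argument changes). With the hasQueuedPredecessors() check in place, a thread whose second lock() call finds other threads already queued has to wait its turn, so no thread should get the lock twice in a row:

import java.util.concurrent.locks.ReentrantLock;

public class FairAQS2 {
    private final ReentrantLock lock = new ReentrantLock(true);   // true = fair lock

    public static void main(String[] args) {
        FairAQS2 demo = new FairAQS2();
        for (int i = 0; i < 3; i++) {
            new Thread(() -> {
                for (int j = 0; j < 2; j++) {
                    demo.lock.lock();                 // queues behind earlier waiters
                    try {
                        Thread.sleep(100);
                        System.out.println(Thread.currentThread().getName());
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    } finally {
                        demo.lock.unlock();
                    }
                }
            }).start();
        }
        // Expected: the three thread names alternate instead of Thread-0 printing twice in a row.
    }
}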

IV. ReentrantReadWriteLock

Once the principle of AQS is understood, read-write locks are not hard to follow. A read-write lock consists of two locks: a read lock and a write lock. The read lock allows multiple threads to access at the same time and is implemented by overriding int tryAcquireShared(int arg) and boolean tryReleaseShared(int arg); the write lock is an exclusive lock, implemented by overriding boolean tryAcquire(int arg) and boolean tryRelease(int arg).
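
As a usage sketch (the class and field names below are my own), a read-write lock typically guards data that is read far more often than it is written; any number of readers may hold the read lock at once, while a writer excludes everyone else:

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RwCache {
    private final Map<String, Object> map = new HashMap<>();
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();

    public Object get(String key) {
        rwLock.readLock().lock();          // shared mode: tryAcquireShared under the hood
        try {
            return map.get(key);
        } finally {
            rwLock.readLock().unlock();
        }
    }

    public void put(String key, Object value) {
        rwLock.writeLock().lock();         // exclusive mode: tryAcquire under the hood
        try {
            map.put(key, value);
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}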

Since AQS provides only a single int state to represent the lock status, how can it track two locks, read and write? The trick is to split the bits: the high 16 bits hold the read-lock count and the low 16 bits hold the write-lock count. Because each lock gets only 16 bits, its maximum count is 65535, that is, the total number of holds by all threads holding that lock (it is a reentrant lock, so a single thread may hold it n times) cannot exceed 65535. The typical scenario for a read-write lock is many reads and few writes, so if 65535 read holds is not enough you can write your own read-write lock, for example assigning the high 24 bits of state to the read lock and the low 8 bits to the write lock.
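
The split can be seen in the helper constants and methods of ReentrantReadWriteLock's Sync class; the sketch below paraphrases them from the JDK source, with my own comments and a small main method added for illustration:

final class ReadWriteState {
    static final int SHARED_SHIFT   = 16;
    static final int SHARED_UNIT    = (1 << SHARED_SHIFT);      // add this to state to bump the read count by one
    static final int MAX_COUNT      = (1 << SHARED_SHIFT) - 1;  // 65535: maximum holds per lock
    static final int EXCLUSIVE_MASK = (1 << SHARED_SHIFT) - 1;

    /** Read-lock holds: the high 16 bits of state. */
    static int sharedCount(int c)    { return c >>> SHARED_SHIFT; }

    /** Write-lock holds: the low 16 bits of state. */
    static int exclusiveCount(int c) { return c & EXCLUSIVE_MASK; }

    public static void main(String[] args) {
        int state = (3 << SHARED_SHIFT) | 1;        // just the arithmetic: 3 in the high half, 1 in the low half
        System.out.println(sharedCount(state));     // 3
        System.out.println(exclusiveCount(state));  // 1
    }
}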

The read-write lock also adds methods such as int getReadHoldCount(), which returns the number of read-lock holds of the current thread. Since the shared part of state records only the total read-lock holds across all threads, each thread's own count has to be stored separately; this is done with a ThreadLocal that each thread maintains for itself.
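
A small demo of the difference between the per-thread count and the total count (the class name is my own):

import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadHoldCountDemo {
    public static void main(String[] args) {
        ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
        rwLock.readLock().lock();
        rwLock.readLock().lock();                        // the read lock is reentrant
        System.out.println(rwLock.getReadHoldCount());   // 2: this thread's holds, kept in a ThreadLocal
        System.out.println(rwLock.getReadLockCount());   // 2: total holds across all threads, kept in state
        rwLock.readLock().unlock();
        rwLock.readLock().unlock();
    }
}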
