ReentrantLock source code analysis for Java concurrent Series


Before Java 5.0, the only mechanisms available for coordinating access to shared objects were synchronized and volatile. The synchronized keyword implements the built-in lock, while the volatile keyword guarantees memory visibility across threads. In most cases these mechanisms work well, but they cannot provide some more advanced capabilities: you cannot interrupt a thread that is waiting to acquire a lock, you cannot attempt to acquire a lock with a time limit, and you cannot implement non-block-structured locking. These more flexible locking mechanisms generally offer better liveness or performance. Therefore Java 5.0 added a new mechanism: ReentrantLock. The ReentrantLock class implements the Lock interface and provides the same mutual exclusion and memory visibility as synchronized; its thread synchronization is built on AQS (AbstractQueuedSynchronizer). Compared with the built-in lock, ReentrantLock not only offers a richer locking mechanism but also performs at least as well (and, in earlier JVM versions, even better). Having covered the advantages of ReentrantLock, let's open up its source code and look at the concrete implementation.

1. Introduction to the synchronized keyword

Java provides the built-in lock to support multi-thread synchronization. The JVM identifies a synchronized code region by the synchronized keyword: when a thread enters the region it automatically acquires the lock, when it exits it automatically releases the lock, and while one thread holds the lock, other threads that try to enter are blocked. Every Java object can serve as a synchronization lock. The synchronized keyword can modify instance methods, static methods, and code blocks: for an instance method the lock is the object the method belongs to, for a static method the lock is the Class object, and for a code block an explicit object must be supplied as the lock.

Every Java object can act as a lock because its object header is associated with a monitor object. When a thread enters the synchronized region it automatically acquires the monitor, and it releases the monitor automatically on exit; while the monitor is held, other threads are blocked. These synchronization operations are implemented by the underlying JVM, but a synchronized method and a synchronized block differ slightly at the bytecode level. A synchronized method is synchronized implicitly, without dedicated bytecode instructions: the JVM recognizes it by the ACC_SYNCHRONIZED access flag in its method table entry. A synchronized block is synchronized explicitly: the monitorenter and monitorexit bytecode instructions control the thread's acquisition and release of the monitor.

The monitor object maintains a _count field. When _count is 0 the monitor is not held; when _count is greater than 0 the monitor is held. Each time the holding thread re-enters, _count is incremented by 1, and each time it exits, _count is decremented by 1; this is how the built-in lock's reentrancy is implemented. In addition, the monitor object contains two queues, _EntryList and _WaitSet, which correspond to the synchronization queue and the condition queue of AQS. A thread that fails to acquire the lock blocks in _EntryList, and a thread that calls the lock object's wait method enters _WaitSet to wait. This is how the built-in lock implements thread synchronization and conditional waiting.
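
To make the two forms concrete, here is a minimal sketch (the Counter class and lockObject field are hypothetical names, chosen only for illustration): the synchronized method below is compiled with the ACC_SYNCHRONIZED flag in its method table entry, while the synchronized block is compiled into explicit monitorenter/monitorexit instructions around the critical section.

public class Counter {
    private final Object lockObject = new Object();
    private int count;

    // implicit synchronization: the JVM checks the ACC_SYNCHRONIZED flag,
    // and the lock is the Counter instance (this)
    public synchronized void incrementWithMethodLock() {
        count++;
    }

    // explicit synchronization: the compiler emits monitorenter/monitorexit
    // around this block, and the lock is the supplied lockObject
    public void incrementWithBlockLock() {
        synchronized (lockObject) {
            count++;
        }
    }
}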

2. Comparison of ReentrantLock and synchronized

The synchronized keyword is the built-in lock mechanism provided by Java, and its synchronization operations are implemented by the underlying JVM, whereas ReentrantLock is the explicit lock provided by the java.util.concurrent package, whose synchronization operations are supported by the AQS synchronizer. ReentrantLock provides the same locking and memory semantics as the built-in lock, and in addition offers features such as timed lock waits, interruptible lock waits, fair locks, and non-block-structured locking. In earlier JDK versions ReentrantLock also had a clear performance advantage. Since ReentrantLock has so many advantages, why still use the synchronized keyword? Many people do use ReentrantLock as a replacement for synchronized, but the built-in lock still has its own strengths: it is familiar to most developers, and the code is more concise and compact. Because an explicit lock must be released manually by calling unlock in a finally block, the built-in lock is also safer to use. Furthermore, synchronized, rather than ReentrantLock, is more likely to be improved in future JVM versions: because synchronized is a built-in JVM feature, the JVM can apply optimizations such as lock elision for thread-confined lock objects and lock coarsening to eliminate synchronization on built-in locks, and it is unlikely that such optimizations can be applied to library-based locks. Therefore ReentrantLock should be used only when its advanced features are genuinely needed: timed, polled, and interruptible lock acquisition, fair queuing, or non-block-structured locking. Otherwise synchronized should be preferred.
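
As an illustration of those advanced features, the sketch below (the TimedLockExample class and doWithTimeout method are hypothetical names for illustration) shows a timed, interruptible lock acquisition with ReentrantLock's tryLock(timeout, unit), a policy that cannot be expressed with the synchronized keyword.

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TimedLockExample {
    private final ReentrantLock lock = new ReentrantLock();

    // try to acquire the lock for at most one second instead of blocking indefinitely;
    // the wait can also be interrupted, in which case InterruptedException is thrown
    public boolean doWithTimeout() throws InterruptedException {
        if (lock.tryLock(1, TimeUnit.SECONDS)) {
            try {
                // critical section ...
                return true;
            } finally {
                lock.unlock();
            }
        }
        return false; // could not obtain the lock within the timeout
    }
}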

3. Obtain and release locks

First, let's take a look at the sample code for locking with ReentrantLock.

// By default, the no-argument constructor creates a non-fair lock
private final ReentrantLock lock = new ReentrantLock();

public void doSomething() {
    // acquire the lock before entering the critical section
    lock.lock();
    try {
        // perform the operation ...
    } finally {
        // always release the lock in the finally block
        lock.unlock();
    }
}

The following APIs are used to obtain and release locks.

// acquire the lock
public void lock() {
    sync.lock();
}

// release the lock
public void unlock() {
    sync.release(1);
}

You can see that both the lock operation and the unlock operation are delegated to the Sync object.

public class ReentrantLock implements Lock, java.io.Serializable {

    private final Sync sync;

    abstract static class Sync extends AbstractQueuedSynchronizer {
        abstract void lock();
        ...
    }

    // synchronizer implementing the non-fair lock
    static final class NonfairSync extends Sync {
        final void lock() { ... }
        ...
    }

    // synchronizer implementing the fair lock
    static final class FairSync extends Sync {
        final void lock() { ... }
        ...
    }
}

Each ReentrantLock object holds a reference of type Sync. This Sync class is an abstract inner class that extends AbstractQueuedSynchronizer, and its lock method is abstract. The member variable sync of ReentrantLock is assigned during construction. Next, let's look at what the two constructors of ReentrantLock do.

// the no-argument constructor creates a non-fair lock by default
public ReentrantLock() {
    sync = new NonfairSync();
}

// the one-argument constructor lets the caller choose a fair or non-fair lock
public ReentrantLock(boolean fair) {
    sync = fair ? new FairSync() : new NonfairSync();
}

By default, the no-argument constructor assigns a NonfairSync instance to sync, so the lock is a non-fair lock. The one-argument constructor lets you decide whether a FairSync instance or a NonfairSync instance is assigned to sync. NonfairSync and FairSync both inherit from the Sync class and override its lock() method, so fair locks and non-fair locks differ in how the lock is acquired; this is discussed below. Let's first look at the lock release operation. Every call to the unlock() method simply executes sync.release(1), which invokes the release() method of the AbstractQueuedSynchronizer class. Let's review that method.

// release the lock (exclusive mode)
public final boolean release(int arg) {
    // ask the subclass whether the lock can be released
    if (tryRelease(arg)) {
        // get the head node
        Node h = head;
        // if the head node is not null and its wait status is not 0, wake up the next node
        if (h != null && h.waitStatus != 0) {
            // wake up the successor node
            unparkSuccessor(h);
        }
        return true;
    }
    return false;
}

This release method is the API that AQS provides for releasing the lock. It first calls the tryRelease method to try to release the lock. tryRelease is an abstract method in AQS, and its implementation logic lives in the subclass Sync.

// try to release the lock
protected final boolean tryRelease(int releases) {
    int c = getState() - releases;
    // if the thread holding the lock is not the current thread, throw an exception
    if (Thread.currentThread() != getExclusiveOwnerThread()) {
        throw new IllegalMonitorStateException();
    }
    boolean free = false;
    // if the new synchronization state is 0, the lock is fully released
    if (c == 0) {
        // mark the lock as released
        free = true;
        // clear the owning thread
        setExclusiveOwnerThread(null);
    }
    setState(c);
    return free;
}

The tryRelease method first obtains the current synchronization state, subtracts the release count passed in, and then checks whether the new synchronization state equals 0. If it does, the lock is fully released: the release flag is set to true and the thread that owned the lock is cleared. Finally, setState is called to store the new synchronization state, and the release flag is returned.
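
The following sketch (the class and method names are hypothetical, chosen only for illustration) shows why the state only drops back to 0 after every nested lock() has been matched by an unlock(): each reentrant acquisition increments the state, and each release decrements it. getHoldCount() is a real ReentrantLock method that reports the current thread's hold count.

import java.util.concurrent.locks.ReentrantLock;

public class ReentrantCountExample {
    private final ReentrantLock lock = new ReentrantLock();

    public void outer() {
        lock.lock();                                   // state: 0 -> 1
        try {
            System.out.println(lock.getHoldCount());   // prints 1
            inner();
        } finally {
            lock.unlock();                             // state: 1 -> 0, lock fully released
        }
    }

    private void inner() {
        lock.lock();                                   // state: 1 -> 2 (reentrant acquisition)
        try {
            System.out.println(lock.getHoldCount());   // prints 2
        } finally {
            lock.unlock();                             // state: 2 -> 1, still held by this thread
        }
    }
}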

4. Fair lock and unfair lock

Whether a ReentrantLock is a fair lock or a non-fair lock depends on the concrete instance that sync refers to. The member variable sync is assigned during construction: a NonfairSync instance makes the lock non-fair, while a FairSync instance makes it fair. With a fair lock, threads acquire the lock in the order in which they requested it. With a non-fair lock, barging is permitted: when a thread requests the lock and the lock happens to be available at that moment, the thread can skip ahead of all the threads waiting in the queue and acquire the lock directly.
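
As a quick construction sketch (the variable names are hypothetical), choosing between the two policies is just a matter of the constructor argument, and the real ReentrantLock method isFair() reports which policy a lock instance uses:

import java.util.concurrent.locks.ReentrantLock;

public class FairnessExample {
    public static void main(String[] args) {
        ReentrantLock unfairLock = new ReentrantLock();      // default: non-fair, barging allowed
        ReentrantLock fairLock   = new ReentrantLock(true);  // fair: first-come, first-served

        System.out.println(unfairLock.isFair()); // false
        System.out.println(fairLock.isFair());   // true
    }
}

With that in mind, let's first look at how the non-fair lock is acquired.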

// synchronizer implementing the non-fair lock
static final class NonfairSync extends Sync {

    // implements the abstract lock() method of the parent class
    final void lock() {
        // try to set the synchronization state with CAS
        if (compareAndSetState(0, 1)) {
            // success means the lock was free, so take ownership immediately
            setExclusiveOwnerThread(Thread.currentThread());
        } else {
            // otherwise the lock is already held, fall back to acquire
            acquire(1);
        }
    }

    // try to acquire the lock
    protected final boolean tryAcquire(int acquires) {
        return nonfairTryAcquire(acquires);
    }
}

// acquire the lock in exclusive, uninterruptible mode (from AbstractQueuedSynchronizer)
public final void acquire(int arg) {
    if (!tryAcquire(arg) &&
        acquireQueued(addWaiter(Node.EXCLUSIVE), arg)) {
        selfInterrupt();
    }
}

We can see that in the lock method of the non-fair lock, the thread first tries to change the synchronization state from 0 to 1 with CAS. This step is effectively an attempt to grab the lock: if the CAS succeeds, the thread gets the lock as soon as it arrives, without waiting in the synchronization queue. If the CAS fails, the lock has not been released yet, so the acquire method is called next. This acquire method is inherited from AbstractQueuedSynchronizer; let's review it. After the thread enters acquire, it first calls tryAcquire to try to obtain the lock. NonfairSync overrides tryAcquire and, inside it, calls the nonfairTryAcquire method of the parent class Sync, so nonfairTryAcquire is what actually attempts the acquisition. Let's see what this method does.

// non-fair attempt to acquire the lock
final boolean nonfairTryAcquire(int acquires) {
    // get the current thread
    final Thread current = Thread.currentThread();
    // get the current synchronization state
    int c = getState();
    // if the state is 0, the lock is not held
    if (c == 0) {
        // use CAS to update the synchronization state
        if (compareAndSetState(0, acquires)) {
            // record the thread that now owns the lock
            setExclusiveOwnerThread(current);
            return true;
        }
    // otherwise, check whether the lock is held by the current thread
    } else if (current == getExclusiveOwnerThread()) {
        // reentrant acquisition: just bump the state
        int nextc = c + acquires;
        if (nextc < 0) {
            throw new Error("Maximum lock count exceeded");
        }
        setState(nextc);
        return true;
    }
    // the lock is held by another thread, so the attempt fails
    return false;
}

nonfairTryAcquire is a method of Sync. After entering this method, the thread first obtains the synchronization state. If the state is 0, it uses CAS to change the state, which is effectively one more attempt to grab the lock. If the state is not 0, the lock is already held; the method then checks whether the owning thread is the current thread, and if so it increments the synchronization state (a reentrant acquisition); otherwise the acquisition attempt fails, and addWaiter is subsequently called to add the thread to the synchronization queue. In summary, in non-fair mode a newly arriving thread tries to grab the lock twice before entering the synchronization queue; if either attempt succeeds it never queues, otherwise it joins the synchronization queue and waits. Next, let's look at how the fair lock is acquired.

// synchronizer implementing the fair lock
static final class FairSync extends Sync {

    // implements the abstract lock() method of the parent class
    final void lock() {
        // acquire the lock through the synchronization queue
        acquire(1);
    }

    // try to acquire the lock
    protected final boolean tryAcquire(int acquires) {
        // get the current thread
        final Thread current = Thread.currentThread();
        // get the current synchronization state
        int c = getState();
        // if the state is 0, the lock is not held
        if (c == 0) {
            // check whether the synchronization queue has a predecessor node
            if (!hasQueuedPredecessors() &&
                compareAndSetState(0, acquires)) {
                // no thread is queued and the CAS succeeded, so take ownership
                setExclusiveOwnerThread(current);
                return true;
            }
        // otherwise, check whether the current thread already holds the lock
        } else if (current == getExclusiveOwnerThread()) {
            // reentrant acquisition: just bump the state
            int nextc = c + acquires;
            if (nextc < 0) {
                throw new Error("Maximum lock count exceeded");
            }
            setState(nextc);
            return true;
        }
        // the lock is held by another thread, so the attempt fails
        return false;
    }
}

When the lock method of the fair lock is called, it simply calls acquire. As before, acquire first calls the tryAcquire method overridden by FairSync to try to obtain the lock. This method first reads the synchronization state; if the state is 0, the lock is currently free. Here is where the fair lock differs from the non-fair lock: it first calls hasQueuedPredecessors to check whether any thread is already waiting in the synchronization queue, and only if no one is queued does it try to modify the synchronization state. In other words, the fair lock yields to queued threads instead of grabbing the lock right away. Apart from this, the operations are the same as for the non-fair lock. In summary, before entering the synchronization queue the fair lock checks the lock state only once, and even if it finds the lock available it does not take it immediately; it lets the threads already in the synchronization queue acquire the lock first. This guarantees that under a fair lock all threads acquire the lock in first-come, first-served order, which is what makes the acquisition fair.
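
For reference, the hasQueuedPredecessors check used above looks roughly like the following in the JDK 8 source (reproduced from memory, so treat it as a sketch rather than the exact code): it answers whether some other thread is already queued ahead of the caller.

// returns true if another thread is queued before the current thread
public final boolean hasQueuedPredecessors() {
    Node t = tail; // read fields in reverse initialization order
    Node h = head;
    Node s;
    return h != t &&
        ((s = h.next) == null || s.thread != Thread.currentThread());
}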

So why not make all locks fair? After all, fairness sounds like good behavior and barging like bad behavior. The reason is performance: suspending and waking up threads is expensive, and under heavy contention a fair lock causes frequent suspension and wake-up operations, whereas a non-fair lock can reduce them, so its performance is usually better. In addition, most threads hold a lock for only a very short time, while waking a thread involves some latency. It is therefore possible that, while thread A is still being woken up, thread B acquires the lock, uses it, and releases it. This is a win for both sides: the moment at which thread A actually acquires the lock is not delayed, yet thread B has used the lock in the meantime, and throughput increases.

5. Implementation mechanism of the condition queue

The built-in condition queue has some shortcomings. Each built-in lock can have only one associated condition queue, so multiple threads may wait on the same condition queue for different condition predicates; every call to notifyAll then wakes up all the waiting threads, and a thread that wakes up to find its own condition predicate is still false simply suspends itself again. This causes a large number of useless wake-up and suspension operations, which wastes system resources and reduces performance. If you want to write a concurrent object with multiple condition predicates, or want more control over the visibility of the condition queue, you need the explicit Lock and Condition instead of the built-in lock and its condition queue. A Condition is associated with a Lock, just as a built-in condition queue is associated with a built-in lock. To create a Condition, call the newCondition method on the associated Lock. Let's first look at an example that uses Condition.

public class BoundedBuffer {
    final Lock lock = new ReentrantLock();
    final Condition notFull  = lock.newCondition(); // condition predicate: not full
    final Condition notEmpty = lock.newCondition(); // condition predicate: not empty

    final Object[] items = new Object[100];
    int putptr, takeptr, count;

    // producer method
    public void put(Object x) throws InterruptedException {
        lock.lock();
        try {
            while (count == items.length)
                notFull.await();            // the buffer is full, wait on the notFull queue
            items[putptr] = x;
            if (++putptr == items.length) putptr = 0;
            ++count;
            notEmpty.signal();              // production succeeded, wake up a waiter on notEmpty
        } finally {
            lock.unlock();
        }
    }

    // consumer method
    public Object take() throws InterruptedException {
        lock.lock();
        try {
            while (count == 0)
                notEmpty.await();           // the buffer is empty, wait on the notEmpty queue
            Object x = items[takeptr];
            if (++takeptr == items.length) takeptr = 0;
            --count;
            notFull.signal();               // consumption succeeded, wake up a waiter on notFull
            return x;
        } finally {
            lock.unlock();
        }
    }
}

A single lock object can produce multiple condition queues; here there are two, notFull and notEmpty. When the buffer is full, a thread calling put must block and wait until the condition predicate becomes true (the buffer is no longer full) before it is woken up and continues. When the buffer is empty, a thread calling take must likewise block until the condition predicate becomes true (the buffer is no longer empty). Because these two kinds of threads wait on different condition predicates, they block on two different condition queues, and each is woken at the appropriate time by a call on the corresponding Condition object.
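
Here is a minimal usage sketch of the buffer above (the BoundedBufferDemo class name and the item counts are arbitrary, for illustration only): one producer thread fills the buffer while one consumer thread drains it, each blocking on its own condition queue when the buffer is full or empty.

public class BoundedBufferDemo {
    public static void main(String[] args) {
        BoundedBuffer buffer = new BoundedBuffer();

        // producer: blocks on notFull when the buffer is full
        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 1000; i++) {
                    buffer.put(i);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        // consumer: blocks on notEmpty when the buffer is empty
        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 1000; i++) {
                    buffer.take();
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}

Now let's look at the implementation of the newCondition method.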

// create a condition queue
public Condition newCondition() {
    return sync.newCondition();
}

abstract static class Sync extends AbstractQueuedSynchronizer {
    // create a Condition object
    final ConditionObject newCondition() {
        return new ConditionObject();
    }
}

The condition queue of ReentrantLock is implemented on top of AbstractQueuedSynchronizer. The Condition object returned by newCondition is an instance of ConditionObject, an inner class of AQS, and all operations on the condition queue go through the API provided by ConditionObject. For the details of how ConditionObject is implemented, see my article "Java concurrency series [4] ---- condition queue for analyzing the source code of AbstractQueuedSynchronizer"; I will not repeat them here. This concludes our analysis of the ReentrantLock source code. I hope this article helps readers understand and master ReentrantLock.

That is all the content of this article. I hope it is helpful for your learning.
