The explicit locking principle of concurrent programming


Through the synchronized keyword combined with the object's monitor, the JVM provides us with "built-in lock" semantics. Using this lock is simple: we do not need to care about the process of acquiring and releasing it; we only tell the virtual machine which code blocks need to be locked, and the compiler and the virtual machine handle the remaining details themselves.

This "built-in lock" can be understood as a built-in feature of the JVM, so a significant limitation is that it does not support customization of advanced features. For example, I may want the lock to support fair competition, to block threads on different queues based on different conditions, to support timed attempts to acquire the lock with a timeout return, or to allow a blocked thread to respond to interrupt requests, and so on.

The built-in lock cannot satisfy these special requirements, so the JDK introduces the concept of an "explicit lock": the JVM no longer acquires and releases the lock for us; those two actions are handed over to our own program. This is inevitably more complex at the program level, but the lock becomes more flexible and supports more custom features, at the cost of requiring a deeper understanding of how locks work.
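As a concrete illustration of one of those custom features, the sketch below (names like TimedLockDemo and timedAttemptFails are my own, not from the source) shows a timed attempt to acquire a ReentrantLock that gives up after a deadline instead of blocking forever, something the synchronized keyword cannot express:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TimedLockDemo {
    // Returns true if a second thread's timed tryLock gives up
    // while the main thread is still holding the lock.
    public static boolean timedAttemptFails() throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();                       // main thread holds the lock
        try {
            final boolean[] acquired = {true};
            Thread t = new Thread(() -> {
                try {
                    // Give up after 50 ms instead of blocking indefinitely.
                    acquired[0] = lock.tryLock(50, TimeUnit.MILLISECONDS);
                    if (acquired[0]) lock.unlock();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            t.start();
            t.join();
            return !acquired[0];           // true: the timed attempt timed out
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(timedAttemptFails()); // true
    }
}
```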

Lock: explicit locks

The Lock interface is located in the java.util.concurrent.locks package and is defined as follows:

public interface Lock {
    // Acquire the lock, blocking if it is not available
    void lock();
    // Acquire the lock, responding to interruption while waiting
    void lockInterruptibly() throws InterruptedException;
    // Try once to acquire the lock without blocking:
    // returns true on success, false on failure
    boolean tryLock();
    // Timed attempt: give up after the specified waiting time
    boolean tryLock(long time, TimeUnit unit) throws InterruptedException;
    // Release the lock
    void unlock();
    // Create a condition queue bound to this lock
    Condition newCondition();
}
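Since acquiring and releasing the lock is now our own responsibility, the canonical usage pattern pairs lock with unlock in a finally block. A minimal sketch (the CounterDemo class and its method names are my own illustration, not from the source):

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class CounterDemo {
    private final Lock lock = new ReentrantLock();
    private int count = 0;

    public void increment() {
        lock.lock();            // acquire; blocks until the lock is free
        try {
            count++;            // critical section
        } finally {
            lock.unlock();      // always release, even if an exception occurs
        }
    }

    public int get() { return count; }

    public static void main(String[] args) throws InterruptedException {
        CounterDemo c = new CounterDemo();
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) c.increment();
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        System.out.println(c.get()); // 4000
    }
}
```

Forgetting the finally block is the classic mistake with explicit locks: an exception in the critical section would leave the lock held forever.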

Lock defines the most basic methods that an explicit lock should have; each implementation class builds more sophisticated capabilities on top of them. The overall lock framework is as follows:

Among them, there are three main implementations of the explicit lock. ReentrantLock is the main implementation class; ReadLock and WriteLock are two inner classes defined inside ReentrantReadWriteLock. They implement Lock and all of the methods it defines, achieving fine-grained read-write separation: ReentrantReadWriteLock exposes a read lock and a write lock to the outside.

As for LockSupport, it provides the ability to block and wake up a thread. This is, of course, implemented through the Unsafe class, which in turn invokes the underlying API of the operating system.

AbstractQueuedSynchronizer, which you can call the queue synchronizer or simply AQS, is the core of our lock implementations. It is essentially a synchronization mechanism that records the thread currently holding the lock. Every thread that wants to acquire the lock uses this mechanism to determine whether it meets the conditions for holding the lock: if not, it blocks and waits; otherwise it occupies the lock and updates the state flag. We will analyze this in detail later.

A basic understanding of ReentrantLock

ReentrantLock is the most basic implementation of the Lock explicit lock and the most frequently used lock implementation class. It provides two constructors, which allow choosing between fair and unfair competition for the lock:

public ReentrantLock()

public ReentrantLock(boolean fair)

The default parameterless constructor creates an unfair lock; passing true for the fair parameter of the second constructor creates a fair lock.

The difference between a fair lock and an unfair lock is that a fair lock follows the first-come-first-served principle when choosing the next thread to hold the lock: the thread that has waited longest has the highest priority. An unfair lock ignores this principle.

There are pros and cons to both strategies. A fair strategy ensures that every thread competes for the lock fairly, but maintaining that fairness itself consumes resources: each thread requesting the lock is appended directly to the end of the queue, and only the thread at the head of the queue is eligible to use the lock; everyone else waits in line behind it.

So suppose thread A holds the lock and is running, thread B tries to acquire the lock, fails, and is blocked, and thread C also tries to acquire the lock, fails, and is blocked. Even if C needs only a very short run time, under the fair strategy it still has to wait for B to finish executing before it can acquire the lock and run.

Under the unfair strategy, when A finishes, the queue-head thread B is found and a context switch begins. If C comes along to compete for the lock at that moment, C is able to obtain it; assuming C executes quickly, it may already have finished by the time B is switched back in, so B can then acquire the lock without any problem. In effect, C's entire execution fits inside B's context switch. Clearly, CPU throughput is improved under the unfair strategy.

However, an unfair lock may cause some threads to starve and never run. Each strategy has pros and cons, and the trade-off must be made case by case. Fortunately, our explicit lock supports switching between the two modes. We will analyze the implementation details later.
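Choosing between the two modes is just a constructor argument, and ReentrantLock's isFair method reports which mode is in effect. A minimal sketch (the FairnessDemo class name and helper methods are my own illustration):

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {
    // The no-arg constructor produces an unfair lock.
    static boolean defaultIsFair() {
        return new ReentrantLock().isFair();
    }

    // Passing true selects the fair (first-come-first-served) strategy.
    static boolean explicitIsFair() {
        return new ReentrantLock(true).isFair();
    }

    public static void main(String[] args) {
        System.out.println(defaultIsFair());  // false
        System.out.println(explicitIsFair()); // true
    }
}
```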

The following three inner classes of ReentrantLock are important:

The inner class Sync inherits from AQS and overrides some of its methods; NonfairSync and FairSync are two subclasses of Sync, corresponding to the unfair lock and the fair lock respectively.

Why is it structured this way?

There is a lock method in the class Sync, and the lock method under the fair policy and the one under the unfair policy need different implementations, so instead of hard-coding one behavior, the method is left for the subclasses to implement.

This is actually a typical design pattern: the "template method" pattern.

We will analyze AQS in detail later; for now, view it as a container that records the thread currently holding the lock and all the threads blocked on that lock.

Then look at ReentrantLock, and you will find that the lock, lockInterruptibly, tryLock, and unlock methods all delegate to sync, that is, to the related methods in AQS.

Below we go into the source code and analyze the implementation of AQS.

The basic principle of AQS

AQS is AbstractQueuedSynchronizer. You can understand it as a container. It is an abstract class with a parent class, AbstractOwnableSynchronizer. The responsibility of this parent class is simple: it has a member property of type Thread, which AQS uses to record the thread currently holding the lock.

In addition, AQS defines a static inner class Node, the node of a doubly linked list data structure. AQS correspondingly holds two pointers: a queue head pointer and a tail pointer.

The attribute state, of type int, is also a very important member. A value of zero indicates that no thread holds the lock; a value of one indicates that a thread holds the lock and has not released it; a value greater than one indicates that the thread holding the lock has reentered it multiple times.

AQS defines many methods, public and private, which we will not enumerate here. Instead, let us start with ReentrantLock's lock and unlock and trace the methods they call, taking the unfair lock as an example.

public void lock() {
    sync.lock();
}

ReentrantLock's lock method directly calls the lock method of sync. As we said, the lock method declared in Sync is abstract and implemented in the subclasses; NonfairSync's lock method is implemented as follows:

final void lock() {
    if (compareAndSetState(0, 1))
        setExclusiveOwnerThread(Thread.currentThread());
    else
        acquire(1);
}

The logic is simple: try to CAS-update state to 1, indicating that the current thread is trying to occupy the lock. If this succeeds, the value of state was originally 0, that is, the lock was not occupied by any thread, so the current thread is saved in the thread field of the parent class.

If the update fails, the lock is already held and the current thread needs to be suspended, so the acquire method (defined in AQS) is called.

public final void acquire(int arg) {
    if (!tryAcquire(arg) &&
        acquireQueued(addWaiter(Node.EXCLUSIVE), arg))
        selfInterrupt();
}

tryAcquire is overridden by the subclass, so what is called here is NonfairSync's tryAcquire method:

protected final boolean tryAcquire(int acquires) {
    return nonfairTryAcquire(acquires);
}

final boolean nonfairTryAcquire(int acquires) {
    final Thread current = Thread.currentThread();
    int c = getState();
    if (c == 0) {
        if (compareAndSetState(0, acquires)) {
            setExclusiveOwnerThread(current);
            return true;
        }
    }
    else if (current == getExclusiveOwnerThread()) {
        int nextc = c + acquires;
        if (nextc < 0) // overflow
            throw new Error("Maximum lock count exceeded");
        setState(nextc);
        return true;
    }
    return false;
}

This code is not complex. The main logic: if state is zero, the thread that held the lock has just released it, so try to occupy the lock; otherwise, determine whether the thread holding the lock is the current thread, that is, whether this is a reentrant lock operation, and if so, increase the reentry count.

As for the return value: it returns true if occupying the lock succeeds or the reentry succeeds; otherwise it uniformly returns false.
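The reentry branch above is observable from user code: the owning thread can acquire the same lock repeatedly, and getHoldCount reflects the reentry count that nonfairTryAcquire accumulates in state. A small sketch (the ReentryDemo class name and method are my own illustration):

```java
import java.util.concurrent.locks.ReentrantLock;

public class ReentryDemo {
    public static int holdCountAfterThreeAcquires() {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();                 // first acquisition: state 0 -> 1
        lock.lock();                 // reentry by the owner: state 1 -> 2
        lock.lock();                 // reentry again: state 2 -> 3
        int holds = lock.getHoldCount();
        // release every hold before returning
        lock.unlock();
        lock.unlock();
        lock.unlock();
        return holds;
    }

    public static void main(String[] args) {
        System.out.println(holdCountAfterThreeAcquires()); // 3
    }
}
```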

Returning to the acquire method:

public final void acquire(int arg) {
    if (!tryAcquire(arg) &&
        acquireQueued(addWaiter(Node.EXCLUSIVE), arg))
        selfInterrupt();
}

If the tryAcquire method returns true, the outer acquire returns and the call to lock ends; otherwise, occupying the lock has failed and the current thread is prepared for blocking. Let us continue to analyze the specifics of that blocking.

The addWaiter method wraps the current thread into a Node and adds it to the end of the queue. Let's look at the source code:

private Node addWaiter(Node mode) {
    Node node = new Node(Thread.currentThread(), mode);
    Node pred = tail;
    if (pred != null) {
        node.prev = pred;
        if (compareAndSetTail(pred, node)) {
            pred.next = node;
            return node;
        }
    }
    enq(node);
    return node;
}

The code is simple, so we will not belabor it; this method eventually hangs the current thread's node at the end of the waiting queue.
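The essential trick in addWaiter is the lock-free tail append: CAS the tail pointer to the new node and retry on failure. A simplified sketch of that technique (my own toy Node and CasTailDemo classes, which omit AQS's sentinel head node and wait status):

```java
import java.util.concurrent.atomic.AtomicReference;

public class CasTailDemo {
    static final class Node {
        final String name;
        volatile Node prev, next;
        Node(String name) { this.name = name; }
    }

    // Mimics the spirit of addWaiter/enq: swing the tail atomically,
    // then fix up the predecessor's next link.
    static final AtomicReference<Node> tail = new AtomicReference<>();

    static Node enqueue(String name) {
        Node node = new Node(name);
        for (;;) {                                // retry loop, like enq()
            Node pred = tail.get();
            node.prev = pred;
            if (tail.compareAndSet(pred, node)) { // atomic tail swing
                if (pred != null) pred.next = node;
                return node;
            }
            // CAS failed: another thread appended first, so retry
        }
    }

    public static void main(String[] args) {
        enqueue("A");
        enqueue("B");
        System.out.println(tail.get().name);      // B
        System.out.println(tail.get().prev.name); // A
    }
}
```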

After being added to the waiting queue, control returns to the acquireQueued method, which makes a final attempt to acquire the lock; if that still fails, it calls a LockSupport method to suspend the thread.

The core logic of the whole method is written inside a dead loop. The first half of the loop body tries once more to acquire the lock. It is important to note that the node pointed to by head is not a valid waiting thread in the queue; the node pointed to by head's next pointer is the first valid waiting thread.

That is, if a node's predecessor is head, that node is the first legitimate successor to the lock.

If the attempt still fails, shouldParkAfterFailedAcquire is called first to determine whether the current thread should be blocked; in most cases it returns true, and in some special cases it returns false.

Then parkAndCheckInterrupt blocks the current thread directly by calling LockSupport's park method. The whole process of acquiring the lock is basically over; next we can look at unlocking.

public void unlock() {
    sync.release(1);
}

unlock calls AQS's release method:

public final boolean release(int arg) {
    if (tryRelease(arg)) {
        Node h = head;
        if (h != null && h.waitStatus != 0)
            unparkSuccessor(h);
        return true;
    }
    return false;
}

It first calls tryRelease to attempt the release:

protected final boolean tryRelease(int releases) {
    int c = getState() - releases;
    if (Thread.currentThread() != getExclusiveOwnerThread())
        throw new IllegalMonitorStateException();
    boolean free = false;
    if (c == 0) {
        free = true;
        setExclusiveOwnerThread(null);
    }
    setState(c);
    return free;
}

If the current thread is not the one that owns the lock, an exception is thrown directly; this is a necessary sanity check.

If c equals zero, there are no outstanding reentries of the lock, so the exclusiveOwnerThread field is cleared and the state is updated. The reason this code needs no synchronization logic is that the unlock method can only be called by the thread that owns the lock, so only one thread can be executing it at any given time.

If c is not equal to zero, that is, the current thread has reentered the lock multiple times, state is simply decremented and tryRelease returns false, which deserves attention. Going back to the release method:

public final boolean release(int arg) {
    if (tryRelease(arg)) {
        Node h = head;
        if (h != null && h.waitStatus != 0)
            unparkSuccessor(h);
        return true;
    }
    return false;
}

As you can see, if tryRelease returns false because the thread has reentered the lock multiple times, the result is that release also returns false.

In other words, however many times you reenter the lock, you need to call unlock that many times manually, and only the final release returns true. This is the principle behind reentrancy.
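This matching requirement can be observed with ReentrantLock's isLocked method: after a reentry, one unlock is not enough to free the lock. A short sketch (the UnlockCountDemo class name and method are my own illustration):

```java
import java.util.concurrent.locks.ReentrantLock;

public class UnlockCountDemo {
    public static boolean stillLockedAfterOneUnlock() {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();
        lock.lock();                 // reentered: hold count is now 2

        lock.unlock();               // hold count drops to 1, lock still held
        boolean heldAfterOne = lock.isLocked();

        lock.unlock();               // final matching unlock: state reaches 0
        return heldAfterOne && !lock.isLocked();
    }

    public static void main(String[] args) {
        System.out.println(stillLockedAfterOneUnlock()); // true
    }
}
```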

If the tryRelease call succeeds and returns true, the unparkSuccessor method unparks the thread corresponding to the first valid node of the queue. unparkSuccessor is relatively simple and involves no synchronization mechanism, so we will not go into it here.

In general, unlock is much simpler than lock, because unlock needs no synchronization mechanism: only the thread holding the lock can call it, so there is no concurrent access. The lock method is different; it faces a large number of threads calling it simultaneously.

We now go back to the acquireQueued method.

After the thread is awakened, execution resumes from the position where it was blocked, that is, the thread wakes up inside the parkAndCheckInterrupt method:

private final boolean parkAndCheckInterrupt() {
    LockSupport.park(this);  // execution resumes here after wake-up
    return Thread.interrupted();
}

The first thing it does is call the interrupted method, which determines whether the current thread was interrupted while it was blocked.

If an interruption did occur, acquireQueued records it in a flag and uses it as the method's return value. The awakened thread restarts from the top of the loop body and tries again to compete for the lock; only when its turn comes in the waiting queue does it get the chance to acquire the lock and return the interrupt flag.

So, blocking a thread inside a dead loop is one of the more common blocking patterns; it is designed so that, once awakened, the thread can compete again for the associated lock resource.
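Note that plain lock swallows the interrupt this way and keeps waiting, whereas lockInterruptibly aborts the wait with an InterruptedException. A small sketch of the interruptible variant (the InterruptDemo class name and method are my own illustration):

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.ReentrantLock;

public class InterruptDemo {
    public static boolean blockedThreadSeesInterrupt() throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        AtomicBoolean sawInterrupt = new AtomicBoolean(false);
        lock.lock();                          // main thread keeps the lock
        Thread waiter = new Thread(() -> {
            try {
                lock.lockInterruptibly();     // parks, but stays interruptible
                lock.unlock();
            } catch (InterruptedException e) {
                sawInterrupt.set(true);       // woke up via interrupt, not lock
            }
        });
        waiter.start();
        Thread.sleep(100);                    // let the waiter block on the lock
        waiter.interrupt();                   // cancel the waiting thread
        waiter.join(1000);
        lock.unlock();
        return sawInterrupt.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(blockedThreadSeesInterrupt()); // true
    }
}
```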

Above, we have completed the introduction to the locking and unlocking principles of ReentrantLock. ReentrantReadWriteLock, which combines a shared lock with an exclusive lock to separate reads from writes, is relatively more complex; we will analyze it separately in the next article.

As for ReentrantLock's other related methods, such as acquiring the lock while responding to interruption and the methods supporting timeout return, they all rely without exception on the principles introduced above, and I believe you now have the ability to work them out yourself.

Well, this article concludes here; we look forward to the next one.
