Implementation of Lock: ReentrantLock Details

Summary

The Lock implementation is written entirely in Java code; for atomic state updates it ultimately relies on hardware-level CPU instructions (CAS). The underlying layer uses the LockSupport class and the Unsafe class for its operations;

Although there are many lock implementations, they all rely on the AbstractQueuedSynchronizer (AQS) class. We will use ReentrantLock to explain it;
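As a quick refresher before diving into the internals, here is a minimal usage sketch of ReentrantLock (a hypothetical CounterDemo class, using the standard lock/try/finally idiom):

import java.util.concurrent.locks.ReentrantLock;

public class CounterDemo {
    private final ReentrantLock lock = new ReentrantLock(); // non-fair by default
    private int count;

    public void increment() {
        lock.lock();          // delegates to Sync, which extends AQS
        try {
            count++;          // critical section
        } finally {
            lock.unlock();    // always release in a finally block
        }
    }
}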

ReentrantLock Call Process

API calls on the ReentrantLock class are delegated to an internal class, Sync, which extends the AbstractQueuedSynchronizer class;

public class ReentrantLock implements Lock, java.io.Serializable {
    ......
    abstract static class Sync extends AbstractQueuedSynchronizer {
        ......

Sync has two subclasses: a fair lock (FairSync) and a non-fair lock (NonfairSync). The default is the non-fair lock.

/**
 * Sync object for non-fair locks
 */
static final class NonfairSync extends Sync {

/**
 * Sync object for fair locks
 */
static final class FairSync extends Sync {
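The choice between the two is made in the constructors. Abridged from the JDK sources, they look roughly like this (comments added):

// The no-arg constructor picks the non-fair variant;
// passing true requests the fair variant.
public ReentrantLock() {
    sync = new NonfairSync();
}

public ReentrantLock(boolean fair) {
    sync = fair ? new FairSync() : new NonfairSync();
}

So new ReentrantLock(true) gives you a fair lock, while the default is non-fair.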

The call process of lock() is as follows (it involves the ReentrantLock class, the abstract Sync class, the AbstractQueuedSynchronizer class, and the NonfairSync class; these classes make full use of the template method pattern, which is quite awesome):

The original article shows a class dependency diagram and a lock call diagram at this point (not reproduced here).
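To make the depicted call chain concrete, here is an abridged sketch of the non-fair lock() path, based on the JDK 8 sources (comments added):

// ReentrantLock.lock() simply delegates to sync.lock()
public void lock() {
    sync.lock();
}

// NonfairSync.lock(): try to barge in with a single CAS first;
// otherwise fall back to the AQS template method acquire(1)
final void lock() {
    if (compareAndSetState(0, 1))
        setExclusiveOwnerThread(Thread.currentThread());
    else
        acquire(1);
}

// AbstractQueuedSynchronizer.acquire(): ties together tryAcquire,
// addWaiter and acquireQueued, which are analyzed one by one below
public final void acquire(int arg) {
    if (!tryAcquire(arg) &&
        acquireQueued(addWaiter(Node.EXCLUSIVE), arg))
        selfInterrupt();
}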

Lock API details

Let's analyze it call by call, from the bottom up.

nonfairTryAcquire
/**
 * Performs non-fair tryLock.  tryAcquire is implemented in
 * subclasses, but both need nonfair try for trylock method.
 */
final boolean nonfairTryAcquire(int acquires) {
    final Thread current = Thread.currentThread();
    int c = getState();
    if (c == 0) {
        if (compareAndSetState(0, acquires)) {
            setExclusiveOwnerThread(current);
            return true;
        }
    }
    else if (current == getExclusiveOwnerThread()) {
        int nextc = c + acquires;
        if (nextc < 0) // overflow
            throw new Error("Maximum lock count exceeded");
        setState(nextc);
        return true;
    }
    return false;
}

Let's walk through this code. First, it reads the current state (initialized to 0). When state equals 0, no thread holds the lock, so it tries to change the state via CAS (implemented underneath by compareAndSwapInt) and, on success, sets the current thread as the exclusive owner; any other thread simply returns false. When the owning thread re-enters, state is no longer 0, but no CAS is needed because the thread already holds the lock; it just increments the count with setState. This is what makes the lock reentrant: repeated acquisitions by the owner succeed cheaply, somewhat like a lock biased towards that thread;
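A quick illustration of the reentrant counting behaviour described above (getHoldCount exposes the state value held by the current thread):

import java.util.concurrent.locks.ReentrantLock;

public class ReentrancyDemo {
    public static void main(String[] args) {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();                               // state: 0 -> 1 (CAS)
        lock.lock();                               // state: 1 -> 2 (same owner, no CAS)
        System.out.println(lock.getHoldCount());   // prints 2
        lock.unlock();                             // state: 2 -> 1
        lock.unlock();                             // state: 1 -> 0, lock released
    }
}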

addWaiter

This method is entered only when the lock is held by one thread and another thread requests to obtain the lock.

/**
 * Creates and enqueues node for current thread and given mode.
 *
 * @param mode Node.EXCLUSIVE for exclusive, Node.SHARED for shared
 * @return the new node
 */
private Node addWaiter(Node mode) {
    Node node = new Node(Thread.currentThread(), mode);
    // Try the fast path of enq; backup to full enq on failure
    Node pred = tail;
    if (pred != null) {
        node.prev = pred;
        if (compareAndSetTail(pred, node)) {
            pred.next = node;
            return node;
        }
    }
    enq(node);
    return node;
}

First, the thread that failed to acquire the lock enters this method. The CLH queue used here (named after the initials of its three inventors: Craig, Landin, and Hagersten) is actually a linked list.

To put it simply, the CLH queue is composed of Node objects. Each Node is in one of two modes, indicated by mode: shared mode or exclusive mode. Each Node also maintains a state field, waitStatus, whose possible values are shown below.
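These constants are defined in the AQS Node class; abridged from the JDK 8 sources:

static final class Node {
    /** waitStatus value to indicate thread has cancelled */
    static final int CANCELLED =  1;
    /** waitStatus value to indicate successor's thread needs unparking */
    static final int SIGNAL    = -1;
    /** waitStatus value to indicate thread is waiting on condition */
    static final int CONDITION = -2;
    /** waitStatus value to indicate the next acquireShared should propagate */
    static final int PROPAGATE = -3;
    // a waitStatus of 0 is the initial state, meaning none of the above
}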

First, a new Node is created with mode Node.EXCLUSIVE (whose value is actually null), i.e., an exclusive lock;

Then:

If the queue already has nodes, that is, tail != null: the new node's predecessor (prev) is set to tail, then tail is swung to the new node via CAS, the predecessor's next pointer is set to the new node, and the new node is returned;

If the queue is empty or the CAS fails, enq is used to join the queue:

/**
 * Inserts node into queue, initializing if necessary. See picture above.
 * @param node the node to insert
 * @return node's predecessor
 */
private Node enq(final Node node) {
    for (;;) {
        Node t = tail;
        if (t == null) { // Must initialize
            if (compareAndSetHead(new Node()))
                tail = head;
        } else {
            node.prev = t;
            if (compareAndSetTail(t, node)) {
                t.next = node;
                return t;
            }
        }
    }
}

When enqueuing, there are two cases: if the queue is empty, a dummy head node is first created via CAS and tail is pointed at it, after which the loop appends the node behind it; otherwise the node is appended directly at the tail via CAS. If a CAS fails, the for (;;) loop simply retries until it succeeds, so even in high-concurrency scenarios the enqueue is guaranteed to complete; addWaiter then returns the wrapped node;

acquireQueued
/**
 * Acquires in exclusive uninterruptible mode for thread already in
 * queue. Used by condition wait methods as well as acquire.
 *
 * @param node the node
 * @param arg the acquire argument
 * @return {@code true} if interrupted while waiting
 */
final boolean acquireQueued(final Node node, int arg) {
    boolean failed = true;
    try {
        boolean interrupted = false;
        for (;;) {
            final Node p = node.predecessor();
            if (p == head && tryAcquire(arg)) {
                setHead(node);
                p.next = null; // help GC
                failed = false;
                return interrupted;
            }
            if (shouldParkAfterFailedAcquire(p, node) &&
                parkAndCheckInterrupt())
                interrupted = true;
        }
    } finally {
        if (failed)
            cancelAcquire(node);
    }
}

The main job of this method is to block nodes that have already been placed in the CLH queue. We can see that if the current node's predecessor is the head node and the lock is acquired successfully, the method returns directly with no need to block;

If the predecessor is not the head node, or the lock acquisition fails, it determines whether the node should be blocked:

/**
 * Checks and updates status for a node that failed to acquire.
 * Returns true if thread should block. This is the main signal
 * control in all acquire loops.  Requires that pred == node.prev.
 *
 * @param pred node's predecessor holding status
 * @param node the node
 * @return {@code true} if thread should block
 */
private static boolean shouldParkAfterFailedAcquire(Node pred, Node node) {
    int ws = pred.waitStatus;
    if (ws == Node.SIGNAL)
        /*
         * This node has already set status asking a release
         * to signal it, so it can safely park.
         */
        return true;
    if (ws > 0) {
        /*
         * Predecessor was cancelled. Skip over predecessors and
         * indicate retry.
         */
        do {
            node.prev = pred = pred.prev;
        } while (pred.waitStatus > 0);
        pred.next = node;
    } else {
        /*
         * waitStatus must be 0 or PROPAGATE.  Indicate that we
         * need a signal, but don't park yet.  Caller will need to
         * retry to make sure it cannot acquire before parking.
         */
        compareAndSetWaitStatus(pred, ws, Node.SIGNAL);
    }
    return false;
}

This code checks the status of the node's predecessor. If the predecessor is in the SIGNAL status, true is returned, indicating that the current node can safely enter the blocked state;

Otherwise, cancelled predecessors are skipped, or the predecessor's status is CAS-set to SIGNAL; on the next iteration of the for loop in acquireQueued, the thread then enters parkAndCheckInterrupt and parks:

/**
 * Convenience method to park and then check if interrupted
 *
 * @return {@code true} if interrupted
 */
private final boolean parkAndCheckInterrupt() {
    LockSupport.park(this);
    return Thread.interrupted();
}

At this time, the thread is handed over to the operating system kernel for blocking;
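As a standalone illustration of the park/unpark primitive that AQS builds on, here is a small hypothetical ParkDemo (not from the article):

import java.util.concurrent.locks.LockSupport;

public class ParkDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            System.out.println("worker: parking");
            LockSupport.park();              // blocks until a permit is available
            System.out.println("worker: unparked, continuing");
        });
        worker.start();

        Thread.sleep(1000);                  // give the worker time to park
        LockSupport.unpark(worker);          // hand the worker a permit, waking it
        worker.join();
    }
}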

In general, acquireQueued relies on the status of the predecessor node to decide whether the current thread should block; if predecessors are in the CANCELLED status, those nodes are discarded and the queue links are rebuilt;

Unlock API details

The classes involved are the same as for the lock API, so here we only cover the main code; unlock is relatively simple.

First, ReentrantLock calls the release method of Sync, that is, the release method of AbstractQueuedSynchronizer.
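Abridged from the JDK sources, the unlock() entry point looks roughly like this:

// ReentrantLock.unlock() delegates straight to AQS.release(1)
public void unlock() {
    sync.release(1);
}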

 

/**
 * Releases in exclusive mode.  Implemented by unblocking one or
 * more threads if {@link #tryRelease} returns true.
 * This method can be used to implement method {@link Lock#unlock}.
 *
 * @param arg the release argument.  This value is conveyed to
 *        {@link #tryRelease} but is otherwise uninterpreted and
 *        can represent anything you like.
 * @return the value returned from {@link #tryRelease}
 */
public final boolean release(int arg) {
    if (tryRelease(arg)) {
        Node h = head;
        if (h != null && h.waitStatus != 0)
            unparkSuccessor(h);
        return true;
    }
    return false;
}

At this time, the tryRelease of Sync will be called first. If true is returned, the lock is released successfully.

 

protected final boolean tryRelease(int releases) {
    int c = getState() - releases;
    if (Thread.currentThread() != getExclusiveOwnerThread())
        throw new IllegalMonitorStateException();
    boolean free = false;
    if (c == 0) {
        free = true;
        setExclusiveOwnerThread(null);
    }
    setState(c);
    return free;
}

This method is very simple. If the calling thread is not the thread that acquired the lock, an IllegalMonitorStateException is thrown directly. Otherwise, if state - releases == 0, the lock has been completely released, so the owner is cleared and true is returned; if the count is still positive (a reentrant hold remains), only the state is updated and false is returned;
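A small hypothetical demo of both branches, using the reentrant counting from earlier:

import java.util.concurrent.locks.ReentrantLock;

public class ReleaseDemo {
    public static void main(String[] args) {
        ReentrantLock lock = new ReentrantLock();

        lock.lock();
        lock.lock();                           // state == 2
        lock.unlock();                         // 2 -> 1, tryRelease returns false
        System.out.println(lock.isLocked());   // true: still held by this thread
        lock.unlock();                         // 1 -> 0, tryRelease returns true
        System.out.println(lock.isLocked());   // false

        try {
            lock.unlock();                     // we no longer own the lock
        } catch (IllegalMonitorStateException e) {
            System.out.println("cannot unlock a lock you do not hold");
        }
    }
}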

After tryRelease returns true, release reads the head node and, if its waitStatus is not 0, enters the following code:

/**
 * Wakes up node's successor, if one exists.
 *
 * @param node the node
 */
private void unparkSuccessor(Node node) {
    /*
     * If status is negative (i.e., possibly needing signal) try
     * to clear in anticipation of signalling.  It is OK if this
     * fails or if status is changed by waiting thread.
     */
    int ws = node.waitStatus;
    if (ws < 0)
        compareAndSetWaitStatus(node, ws, 0);

    /*
     * Thread to unpark is held in successor, which is normally
     * just the next node.  But if cancelled or apparently null,
     * traverse backwards from tail to find the actual
     * non-cancelled successor.
     */
    Node s = node.next;
    if (s == null || s.waitStatus > 0) {
        s = null;
        for (Node t = tail; t != null && t != node; t = t.prev)
            if (t.waitStatus <= 0)
                s = t;
    }
    if (s != null)
        LockSupport.unpark(s.thread);
}

What this function does: if the head node's status is negative, it is CAS-reset to 0; then the head's next node is taken as the candidate to wake up. If that node is null or does not meet the requirements (it has been cancelled), the whole list is traversed backwards from the tail to find the non-cancelled node closest to the head;

The node found this way is then woken up (unparked), that is, it gains the right to compete for the lock;

This is the characteristic of a non-fair lock: a thread already waiting in the queue does not necessarily obtain the lock before a thread that arrives later, because a newcomer can barge in with a CAS before the awakened thread wins the race. That concludes the explanation of unlock;
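As a closing note on fairness: the fair version differs only in that it refuses to barge when the queue already has waiters. Abridged from the JDK 8 sources, FairSync.tryAcquire looks roughly like this (comments added):

// Identical to nonfairTryAcquire except for the hasQueuedPredecessors()
// check, which prevents barging ahead of threads already queued.
protected final boolean tryAcquire(int acquires) {
    final Thread current = Thread.currentThread();
    int c = getState();
    if (c == 0) {
        if (!hasQueuedPredecessors() &&
            compareAndSetState(0, acquires)) {
            setExclusiveOwnerThread(current);
            return true;
        }
    }
    else if (current == getExclusiveOwnerThread()) {
        int nextc = c + acquires;
        if (nextc < 0)
            throw new Error("Maximum lock count exceeded");
        setState(nextc);
        return true;
    }
    return false;
}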

 
