When reprinting, please indicate the source: http://blog.csdn.net/luonanqin
I have recently been studying the concurrency packages in the JDK, which roughly divide into three parts: thread management, lock operations, and atomic operations. Thread management comes up often in everyday work, but I had barely touched the lock and atomic operations beyond a few toy examples back in university. When I looked at ReentrantLock, its usage seemed as simple as synchronized, but its internals are far more complicated. Searching online for material on ReentrantLock, I found no analysis that was really thorough: a few articles describe the internal locking mechanism, but they are mostly prose, which makes it hard to follow the whole internal flow as a connected chain. I suspect many people, myself included, never truly understood this newer synchronization mechanism. So I spent a few days debugging and finally drew flowcharts for the most basic lock-unlock flow and for the await-signal mechanism of the Condition bound to a ReentrantLock. I have also annotated the harder-to-understand code with an analysis of its internal principles. I share this hoping it helps you grasp the implementation faster, and I welcome corrections.
For now I only examine the synchronization flow in the normal case; if a thread is interrupted, the flow becomes considerably more complex. So this first post explains the two flows without interruption; I may write up the interruption flows separately later, so as not to mislead anyone.
Before JDK 1.5, synchronization between threads was based on synchronized. synchronized is a Java keyword, interpreted directly by the JVM as instructions for thread synchronization management. It is simple to use, and later JDK versions have optimized it heavily, so it remains the everyday synchronization tool for multithreaded programs. Why, then, introduce a new synchronization API? The poor performance of synchronized around the time JDK 1.5 was released may have been one motivation for the concurrency package, but more importantly, the new API provides more flexible, finer-grained synchronization operations to satisfy different requirements. The trade-off is obvious: the more flexible and controllable a tool is, the easier it is to misuse, which is why most people rarely use the lock APIs in the concurrency package. This article does not weigh the pros and cons of the two; both have their place. Using the more advanced tool can sometimes improve productivity and speed up troubleshooting, but my goal here is to study its principles and design ideas; at work I will still choose between them carefully.
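To make the contrast concrete, here is a minimal sketch (class and method names are my own, for illustration only) of the same counter protected once by synchronized and once by ReentrantLock. The key practical difference is that with ReentrantLock the release is explicit, so unlock() must always sit in a finally block:

```java
import java.util.concurrent.locks.ReentrantLock;

public class CounterDemo {
    private int count = 0;
    private final ReentrantLock lock = new ReentrantLock();

    // synchronized: the JVM acquires and releases the monitor for us
    public synchronized void incrementWithMonitor() {
        count++;
    }

    // ReentrantLock: acquire/release is explicit, so unlock() must
    // always be placed in a finally block
    public void incrementWithLock() {
        lock.lock();
        try {
            count++;
        } finally {
            lock.unlock();
        }
    }

    public int getCount() {
        return count;
    }

    public static void main(String[] args) {
        CounterDemo c = new CounterDemo();
        c.incrementWithMonitor();
        c.incrementWithLock();
        System.out.println(c.getCount()); // prints 2
    }
}
```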
ReentrantLock class diagram:
AbstractOwnableSynchronizer holds and exposes the thread that exclusively owns the synchronizer. AbstractQueuedSynchronizer maintains a virtual queue that manages threads acquiring and releasing the lock, as well as thread interruption in various situations. It provides a default synchronization implementation, but leaves acquiring and releasing the lock as abstract methods, to be implemented by subclasses; the goal is to give developers the freedom to define how the lock is acquired and how it is released. Sync is ReentrantLock's internal abstract class, which implements simple lock acquisition and release. NonfairSync and FairSync represent the "unfair lock" and the "fair lock" respectively; both inherit from Sync and are inner classes of ReentrantLock. ReentrantLock implements the lock-unlock methods of the Lock interface and decides whether to use NonfairSync or FairSync according to its fairness parameter. There are two key elements here:
the Node class and the state field, both inside AbstractQueuedSynchronizer.

Node:
Node is the encapsulation of each thread that enters the synchronized code. It holds not only the thread to be synchronized but also that thread's state, such as waiting to be unblocked, waiting for a condition to wake it, cancelled, and so on. A Node also links to its predecessor and successor, via the prev and next fields. Personally, I think this centralizes the management of multiple threads in different states: when one thread changes state, the others can react as soon as possible, improving efficiency. For example, if a node's prev has been cancelled, that predecessor can be skipped when unblocking, and the node itself can then attempt to leave its blocked state.
Multiple Nodes linked together form a virtual queue (virtual because there is no real queue container holding the elements; I call it the release queue, meaning its nodes are waiting to be released). A queue needs a head and a tail. For a fair lock, head is a special node with no thread, only a next pointer; the thread that most recently failed to acquire the lock is appended at the end of the queue, the tail. For an unfair lock, a newly arriving thread may jump the queue, acquiring the lock ahead of waiting threads, or it may not.
A question may arise here: what is the head for? Why not make a thread waiting for the lock the head? The reason is simple: every waiting thread may be interrupted and cancelled, and a cancelled thread's node should naturally become eligible for GC. For its node to be collected, the successor node would have to become the new head, making setHead calls scattered all over the place, and setting a new head for every kind of thread state change would be far too cumbersome. So the trick is to make head a thread-less guide node whose next points at the first node waiting for the lock; because head holds no thread, it can never be in the "cancelled" state.
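The structure described above can be sketched as a standalone class. This is an illustrative simplification, not the JDK source: the field names mirror AQS's Node, but the class itself (SimpleNode) and its main method are mine:

```java
// Simplified sketch of AQS's internal Node; field names mirror the JDK,
// but this standalone class exists only for illustration
public class SimpleNode {
    static final int CANCELLED =  1;  // thread gave up (interrupted/timed out)
    static final int SIGNAL    = -1;  // successor needs unparking on release

    Thread thread;            // the waiting thread (null for the dummy head)
    volatile int waitStatus;  // 0 initially
    SimpleNode prev;          // predecessor in the virtual queue
    SimpleNode next;          // successor in the virtual queue

    SimpleNode(Thread thread) {
        this.thread = thread;
    }

    public static void main(String[] args) {
        // head is a dummy node with no thread, as described above
        SimpleNode head = new SimpleNode(null);
        SimpleNode tail = new SimpleNode(Thread.currentThread());
        head.next = tail;
        tail.prev = head;
        System.out.println(head.thread == null); // prints true
        System.out.println(tail.prev == head);   // prints true
    }
}
```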
state:

state records the hold count of the lock.
state is 0 when no thread holds the lock. When a thread acquires the lock, state increases by the value passed in; developers can customize this value, and the default is 1, indicating the lock is held by one thread. When the thread that already holds the lock acquires it again, state grows again: this is reentrancy. When the holding thread releases the lock, state is decreased by the value passed in at acquisition, again 1 by default. When multiple threads compete for the lock, state must be set via CAS to guarantee the lock can be held by only one thread at a time. This, of course, is the rule for exclusive locks; shared locks behave differently.

The only difference between the fair and the unfair lock is when they modify state. The following code makes this clear:
ReentrantLock.FairSync.class

    protected final boolean tryAcquire(int acquires) {
        final Thread current = Thread.currentThread();
        int c = getState();
        if (c == 0) {
            // This is the fair-lock implementation. The unfair lock does not
            // call hasQueuedPredecessors(), i.e. it does not check whether
            // the queue already contains waiting threads; it competes for
            // the lock directly by modifying state via CAS.
            if (!hasQueuedPredecessors() &&
                compareAndSetState(0, acquires)) {
                setExclusiveOwnerThread(current);
                return true;
            }
        }
        else if (current == getExclusiveOwnerThread()) {
            int nextc = c + acquires;
            if (nextc < 0)
                throw new Error("Maximum lock count exceeded");
            setState(nextc);
            return true;
        }
        return false;
    }
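The fairness choice and the reentrant state counting above can both be observed through ReentrantLock's public API. A small sketch (the class name FairnessDemo is mine, for illustration):

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {
    public static void main(String[] args) {
        // ReentrantLock(true) selects FairSync, whose tryAcquire is shown
        // above; the no-arg constructor selects NonfairSync
        ReentrantLock fair = new ReentrantLock(true);
        ReentrantLock unfair = new ReentrantLock();
        System.out.println(fair.isFair());   // prints true
        System.out.println(unfair.isFair()); // prints false

        // Reentrancy: each lock() bumps state, each unlock() decrements it
        unfair.lock();                             // state: 0 -> 1
        unfair.lock();                             // re-entry, state: 1 -> 2
        System.out.println(unfair.getHoldCount()); // prints 2
        unfair.unlock();                           // state: 2 -> 1
        unfair.unlock();                           // state: 1 -> 0
        System.out.println(unfair.isLocked());     // prints false
    }
}
```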
With the meaning of Node and state basically understood, a normal lock-unlock flow should be easy to follow. But many places in the code do special handling for threads that have been cancelled, which makes it harder to understand. Several articles have already annotated the source line by line, so I will not pull out every piece of code here for a detailed explanation.
Let's look at the lock() flow. (NODE0 and NODE1 in the diagram do not exist in the source code; I added them for convenience of explanation.)
As shown in the figure, the colored text marks CAS operations. The three red CAS operations must handle both success and failure appropriately; the blue CAS operations, by contrast, do not care whether the set succeeds. I explain why in the comments of the following code:
AbstractQueuedSynchronizer.class

    private static boolean shouldParkAfterFailedAcquire(Node pred, Node node) {
        int ws = pred.waitStatus;
        if (ws == Node.SIGNAL)
            /*
             * This node has already set status asking a release
             * to signal it, so it can safely park.
             */
            return true;
        if (ws > 0) {
            /*
             * Predecessor was cancelled. Skip over predecessors and
             * indicate retry.
             */
            do {
                node.prev = pred = pred.prev;
            } while (pred.waitStatus > 0);
            pred.next = node;
        } else {
            /*
             * waitStatus must be 0 or PROPAGATE. Indicate that we need a
             * signal, but don't park yet. Caller will need to retry to
             * make sure it cannot acquire before parking.
             */
            /*
             * Why set it without caring whether it succeeds?
             * If the CAS fails, the predecessor's waitStatus has already
             * changed, i.e. it has been signalled; if the predecessor is
             * head, the current thread has a chance to acquire the lock,
             * so returning false lets it call tryAcquire again.
             * If the CAS succeeds, the predecessor is now marked as having
             * to signal its successor. On the next call, confirming that
             * pred.waitStatus is still Node.SIGNAL means the predecessor
             * is guaranteed to unpark the current thread when it releases
             * the lock, so after returning true it is safe to park.
             */
            compareAndSetWaitStatus(pred, ws, Node.SIGNAL);
        }
        return false;
    }
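The "park" that happens after this method returns true is built on LockSupport, which AQS uses as its blocking primitive. A minimal sketch of that primitive (class name ParkDemo is mine): park() blocks the calling thread until a permit is available, and unpark(t) grants thread t its permit, just as the releasing thread does for its successor in the queue:

```java
import java.util.concurrent.locks.LockSupport;

public class ParkDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(() -> {
            // park() blocks until another thread unparks us, just as a
            // queued thread parks once shouldParkAfterFailedAcquire
            // returns true
            LockSupport.park();
            System.out.println("unparked");
        });
        waiter.start();
        Thread.sleep(100);          // give the waiter time to reach park()
        LockSupport.unpark(waiter); // the release side grants the permit
        waiter.join();
    }
}
```

Note that unpark may also be called before park; the permit is then consumed immediately and park returns without blocking, which is why the ordering races inside AQS are safe.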
Now let's look at the unlock() flowchart:
As shown here, the pink polyline corresponds to the pink dashed polyline in the lock flowchart: thread A calls lock and blocks, and thread B calls unlock to unblock thread A. You can also see that unlock has only one CAS operation, and it does not care whether the set succeeds. I explain this in the comments of the following code:
AbstractQueuedSynchronizer.class

    private void unparkSuccessor(Node node) {
        /*
         * If status is negative (i.e., possibly needing signal) try
         * to clear in anticipation of signalling. It is OK if this
         * fails or if the status is changed by waiting thread.
         */
        int ws = node.waitStatus;
        /*
         * Why set it without caring whether it succeeds?
         * Note that the node here is actually head.
         * If the CAS succeeds, i.e. head.waitStatus becomes 0, then a
         * thread that is about to block gets a chance to call tryAcquire
         * again: the compareAndSetWaitStatus(pred, ws, Node.SIGNAL) inside
         * shouldParkAfterFailedAcquire fails, that method returns false,
         * and the thread gets another tryAcquire.
         * If the CAS fails, the thread behind head has already parked, but
         * that doesn't matter: the code below unparks it immediately.
         */
        if (ws < 0)
            compareAndSetWaitStatus(node, ws, 0);
        /*
         * Thread to unpark is held in successor, which is normally
         * just the next node. But if cancelled or apparently null,
         * traverse backwards from tail to find the actual
         * non-cancelled successor.
         */
        Node s = node.next;
        if (s == null || s.waitStatus > 0) {
            s = null;
            for (Node t = tail; t != null && t != node; t = t.prev)
                if (t.waitStatus <= 0)
                    s = t;
        }
        if (s != null)
            LockSupport.unpark(s.thread);
    }
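The whole block-then-unblock interaction described by the pink polyline can be reproduced with the public API alone. In this sketch (class name UnblockDemo is mine) the main thread holds the lock, a second thread parks in the queue, and unlock() triggers unparkSuccessor to wake it:

```java
import java.util.concurrent.locks.ReentrantLock;

public class UnblockDemo {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();                  // main thread holds the lock

        Thread waiter = new Thread(() -> {
            lock.lock();              // blocks: enqueued and parked
            try {
                System.out.println("waiter acquired the lock");
            } finally {
                lock.unlock();
            }
        });
        waiter.start();
        Thread.sleep(100);            // let the waiter enqueue and park

        // almost always true after the sleep: the waiter sits in the queue
        System.out.println(lock.hasQueuedThreads());
        lock.unlock();                // unparkSuccessor wakes the waiter
        waiter.join();
    }
}
```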
The two pieces of code above are, in my view, the hardest to understand. They carry English comments, but those comments do not explain why the code does what it does; I only figured it out after repeated debugging. I have to say the code is clever: it uses CAS operations as much as possible to reduce the chance of blocking, giving threads more opportunities to acquire the lock. After all, parking a thread is a kernel operation, and its overhead is not small.
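The reason CAS can stand in for blocking is visible in miniature with AtomicInteger (this sketch, with my own class name CasDemo, uses compareAndSet the same way AQS uses compareAndSetState): a failed CAS simply returns false, so the caller can retry or take another path instead of parking in the kernel:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger state = new AtomicInteger(0);

        // A CAS succeeds only when the current value equals the expected
        // one; on failure the caller just observes false and can retry,
        // no blocking involved
        boolean first  = state.compareAndSet(0, 1); // expects 0: succeeds
        boolean second = state.compareAndSet(0, 1); // expects 0: fails
        System.out.println(first);       // prints true
        System.out.println(second);      // prints false
        System.out.println(state.get()); // prints 1
    }
}
```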
This article covers only the ordinary lock-unlock flow; the next one will cover wait-notify, that is, the detailed await-signal flow of Condition.
Resources:
Spin lock, ticket spin lock, MCS lock, CLH lock: http://coderbee.net/index.php/concurrent/20131115/577/comment-page-1
Inside the JVM lock mechanism, part 1 - synchronized: http://blog.csdn.net/chen77716/article/details/6618779
Inside the JVM lock mechanism, part 2 - Lock: http://blog.csdn.net/chen77716/article/details/6641477
Comparison of the two locking mechanisms, ReentrantLock and synchronized: http://blog.csdn.net/fw0124/article/details/6672522
Introduction to lock optimizations in the virtual machine (adaptive spinning / lock coarsening / lock elision / lightweight locks / biased locks): http://icyfenix.iteye.com/blog/1018932, drawing on "Understanding the Java Virtual Machine: JVM Advanced Features and Best Practices"