Original: http://www.ibm.com/developerworks/cn/java/j-jtp10264/index.html
Multithreading and concurrency are nothing new, but one of the innovations of the Java language design was that it was the first mainstream language to integrate a cross-platform threading model and a formal memory model directly into the language. The core class library contains a Thread class that can be used to create, start, and manipulate threads, and the language itself includes constructs for communicating concurrency constraints across threads: synchronized and volatile. While this simplifies the development of platform-independent concurrent classes, it by no means makes writing concurrent classes trivial, only easier.
A quick review of synchronized
Declaring a block of code as synchronized has two important consequences, usually summarized as atomicity and visibility. Atomicity means that only one thread at a time can execute code protected by a given monitor object (lock), which prevents multiple threads from colliding when they update shared state. Visibility is more subtle; it deals with the vagaries of memory caching and compiler optimizations. Ordinarily, threads are under no obligation to make the values of cached variables immediately visible to other threads (whether those values are held in registers, in processor-specific caches, or are reordered by the compiler). But if the developer uses synchronization, as in the code below, the runtime guarantees that updates made to variables by one thread before it exits a synchronized block are immediately visible to another thread when it enters a block protected by the same monitor (lock). Similar rules exist for volatile variables.
synchronized (lockObject) {
    // update object state
}
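The volatile rule mentioned above can be illustrated with a minimal sketch, assuming a simple stop flag shared between a worker thread and a controlling thread:

public class Worker implements Runnable {
    // volatile guarantees that a write by one thread is promptly visible to others;
    // it does not make compound actions such as count++ atomic.
    private volatile boolean stopRequested = false;

    public void requestStop() {
        stopRequested = true;        // no lock needed for the visibility guarantee
    }

    public void run() {
        while (!stopRequested) {
            // perform one unit of work
        }
    }
}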
Synchronization, then, takes care of everything needed to safely update multiple shared variables: there are no race conditions, the data cannot be corrupted (assuming the synchronization boundaries are in the right places), and other correctly synchronized threads will see the most up-to-date values of those variables. With a well-defined, cross-platform memory model (which was revised in JDK 5.0 to correct some errors in the original definition), it is possible to build "write once, run anywhere" concurrent classes, as long as you follow this one simple rule:

Whenever you are writing a variable that may next be read by another thread, or reading a variable that may have last been written by another thread, you must synchronize.
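As a minimal sketch of this rule, assuming a trivially simple shared counter, both the write and the read are guarded by the same lock:

public class SharedCounter {
    private int count = 0;                 // shared state, only touched while holding the lock

    // Synchronized because another thread may read count next.
    public synchronized void increment() {
        count++;
    }

    // Synchronized because count may have last been written by another thread.
    public synchronized int get() {
        return count;
    }
}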
Things are a little better now: in recent JVMs, the performance cost of uncontended synchronization (when one thread owns the lock and no other thread is trying to acquire it) is quite low. (This was not always true; synchronization in early JVMs was not yet optimized, which gave rise to the now-mistaken belief that synchronization carries a high performance cost whether it is contended or not.)
Improving on synchronized
So synchronization seems pretty good, right? Why, then, did the JSR 166 group spend so much time developing the java.util.concurrent.locks framework? The answer is simple: synchronization is good, but not perfect. It has some functional limitations: it is not possible to interrupt a thread that is waiting to acquire a lock, it is not possible to poll for a lock, and it is not possible to attempt to acquire a lock without being willing to wait for it. Synchronization also requires that a lock be released in the same stack frame in which it was acquired; most of the time this is fine (and interacts nicely with exception handling), but there are cases where non-block-structured locking is more appropriate.
The ReentrantLock class
The Lock framework in java.util.concurrent.locks is an abstraction of locking that allows lock implementations to be provided as Java classes rather than as a language feature. This leaves room for multiple implementations of Lock, which may have different scheduling algorithms, performance characteristics, or locking semantics. The ReentrantLock class implements Lock with the same concurrency and memory semantics as synchronized, but it adds features such as lock polling, timed lock waits, and interruptible lock waits. In addition, it offers far better performance under heavy contention. (In other words, when many threads are all trying to access a shared resource, the JVM can spend less time scheduling threads and more time executing them.)
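A minimal sketch of those three extra capabilities (lock polling, timed lock waits, and interruptible lock waits), assuming a hypothetical class that guards some shared state with a ReentrantLock:

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class LockFeatures {
    private final Lock lock = new ReentrantLock();

    // Lock polling: return immediately instead of blocking if the lock is busy.
    public boolean tryUpdate() {
        if (lock.tryLock()) {
            try {
                // update shared state
                return true;
            } finally {
                lock.unlock();
            }
        }
        return false;                      // someone else holds the lock; do something else
    }

    // Timed lock wait: give up after at most one second.
    public boolean timedUpdate() throws InterruptedException {
        if (lock.tryLock(1, TimeUnit.SECONDS)) {
            try {
                // update shared state
                return true;
            } finally {
                lock.unlock();
            }
        }
        return false;
    }

    // Interruptible lock wait: block for the lock, but stay responsive to interruption.
    public void interruptibleUpdate() throws InterruptedException {
        lock.lockInterruptibly();
        try {
            // update shared state
        } finally {
            lock.unlock();
        }
    }
}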
What does it mean for a lock to be reentrant? Simply put, the lock has an acquisition counter associated with it: if a thread that already owns the lock acquires it again, the acquisition count is incremented, and the lock then has to be released twice before it is truly released. This mirrors the semantics of synchronized: if a thread enters a synchronized block protected by a monitor the thread already owns, the thread is allowed to proceed, and the lock is not released when the thread exits the second (or any subsequent) synchronized block. The lock is released only when the thread exits the first synchronized block it entered that is protected by that monitor.
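A minimal sketch of the acquisition counter in action (the class and method names here are hypothetical; getHoldCount() is a real ReentrantLock method):

import java.util.concurrent.locks.ReentrantLock;

public class ReentrancyDemo {
    private final ReentrantLock lock = new ReentrantLock();

    public void outer() {
        lock.lock();                                    // hold count: 1
        try {
            inner();
            System.out.println(lock.getHoldCount());   // prints 1 again
        } finally {
            lock.unlock();                              // hold count: 0, lock truly released
        }
    }

    private void inner() {
        lock.lock();                                    // same thread re-acquires; hold count: 2
        try {
            System.out.println(lock.getHoldCount());   // prints 2
        } finally {
            lock.unlock();                              // hold count: 1, lock still held
        }
    }
}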
Looking at the code example in Listing 1, you can see one obvious difference between Lock and synchronization: the lock must be released in a finally block. Otherwise, if the protected code throws an exception, the lock might never be released! This difference may sound trivial, but it is in fact extremely important. Forgetting to release the lock in a finally block plants a ticking time bomb in the program, and you will spend a great deal of effort tracking down its source when it finally goes off. With synchronization, the JVM guarantees that the lock is released automatically.
Listing 1. Protecting a block of code with ReentrantLock
Lock lock = new ReentrantLock();
lock.lock();
try {
    // update object state
} finally {
    lock.unlock();
}
In addition, the ReentrantLock implementation is far more scalable under contention than the current synchronized implementation. (It is likely that the contended performance of synchronized will improve in a future JVM release.) This means that when many threads are contending for the same lock, the total overhead with ReentrantLock is usually much lower than with synchronized.
Comparing the scalability of ReentrantLock and synchronized
Tim Peierls constructed a simple benchmark using a simple linear congruential pseudorandom number generator (PRNG) to measure the relative scalability of synchronized and Lock. This example is a good one because the PRNG does some real work each time nextRandom() is called, so the benchmark measures a reasonable, real-world use of synchronized and Lock, rather than code that exists purely on paper or does nothing at all (as many so-called benchmarks do).
In this benchmark there is a PseudoRandom interface with a single method, nextRandom(int bound). The interface is very similar in function to the java.util.Random class. Because the PRNG uses the most recently generated number as the input when producing the next one, and keeps the last generated number as an instance variable, it is essential that the code updating this state not be preempted by another thread, so some form of locking is needed to guarantee that. (The java.util.Random class guarantees this as well.) Two implementations of PseudoRandom were built: one using synchronized and the other using java.util.concurrent.locks.ReentrantLock. The driver spawns a number of threads, each of which contends madly for timeslices, and then measures how many iterations per second each version can complete. Figures 1 and 2 summarize the results for different numbers of threads. The benchmark is not perfect, and it was run on only two systems (a dual Xeon running hyperthreaded Linux and a single-processor Windows system), but it should be enough to show the scalability advantage of ReentrantLock over synchronized.
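The article does not reproduce the benchmark source, but based on this description the two PseudoRandom implementations would look roughly like the sketch below (the linear congruential constants and class names are illustrative guesses, not the actual benchmark code):

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

interface PseudoRandom {
    int nextRandom(int bound);
}

// Version guarded by built-in synchronization.
class SyncPseudoRandom implements PseudoRandom {
    private int seed;

    SyncPseudoRandom(int seed) { this.seed = seed; }

    public synchronized int nextRandom(int bound) {
        // Linear congruential step: the previous value feeds the next one, so the
        // read-compute-write sequence must not be interleaved between threads.
        seed = seed * 1103515245 + 12345;
        return Math.abs(seed % bound);
    }
}

// Version guarded by ReentrantLock.
class LockPseudoRandom implements PseudoRandom {
    private final Lock lock = new ReentrantLock();
    private int seed;

    LockPseudoRandom(int seed) { this.seed = seed; }

    public int nextRandom(int bound) {
        lock.lock();
        try {
            seed = seed * 1103515245 + 12345;
            return Math.abs(seed % bound);
        } finally {
            lock.unlock();
        }
    }
}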
The graphs in Figures 1 and 2 show throughput in calls per second, with each implementation normalized to the 1-thread synchronized case. Each implementation converges relatively quickly on a steady-state throughput, which usually means the processor is fully utilized, spending most of its time doing real work (computing random numbers) and only a small fraction on thread-scheduling overhead. You will notice that the synchronized version degrades quite badly in the face of any kind of contention, whereas the Lock version spends far less of its time on scheduling overhead, leaving room for higher throughput and more effective CPU utilization.
Condition variables
The root class Object contains some special methods, wait(), notify(), and notifyAll(), for communication between threads. These are advanced concurrency features that many developers have never used, which may be just as well, because they are quite subtle and easy to use incorrectly. Fortunately, with the introduction of java.util.concurrent in JDK 5.0, there is almost nowhere developers still need to use these methods.
There is an interaction between notification and locking: in order to wait on or notify an object, you must hold that object's lock. Just as Lock is a generalization of synchronization, the Lock framework includes a generalization of wait and notify called Condition. A Lock object acts as a factory for condition variables bound to that lock, and unlike the standard wait and notify methods, a given Lock can have more than one condition variable associated with it. This simplifies the development of many concurrent algorithms. For example, the Javadoc for Condition shows an example of a bounded buffer implementation that uses two condition variables, "not full" and "not empty", which is more readable (and more efficient) than an implementation where each lock has only one wait set. The Condition methods analogous to wait, notify, and notifyAll are named await, signal, and signalAll, because they cannot override the corresponding methods of Object.
It's not fair
If you look at the Javadoc, you will see that one of the constructor arguments to ReentrantLock is a boolean that lets you choose between a fair lock and an unfair one. A fair lock lets threads acquire the lock in the order in which they requested it, whereas an unfair lock permits barging: a thread can sometimes acquire the lock ahead of another thread that asked for it first.

Why wouldn't we make all locks fair? After all, fairness is good and unfairness is bad, isn't it? (Whenever children want a decision made, they cry "That's not fair." We think fairness matters, and children know it too.) In reality, the fairness guarantee makes a lock very robust but carries a substantial performance cost. The bookkeeping and synchronization required to ensure fairness mean that a contended fair lock has much lower throughput than an unfair one. Fairness should be left at its default of false, and a fair lock requested only when it is genuinely critical to your algorithm that threads be serviced in exactly the order in which they queued up.
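The choice is made in the constructor; a brief sketch:

import java.util.concurrent.locks.ReentrantLock;

public class FairnessChoice {
    // Default constructor: an unfair ("barging") lock, usually the better-performing choice.
    private final ReentrantLock defaultLock = new ReentrantLock();   // same as new ReentrantLock(false)

    // Passing true requests a fair lock: waiting threads acquire it in FIFO order,
    // at a measurable throughput cost under contention.
    private final ReentrantLock fairLock = new ReentrantLock(true);
}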
So what about synchronization? Are the built-in monitor locks fair? The answer surprises many people: they are unfair, and always have been. But no one complains about thread starvation, because the JVM guarantees that all threads will eventually get the lock they are waiting for. Ensuring statistical fairness is, for most situations, good enough, and it costs far less than an absolute fairness guarantee. So the fact that ReentrantLock is "unfair" by default simply makes explicit something that has always been true of synchronization. If you do not mind it with synchronization, do not worry about it with ReentrantLock either.
Figures 3 and 4 contain the same data as Figures 1 and 2, with one additional dataset for the random-number benchmark, this time using a fair lock instead of the default barging lock. As you can see, fairness has a price. Pay it if you need it, but do not make it your default choice.
Good everywhere?
It sounds as if ReentrantLock is better than synchronized in every way: everything synchronized can do, it can do; it has the same memory and concurrency semantics as synchronized; it has features synchronized does not have; and it performs better under load. So should we simply forget synchronized, stop treating it as a perfectly good idiom that has already been optimized, and use ReentrantLock everywhere we would otherwise have used synchronized? In fact, several introductory books on Java programming take this approach in their multithreading chapters, using Lock for all the examples and mentioning synchronized only as history. But I think that is taking a good thing too far.
Don't abandon synchronized yet
Although ReentrantLock is a very impressive implementation with some significant advantages over synchronized, I think it would be a serious mistake to rush to treat synchronized as obsolete. The lock classes in java.util.concurrent.locks are a tool for advanced users and advanced situations. In general, you should stick with synchronized unless you have a concrete need for one of Lock's advanced features, or you have clear evidence (not merely a suspicion) that synchronization has become a scalability bottleneck in a particular situation.
Why do I advocate conservatism in adopting an apparently "better" implementation? Because compared with java.util.concurrent.locks, synchronization still has advantages of its own. For one thing, with synchronized you cannot forget to release the lock; the JVM does it for you when you exit the synchronized block. With Lock, it is all too easy to forget the finally block.
Another reason is that when the JVM manages lock acquisition and release through synchronized, it can include locking information when generating a thread dump. This is extremely valuable for debugging, because it can identify the source of deadlocks or other anomalous behavior. The Lock classes are just ordinary classes, and the JVM does not know which thread owns a Lock object. Moreover, nearly every developer is familiar with synchronized, and it works in every version of the JVM. Until JDK 5.0 becomes the standard (which may be two years from now), using the Lock classes means relying on features that are not available in every JVM and not familiar to every developer.
When should you choose ReentrantLock over synchronized?
So when should you use ReentrantLock? The answer is simple: use it when you actually need something synchronized does not provide, such as timed lock waits, interruptible lock waits, non-block-structured locking, multiple condition variables, or lock polling. ReentrantLock also has scalability benefits, and you should use it if you genuinely have a situation of high contention; but remember that the vast majority of synchronized blocks are hardly ever contended, so high contention can be set aside for most code. I would suggest developing with synchronized until synchronization has actually proven to be inadequate, rather than simply assuming that "the performance will be better" with ReentrantLock. Remember, these are advanced tools for advanced users. (And truly advanced users tend to prefer the simplest tools they can find until they are convinced the simple tools do not work.) As always, make it right first, and then worry about whether you need to make it faster.
The Lock framework is a compatible replacement for synchronization that offers many features synchronized does not provide, and its implementations deliver better performance under contention. However, these obvious advantages are not sufficient reason to always prefer ReentrantLock over synchronized. Instead, make the choice on the basis of whether you actually need ReentrantLock's power. In the great majority of cases you should not choose it: synchronized works well, works on all JVMs, is understood by more developers, and is less error-prone. Reach for Lock only when you genuinely need it. In those cases, you will be glad you have the tool.