JDK 5.0 gives developers some powerful new options for building high-performance concurrent applications. For example, the ReentrantLock class in java.util.concurrent.locks is intended as a replacement for the synchronized feature in Java: it has the same memory and locking semantics, but offers better performance under contention, along with other features that synchronized does not provide. Does this mean we should forget about synchronized and use only ReentrantLock instead? Concurrency expert Brian Goetz, just back from his summer vacation, gives us the answer.
Multithreading and concurrency are nothing new, but one of the innovations of the Java language design was that it was the first mainstream language to build a cross-platform thread model and a formal memory model directly into the language. The core class library contains a Thread class for constructing, starting, and manipulating threads, and the language includes constructs for communicating concurrency constraints across threads: synchronized and volatile. While this simplified the development of platform-independent concurrent classes, it did not make writing concurrent classes trivial; it only made it easier.
A quick review of synchronized
Declaring a block of code as synchronized has two important consequences: atomicity and visibility. Atomicity means that only one thread at a time can execute code protected by a given monitor object (lock), which prevents multiple threads from colliding with each other when updating shared state. Visibility is more subtle; it deals with the vagaries of memory caching and compiler optimizations. Ordinarily, threads are free to cache the values of variables in a way that is not necessarily immediately visible to other threads (whether in registers, in processor-specific caches, or through instruction reordering and other compiler optimizations). But if the developer uses synchronization, as in the code below, the runtime guarantees that updates to variables made by one thread before it exits a synchronized block become immediately visible to another thread when that thread enters a synchronized block protected by the same monitor (lock). A similar rule exists for volatile variables.
synchronized (lockObject) {
    // update object state
}
Synchronization therefore takes care of everything needed to reliably update multiple shared variables: there are no race conditions and data cannot be corrupted (provided the synchronization boundaries are in the right places), and other threads that synchronize properly are guaranteed to see the most up-to-date values of those variables. By defining a clear, cross-platform memory model (which was modified in JDK 5.0 to fix some errors in the original definition), it becomes possible to build "write once, run anywhere" concurrent classes by following this simple rule:
Whenever you write a variable that may next be read by another thread, or read a variable that may last have been written by another thread, you must synchronize.
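For example, a counter shared between threads needs synchronization on both the write path and the read path. Here is a minimal sketch of the rule in action (the class and method names are illustrative, not from the original article):

public class SharedCounter {
    private int count = 0; // shared state, guarded by "this"

    // Writer threads: the update must be synchronized...
    public synchronized void increment() {
        count++;
    }

    // ...and so must the read, or a reader may see a stale value
    public synchronized int get() {
        return count;
    }
}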
Synchronizing this often may sound costly, but the good news is that on recent JVMs, uncontended synchronization (when one thread owns the lock and no other thread is trying to acquire it) is quite cheap. (This was not always the case; synchronization in early JVMs was not yet optimized, which gave rise to the belief, then valid but now a misconception, that synchronization carries a high performance cost whether contended or not.)
Improving on synchronized
So synchronization seems pretty good, right? Why, then, did the JSR 166 team spend so much time developing the java.util.concurrent.locks framework? The answer is simple: synchronization is good, but not perfect. It has some functional limitations. It is not possible to interrupt a thread that is waiting to acquire a lock, nor to poll for a lock or try to acquire it without being willing to wait forever for it. Synchronization also requires that a lock be released in the same stack frame in which it was acquired. Most of the time this is fine (and it interacts nicely with exception handling), but there are cases where non-block-structured locking is more appropriate.
ReentrantLock class
The Lock framework in java.util.concurrent.locks is an abstraction of locking that allows a lock to be implemented as a Java class rather than as a language feature. This leaves room for multiple implementations of Lock, which may have different scheduling algorithms, performance characteristics, or locking semantics. The ReentrantLock class implements Lock and has the same concurrency and memory semantics as synchronized, but it adds features such as lock polling, timed lock waits, and interruptible lock waits. It also performs better under heavy contention. (In other words, when many threads are trying to access a shared resource, the JVM can spend less time scheduling threads and more time executing them.)
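The sketch below illustrates those extra capabilities: lock polling, a timed lock wait, and an interruptible lock wait. The surrounding class and the doWork() method are hypothetical, but tryLock(), tryLock(timeout, unit), and lockInterruptibly() are part of the Lock interface:

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class LockFeatures {
    private final Lock lock = new ReentrantLock();

    // Lock polling: give up immediately if the lock is not available
    public boolean tryUpdate() {
        if (lock.tryLock()) {
            try {
                doWork();
                return true;
            } finally {
                lock.unlock();
            }
        }
        return false; // could not acquire the lock right now
    }

    // Timed lock wait: wait at most one second for the lock
    public boolean timedUpdate() throws InterruptedException {
        if (lock.tryLock(1, TimeUnit.SECONDS)) {
            try {
                doWork();
                return true;
            } finally {
                lock.unlock();
            }
        }
        return false; // gave up after the timeout
    }

    // Interruptible lock wait: another thread may interrupt us while we wait
    public void interruptibleUpdate() throws InterruptedException {
        lock.lockInterruptibly();
        try {
            doWork();
        } finally {
            lock.unlock();
        }
    }

    private void doWork() { /* update object state */ }
}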
What does it mean for a lock to be reentrant? Simply put, there is an acquisition count associated with the lock. If a thread that already holds the lock acquires it again, the count is incremented, and the lock must then be released twice before it is truly released. This mirrors the semantics of synchronized: if a thread enters a synchronized block protected by a monitor the thread already owns, the thread is allowed to proceed, and the lock is not released when the thread exits the second (or subsequent) synchronized block, but only when it exits the first synchronized block protected by that monitor.
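A minimal sketch of that reentrant behavior (the method names are illustrative): the same thread acquires the lock twice, and the lock is only truly released after the matching second unlock().

import java.util.concurrent.locks.ReentrantLock;

public class ReentrancyExample {
    private final ReentrantLock lock = new ReentrantLock();

    public void outer() {
        lock.lock();           // hold count becomes 1
        try {
            inner();           // re-acquiring a lock we already hold is allowed
        } finally {
            lock.unlock();     // hold count back to 0: the lock is really released here
        }
    }

    private void inner() {
        lock.lock();           // hold count becomes 2
        try {
            // update object state
        } finally {
            lock.unlock();     // hold count back to 1: still held by this thread
        }
    }
}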
Looking at the sample code in Listing 1, you can see one obvious difference between Lock and synchronized: the lock must be released in a finally block. Otherwise, if the protected code threw an exception, the lock might never be released. This difference may seem insignificant, but in fact it is extremely important. Forgetting to release the lock in a finally block leaves a time bomb in the program, and when it eventually goes off, tracking down the source takes a great deal of effort. With synchronization, the JVM ensures that the lock is released automatically.
Listing 1: Using ReentrantLock to protect code blocks
Lock lock = new ReentrantLock();
lock.lock();
try {
    // update object state
} finally {
    lock.unlock();
}
In addition, the ReentrantLock implementation is far more scalable under contention than the current synchronized implementation. (It is likely that the contended performance of synchronized will improve in future JVM versions.) This means that when many threads are contending for the same lock, the total overhead is usually much lower with ReentrantLock than with synchronized.
Comparing the scalability of ReentrantLock and synchronized
Tim Peierls constructed a simple benchmark for measuring the relative scalability of synchronized and Lock, using a simple linear congruential pseudorandom number generator (PRNG). This is a good example because the PRNG actually does some work each time nextRandom() is called, so the benchmark measures a reasonable, real-world application of synchronized and Lock, rather than code that exists only on paper or does no work at all (as many so-called benchmarks do).
The benchmark has a PseudoRandom interface with a single method, nextRandom(int bound). This interface is very similar to the java.util.Random class. Because the PRNG uses the most recently generated number as input when producing the next one, and keeps that last number as an instance variable, it is important that the code segment updating this state not be preempted by another thread, so some form of locking is used to ensure this. (The java.util.Random class does the same.) We built two implementations of PseudoRandom: one using synchronized and one using java.util.concurrent.locks.ReentrantLock. The driver spawns a large number of threads, each of which hammers away at the generator, and then calculates how many calls per second each version can execute. Figures 1 and 2 summarize the results for different numbers of threads. This benchmark is not perfect, and it was run on only two systems (a dual hyperthreaded Xeon running Linux and a single-processor Windows system), but it should be sufficient to show the scalability advantage that ReentrantLock has over synchronized.
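Before looking at the results, here is a hedged sketch of what the PseudoRandom interface and the two implementations described above might look like. The class names and PRNG constants are illustrative, not taken from the actual benchmark; the point is that both variants guard the same piece of mutable state, one with synchronized and one with a ReentrantLock.

import java.util.concurrent.locks.ReentrantLock;

public interface PseudoRandom {
    int nextRandom(int bound);
}

// Variant 1: guard the PRNG state with synchronized
class SynchronizedPseudoRandom implements PseudoRandom {
    private int seed = 42; // last generated value, the shared state

    public synchronized int nextRandom(int bound) {
        // linear congruential step (constants are illustrative)
        seed = (seed * 1103515245 + 12345) & 0x7fffffff;
        return seed % bound;
    }
}

// Variant 2: guard the same state with a ReentrantLock
class LockPseudoRandom implements PseudoRandom {
    private final ReentrantLock lock = new ReentrantLock();
    private int seed = 42;

    public int nextRandom(int bound) {
        lock.lock();
        try {
            seed = (seed * 1103515245 + 12345) & 0x7fffffff;
            return seed % bound;
        } finally {
            lock.unlock();
        }
    }
}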
Figure 1. synchronized and Lock throughput, single CPU
Figure 2. synchronized and Lock throughput (normalized), 4 CPUs
The charts in Figures 1 and 2 show throughput in calls per second, normalized to the single-threaded synchronized case. Each implementation converges relatively quickly on a steady-state throughput, which typically requires that the processor be fully utilized, spending most of its time doing actual work (computing random numbers) and only a small fraction on thread scheduling overhead. You will notice that the synchronized version degrades considerably in the face of any kind of contention, while the Lock version spends far less time on scheduling overhead, leaving room for higher throughput and more effective CPU utilization.
Condition variables
The root class Object contains some special methods for communication between threads: wait(), notify(), and notifyAll(). These are advanced concurrency features that many developers never use, which may be just as well, because they are quite subtle and easy to use incorrectly. Fortunately, with the introduction of java.util.concurrent in JDK 5.0, there are very few situations left where developers need to use these methods.
There is an interaction between notification and locking: to wait or notify on an object, you must hold that object's lock. Just as Lock is a generalization of synchronization, the Lock framework includes a generalization of wait and notify, called Condition. A Lock object acts as a factory for condition variables bound to that lock, and unlike the standard wait and notify methods, a given Lock can have more than one condition variable associated with it. This simplifies the development of many concurrent algorithms. For example, the Javadoc for Condition shows an example of a bounded buffer implementation that uses two condition variables, "not full" and "not empty", which is more readable (and more efficient) than an implementation with only a single wait set per lock. The Condition methods analogous to wait, notify, and notifyAll are named await, signal, and signalAll, respectively.
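The bounded-buffer example in the Condition Javadoc is along the following lines; this is a simplified sketch (the buffer size and names are illustrative) using two condition variables created from a single lock, so that putters wait only for "not full" and takers wait only for "not empty":

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class BoundedBuffer<T> {
    private final Lock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();   // signaled when space frees up
    private final Condition notEmpty = lock.newCondition();  // signaled when an item arrives

    private final Object[] items = new Object[16];
    private int putIndex, takeIndex, count;

    public void put(T item) throws InterruptedException {
        lock.lock();
        try {
            while (count == items.length)
                notFull.await();          // wait only on the "not full" condition
            items[putIndex] = item;
            putIndex = (putIndex + 1) % items.length;
            count++;
            notEmpty.signal();            // wake one waiting taker, not every waiter
        } finally {
            lock.unlock();
        }
    }

    @SuppressWarnings("unchecked")
    public T take() throws InterruptedException {
        lock.lock();
        try {
            while (count == 0)
                notEmpty.await();         // wait only on the "not empty" condition
            T item = (T) items[takeIndex];
            items[takeIndex] = null;
            takeIndex = (takeIndex + 1) % items.length;
            count--;
            notFull.signal();             // wake one waiting putter
            return item;
        } finally {
            lock.unlock();
        }
    }
}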