Java Theory and Practice: a more flexible and scalable locking mechanism in JDK 5.0

Source: Internet
Author: User
Tags: exception handling, finally block, thread, thread class, visibility, volatile

JDK 5.0 provides developers with a number of powerful new options for building high-performance concurrent applications. For example, the ReentrantLock class in java.util.concurrent.locks is a replacement for the synchronized feature of the Java language: it has the same memory semantics and the same locking behavior, but performs better under contention and offers features that synchronized does not. Does this mean we should forget about synchronized and use ReentrantLock instead? Concurrency expert Brian Goetz, just back from his summer vacation, provides us with an answer.

Multithreading and concurrency are not new, but one of the innovations of the Java language design is that it was the first mainstream language to integrate a cross-platform threading model and a formal memory model directly into the language. The core class library includes a Thread class for creating, starting, and manipulating threads, and the language itself includes constructs for communicating concurrency constraints across threads: synchronized and volatile. While this simplifies the development of platform-independent concurrent classes, it by no means makes writing concurrent classes trivial, only easier.

Synchronized Quick Review

Declaring a block of code as synchronized has two important consequences: atomicity and visibility. Atomicity means that only one thread at a time can execute code protected by a given monitor object (lock), which prevents multiple threads from colliding while updating shared state. Visibility is more subtle; it deals with the vagaries of memory caching and compiler optimizations. Ordinarily, threads are not required to see values written by other threads right away: a value may sit in a register or a processor-specific cache, or be hidden by instruction reordering and other compiler optimizations. But if the developer uses synchronization, as shown in the following code, the runtime ensures that the updates a thread makes to variables before exiting a synchronized block become visible to another thread as soon as it enters a synchronized block protected by the same monitor (lock). A similar rule applies to volatile variables. (For more on synchronization and the Java memory model, see Resources.)

synchronized (lockObject) {
 // update object state
}
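The visibility rule for volatile mentioned above works per variable, without a lock. As a minimal sketch of that guarantee (the class and field names here are illustrative, not from the original article):

public class ShutdownFlag {
    // A write to a volatile field is immediately visible to other threads,
    // but volatile gives no atomicity for compound actions like count++.
    private volatile boolean shutdownRequested = false;

    public void requestShutdown() {
        shutdownRequested = true;    // visible to the worker loop without a lock
    }

    public void runWorkLoop() {
        while (!shutdownRequested) {
            // do a unit of work
        }
    }
}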

So synchronization takes care of everything needed to safely update multiple shared variables: no race conditions, no corrupted data (provided the synchronization boundaries are placed correctly), and a guarantee that other correctly synchronized threads will see the most recent values of those variables. By defining a clear, cross-platform memory model (modified in JDK 5.0 to correct some errors in the original definition), it becomes possible to build "write once, run anywhere" concurrent classes by following this simple rule:

Whenever you write a variable that may subsequently be read by another thread, or read a variable that may have last been written by another thread, you must synchronize.
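As a hedged illustration of that rule (the class name below is made up for this example), both the write path and the read path of a shared counter must synchronize on the same monitor:

public class SynchronizedCounter {
    private int count = 0;

    // The write is guarded by the monitor on "this"...
    public synchronized void increment() {
        count++;
    }

    // ...and so is the read; if get() were not synchronized, another thread
    // could observe a stale value of count.
    public synchronized int get() {
        return count;
    }
}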

Better yet, in recent JVMs the performance cost of uncontended synchronization (when one thread owns the lock and no other thread is trying to acquire it) is quite low. (This was not always the case; synchronization in early JVMs was not yet optimized, which gave rise to the now-mistaken belief that synchronization, contended or not, always carries a heavy performance cost.)

Improving on synchronized

So synchronization sounds pretty good, right? Why, then, did the JSR 166 team spend so much time developing the java.util.concurrent.locks framework? The answer is simple: synchronization is good, but not perfect. It has some functional limitations: there is no way to interrupt a thread that is waiting to acquire a lock, no way to poll for a lock, and no way to give up if you do not want to wait. Synchronization also requires that a lock be released in the same stack frame in which it was acquired, which most of the time is the right thing (and interacts nicely with exception handling), but there are a handful of cases where non-block-structured locking is a better fit.

The ReentrantLock class

The Lock framework in java.util.concurrent.locks is an abstraction for locking that allows lock implementations to be written as Java classes rather than as a language feature. This leaves room for multiple implementations of Lock, which may differ in scheduling algorithm, performance characteristics, or locking semantics. The ReentrantLock class implements Lock and has the same concurrency and memory semantics as synchronized, but adds features such as lock polling, timed lock waits, and interruptible lock waits. In addition, it offers far better performance under heavy contention. (In other words, when many threads are all trying to access a shared resource, the JVM spends less time scheduling threads and more time executing them.)
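As a sketch of those extra capabilities (the class and method names below are illustrative, not from the article), ReentrantLock exposes tryLock() for polling, tryLock(long, TimeUnit) for timed waits, and lockInterruptibly() for interruptible waits:

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class PolledLockExample {
    private final ReentrantLock lock = new ReentrantLock();

    public boolean updateIfAvailable() throws InterruptedException {
        // Wait at most one second for the lock instead of blocking indefinitely,
        // something a synchronized block cannot do.
        if (lock.tryLock(1, TimeUnit.SECONDS)) {
            try {
                // update shared state
                return true;
            } finally {
                lock.unlock();
            }
        }
        return false;   // could not acquire the lock in time
    }
}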

What does it mean for a lock to be reentrant? Simply put, there is an acquisition count associated with the lock: if a thread that already holds the lock acquires it again, the count is incremented, and the lock must then be released twice before it is truly released. This mirrors the semantics of synchronized: if a thread enters a synchronized block protected by a monitor the thread already owns, it is allowed to proceed, and the lock is not released when the thread exits the second (or subsequent) synchronized block. The lock is released only when the thread exits the first synchronized block it entered for that monitor.
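A minimal sketch of that counting behavior (the class and method names are made up for this example):

import java.util.concurrent.locks.ReentrantLock;

public class ReentrancyExample {
    private final ReentrantLock lock = new ReentrantLock();

    public void outer() {
        lock.lock();                 // hold count becomes 1
        try {
            inner();                 // the owning thread may re-acquire the lock
        } finally {
            lock.unlock();           // hold count back to 0: lock truly released
        }
    }

    private void inner() {
        lock.lock();                 // hold count becomes 2
        try {
            // update shared state
        } finally {
            lock.unlock();           // hold count back to 1: lock still held by outer()
        }
    }
}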

Looking at the code example in Listing 1, you can see one obvious difference between Lock and synchronized: the lock must be released in a finally block. Otherwise, if the protected code throws an exception, the lock might never be released. This distinction may sound trivial, but it is extremely important. Forgetting to release a lock in a finally block plants a ticking time bomb in your program, and you will spend a lot of effort tracking down its source when the bomb finally goes off. With synchronization, the JVM ensures that the lock is released automatically.
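The listing itself is not reproduced here, but the idiom it illustrates is the standard acquire-then-try/finally pattern; a minimal sketch under that assumption (names are illustrative):

import java.util.concurrent.locks.ReentrantLock;

public class GuardedState {
    private final ReentrantLock lock = new ReentrantLock();

    public void update() {
        lock.lock();
        try {
            // update object state, which may throw
        } finally {
            // Unlike synchronized, the lock is NOT released automatically;
            // omitting this finally block would leak the lock on an exception.
            lock.unlock();
        }
    }
}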
