[Repost] Two locking mechanisms in Java: ReentrantLock and synchronized

Source: Internet
Author: User
Tags: benchmark, finally block

Multithreading and concurrency are nothing new, but one of the innovations of Java language design was that it was the first mainstream language to integrate a cross-platform threading model and a formal memory model directly into the language. The core class library contains a Thread class that can be used to build, start, and manipulate threads, and the Java language includes constructs for communicating concurrency constraints across threads: synchronized and volatile. While this simplifies the development of platform-independent concurrent classes, it by no means makes writing them trivial; it only makes it easier.

A quick review of synchronized

Declaring a block of code synchronized has two important consequences, usually summarized as atomicity and visibility. Atomicity means that only one thread at a time can execute code protected by a given monitor object (lock), which prevents multiple threads from conflicting when they update shared state. Visibility is more subtle; it deals with the quirks of memory caching and compiler optimization. Ordinarily there is no guarantee that values written by one thread will be immediately visible to other threads (they may sit in registers or processor-specific caches, or be affected by instruction reordering and other compiler optimizations), but if the developer uses synchronization, as in the code below, the runtime ensures that updates made by one thread before it leaves a synchronized block are visible to another thread as soon as that thread enters a block protected by the same monitor (lock). A similar rule applies to volatile variables. (See Resources for more on synchronization and the Java memory model.)

synchronized (lockObject) {
    // update object state
}

So, as long as you synchronize everywhere you need to when updating multiple shared variables, you get no race conditions, the data cannot be corrupted (assuming the synchronization boundaries are drawn correctly), and other correctly synchronized threads will see the most recent values of those variables. Thanks to a well-defined, cross-platform memory model (revised in JDK 5.0 to correct some errors in the original definition), it is possible to build "write once, run anywhere" concurrent classes by following this simple rule:

Whenever you write a variable that might next be read by another thread, or read a variable that might have last been written by another thread, you must synchronize.
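
As a concrete illustration of this rule, here is a minimal sketch (the class and member names are my own, for illustration only) in which both the write path and the read path synchronize on the same lock, so a value written by one thread is guaranteed to be visible to the next thread that reads it:

public class SharedValue {
    private int value;

    // Writer thread: releasing the lock publishes the new value.
    public synchronized void setValue(int newValue) {
        value = newValue;
    }

    // Reader thread: acquiring the same lock guarantees that the most
    // recently written value is visible here.
    public synchronized int getValue() {
        return value;
    }
}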

Things are a bit better now: in recent JVMs, the performance cost of uncontended synchronization (where the thread owning the lock is the only one trying to acquire it) is quite low. (This was not always true; synchronization was not optimized in early JVMs, which is why many people came to believe, and mistakenly still believe, that synchronization carries a high performance cost whether or not it is contended.)

Improvements over synchronized

So synchronization seems pretty good, right? Then why did the JSR 166 team spend so much time developing the java.util.concurrent.locks framework? The answer is simple: synchronization is good, but not perfect. It has some functional limitations: you cannot interrupt a thread that is waiting to acquire a lock, you cannot acquire a lock by polling, and you cannot try to acquire a lock without being willing to wait for it. Synchronization also requires that a lock be released in the same stack frame in which it was acquired. Most of the time this is fine (and it interacts nicely with exception handling), but there are situations where non-block-structured locking is more appropriate.

The ReentrantLock class

In the java.util.concurrent.locks framework, Lock is an abstraction that allows locking to be implemented as an ordinary Java class rather than as a language feature. This leaves room for multiple implementations of Lock with different scheduling algorithms, performance characteristics, or locking semantics. The ReentrantLock class implements Lock and has the same concurrency and memory semantics as synchronized, but adds features such as lock polling, timed lock waits, and interruptible lock waits. It also offers better performance under heavy contention. (In other words, when many threads are all trying to access a shared resource, the JVM can spend less time scheduling threads and more time actually running them.)
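
To make those extra capabilities concrete, here is a minimal sketch (the class and method names are my own, not from the article) of the three acquisition styles that synchronized cannot express: polling, timed waits, and interruptible waits.

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class AcquisitionStyles {
    private final Lock lock = new ReentrantLock();

    // Polling: take the lock only if it is free right now.
    public boolean tryUpdate() {
        if (lock.tryLock()) {
            try {
                // update shared state
                return true;
            } finally {
                lock.unlock();
            }
        }
        return false; // do something else instead of blocking
    }

    // Timed wait: give up if the lock is not available within one second.
    public boolean timedUpdate() throws InterruptedException {
        if (lock.tryLock(1, TimeUnit.SECONDS)) {
            try {
                // update shared state
                return true;
            } finally {
                lock.unlock();
            }
        }
        return false;
    }

    // Interruptible wait: block for the lock, but respond to interruption.
    public void interruptibleUpdate() throws InterruptedException {
        lock.lockInterruptibly();
        try {
            // update shared state
        } finally {
            lock.unlock();
        }
    }
}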

What does a reentrant lock mean? Simply put, the lock has an acquisition count associated with it: if a thread that already owns the lock acquires it again, the count is incremented, and the lock must then be released twice before it is truly released. This mirrors the semantics of synchronized: if a thread enters a synchronized block protected by a monitor it already owns, it is allowed to proceed, and the lock is not released when the thread exits the second (or any subsequent) synchronized block; it is released only when the thread exits the first synchronized block it entered for that monitor.
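
A small sketch of what reentrancy means in practice (the class and method names are illustrative): the hold count rises to 2 inside the nested acquisition, and the lock is only truly released after both unlock() calls.

import java.util.concurrent.locks.ReentrantLock;

public class ReentrancyDemo {
    private final ReentrantLock lock = new ReentrantLock();

    public void outer() {
        lock.lock();           // hold count becomes 1
        try {
            inner();           // re-acquire the lock we already own
        } finally {
            lock.unlock();     // hold count back to 0: lock actually released
        }
    }

    private void inner() {
        lock.lock();           // hold count becomes 2; no deadlock
        try {
            System.out.println(lock.getHoldCount()); // prints 2
        } finally {
            lock.unlock();     // hold count back to 1; lock still held
        }
    }
}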

Looking at the code example in Listing 1, you can see one obvious difference between Lock and synchronized: the lock must be released in a finally block. Otherwise, if the protected code throws an exception, the lock might never be released. This difference may sound trivial, but it is extremely important. Forgetting to release the lock in a finally block plants a time bomb in your program, and when the bomb finally goes off, tracking down the source takes a great deal of effort. With synchronization, the JVM ensures that the lock is released automatically.

Listing 1. Protecting a block of code with ReentrantLock
Lock lock = new ReentrantLock();
lock.lock();
try {
    // update object state
} finally {
    lock.unlock();
}

In addition, ReentrantLock's implementation is far more scalable under contention than the current synchronized implementation. (It is likely that the contended performance of synchronized will improve in a future JVM version.) This means that when many threads are all contending for the same lock, the total overhead is usually much lower with ReentrantLock than with synchronized.

Comparing the scalability of ReentrantLock and synchronized

Tim Peierls constructed a simple benchmark using a simple linear congruential pseudorandom number generator (PRNG) to measure the relative scalability of synchronized and Lock. This is a good example because the PRNG does some real work each time nextRandom() is called, so the benchmark measures a reasonable, real-world application of synchronized and Lock rather than purely theoretical or do-nothing code (like many so-called benchmarks).

The benchmark uses a PseudoRandom interface with a single method, nextRandom(int bound). The interface is very similar in functionality to the java.util.Random class. Because each random number is produced from the previously generated one, the PRNG keeps the last number it generated as an instance variable, and the important point is that the code updating this state must not be preempted by other threads, so some form of locking is needed to guard it. (The java.util.Random class does the same thing.) Two implementations of PseudoRandom were built: one using synchronized and one using java.util.concurrent.locks.ReentrantLock. The driver spawns a number of threads, each of which contends madly for time slices, and then measures how many rounds per second each version completes. Figures 1 and 2 summarize the results for different numbers of threads. The benchmark is not perfect, and it was run on only two systems (a dual hyperthreaded Xeon running Linux and a single-processor Windows machine), but it should be enough to show the scalability advantage that ReentrantLock has over synchronized.
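
The article does not reproduce the benchmark code, but the two implementations might look roughly like the sketch below (the linear congruential constants and class names are mine, not the benchmark's); both protect the update of the seed, one with synchronized and one with ReentrantLock.

import java.util.concurrent.locks.ReentrantLock;

public interface PseudoRandom {
    int nextRandom(int bound);
}

// Illustrative only: the PRNG guarded by synchronized.
class SynchronizedPseudoRandom implements PseudoRandom {
    private int seed = 42;

    public synchronized int nextRandom(int bound) {
        seed = seed * 1103515245 + 12345;   // linear congruential step
        return Math.abs(seed % bound);
    }
}

// Illustrative only: the same PRNG guarded by a ReentrantLock.
class LockPseudoRandom implements PseudoRandom {
    private final ReentrantLock lock = new ReentrantLock();
    private int seed = 42;

    public int nextRandom(int bound) {
        lock.lock();
        try {
            seed = seed * 1103515245 + 12345;
            return Math.abs(seed % bound);
        } finally {
            lock.unlock();
        }
    }
}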

Figure 1. Synchronized and Lock throughput rates, single CPU

Figure 2. Throughput rate of synchronized and Lock (after normalization), 4 CPUs

The graphs in Figures 1 and 2 show throughput in calls per second, normalized so that the 1-thread synchronized case equals 1. Each implementation settles fairly quickly into a steady-state throughput, which typically means the processor is fully utilized: most of the processor time is spent doing the real work (computing random numbers) and only a small fraction on thread scheduling overhead. You will notice that the synchronized version degrades badly in the presence of any kind of contention, whereas the Lock version spends far less time on scheduling overhead, leaving room for higher throughput and more efficient use of the CPU.

Condition variables

The root class Object contains some special methods, wait(), notify(), and notifyAll(), for communicating between threads. These are advanced concurrency features that many developers never use, which may be just as well, because they are quite subtle and easy to use incorrectly. Fortunately, with the introduction of java.util.concurrent in JDK 5.0, there is almost no situation left where developers need to use these methods directly.

There is an interaction between notification and locking: to wait on or notify an object, you must hold that object's lock. Just as Lock is a generalization of synchronization, the Lock framework includes a generalization of wait and notify called Condition. A Lock object acts as a factory for condition variables bound to that lock, and unlike the standard wait and notify methods, a given Lock can have more than one condition variable associated with it. This simplifies the development of many concurrent algorithms. For example, the Javadoc for Condition shows an example of a bounded buffer implementation that uses two condition variables, "not full" and "not empty", which is more readable (and more efficient) than an implementation with only one wait set per lock. Condition's methods are similar to wait, notify, and notifyAll; they are named await, signal, and signalAll because they cannot override the corresponding methods of Object.
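
A condensed version of that bounded-buffer pattern (simplified from the idea described in the Condition Javadoc, not copied from it) might look like this; producers wait only on notFull and consumers wait only on notEmpty, so a signal never wakes threads that cannot make progress.

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class BoundedBuffer<E> {
    private final Lock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();   // producers wait here when full
    private final Condition notEmpty = lock.newCondition();  // consumers wait here when empty

    private final Object[] items = new Object[16];
    private int putIndex, takeIndex, count;

    public void put(E e) throws InterruptedException {
        lock.lock();
        try {
            while (count == items.length)
                notFull.await();                 // blocks producers only
            items[putIndex] = e;
            putIndex = (putIndex + 1) % items.length;
            count++;
            notEmpty.signal();                   // wake one waiting consumer
        } finally {
            lock.unlock();
        }
    }

    @SuppressWarnings("unchecked")
    public E take() throws InterruptedException {
        lock.lock();
        try {
            while (count == 0)
                notEmpty.await();                // blocks consumers only
            E e = (E) items[takeIndex];
            items[takeIndex] = null;
            takeIndex = (takeIndex + 1) % items.length;
            count--;
            notFull.signal();                    // wake one waiting producer
            return e;
        } finally {
            lock.unlock();
        }
    }
}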

It's not fair

If you look at the Javadoc, you will see that one of the constructor parameters of ReentrantLock is a boolean that lets you choose whether you want a fair lock or an unfair lock. A fair lock lets threads acquire the lock in the order in which they requested it, whereas an unfair lock allows barging, in which case a thread can sometimes acquire the lock ahead of another thread that requested it earlier.
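
Choosing between the two is just a constructor argument; for example (the field names are my own):

import java.util.concurrent.locks.ReentrantLock;

public class LockFairness {
    // The default constructor gives an unfair lock, which permits barging.
    private final ReentrantLock unfairLock = new ReentrantLock();

    // Passing true requests a fair lock: waiting threads acquire it
    // roughly in the order in which they asked for it.
    private final ReentrantLock fairLock = new ReentrantLock(true);
}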

Why not make every lock fair? After all, fairness is good and unfairness is bad, isn't it? (Whenever children want a decision made, they shout "that's not fair." We think fairness is pretty important, and they know it.) In reality, though, a guarantee of strict fairness comes at a significant performance cost. The bookkeeping and synchronization needed to ensure fairness mean that a contended fair lock has much lower throughput than an unfair one. You should leave fairness at its default of false unless it is critical to your algorithm that threads be serviced strictly in the order in which they queued up.

So what about synchronization? Are the built-in monitor locks fair? The answer surprises many people: they are unfair, and always have been. Yet nobody complains about thread starvation, because the JVM guarantees that all threads will eventually get the lock they are waiting for. Ensuring statistical fairness is, for most purposes, good enough, and it costs far less than an absolute fairness guarantee. So the fact that ReentrantLock is "unfair" by default merely makes explicit something that has always been true of synchronization. If you did not mind it there, do not worry about it with ReentrantLock.

Figures 3 and 4 contain the same data as Figures 1 and 2, with one additional data set for the random-number benchmark, this time using a fair lock instead of the default unfair one. As you can see, fairness has a price. Pay it if you need it, but do not make it your default choice.

Figure 3. Relative throughput of synchronized, unfair lock, and fair lock with 4 CPUs

Figure 4. Relative throughput of synchronized, unfair lock, and fair lock with 1 CPU

Better in every way?

It might seem that ReentrantLock is better than synchronized in every way: it can do everything synchronized can, it has the same memory and concurrency semantics, it has features synchronized lacks, and it performs better under load. So should we just forget about synchronized, writing it off as a good idea that has since been improved upon, or even rewrite our existing synchronized code to use ReentrantLock? In fact, several introductory books on Java programming take this approach in their multithreading chapters, using Lock exclusively for their examples and treating synchronized as history. But I think that is taking a good thing too far.

But don't abandon synchronized yet

Although ReentrantLock is a very appealing implementation and has some important advantages over synchronized, I think rushing to treat synchronized as obsolete would be a serious mistake. The locking classes in java.util.concurrent.locks are a tool for advanced users and advanced situations. In general, you should stick with synchronized unless you have a concrete need for one of Lock's advanced features, or clear evidence (not just a suspicion) that synchronization has become a scalability bottleneck in a particular situation.

Why do I advocate such conservatism about using an apparently "better" implementation? Because, compared with the locking classes in java.util.concurrent.locks, synchronized still has some advantages. For one, with synchronized you cannot forget to release the lock; the JVM does it for you when you exit the synchronized block. With Lock, it is easy to forget the finally block that releases it, which is very harmful to a program: it may pass its tests and then deadlock in production, and it can be very hard to figure out why. (This alone is a good reason to keep Lock out of the hands of junior developers.)

Another reason is that when the JVM manages lock acquisition and release through synchronized, it can include locking information when it generates thread dumps. This is extremely valuable for debugging, because it can pinpoint the source of deadlocks or other unusual behavior. Lock is just an ordinary class, and the JVM does not know which thread owns a Lock object. Moreover, almost every developer is familiar with synchronized, and it works in every version of the JVM. Until JDK 5.0 becomes the standard (which could take another couple of years), using the Lock classes will mean relying on features that are not available on every JVM and not familiar to every developer.

When should you choose ReentrantLock over synchronized?

So when should you use ReentrantLock? The answer is simple: use it when you actually need something synchronized cannot give you, such as timed lock waits, interruptible lock waits, non-block-structured locking, multiple condition variables, or lock polling. ReentrantLock also has scalability benefits, and you should use it if you genuinely have a highly contended lock, but remember that the vast majority of synchronized blocks are hardly ever contended, so you can set high contention aside. I recommend developing with synchronized until it has actually been shown to be inadequate, rather than simply assuming that ReentrantLock will "perform better." Remember, these are advanced tools for advanced users. (And truly advanced users tend to prefer the simplest tools they can find until they are convinced the simple ones do not work.) As always, make it right first, and then worry about whether you have to make it faster.

Conclusion

The Lock framework is a compatible alternative to synchronization that offers many features synchronized does not, and its implementations provide better performance under contention. However, these obvious advantages are not reason enough to always prefer ReentrantLock to synchronized. Instead, make the choice based on whether you actually need ReentrantLock's capabilities. In most cases you should not choose it: synchronized works well, works on all JVMs, is understood by more developers, and is less error-prone. Use Lock only when you really need it. In those cases, you will be glad you have it.

Original: http://www.ibm.com/developerworks/cn/java/j-jtp10264/index.html
