Comparison between ReentrantLock and synchronized

Source: http://blog.csdn.net/xinfengshiyu/article/details/6946278

The main point is that Lock can do everything that synchronized does. The major differences: Lock has more precise thread semantics and better performance than synchronized; synchronized releases the lock automatically, whereas with Lock the programmer must release it explicitly, and must do so in a finally block.

The details are as follows:

JDK 5.0 gives developers some powerful new options for building high-performance concurrent applications. For example, the ReentrantLock class in java.util.concurrent.locks is intended as a replacement for the synchronized keyword: it has the same memory semantics and locking behavior, but performs better under contention, and it offers features that synchronized does not. Does this mean we should forget synchronized and use only ReentrantLock instead? Concurrency expert Brian Goetz, just back from his summer vacation, gives us the answer.
Multithreading and concurrency are nothing new, but one of the innovations of the Java language design was that it was the first mainstream language to build a cross-platform thread model and a formal memory model directly into the language. The core class library contains a Thread class that can be used to construct, start, and manipulate threads, and the language includes constructs for communicating concurrency constraints across threads: synchronized and volatile. This simplifies the development of platform-independent concurrent classes; it by no means makes writing concurrent classes trivial, but it does make it easier.
A quick review of synchronized
Declaring a block of code synchronized has two major consequences, usually referred to as atomicity and visibility. Atomicity means that only one thread at a time can execute code protected by a given monitor object (lock), which prevents multiple threads from colliding with each other when updating shared state. Visibility is more subtle; it deals with the vagaries of memory caching and compiler optimizations. Ordinarily, threads are free to work with variables in a way that need not be immediately visible to other threads (whether the values sit in registers, in a processor-specific cache, or are rearranged by instruction reordering or other compiler optimizations). But if the developer uses synchronization, as in the code below, the runtime guarantees that updates made to variables by one thread before it exits a synchronized block become immediately visible to another thread when that thread enters a synchronized block protected by the same monitor (lock). A similar rule exists for volatile variables.
synchronized (lockObject) {
    // update object state
}
So synchronization takes care of everything needed to update multiple shared variables safely: there are no race conditions, the data cannot be corrupted (provided the synchronization boundaries are in the right places), and other correctly synchronized threads are guaranteed to see the most recent values of those variables. By defining a clear, cross-platform memory model (which was revised in JDK 5.0 to fix some errors in the original definition), it becomes possible to build "Write Once, Run Anywhere" concurrent classes by following this simple rule:
Whenever you write a variable that may next be read by another thread, or read a variable that may have last been written by another thread, you must synchronize.
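As a minimal illustration of this rule (an example of my own, not from the original article), consider a shared counter whose read and write paths both synchronize on the same monitor:

public class SharedCounter {
    private int count; // written by some threads, read by others

    // Both the write path and the read path synchronize on "this",
    // so an update made inside one synchronized block is guaranteed
    // to be visible to the next thread that enters either method.
    public synchronized void increment() {
        count++;
    }

    public synchronized int get() {
        return count;
    }
}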
On recent JVMs, however, the performance cost of uncontended synchronization (where a thread holding a lock meets no other thread trying to acquire it) is quite low. (This was not always true; synchronization in early JVMs had not yet been optimized, which is where the now-outdated belief comes from that synchronization carries a heavy performance cost whether contended or not.)
Improving on synchronized
So synchronization sounds pretty good, right? Then why did the JSR 166 team spend so much time developing the java.util.concurrent.locks framework? The answer is simple: synchronization is good, but it is not perfect. It has some functional limitations: it is not possible to interrupt a thread that is waiting to acquire a lock, nor to poll for a lock, and if you are unwilling to wait you cannot acquire the lock at all. Synchronization also requires that a lock be released in the same stack frame in which it was acquired. Most of the time this is fine (and it interacts nicely with exception handling), but there are cases where non-block-structured locking is more appropriate.
The ReentrantLock class
The Lock framework in java.util.concurrent.locks is an abstraction for locking that allows locking to be implemented as a Java class rather than as a language feature. This leaves room for multiple implementations of Lock, which may have different scheduling algorithms, performance characteristics, or locking semantics. The ReentrantLock class implements Lock with the same concurrency and memory semantics as synchronized, but it adds features such as lock polling, timed lock waits, and interruptible lock waits. It also performs far better under heavy contention. (In other words, when many threads want to access a shared resource, the JVM spends less time scheduling threads and more time executing them.)
What does it mean for a lock to be reentrant? Simply put, the lock has an associated acquisition count. If a thread that already holds the lock acquires it again, the count is incremented, and the lock then has to be released twice before it is truly released. This mirrors the semantics of synchronized: if a thread enters a synchronized block protected by a monitor it already owns, the thread is allowed to proceed, and the lock is not released when the thread exits the second (or later) synchronized block, but only when it exits the first synchronized block it entered under that monitor.
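To make the counting concrete, here is a small sketch (illustrative only, not from the original article) in which the same thread acquires a ReentrantLock twice; ReentrantLock.getHoldCount() exposes the acquisition count described above.

import java.util.concurrent.locks.ReentrantLock;

public class ReentrancyDemo {
    private final ReentrantLock lock = new ReentrantLock();

    public void outer() {
        lock.lock();                                      // hold count becomes 1
        try {
            inner();                                      // re-acquires the same lock
            System.out.println(lock.getHoldCount());      // prints 1 again after inner() returns
        } finally {
            lock.unlock();                                // hold count back to 0: lock truly released
        }
    }

    private void inner() {
        lock.lock();                                      // hold count becomes 2
        try {
            // protected work
        } finally {
            lock.unlock();                                // hold count back to 1
        }
    }
}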
Looking at the sample code in Listing 1, one obvious difference between Lock and synchronized jumps out: the lock must be released in a finally block. Otherwise, if the protected code threw an exception, the lock might never be released. This may sound like a trivial difference, but it is extremely important. Forgetting to release the lock in a finally block plants a ticking time bomb in your program, and when it finally goes off you will have a hard time tracing the source. With synchronization, the JVM guarantees that the lock is released automatically.
Listing 1: Protecting a block of code with ReentrantLock
Lock lock = new ReentrantLock();
lock.lock();
try {
    // update object state
} finally {
    lock.unlock();
}
In addition, the ReentrantLock implementation is far more scalable under contention than the current synchronized implementation. (It is likely that the contended performance of synchronized will improve in future JVM versions.) This means that when many threads are contending for the same lock, the total overhead with ReentrantLock is usually much lower than with synchronized.
Comparing the scalability of ReentrantLock and synchronized
Tim Peierls constructed a simple benchmark using a simple linear congruential pseudorandom number generator (PRNG) to measure the relative scalability of synchronized and Lock. This example is a good one because the PRNG does some real work each time nextRandom() is called, so the benchmark measures a reasonable, real-world application of synchronized and Lock, rather than purely synthetic code that does nothing (as many so-called benchmarks do).
The benchmark defines a PseudoRandom interface with a single method, nextRandom(int bound), very similar in function to the java.util.Random class. Because the PRNG uses the most recently generated number as input when producing the next one, and keeps that last number as an instance variable, the code that updates this state must not be preempted by another thread, so some form of locking is needed to ensure this. (The java.util.Random class does the same.) Two implementations of PseudoRandom were built: one using synchronized and one using java.util.concurrent.locks.ReentrantLock. The driver spawns a large number of threads, each of which competes frantically for time slices, and then computes how many rounds per second each version can execute. The benchmark is not perfect, and it was run on only two systems (a dual hyperthreaded Xeon running Linux and a single-processor Windows machine), but it should be enough to show the scalability advantage that ReentrantLock has over synchronized.
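The benchmark code itself is not reproduced here, but a rough sketch of the two implementations described above might look like the following (the class names, field names, and PRNG constants are assumptions made for illustration, not the original benchmark code):

import java.util.concurrent.locks.ReentrantLock;

// Assumed shape of the benchmark's interface.
interface PseudoRandom {
    int nextRandom(int bound);
}

// Version guarded by synchronized.
class SyncPseudoRandom implements PseudoRandom {
    private int seed = 1;

    public synchronized int nextRandom(int bound) {
        // simple linear congruential step; constants are illustrative
        seed = seed * 1103515245 + 12345;
        return Math.abs(seed % bound);
    }
}

// Version guarded by ReentrantLock.
class LockPseudoRandom implements PseudoRandom {
    private final ReentrantLock lock = new ReentrantLock();
    private int seed = 1;

    public int nextRandom(int bound) {
        lock.lock();
        try {
            seed = seed * 1103515245 + 12345;
            return Math.abs(seed % bound);
        } finally {
            lock.unlock();
        }
    }
}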
The chart shows throughput in calls per second, normalized to the one-thread synchronized case. Each implementation converges fairly quickly on a steady-state throughput, which typically requires the processor to be fully utilized, with most of the CPU time spent doing actual work (computing random numbers) and only a small fraction spent on thread-scheduling overhead. You will notice that the synchronized version handles any amount of contention quite poorly, whereas the Lock version spends considerably less time on scheduling overhead, leaving room for higher throughput and more effective use of the CPU.
Condition variables
The root class Object includes some special methods for communication between threads: wait(), notify(), and notifyAll(). These are advanced concurrency features that many developers never use, which is probably just as well, since they are quite subtle and easy to use incorrectly. Fortunately, with the introduction of java.util.concurrent in JDK 5.0, there is almost no situation left in which developers need to use these methods directly.
There is an interaction between notification and locking: to wait on or notify an object, you must hold that object's lock. Just as Lock is a generalization of synchronization, the Lock framework includes a generalization of wait and notify, called Condition. A Lock object acts as a factory for condition variables bound to that lock, and unlike the standard wait and notify methods, more than one condition variable can be associated with a given lock. This simplifies the development of many concurrent algorithms. For example, the Javadoc for Condition shows an example of a bounded buffer implementation that uses two condition variables, "not full" and "not empty", and is both more readable (and more efficient) than an implementation that uses only the single wait set per lock. The Condition methods are analogous to wait, notify, and notifyAll; they are named await, signal, and signalAll because they cannot override the corresponding methods on Object.
Fairness
If you look at the Javadoc, you will see that one of the constructor parameters of ReentrantLock is a boolean that lets you choose whether you want a fair lock or an unfair one. A fair lock hands out the lock to threads in the order in which they requested it; an unfair lock permits barging, where a thread can sometimes acquire the lock ahead of other threads that asked for it first.
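Choosing between the two policies is just a matter of that constructor argument; a trivial sketch:

import java.util.concurrent.locks.ReentrantLock;

public class FairnessChoice {
    // The default constructor gives a barging (unfair) lock;
    // passing true requests FIFO fairness among waiting threads.
    private final ReentrantLock unfairLock = new ReentrantLock();
    private final ReentrantLock fairLock = new ReentrantLock(true);
}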
Why wouldn't we make all locks fair? After all, fairness is good and unfairness is bad, isn't it? (Whenever children want to appeal a decision, "that's not fair" is the first thing out of their mouths. We think fairness matters, and the children know it.) In reality, the fairness guarantee a lock provides is a very strong one, and it carries a significant performance cost. The bookkeeping and synchronization required to ensure fairness mean that contended fair locks have much lower throughput than unfair locks. As the default, fairness should be set to false unless fairness is actually critical to your algorithm and threads genuinely must be serviced strictly in the order they queued.
And what about synchronization? Are the built-in monitor locks fair? The answer surprises many people: they are not, and they never have been. Yet nobody complains about thread starvation, because the JVM guarantees that all threads will eventually get the lock they are waiting for. Statistical fairness of this kind is sufficient most of the time, and it costs far less than an absolute fairness guarantee. So the fact that ReentrantLock is "unfair" by default simply makes explicit something that has always been true of synchronization. If it does not bother you with synchronization, there is no reason to let it bother you with ReentrantLock.
The random-number benchmark also includes one additional data set, which uses a fair lock instead of the default barging lock. As you can see, fairness has a price: pay it if you need it, but do not make it your default choice.
Good everywhere?
It seems, then, that ReentrantLock is better than synchronized in every way: it can do everything synchronized can, it has the same memory and concurrency semantics, it adds features synchronized lacks, and it performs better under load. So should we just forget synchronized, stop treating it as a perfectly good idiom that has already been optimized, and perhaps even rewrite our existing synchronized code to use ReentrantLock? In fact, several introductory Java programming books take exactly this approach in their multithreading chapters, using Lock throughout their examples and presenting synchronized only as history. But I think that is taking a good thing too far.
Do not discard synchronized
Although ReentrantLock is a very impressive implementation with some important advantages over synchronized, I think the rush to treat synchronized as deprecated would be a serious mistake. The lock classes in java.util.concurrent.locks are a tool for advanced users and advanced situations. In general, you should stick with synchronized unless you have a concrete need for one of Lock's advanced features, or clear evidence (not just a suspicion) that synchronization has become a scalability bottleneck in your specific situation.
Why am I so conservative about adopting an apparently "better" implementation? Because synchronized still has some advantages over the lock classes in java.util.concurrent.locks. For example, with synchronized you cannot forget to release the lock; the JVM does it for you when you exit the synchronized block. With Lock it is all too easy to forget the finally block, which is very harmful to a program: it may pass its tests and then deadlock in production, where it is hard to point to the cause. (This alone is a good reason to keep novice developers away from Lock entirely.)
Another reason is that, because the JVM manages lock acquisition and release for synchronized internally, it can include locking information when it generates thread dumps. That information is extremely valuable for debugging, because it can identify the source of deadlocks and other abnormal behavior. A Lock class, by contrast, is just an ordinary class, and the JVM does not know which thread holds a given Lock object. Moreover, almost every developer is familiar with synchronized, and it works on every version of the JVM. Until JDK 5.0 becomes the standard (which could take two years from now), using the lock classes means relying on features that are not available on every JVM and not familiar to every developer.
When should I replace synchronized with ReentrantLock?
So when should you use ReentrantLock? The answer is simple: use it when you actually need something synchronized does not provide, such as timed lock waits, interruptible lock waits, non-block-structured locking, multiple condition variables, or lock polling. ReentrantLock also has scalability benefits, and you should use it if you actually have a situation of high contention, but remember that the vast majority of synchronized blocks hardly ever see any contention, let alone high contention. I would advise developing with synchronized until synchronized has actually proven itself inadequate, rather than simply assuming that ReentrantLock will "perform better". Remember, these are advanced tools for advanced users. (And truly advanced users tend to prefer the simplest tool they can find until they are convinced the simple tool is inadequate.) As always, make it right first, and then worry about whether you have to make it faster.
Conclusion
The Lock framework is a compatible replacement for synchronization: it offers many features that synchronized does not, and its implementations perform better under contention. But those clear advantages are not, by themselves, a good enough reason to replace synchronized with ReentrantLock everywhere. Instead, make the choice based on whether you actually need ReentrantLock's capabilities. In the vast majority of cases you will not: synchronized works well, works on all JVMs, is understood by more developers, and is less error-prone. Save Lock for when you really need it; in those cases, you will be glad you have it.
