Java Concurrent Programming: Comparing Two Kinds of Locks, synchronized and Lock

Performance Comparison

In JDK 1.5, synchronized was inefficient because it is a heavyweight operation: its biggest cost lies in the implementation, since suspending and resuming a thread must be done in kernel mode, and these operations put great pressure on the system's concurrency. By contrast, the Lock objects provided by Java performed better: in a heavily multi-threaded environment the throughput of synchronized drops off severely, while ReentrantLock stays at a relatively stable level.

By JDK 1.6 this had changed. Many optimizations were added to synchronized, such as adaptive spinning, lock elimination, lock coarsening, lightweight locks and biased locks, so on JDK 1.6 the performance of synchronized is no worse than that of Lock. The JDK team has also said that it favors synchronized and that there is still room for optimization in future releases, so when synchronized meets the requirements it should be the first choice for synchronization.


Below is an analysis of the underlying implementation strategies of these two locking mechanisms.

The biggest problem with mutual-exclusion synchronization is the performance cost of blocking and waking threads, so this kind of synchronization is also called blocking synchronization. It belongs to a pessimistic concurrency strategy: the thread acquires an exclusive lock, which means other threads can only block and wait until the owner releases it. Suspending and resuming a blocked thread causes a thread context switch, and when many threads compete for the lock, the frequent context switching on the CPU leads to low efficiency. This is the concurrency strategy used by synchronized.
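As a minimal sketch of this blocking, pessimistic style (the class and method names here are illustrative, not from the original article), every caller must hold the object's monitor before touching the shared field:

```java
// A minimal sketch of blocking (pessimistic) synchronization.
public class SynchronizedCounter {
    private int count = 0;

    // Only one thread at a time can hold this object's monitor;
    // all other callers block until the lock is released.
    public synchronized void increment() {
        count++;
    }

    public synchronized int get() {
        return count;
    }
}
```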

With the development of CPU instruction sets we have another choice: an optimistic concurrency strategy based on conflict detection. Put simply, the operation is performed first; if no other thread contends for the shared data, the operation succeeds, and if the shared data is contended and a conflict arises, some compensating measure is taken (the most common one is simply to retry until the operation succeeds). Many implementations of this optimistic strategy do not need to suspend threads, so this kind of synchronization is called non-blocking synchronization. ReentrantLock's lock acquisition is built on this strategy.

In an optimistic concurrency strategy, the operation and the conflict detection must together be atomic, which relies on a hardware instruction: the CAS operation (Compare-And-Swap). Only since JDK 1.5 have Java programs been able to use CAS operations. If we study the source code of ReentrantLock further, we find that one of the important steps in acquiring the lock is compareAndSetState, which ultimately invokes a special instruction provided by the CPU. Modern CPUs provide instructions that update shared data atomically and detect interference from other threads, and compareAndSet() uses them in place of locking. This kind of algorithm is called a non-blocking algorithm, meaning that the failure or suspension of one thread should not cause the failure or suspension of other threads.
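As a hedged sketch of the retry-on-conflict pattern described above (this is not code from ReentrantLock itself; the class name is an assumption), a non-blocking counter can be written with AtomicInteger.compareAndSet in a loop:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the optimistic, non-blocking pattern: read, compute,
// then CAS; if another thread interfered, retry instead of blocking.
public class CasCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    public int increment() {
        for (;;) {
            int current = value.get();
            int next = current + 1;
            // compareAndSet succeeds only if no other thread has
            // changed the value since we read it.
            if (value.compareAndSet(current, next)) {
                return next;
            }
            // Conflict detected: another thread won the race, so retry.
        }
    }
}
```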

Java 5 also introduced the special atomic variable classes, such as AtomicInteger, AtomicLong and AtomicReference, which provide methods such as compareAndSet(), incrementAndGet() and getAndIncrement() implemented with CAS operations. They are therefore atomic methods guaranteed by hardware instructions.
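A short usage example of these methods (the variable names are illustrative):

```java
import java.util.concurrent.atomic.AtomicLong;

// Each call below is a single atomic, hardware-backed operation;
// no explicit lock is needed even with many concurrent callers.
public class AtomicDemo {
    private static final AtomicLong requests = new AtomicLong();

    public static void main(String[] args) {
        long afterFirst = requests.incrementAndGet();    // returns 1
        long beforeSecond = requests.getAndIncrement();  // returns 1, value becomes 2
        boolean swapped = requests.compareAndSet(2, 10); // true: 2 -> 10
        System.out.println(afterFirst + " " + beforeSecond + " " + swapped);
    }
}
```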


Usage comparison: in basic syntax ReentrantLock and synchronized are very similar, and both are reentrant by the same thread, but the code looks slightly different: one is a mutex expressed at the API level (Lock), the other a mutex expressed at the language level (synchronized). Compared with synchronized, ReentrantLock adds some advanced features, chiefly the following three (a small code sketch follows the list):

1. The wait can be interrupted: when the thread holding the lock does not release it for a long time, a waiting thread can choose to give up waiting and handle other things instead, which is very useful for dealing with long-running synchronized blocks. A thread waiting on a mutex produced by synchronized, by contrast, blocks and cannot be interrupted.

2. Fair locks can be implemented: with a fair lock, multiple threads waiting for the same lock must acquire it in the order in which they requested it, whereas an unfair lock gives no such guarantee; when the lock is released, any waiting thread may obtain it. The lock used by synchronized is unfair, and ReentrantLock is unfair by default, but a fair lock can be requested through the constructor new ReentrantLock(true).

3. The lock can be bound to multiple conditions: a ReentrantLock object can be bound to several Condition objects (also called condition variables or condition queues) at the same time. With synchronized, the wait(), notify() and notifyAll() methods of the lock object implement a single implicit condition; if you need to work with more than one condition you have to add an extra lock, whereas with ReentrantLock you only need to call newCondition() several times. The bound Condition objects can also be used to choose which threads the current thread notifies (that is, the other threads waiting on that Condition object).
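A minimal sketch of these features (the bounded-buffer class and its names are illustrative, adapted from the standard Lock/Condition idiom rather than taken from the original article): it uses the API-level lock()/unlock() pattern, a fair lock, an interruptible acquire, and two separate conditions.

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of the API-level mutex: a fair ReentrantLock bound to two
// Condition objects, so producers and consumers are woken separately.
public class BoundedBuffer {
    private final ReentrantLock lock = new ReentrantLock(true); // fair lock
    private final Condition notFull = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();
    private final Object[] items = new Object[16];
    private int count, putIndex, takeIndex;

    public void put(Object x) throws InterruptedException {
        lock.lockInterruptibly();          // the wait for the lock can be interrupted
        try {
            while (count == items.length) {
                notFull.await();           // wait only on the "not full" condition
            }
            items[putIndex] = x;
            putIndex = (putIndex + 1) % items.length;
            count++;
            notEmpty.signal();             // wake only threads waiting to take
        } finally {
            lock.unlock();                 // always release in finally
        }
    }

    public Object take() throws InterruptedException {
        lock.lockInterruptibly();
        try {
            while (count == 0) {
                notEmpty.await();
            }
            Object x = items[takeIndex];
            takeIndex = (takeIndex + 1) % items.length;
            count--;
            notFull.signal();
            return x;
        } finally {
            lock.unlock();
        }
    }
}
```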
