Learn more about Java synchronization and locking mechanisms

Source: Internet
Author: User
Tags: CAS

This article attempts a higher-level look at the familiar synchronized keyword and the locking mechanisms behind it.

We assume that the reader wants to know more about concurrency. A recommended book is "Java Concurrency in Practice", a classic on the subject; readers comfortable with English can also read "Concurrent Programming in Java: Design Principles and Patterns", written by Doug Lea himself. Doug Lea is a giant of the concurrency world, and the JDK's concurrent package is largely his work.

We all know that code decorated with synchronized in Java is called a synchronized block. Only one thread can run inside a synchronized block at a time; other threads are kept out of the block and gain access in some order. Under the hood, synchronized is implemented with monitors: every instance and every class has a monitor, and what we usually call "taking the lock" is the act of acquiring that monitor.

So synchronized as we usually discuss it is implemented at the JVM level, using the lock built into the object. A static synchronized method locks the class's monitor, while a synchronized instance method locks the monitor of the corresponding instance.
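As a rough sketch (the class and counter fields below are hypothetical, not from the original article), the common forms and the monitors they lock look like this:

import java.util.Objects;

public class Counter {
    private static int classCount = 0;
    private int instanceCount = 0;

    // Locks the monitor of Counter.class
    public static synchronized void incrementClassCount() {
        classCount++;
    }

    // Locks the monitor of this particular Counter instance
    public synchronized void incrementInstanceCount() {
        instanceCount++;
    }

    // Equivalent to the instance method above, with the lock object written out explicitly
    public void incrementWithBlock() {
        synchronized (this) {
            instanceCount++;
        }
    }
}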

Synchronization is implemented with the monitorenter and monitorexit bytecode instructions. monitorenter attempts to acquire the lock on the object; if the object is not locked, or the current thread has already acquired the lock, the lock counter is incremented by 1. Correspondingly, monitorexit decrements the lock counter by 1.

So synchronized is reentrant for the same thread.
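A minimal hypothetical example of that reentrancy: outer() already holds the object's monitor when it calls inner(), so the second acquisition merely bumps the counter instead of blocking the thread against itself.

public class Reentrant {
    public synchronized void outer() {
        // The current thread already owns this object's monitor here (counter = 1)
        inner();
    }

    public synchronized void inner() {
        // Same monitor acquired again by the same thread (counter = 2); no deadlock
        System.out.println("re-entered by " + Thread.currentThread().getName());
    }
}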

The monitor supports two kinds of thread interaction: mutual exclusion (synchronization) and cooperation. Java uses object locks to achieve mutual exclusion for critical sections, and uses Object's wait(), notify(), and notifyAll() methods to implement cooperation.
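As an illustration (not from the original article), here is a small hand-rolled one-slot buffer that combines mutual exclusion (synchronized) with cooperation (wait/notifyAll):

public class OneSlotBuffer {
    private Object slot;                // null means empty

    public synchronized void put(Object value) throws InterruptedException {
        while (slot != null) {          // wait() must be called while holding the monitor
            wait();                     // releases the monitor until another thread notifies
        }
        slot = value;
        notifyAll();                    // wake up threads waiting in take()
    }

    public synchronized Object take() throws InterruptedException {
        while (slot == null) {
            wait();
        }
        Object value = slot;
        slot = null;
        notifyAll();                    // wake up threads waiting in put()
        return value;
    }
}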

Optimistic locks and pessimistic locks

These two terms come up in many places. A so-called optimistic lock assumes that, when it is about to modify something (or perform some other operation), no other thread will be doing the same thing at the same time (no contention); that is the optimistic attitude. It is generally implemented on top of CAS atomic instructions. For more on CAS, see the article on CAS operations in Java and the concurrent package. CAS normally does not suspend the thread, so its performance is sometimes better (thread switching is an expensive operation).

A pessimistic lock, easy to understand by contrast with the optimistic lock, assumes that other threads will definitely compete for the resource, so it always locks the resource first, whether or not contention actually occurs.
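To make the contrast concrete, here is a hypothetical counter implemented both ways (a sketch, not from the original article): the optimistic version retries a CAS in a loop, the pessimistic version takes the lock up front.

import java.util.concurrent.atomic.AtomicInteger;

public class TwoCounters {
    private final AtomicInteger optimistic = new AtomicInteger();
    private int pessimistic = 0;

    // Optimistic: read, compute, then try to swap; retry on conflict, never block
    public int incrementOptimistic() {
        for (;;) {
            int current = optimistic.get();
            int next = current + 1;
            if (optimistic.compareAndSet(current, next)) {
                return next;
            }
        }
    }

    // Pessimistic: take the lock first, whether or not anyone else is competing
    public synchronized int incrementPessimistic() {
        return ++pessimistic;
    }
}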

The synchronized described so far blocks threads, which means context switches occur, switching from user mode to kernel mode. Because that is sometimes too expensive, spin locks appeared later. Spinning means that when the lock is already held by another thread, the current thread does not suspend itself but busy-waits for a short while. Spinning is, to some extent, a form of optimistic locking, because it always believes it will get the lock on the next try. Spin locks are therefore suited to situations where contention is not intense; as I understand it, current JVMs apply this optimization to synchronized.

Whether spinning helps also depends on the scenario. A thread may spin for a long time without ever acquiring the lock, in which case CPU time is wasted compared with simply suspending the thread. Hence the adaptive spin lock: it uses the history of whether spinning on this lock succeeded to decide how long to spin, or whether to spin at all.
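For illustration only, a toy user-level spin lock built on CAS might look like the following; this is a sketch of the idea, not how the JVM spins internally:

import java.util.concurrent.atomic.AtomicReference;

public class SpinLock {
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    public void lock() {
        Thread current = Thread.currentThread();
        // Busy-wait (spin) instead of suspending the thread
        while (!owner.compareAndSet(null, current)) {
            Thread.onSpinWait();   // hint to the CPU; available since Java 9
        }
    }

    public void unlock() {
        // Only the owning thread can release the lock
        owner.compareAndSet(Thread.currentThread(), null);
    }
}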

Lightweight lock

The concept of a lightweight lock is relative to the heavyweight lock, which requires mutex (mutual exclusion) operations. The purpose of the lightweight lock is to reduce the chance that multiple threads need a traditional mutex, not to replace the mutex.

To understand the lightweight lock and the biased lock that follows, you first have to understand the memory layout of the object header (the Mark Word), in particular its lock flag bits.


Initially the lock flag bits are 01, meaning unlocked; 00 means a lightweight lock, 10 a heavyweight lock, and so on. When code enters a synchronized block and the synchronization object is not locked (the lock flag is in the "01" state), the virtual machine first creates a space called the Lock Record in the current thread's stack frame, used to store a copy of the lock object's current Mark Word (officially this copy gets a Displaced prefix, i.e. the Displaced Mark Word). The virtual machine then attempts a CAS operation to make the object's Mark Word point to the Lock Record in the stack; if the update succeeds, the current thread acquires the lock and the flag bits become 00, marking a lightweight lock.

If the update fails, the virtual machine first checks whether the object's Mark Word points into the current thread's stack frame. If it does, the current thread already owns the lock on this object and can simply continue into the synchronized block; otherwise the lock object has already been taken by another thread. If two or more threads contend for the same lock, the lightweight lock is no longer effective and must be inflated into a heavyweight lock. The lock flag changes to "10", the Mark Word stores a pointer to the heavyweight (mutex) lock, and threads waiting for the lock enter the blocked state.

Biased lock

A biased lock is, as the name suggests, biased toward the first thread that acquires the lock: that thread's ID is recorded in the object header when the lock is first acquired, and from then on the thread does not need to do any locking work when it enters the synchronized block. Once another thread tries to acquire the lock, the biased mode ends and the lock is revoked back to the unlocked or lightweight-lock state. The point of the biased lock is to eliminate synchronization entirely in the uncontended case, without even performing a CAS operation.

Next, take a look at the state transitions a thread goes through when entering and leaving a synchronized block.

When multiple threads request an object's monitor at the same time, the object monitor uses several states to distinguish the requesting threads:

    • ContentionList: all threads requesting the lock are first placed in this contention queue
    • EntryList: threads from the ContentionList that qualify as candidates are moved to the EntryList
    • WaitSet: threads blocked by calling the wait() method are placed in the WaitSet
    • OnDeck: at any moment at most one thread is actually competing for the lock; that thread is called OnDeck
    • Owner: the thread that currently holds the lock
    • !Owner: a thread that has released the lock


A newly arriving thread is placed in the ContentionList. When the Owner releases the lock, if the EntryList is empty, the Owner moves threads from the ContentionList into the EntryList.

Clearly the ContentionList is effectively a lock-free queue, since new threads push onto it concurrently while only the Owner thread takes nodes from it.

Both the EntryList and the ContentionList are logically waiting queues. The ContentionList is accessed by threads concurrently; to reduce contention on its tail, the EntryList was introduced. When it unlocks, the Owner thread migrates threads from the ContentionList into the EntryList and designates one thread in the EntryList (typically the head) as the ready, or OnDeck, thread. The Owner does not hand the lock directly to the OnDeck thread; it only hands over the right to compete for the lock, and OnDeck must then compete for the lock itself.

This sacrifices some fairness but greatly improves overall throughput; in HotSpot, this way of choosing the OnDeck thread is called "competitive switching".

Reentrant lock (ReentrantLock)

The biggest advantage of a reentrant lock is that it avoids deadlock: a thread that already holds the lock does not need to acquire it again, it only needs to increment the counter by 1. synchronized is itself a kind of reentrant lock, but what this section really concerns is ReentrantLock from the concurrent package. synchronized is a lock provided at the JVM level; at the Java language level, the JDK also provides us with excellent locks, all of which live in the java.util.concurrent package.

Let's look at the differences between the lock provided by the JVM (synchronized) and the locks in the concurrent package:

1. Locking and releasing with synchronized are handled by the JVM and require no attention from us, while Lock's locking and releasing are entirely under our control; the release is usually placed in a finally block.

2. synchronized has only one condition per lock, that is, each object has only one monitor; if several condition variables need to be combined, synchronized is not enough, whereas a Lock can provide multiple Condition objects for one mutex, which is very flexible.

3. ReentrantLock has the same concurrency and memory semantics as synchronized, and additionally offers lock polling (tryLock), timed lock waits, and interruptible lock waits.
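Putting points 1 to 3 together, here is a hypothetical sketch (not from the original article) of typical ReentrantLock usage: the lock is released in finally, a Condition is used for cooperation, and tryLock gives a timed wait.

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class LockDemo {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition();  // one of possibly many conditions
    private int items = 0;

    public void produce() {
        lock.lock();
        try {
            items++;
            notEmpty.signalAll();
        } finally {
            lock.unlock();          // releasing is our responsibility, so it goes in finally
        }
    }

    public void consume() throws InterruptedException {
        lock.lock();
        try {
            while (items == 0) {
                notEmpty.await();   // analogous to wait() on a monitor
            }
            items--;
        } finally {
            lock.unlock();
        }
    }

    public boolean tryConsume() throws InterruptedException {
        // Timed lock wait, something synchronized cannot express
        if (lock.tryLock(100, TimeUnit.MILLISECONDS)) {
            try {
                if (items > 0) { items--; return true; }
                return false;
            } finally {
                lock.unlock();
            }
        }
        return false;
    }
}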

Before discussing ReentrantLock, let's first look at the AtomicInteger source to get a general idea of how it is implemented.

/**
 * Atomically increments the current value by one.
 *
 * @return the updated value
 */
// This method is like a synchronized version of i++: add 1 to the current value, then return it.
// Note that it is a for loop which only returns once compareAndSet succeeds. When does it succeed?
public final int incrementAndGet() {
    for (;;) {
        int current = get();                 // 'value' is a volatile field, so each read sees the latest value
        int next = current + 1;              // the +1 operation
        if (compareAndSet(current, next))    // the key is this if: if compareAndSet succeeds, the whole
            return next;                     // increment succeeds; if it fails, another thread changed the
                                             // value, so try the +1 again in the next round until it succeeds
    }
}

/**
 * Gets the current value.
 *
 * @return the current value
 */
// get() is simple: it returns 'value', a volatile member field of the class.
public final int get() {
    return value;
}

/**
 * Atomically sets the value to the given updated value
 * if the current value {@code ==} the expected value.
 *
 * @param expect the expected value
 * @param update the new value
 * @return true if successful. A false return indicates that
 *         the actual value was not equal to the expected value.
 */
public final boolean compareAndSet(int expect, int update) {
    // Tracing into the Unsafe method shows no Java implementation: it is an atomic operation backed by
    // the native library. If the value in memory equals the expected value, i.e. no other thread has
    // changed it, the value is updated to the new value and true is returned; otherwise false is returned.
    return unsafe.compareAndSwapInt(this, valueOffset, expect, update);
}
Predictably, if contention is intense, the probability of a failed CAS rises significantly and performance suffers. In fact, most of the locks in the concurrent package are built on CAS operations. This section was meant to explain the reentrant lock, but there was a lot of background to cover first, so the introduction to ReentrantLock will be left for a separate article.
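For completeness, a small hypothetical usage example: two threads hammer the same AtomicInteger, and the CAS retry loop shown above guarantees the final count without any blocking lock.

import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounterDemo {
    public static void main(String[] args) throws InterruptedException {
        AtomicInteger counter = new AtomicInteger();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                counter.incrementAndGet();   // the CAS retry loop from the source above
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter.get());   // always 20000, with no blocking lock involved
    }
}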

Copyright notice: this is an original blog article and may not be reproduced without the author's consent.
