Locks in Java

Source: Internet
Author: User
Tags: cas

While learning or using Java, you will encounter a variety of lock concepts: fair locks, unfair locks, spin locks, reentrant locks, biased locks, lightweight locks, heavyweight locks, read-write locks, and mutexes. This article organizes the various locks in Java; if anything is missing, feel free to discuss it in the comments.

Fair lock and non-fair lock

A fair lock means that when multiple threads wait for the same lock, they must acquire it in the order in which they requested it.

The advantage of a fair lock is that waiting threads do not starve, but overall throughput is relatively low. The advantage of an unfair lock is higher overall throughput, but some threads may starve: a thread that began waiting early may still wait a long time before getting the lock. The reason is that a fair lock grants the lock strictly in request order, while an unfair lock allows barging: if a thread requests the lock at a moment when the lock happens to be free, it acquires it directly, and the threads blocked in the wait queue are not woken up.

A fair lock can be created with new ReentrantLock(true).
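As a minimal sketch (the class name FairLockDemo is illustrative), ReentrantLock's boolean constructor argument selects the fairness policy:

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairLockDemo {
    public static void main(String[] args) {
        ReentrantLock fairLock = new ReentrantLock(true); // fair: waiters acquire in FIFO order
        ReentrantLock unfairLock = new ReentrantLock();   // default: unfair (barging allowed)
        System.out.println(fairLock.isFair());
        System.out.println(unfairLock.isFair());

        fairLock.lock();
        try {
            // critical section
        } finally {
            fairLock.unlock();
        }
    }
}
```

Note that fairness costs throughput: every acquisition of a fair lock goes through the queue rather than grabbing a momentarily free lock.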

Spin lock

Java threads are mapped onto the operating system's native threads, so blocking or waking a thread requires help from the operating system, which means a transition from user mode to kernel mode. This state switching consumes a lot of processor time; for simple synchronized blocks (such as synchronized getter() and setter() methods), the state transition can take longer than the user code itself.

The virtual machine development team noticed that in many applications, shared data is locked only for a very short period, too short to be worth suspending and resuming threads for. If the physical machine has more than one processor, allowing two or more threads to execute in parallel, we can let the thread requesting the lock "wait a moment" without giving up its processor time, to see whether the thread holding the lock releases it soon. To make the thread wait, we simply have it execute a busy loop (spin); this is called a spin lock.

Spin waiting cannot replace blocking. Spinning avoids the overhead of thread switching, but it occupies processor time, so if the lock is held for a short time, spinning works very well; conversely, if the lock is held for a long time, the spinning thread only wastes processor resources. The spin wait must therefore be bounded: if the spin exceeds the limit (10 iterations by default, adjustable with -XX:PreBlockSpin) without successfully acquiring the lock, the thread should be suspended in the traditional way.

Spin locks were introduced in JDK 1.4.2, enabled with -XX:+UseSpinning. Since JDK 6 they are enabled by default, and adaptive spinning was introduced. Adaptive means the spin duration is not fixed, but is determined by the previous spin time on the same lock and by the state of the lock's owner.

Spinning is used with lightweight locks; threads do not spin on a heavyweight lock.

If, for the same lock object, a spin wait has just succeeded and the thread holding the lock is running, the virtual machine assumes that spinning is likely to succeed again and allows the spin wait to last relatively longer, say 100 iterations. Conversely, if spinning rarely succeeds for a given lock, the spin may later be omitted entirely to avoid wasting processor resources.
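The busy-loop idea itself can be sketched in pure Java with an atomic CAS flag. This is an illustrative user-level spin lock, not the JVM's internal implementation; Thread.onSpinWait() requires JDK 9+.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Minimal spin lock sketch: threads busy-wait with CAS instead of blocking.
public class SpinLockDemo {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // Busy loop (spin) until the CAS from false -> true succeeds.
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait(); // hint to the CPU that we are spinning (JDK 9+)
        }
    }

    public void unlock() {
        locked.set(false);
    }

    public static void main(String[] args) throws InterruptedException {
        SpinLockDemo lock = new SpinLockDemo();
        int[] counter = {0};
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                lock.lock();
                try { counter[0]++; } finally { lock.unlock(); }
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter[0]); // both increments were mutually excluded
    }
}
```

As the text says, this only pays off when critical sections are short; a long hold time would burn CPU in the while loop.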

Lock Elimination

Lock elimination happens when the JIT compiler, at runtime, detects that a lock required by some synchronized code cannot possibly be contended for shared data, and removes it. Lock elimination relies on data from escape analysis: if it can be determined that, within a piece of code, none of the data on the heap escapes to be accessed by other threads, that data can be treated as if it were on the stack and thread-private, and the synchronization lock is no longer needed.

Look at a method like this:

    public String concatString(String s1, String s2, String s3) {
        StringBuffer sb = new StringBuffer();
        sb.append(s1);
        sb.append(s2);
        sb.append(s3);
        return sb.toString();
    }

The append method of StringBuffer is defined as follows:

    public synchronized StringBuffer append(StringBuffer sb) {
        super.append(sb);
        return this;
    }

This means the concatString() method involves synchronization. However, we can observe that the scope of the sb object is limited to the inside of the method; that is, sb never "escapes", and no other thread can access it. Therefore, although there is a lock, it can be safely eliminated: after just-in-time compilation, this code ignores all the synchronization and executes directly.

Lock Coarsening

In principle, when writing code it is always recommended to keep the scope of a synchronized block as small as possible: synchronize only over the actual scope of the shared data, so that the number of operations needing synchronization is minimized, and if there is lock contention, waiting threads can obtain the lock as soon as possible. In most cases this is correct. However, if a series of consecutive operations repeatedly locks and unlocks the same object, for example with the lock operations inside a loop body, the frequent mutex synchronization causes unnecessary performance loss even when there is no thread contention.

Take a concatString() method like the one in the lock-elimination example. If StringBuffer sb = new StringBuffer() were defined outside the method body, so that threads could contend for it, each append() operation would still repeatedly lock and unlock the same object. When the virtual machine detects such a situation, it extends (coarsens) the lock synchronization to cover the entire sequence of operations, i.e. from before the first append() to after the last append(). This extension of the lock scope is called lock coarsening.

Reentrant lock

A reentrant lock, also called a recursive lock, means that after a thread's outer function acquires the lock, the inner recursive functions of the same thread can still acquire the lock without being blocked.

In Java, both ReentrantLock and synchronized are reentrant locks. The greatest benefit of reentrancy is avoiding deadlock, in particular a thread deadlocking on a lock it already holds.
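A small sketch of reentrancy (the class name ReentrantDemo is illustrative): inner() re-acquires a ReentrantLock that the same thread already holds via outer(), and getHoldCount() shows the nesting depth.

```java
import java.util.concurrent.locks.ReentrantLock;

public class ReentrantDemo {
    private static final ReentrantLock lock = new ReentrantLock();

    // outer() takes the lock, then calls inner(), which acquires the same
    // lock again without deadlocking -- that is what "reentrant" means.
    static void outer() {
        lock.lock();
        try {
            inner();
        } finally {
            lock.unlock();
        }
    }

    static void inner() {
        lock.lock();
        try {
            System.out.println("hold count inside inner: " + lock.getHoldCount());
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        outer();
    }
}
```

A non-reentrant lock would block forever at the second lock() call; a ReentrantLock simply increments its hold count and releases fully once the count returns to zero.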

Class locks and object locks

Class lock: a lock acquired by a static synchronized method, or by synchronized (Xxx.class). See method1 and method2 in the code below.

Object lock: see method4, method5, and method6.

    public class LockStrategy {
        public Object object1 = new Object();

        // class lock: static synchronized method
        public static synchronized void method1() {}

        // class lock: synchronized on the Class object
        public void method2() {
            synchronized (LockStrategy.class) {}
        }

        // object lock: synchronized instance method
        public synchronized void method4() {}

        // object lock: synchronized on this
        public void method5() {
            synchronized (this) {}
        }

        // object lock: synchronized on a member object
        public void method6() {
            synchronized (object1) {}
        }
    }

Here's an exercise to deepen your understanding of object locks and class locks.
Given the following class definition:

    public class SynchronizedTest {
        public synchronized void method1() {}
        public synchronized void method2() {}
        public static synchronized void method3() {}
        public static synchronized void method4() {}
    }

So, given two instances of SynchronizedTest, a and b, which of the following pairs can be executed by two threads at the same time?
A. a.method1() vs. a.method2()
B. a.method1() vs. b.method1()
C. a.method3() vs. b.method4()
D. a.method3() vs. b.method3()
E. a.method1() vs. a.method3()
The answer is B and E: B uses two different object locks, and E uses an object lock and the class lock, which do not block each other. A contends for the same object lock, while C and D contend for the same class lock.

Biased lock, lightweight lock, and heavyweight lock

The biased, lightweight, and heavyweight lock states of synchronized are all implemented through the Java object header. As analyzed in "Java Object Size Insider Analysis" (see the resources below), a Java object's memory layout is divided into the object header, instance data, and padding, and the object header can be further divided into the "Mark Word" and the type pointer (klass). The "Mark Word" is the key part: by default it stores the object's hashCode, generational age, and lock flag bits.

The following applies to the HotSpot virtual machine. First look at the contents of the "Mark Word":

    Lock state         Stored content                                                              Flag bits
    Unlocked           Object's hashCode, generational age, biased-lock bit (0)                    01
    Lightweight lock   Pointer to the lock record in the thread's stack                            00
    Heavyweight lock   Pointer to the mutex (heavyweight lock)                                     10
    GC mark            Empty                                                                       11
    Biased lock        Biased thread ID, epoch timestamp, generational age, biased-lock bit (1)    01

Notice that the unlocked and biased states share the same flag bits (01) in the "Mark Word" and are distinguished by the biased-lock bit just before them: 0 for unlocked, 1 for biased.

The biased lock is a lock optimization introduced in JDK 6. Its aim is to eliminate synchronization primitives entirely when data is not contended, further improving program performance.

A biased lock favors the first thread that acquires it: if the lock is never taken by another thread afterwards, the thread holding the biased lock never needs to synchronize again. In most cases a lock is not only free of multi-thread contention but is always acquired repeatedly by the same thread; biased locks were introduced to let such a thread acquire the lock at lower cost.

When the lock object is acquired by a thread for the first time, the thread uses a CAS operation to record its thread ID in the object's Mark Word and sets the biased-lock bit to 1. From then on, when the thread enters and exits the synchronized block, it no longer needs CAS operations to lock and unlock; it simply tests whether the object header's Mark Word still stores a bias toward the current thread. If the test succeeds, the thread already holds the lock.

If a thread's CAS operation fails, it indicates contention on the lock object: another thread is trying to obtain ownership of the lock. When the global safepoint is reached (SafePoint, a point at which no bytecode is executing), the thread holding the biased lock is suspended and the lock is inflated to a lightweight lock (this involves Monitor Record and Lock Record operations, not expanded here); the thread whose bias was revoked then continues executing the synchronized code.

Once another thread attempts to acquire the lock, the bias mode is declared over.

Before a thread executes a synchronized block under lightweight locking, the JVM creates space in the current thread's stack frame to store a lock record, and the Mark Word in the object header is copied into the lock record; this copy is officially called the Displaced Mark Word. The thread then attempts, using CAS, to replace the Mark Word in the object header with a pointer to the lock record. If this succeeds, the current thread acquires the lock; if it fails, another thread is competing for the lock, and the current thread attempts to acquire it by spinning. If the spin fails, the lock inflates into a heavyweight lock; if the spin succeeds, the lock remains lightweight.

Unlocking a lightweight lock is also done with a CAS operation: if the object's Mark Word still points to the thread's lock record, CAS is used to swap the object's current Mark Word back with the Displaced Mark Word stored in the thread. If the swap succeeds, the entire synchronization process is complete; if it fails, another thread has attempted to acquire the lock, and the suspended threads must be woken as the lock is released.

The premise on which lightweight locks improve synchronization performance is that for most locks, there is no contention during the entire synchronization cycle (a different premise from that of biased locks). This is empirical data. If there is no contention, lightweight locks use CAS operations to avoid the overhead of a mutex; but if there is contention, the CAS operations are incurred in addition to the mutex overhead, so under contention lightweight locks are slower than traditional heavyweight locks.

The entire locking process of synchronized is as follows:

    1. Check whether the Mark Word stores the current thread's ID; if so, the current thread already holds the biased lock.
    2. If not, use CAS to write the current thread's ID into the Mark Word. If this succeeds, the current thread obtains the biased lock and the biased-lock bit is set to 1.
    3. If the CAS fails, there is contention: the biased lock is revoked and upgraded to a lightweight lock.
    4. The current thread uses CAS to replace the object header's Mark Word with a pointer to its lock record; if this succeeds, the current thread obtains the lock.
    5. If it fails, another thread is competing for the lock, and the current thread attempts to acquire it by spinning.
    6. If the spin succeeds, the lock remains lightweight.
    7. If the spin fails, the lock is upgraded to a heavyweight lock.

Pessimistic lock and optimistic lock

Pessimistic lock: assumes that concurrent conflicts will occur, and blocks every operation that might violate data integrity until the lock is held.
Optimistic lock: assumes that no concurrent conflict will occur, and checks data integrity only when committing the operation (commonly implemented with a version number or a timestamp).
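The retry loop behind optimistic locking can be sketched with an AtomicInteger and CAS (the class name and the balance scenario are illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Optimistic update: read the current value, compute the new one, and commit
// with CAS; if another thread changed the value in between, the CAS fails
// and we simply retry with the freshly read value.
public class OptimisticDemo {
    public static void main(String[] args) {
        AtomicInteger balance = new AtomicInteger(100);
        int current, next;
        do {
            current = balance.get();   // optimistic read, no lock taken
            next = current + 50;       // compute the new value
        } while (!balance.compareAndSet(current, next)); // commit; retry on conflict
        System.out.println(balance.get());
    }
}
```

The same read-compute-commit pattern underlies database optimistic locking, where the "expected value" is typically a version column rather than the data itself.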

Shared and exclusive locks

Shared lock: if transaction T holds a shared lock on data A, other transactions can only add shared locks to A, not exclusive locks. A transaction holding a shared lock can read the data but cannot modify it.
Exclusive lock: if transaction T holds an exclusive lock on data A, no other transaction can add any type of lock to A. A transaction holding an exclusive lock can both read and modify the data.

Read-write lock

A read-write lock allows a resource to be accessed by multiple reader threads concurrently, or by a single writer thread, but a writer and readers can never access it at the same time. Read-write locks in Java are implemented by ReentrantReadWriteLock. Its full usage is not expanded here.
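A minimal illustrative sketch of ReentrantReadWriteLock guarding a map (the class and method names here are assumptions for the example, not a library API):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Many readers may hold the read lock at once; the write lock is exclusive.
public class RwCache {
    private final Map<String, String> map = new HashMap<>();
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();

    public String get(String key) {
        rw.readLock().lock();           // shared: concurrent with other readers
        try { return map.get(key); }
        finally { rw.readLock().unlock(); }
    }

    public void put(String key, String value) {
        rw.writeLock().lock();          // exclusive: blocks readers and writers
        try { map.put(key, value); }
        finally { rw.writeLock().unlock(); }
    }

    public static void main(String[] args) {
        RwCache cache = new RwCache();
        cache.put("k", "v");
        System.out.println(cache.get("k"));
    }
}
```

This pattern pays off for read-heavy workloads; with frequent writes it degenerates to roughly the cost of a plain mutex.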

Mutual exclusion Lock

A mutex (mutual exclusion lock) means that at most one thread can hold the lock at any given time. Both synchronized and the Lock implementations in java.util.concurrent (JUC) in the JDK are mutexes.

No lock

Thread safety does not necessarily require synchronization; there is no causal relationship between the two. Synchronization is only one means of ensuring correctness when shared data is contended; if a method does not involve shared data, it naturally needs no synchronization to be correct, so some code is inherently thread-safe.

    1. Stateless programming. Stateless code shares some common characteristics: it does not depend on data stored on the heap or on common system resources, all state it uses is passed in as parameters, it does not call non-stateless methods, and so on. Servlets are a reference example.
    2. Thread-local storage. See ThreadLocal.
    3. Volatile variables.
    4. CAS (compare-and-swap).
    5. Coroutines: multi-task scheduling within a single thread, with task switches happening inside that one thread.
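For item 2 above, ThreadLocal gives each thread its own copy of a variable, so no synchronization is needed. A small illustrative sketch (the class name and counter scenario are assumptions):

```java
public class ThreadLocalDemo {
    // Each thread gets its own independent counter; no lock is needed because
    // no thread ever touches another thread's copy.
    private static final ThreadLocal<int[]> COUNTER =
            ThreadLocal.withInitial(() -> new int[1]);

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                COUNTER.get()[0]++;   // increments only this thread's copy
            }
            System.out.println(COUNTER.get()[0]);
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t1.join();        // run sequentially so the output order is deterministic
        t2.start(); t2.join();
        // Each thread prints 1000: neither saw the other's increments.
    }
}
```

This trades memory for safety: state is duplicated per thread instead of shared, which is why it counts as a lock-free technique rather than a synchronization one.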

Resources
1. "In-depth Understanding of the Java Virtual Machine", Zhou Zhiming
2. "The Art of Java Concurrency Programming", Fang Tengfei
3. "Java Object Size Insider Analysis"
4. "JVM internals, part one: the synchronized keyword and implementation details (lightweight locking)"
5. "JVM internals, part two: biased locking"
