Java Concurrent Programming -- Part Four (synchronized / Lock / volatile): Lock Mechanism Principles

Preface

Strictly speaking, the title should say "mutual exclusion mechanism": the two main problems in concurrency are how threads synchronize and how threads communicate.

Synchronization is mainly guaranteed through mutual exclusion. The mutual exclusion mechanism we are most familiar with is the lock, although there are also lock-free implementations based on CAS.

When multiple threads share a resource, such as an object in memory, how do we ensure that they do not access (read or write) that object at the same time? This is the central problem of concurrency, and it is what mutual exclusion mechanisms (locks) exist to solve.

synchronized

When should you synchronize? Apply Brian's rule of synchronization:

If you are writing a variable that might next be read by another thread, or reading a variable that might have last been written by another thread, you must use synchronization, and further, both the reader and the writer must synchronize using the same monitor lock.

What synchronized provides:

Visibility:
When a thread acquires the lock, its local (working) copies of shared variables are invalidated, so inside the critical section (the code run before the lock is released) data is read from main memory; when the lock is released, the modified data is flushed back to main memory.
Mutual exclusion (ordering):
Threads execute the critical-section code mutually exclusively, one at a time. A small sketch of both properties follows.
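As a minimal sketch (the SyncCounter class below is illustrative, not from the original article), the synchronized methods give the two threads exclusive access to count, and releasing/acquiring the same monitor makes each increment visible to the other thread:

public class SyncCounter {
    private int count = 0;

    // both threads lock the same SyncCounter instance, so increments never interleave
    public synchronized void inc() {
        count++;
    }

    public synchronized int get() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        SyncCounter counter = new SyncCounter();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                counter.inc();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(counter.get());  // always 20000; without synchronized it could be less
    }
}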

The basis on which synchronized implements synchronization:

synchronized implements its lock mechanism through the object header (Mark Word).
Every object in Java can be used as a lock (more precisely, the object header that every object carries is what the synchronized implementation builds on, so every object can act as an object lock).

Specific usage (code example):

public class MyTest {
    // static synchronized: locks MyTest.class (the Class object of the current class)
    public static synchronized void inc1() throws IOException {
    ...
    }

    // synchronized instance method: locks the current object (the object this points to)
    public synchronized void inc2() throws IOException {
    ...
    }

    private final Object lock = new Object();

    // synchronized block: explicitly specifies the lock object (here, lock)
    public void lockObject() throws IOException {
        synchronized (lock) {
    ...
        }
    }
}
The synchronized lock principle at the bytecode level

Let's take a look at the bytecode for a synchronized method and a synchronized block:

public class Synchronized {
    public static void main(String[] args) {
        synchronized (Synchronized.class) {

        }
        m();
    }

    public static synchronized void m() {
    }
}

Run javap -v Synchronized.class:

............
  public static void main(java.lang.String[]);
    flags: ACC_PUBLIC, ACC_STATIC
    Code:
      stack=2, locals=1, args_size=1
         0: ldc           #1    // class com/four/Synchronized
         2: dup
         3: monitorenter
         4: monitorexit
         5: invokestatic  #16   // Method m:()V
         8: return
      LineNumberTable:
        line 5: 0
        line 8: 5
        line 9: 8
      LocalVariableTable:
        Start  Length  Slot  Name   Signature
            0       9     0  args   [Ljava/lang/String;

  public static synchronized void m();
    flags: ACC_PUBLIC, ACC_STATIC, ACC_SYNCHRONIZED
    Code:
      stack=0, locals=0, args_size=0
         0: return
      LineNumberTable:
        line 12: 0
      LocalVariableTable:
        Start  Length  Slot  Name   Signature
............

A synchronized block is implemented by inserting the monitorenter and monitorexit instructions.

The bytecode generated by the javap command contains the monitorenter and monitorexit instructions.
These two instructions are placed, respectively, before and after the critical section (the code block that needs to be synchronized).
Each synchronized region is associated with a monitor object; the lock mechanism is implemented by entering and exiting this monitor through the monitorenter and monitorexit instructions.

So what exactly are monitorenter and monitorexit doing? To answer that, we need to look at where the synchronized lock is stored: the object header.

synchronized lock storage and object headers

As mentioned above, synchronized implements the lock mechanism through the object header (Mark Word). Every object in Java can be used as a lock (more precisely, every object has an object header, which provides the basis for the synchronized implementation; every object can act as an object lock).

Object Headers

The layout of an object in memory is divided into three areas: object header, instance data, and alignment padding

The object header consists of two parts: the Mark Word and the type pointer.

The synchronized implementation uses the Mark Word to record the object's lock state.

Mark Word

The Mark Word stores the object's own runtime data, such as its hash code (hashCode), GC generational age, and synchronized lock information (lock state flag, the thread holding the lock, biased-lock thread ID, biased-lock timestamp), and so on. Its size equals the word length of the virtual machine (32 or 64 bits).

Type pointer

The type pointer points to the object's class metadata, through which the virtual machine determines which class the object is an instance of.

The object header is therefore where the details of the synchronized lock implementation (the lock state bits) are stored.

Behind monitorenter/monitorexit: the C++ implementation of synchronized lock upgrading

The synchronized keyword implements lock acquisition and release based on the two instructions above. When the interpreter executes monitorenter, it enters the InterpreterRuntime::monitorenter function in InterpreterRuntime.cpp (HotSpot source), which is where the lock-upgrade logic is implemented.

Reentrancy
A thread can acquire the same lock multiple times. For example, a thread calls a synchronized method of an object, inside that method calls a second synchronized method of the same object, and inside the second method calls a third. Each time the thread enters a synchronized method, the JVM increments the lock count by 1; each time the method completes, the count is decremented by 1. When the count reaches 0, the lock is fully released and other threads can acquire it. A minimal sketch of this behaviour is shown below.
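A minimal sketch (the class and method names Reentrancy, outer() and inner() are illustrative, not from the original article) showing that a thread already holding an object's monitor can enter another synchronized method of the same object without blocking:

public class Reentrancy {
    public synchronized void outer() {
        System.out.println("entered outer(), monitor acquired");
        inner();  // re-enters the same monitor; the JVM just bumps the recursion count
    }

    public synchronized void inner() {
        System.out.println("entered inner() while already holding the lock");
    }

    public static void main(String[] args) {
        new Reentrancy().outer();  // prints both lines; would deadlock if the lock were not reentrant
    }
}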

Explicit locks (Lock)

Note: putting the return statement inside the try block avoids releasing the lock too early. The Javadoc recommends calling lock.lock() immediately followed by a try { ... } finally { lock.unlock(); } block.
Referring to the example from the previous article, we can use an explicit lock to implement mutual exclusion:

    private Lock lock = new ReentrantLock();

    public int next() {
        lock.lock();
        try {
            ++currentEvenValue;  // Danger point here!
            Thread.yield();      // encourage a thread switch
            ++currentEvenValue;
            return currentEvenValue;
        } finally {
            lock.unlock();
        }
    }

The drawback of Lock lock = new ReentrantLock() is that the code is less elegant and more verbose, so in general we use synchronized to implement mutual exclusion. Explicit locks are still useful because: 1. when an exception is thrown, the finally block of an explicit lock gives a place to clean up resources; 2. ReentrantLock gives us finer-grained control (tryLock, timed and interruptible acquisition).

public class AttemptLocking {
    private Lock lock = new ReentrantLock();

    public void untimed() {
        boolean captured = lock.tryLock();
        try {
            System.out.println("tryLock(): " + captured);
        } finally {
            if (captured) lock.unlock();
        }
    }

    public void timed() {
        boolean captured = false;
        try {
            captured = lock.tryLock(2, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        try {
            System.out.println("tryLock(2, TimeUnit.SECONDS): " + captured);
        } finally {
            if (captured) lock.unlock();
        }
    }

    public static void main(String[] args) {
        final AttemptLocking attemptLocking = new AttemptLocking();
        attemptLocking.untimed();
        attemptLocking.timed();
        // a separate daemon thread grabs the lock and never releases it
        new Thread() {
            { setDaemon(true); }
            public void run() {
                attemptLocking.lock.tryLock();
                System.out.println("acquired");
            }
        }.start();
        Thread.yield();
        attemptLocking.untimed();
        attemptLocking.timed();
    }
}
The mutual exclusion of Lock is implemented by the AQS synchronizer (AbstractQueuedSynchronizer); see the AQS article below for details. The visibility of Lock also comes from AQS, specifically from the happens-before rules on its volatile state variable, which is how AQS (and therefore the JDK locks) guarantees visibility. A sketch of a minimal AQS-based lock follows.
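As a minimal sketch (not from the original article), here is a non-reentrant mutex built on AbstractQueuedSynchronizer; both acquisition and release go through the volatile AQS state field, which is what provides mutual exclusion and visibility:

import java.util.concurrent.locks.AbstractQueuedSynchronizer;

public class SimpleMutex {
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int arg) {
            // CAS the volatile state from 0 (unlocked) to 1 (locked)
            return compareAndSetState(0, 1);
        }

        @Override
        protected boolean tryRelease(int arg) {
            setState(0);  // volatile write: makes the critical section visible to the next acquirer
            return true;
        }
    }

    private final Sync sync = new Sync();

    public void lock() {
        sync.acquire(1);   // parks the thread in the AQS queue if the CAS fails
    }

    public void unlock() {
        sync.release(1);   // wakes up a queued successor, if any
    }
}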

For a more in-depth understanding, please read:

Java Concurrent Programming -- Nine: AbstractQueuedSynchronizer (AQS) in detail

Java Concurrent Programming -- ReentrantLock source code (reentrant lock, fair lock, unfair lock)

Java Concurrent Programming -- read-write lock ReentrantReadWriteLock

Java Concurrent Programming -- Condition (await / signal / notify, the wait-notify pattern)

Atomicity & volatile

What is atomicity (atomic): an operation that cannot be interrupted by the thread-scheduling mechanism; once started, it runs to completion before any context switch happens.
Atomic reads and writes apply to the primitive types except long and double, because long and double are 64-bit and the JVM may execute a 64-bit access as two 32-bit operations; a context switch can occur between those two operations.
Declaring a long or double as volatile guarantees atomic reads and writes, but only plain reads and writes. For example, with long l = 0, the expression l++ is a typical non-atomic operation, because ++ is really a read combined with a write.

volatile

volatile ensures the "visibility" of a shared variable (when one thread modifies the variable, other threads can read the modified value) and, in some cases, performs better than synchronized, because it never competes for a lock and therefore never causes a context switch.
Principle Analysis

A variable modified with volatile, when compiled down to assembly, produces an extra instruction with a lock prefix (e.g. "lock addl ..."). The lock prefix causes two things:
1. the data in the current processor's cache line is written back to system main memory;
2. the write-back invalidates the cache lines holding this memory address in the other CPUs (see the volatile memory semantics below).

To improve processing speed, a CPU first loads data from system memory into its internal caches (L1, L2, L3, ...) and works on the cached copy on subsequent cache hits. If the variable is declared volatile, the JVM emits the lock-prefixed instruction, which writes the data straight back to main memory and causes the other processors to invalidate their cached copies of that address, guaranteeing cache coherence.

When multiple threads access a field without synchronized and one of them modifies it, that modification may well sit in the processor cache rather than in main memory, so the other threads may never read the modified value (their reads are served from stale copies). Declaring the field volatile ensures that every write is flushed to main memory immediately (alternatively, you can use synchronized on every method that accesses the field; synchronized also guarantees that modifications are flushed to main memory). A small sketch follows.
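A minimal sketch of the visibility guarantee (the class and field names are illustrative, not from the original article): without volatile on the stop flag, the reader thread may spin forever on a stale cached value; with volatile, it sees the write promptly:

public class VisibilityDemo {
    private static volatile boolean stop = false;  // try removing volatile: the loop may never terminate

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!stop) {
                // busy-wait; reads of the volatile flag always observe the latest write
            }
            System.out.println("worker saw stop == true and exited");
        });
        worker.start();

        Thread.sleep(1000);
        stop = true;  // volatile write: flushed to main memory and visible to the worker
        worker.join();
    }
}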

When a volatile field depends on its previous value (as in ++i), or on other variables, volatile does not help; it is recommended to use synchronized instead of volatile. The example below shows why you must be careful about relying on the "atomicity" of primitive operations.

class CircularSet {
    private int[] array;
    private int len;
    private int index = 0;

    public CircularSet(int size) {
        array = new int[size];
        len = size;
        for (int i = 0; i < size; i++) array[i] = -1;
    }

    public synchronized void add(int i) {
        array[index] = i;
        index = ++index % len;
    }

    public synchronized boolean contains(int val) {
        for (int i = 0; i < len; i++) {
            if (array[i] == val) return true;
        }
        return false;
    }
}

public class SerialNumberChecker {
    private static final int SIZE = 10;
    private static CircularSet serials = new CircularSet(1000);
    private static ExecutorService exec = Executors.newCachedThreadPool();

    static class SerialChecker implements Runnable {
        @Override
        public void run() {
            while (true) {
                int serial = SerialNumberGenerator.nextSerialNumber();
                if (serials.contains(serial)) {
                    System.out.println("Duplicate: " + serial);
                    System.exit(0);
                }
                serials.add(serial);
            }
        }
    }

    public static void main(String[] args) {
        // Several threads read and write SerialNumberGenerator.serialNumber concurrently;
        // if serialNumber++ were atomic, no duplicate would ever be found.
        for (int i = 0; i < SIZE; i++) exec.execute(new SerialChecker());
    }
}
public class SerialNumberGenerator {
    // volatile guarantees the visibility of serialNumber (each new value is flushed to main memory)
    private static volatile int serialNumber = 0;

    public static int nextSerialNumber() {
        // serialNumber++ -- do you think it is atomic?
        return serialNumber++;
    }
}
Output (sample): Duplicate: 3954

The example above shows that volatile on a primitive variable cannot guarantee atomicity. So:
1. In general, prefer synchronized over volatile.
2. The ++ operation is not atomic; it is the typical combination of a read and a write, as the bytecode below shows.

public class Atomicity {
    int i;

    void f1() { i++; }

    void f2() { i += 3; }
}

The corresponding bytecode (javap -c Atomicity):

void f1();
  Code:
   0: aload_0
   1: dup
   2: getfield #2    // Field i:I   -- first, get the field
   5: iconst_1
   6: iadd
   7: putfield #2    // Field i:I   -- several steps later, put the result back
  10: return

void f2();
  Code:
   0: aload_0
   1: dup
   2: getfield #2    // Field i:I   -- first, get the field
   5: iconst_3
   6: iadd
   7: putfield #2    // Field i:I   -- several steps later, put the result back
  10: return
Implementation of atomicity

How the CPU implements atomic operations:

Bus lock
When a thread performs i++ on CPU1, the processor can lock the bus that carries communication between system memory and the CPUs, ensuring that no other task can change the value of i in main memory while CPU1 performs the i++ operation. However, while the bus is locked, no other CPU can access memory at all, which is expensive.

Cache lock
When a thread performs i++ on CPU1, instead of locking the whole bus, the processor locks only the cache line holding that memory address: only CPU1's result of i++ is written back to main memory, other tasks (for example a concurrent i++ on CPU2) are prevented from changing the data (cache coherence), and the copies cached by other CPUs are invalidated.

Java implements atomicity using a CAS loop.
Pseudo-code (a runnable AtomicInteger version follows it):

for (;;) {
    currentValue = getValue();  // read the current value; this read itself is atomic
    // replace the value with currentValue + 1 only if it is still equal to currentValue,
    // i.e. if no other thread changed it in the meantime
    boolean success = compareAndSet(currentValue, currentValue + 1);

    if (success) {
        break;
    }
}
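As a concrete, runnable version of the pseudo-code above (a sketch, not from the original article), the same retry loop can be written against java.util.concurrent.atomic.AtomicInteger, whose compareAndSet is backed by a CPU-level CAS instruction:

import java.util.concurrent.atomic.AtomicInteger;

public class CasIncrement {
    private static final AtomicInteger value = new AtomicInteger(0);

    // equivalent to value.incrementAndGet(), spelled out as an explicit CAS retry loop
    static int increment() {
        for (;;) {
            int current = value.get();                 // atomic volatile read
            int next = current + 1;
            if (value.compareAndSet(current, next)) {  // succeeds only if nobody changed the value in between
                return next;
            }
            // otherwise another thread won the race; loop and try again
        }
    }

    public static void main(String[] args) {
        System.out.println(increment());  // 1
        System.out.println(increment());  // 2
    }
}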
volatile Memory Semantics
public class VolatileTest {
    volatile long i = 0;

    private long get() {
        return i;
    }

    private void set(long i) {
        this.i = i;
    }

    private void addOne() {
        i++;
    }
}

Equivalent to:


class VolatileTest2 {
    long i = 0;

    private synchronized long get() {  // atomic
        return i;
    }

    private synchronized void set(long i) {  // atomic
        this.i = i;
    }

    private void addOne() {  // i++ is not covered by a single lock, so the compound operation is not atomic
        long tempI = get();
        tempI += 1L;
        set(tempI);
    }
}

Summary of volatile:
Atomicity: reads and writes of a volatile variable are atomic, but compound operations such as ++i are not.
Visibility: a read of a volatile variable always sees the value written by the last write.

Memory semantics of a volatile write: the thread's local (working) copy is flushed to main memory (the same as the memory semantics of releasing a lock).
Memory semantics of a volatile read: the thread's local copy is invalidated and the value is read from main memory (the same as the memory semantics of acquiring a lock).

Comparison of synchronized and volatile

What synchronized and volatile have in common:
Both guarantee the visibility of data (reads go to main memory).

Disadvantages of synchronized:
1. synchronized triggers lock contention, which causes context switches and hurts performance; volatile does not.
2. Because of lock contention, synchronized can cause deadlock, starvation, and other multithreading problems; volatile cannot.

Disadvantages of volatile:
1. volatile guarantees visibility but not atomicity (e.g. i++); synchronized guarantees visibility and atomicity at the same time.
2. volatile can only be applied at the variable level, while synchronized can be used much more widely (methods and blocks).

Comparison of Lock (explicit lock) and synchronized (implicit lock): http://www.javaperformancetuning.com/news/qotm051.shtml

tryLock() acquires the lock without blocking, optionally with a wait timeout.
synchronized follows a pessimistic-locking concurrency policy and acquires an exclusive lock: once a thread enters a synchronized block and takes the lock, any other thread that wants the same lock has to wait in the blocked state; blocking triggers context switches, and the more threads compete, the more frequent the context switches become.
Lock implementations can follow an optimistic-locking policy based on the CAS algorithm (e.g. tryLock), which does not force the thread to block.

lock.lockInterruptibly(): unlike synchronized, a thread waiting for the lock can be interrupted, catch the InterruptedException, and give up the lock (see the sketch after this list).

When an exception is thrown, the finally block of an explicit lock gives a place to clean up. Lock gives us finer-grained control.
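A minimal sketch (class and thread names are illustrative, not from the original article) of interruptible lock acquisition with lockInterruptibly(): the second thread, blocked while waiting for the lock, is interrupted and handles InterruptedException instead of waiting forever:

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class InterruptibleLocking {
    private static final Lock lock = new ReentrantLock();

    public static void main(String[] args) throws InterruptedException {
        lock.lock();  // the main thread holds the lock and never releases it

        Thread waiter = new Thread(() -> {
            try {
                lock.lockInterruptibly();  // blocks, but stays responsive to interruption
                try {
                    System.out.println("got the lock");
                } finally {
                    lock.unlock();
                }
            } catch (InterruptedException e) {
                System.out.println("interrupted while waiting for the lock, giving up");
            }
        });
        waiter.start();

        Thread.sleep(1000);
        waiter.interrupt();  // with synchronized, a thread blocked on the monitor could not be interrupted like this
        waiter.join();
    }
}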

JDK Atomic classes

Atomic operation: an operation, or a group of operations, that cannot be interrupted.

AtomicInteger, AtomicLong, AtomicReference

In general it is still recommended to use synchronized or Lock; the Atomic classes are intended for special cases, as described in the JDK documentation. An illustrative use is sketched below.
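As a sketch of how an Atomic class would remove the race in the earlier SerialNumberGenerator example (an assumption about the fix, not code from the original article), getAndIncrement() performs the read-modify-write as a single atomic CAS operation:

import java.util.concurrent.atomic.AtomicInteger;

public class AtomicSerialNumberGenerator {
    private static final AtomicInteger serialNumber = new AtomicInteger(0);

    public static int nextSerialNumber() {
        // getAndIncrement() is atomic, so two threads can never observe the same serial number
        return serialNumber.getAndIncrement();
    }
}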
