Analyzing the implementation of the Java volatile keyword from the root


1. Analysis overview
    1. Related concepts of the memory model
    2. Three concepts in concurrent programming
    3. Java memory model
    4. In-depth analysis of the volatile keyword
    5. Scenarios for using the volatile keyword
2. Related concepts of the memory model

The cache consistency problem. A variable that is accessed by multiple threads is commonly called a shared variable.

That is, if copies of a variable exist in the caches of multiple CPUs (which is typical in multithreaded programming), those copies can become inconsistent; this is the cache inconsistency problem.

To address the cache inconsistency problem, there are generally two solutions:

    • Adding a LOCK# signal on the bus
    • Using a cache coherence protocol

Both approaches are provided at the hardware level.

Approach 1 has a problem: while the bus is locked, other CPUs cannot access memory at all, which makes it inefficient.

Cache coherence protocols. The best known is Intel's MESI protocol, which guarantees that the copies of a shared variable held in each cache stay consistent. The core idea: when a CPU writes a piece of data and finds that the variable being written is a shared variable, that is, a copy of it exists in other CPUs' caches, it signals the other CPUs to invalidate their cache line for that variable. When another CPU later needs to read the variable, it finds that the cache line holding it is invalid and re-reads the value from main memory.

3. Three concepts in concurrent programming

In concurrent programming, we typically encounter the following three problems: atomicity, visibility, and ordering.

3.1 Atomicity

Atomicity: an operation, or a group of operations, is either executed completely, with the execution not interrupted by any other factor, or not executed at all.

3.2 Visibility

Visibility means that when multiple threads access the same variable and one thread modifies its value, the other threads can immediately see the modified value.

3.3 Ordering

Ordering: the program executes in the order in which the code is written.

If, in code order, statement 1 appears before statement 2, will the JVM guarantee that statement 1 actually executes before statement 2? Not necessarily. Why? Instruction reordering may occur here.

What is instruction reordering? In general, the processor (and the compiler) may reorder the input code in order to improve execution efficiency. It does not guarantee that the individual statements of the program execute in exactly the order written in the code, but it does guarantee that the final result of execution is the same as the result of executing the code in order.

Instruction reordering does not affect the result of single-threaded execution, but it can affect the correctness of concurrent execution across threads, as the sketch below illustrates.
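
The following is a minimal sketch of this effect (the class and field names are invented for illustration and are not from the original article): the two writes in writer() have no data dependency on each other, so they may be reordered without changing the single-threaded result, yet a concurrently running reader can observe the difference.

    // Hypothetical example: statement 1 and statement 2 have no data dependency,
    // so the compiler or CPU is free to reorder them. Within writer() alone the
    // final state is the same either way, but a concurrent reader() that sees
    // flag == true may still see a == 0 if the writes were reordered.
    public class ReorderSketch {
        int a = 0;
        boolean flag = false;

        void writer() {
            a = 1;        // statement 1
            flag = true;  // statement 2: may be reordered before statement 1
        }

        void reader() {
            if (flag) {
                System.out.println(a);  // may print 0 if the writes were reordered
            }
        }
    }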

In other words, for a concurrent program to execute correctly, atomicity, visibility, and ordering must all be ensured. If any one of them is not guaranteed, the program may run incorrectly.

4. Java memory model

The Java Virtual Machine specification attempts to define a Java memory model (Java Memory Model, JMM) to mask the differences in memory access between hardware platforms and operating systems, so that Java programs achieve consistent memory-access behavior on all platforms. What does the Java memory model prescribe? It defines the access rules for variables in a program and, to a large extent, the order in which the program executes. Note that, for better execution performance, the Java memory model does not prevent the execution engine from using the processor's registers or caches to speed up instruction execution, nor does it prevent the compiler from reordering instructions. In other words, the Java memory model also has the cache consistency problem and the instruction reordering problem.

The Java memory model stipulates that all variables reside in main memory (similar to the physical memory mentioned earlier) and that each thread has its own working memory (similar to the caches above). All of a thread's operations on a variable must be performed in its working memory, not directly on main memory, and a thread cannot access the working memory of other threads.

4.1 Atomicity

In Java, reads of and assignments to variables of primitive data types are atomic operations, meaning these operations cannot be interrupted: they are either executed completely or not executed at all.

Consider which of the following operations are atomic:

    1. x = 10;      Statement 1
    2. y = x;       Statement 2
    3. x++;         Statement 3
    4. x = x + 1;   Statement 4

In fact, only statement 1 is an atomic operation, and the other three statements are not atomic in nature.

That is, only simple reads and assignments (and the assignment must be of a literal number to a variable; assigning one variable to another is not an atomic operation) are atomic operations.

As can be seen from the above, the Java memory model only guarantees that basic reads and assignments are atomic. If you need atomicity over a wider range of operations, you can achieve it with synchronized or Lock.

4.2 Visibility

For visibility, Java provides the volatile keyword to guarantee visibility.

When a shared variable is modified by volatile, it is guaranteed that the modified value is immediately flushed to main memory, and when other threads need to read it, they read the new value from main memory.

An ordinary shared variable does not guarantee visibility, because when an ordinary shared variable is modified it is indeterminate when the new value is written back to main memory; when other threads read it they may still see the old value, so visibility cannot be guaranteed.

Visibility can also be ensured with synchronized and Lock: they guarantee that only one thread acquires the lock and executes the synchronized code at a time, and that modifications to variables are flushed to main memory before the lock is released. Visibility is therefore guaranteed.

4.3 Ordering

In the Java memory model, the compiler and processor are allowed to reorder instructions. The reordering does not affect the execution of a single-threaded program, but it can affect the correctness of multithreaded concurrent execution.

In Java, you can use the volatile keyword to ensure a certain degree of "ordering" (it prohibits instruction reordering). Ordering can also be maintained with synchronized and Lock: they guarantee that only one thread executes the synchronized code at a time, which is equivalent to letting threads execute the synchronized code sequentially, and this naturally guarantees ordering.

In addition, the Java memory model has some innate "ordering" that holds without any extra means; this is often referred to as the happens-before principle. If the ordering of two operations cannot be derived from the happens-before principle, their ordering is not guaranteed, and the virtual machine may reorder them arbitrarily.

The happens-before (precedence) rules are introduced in detail below:

    1. Program order rule: within a thread, operations written earlier in code order happen-before operations written later
    2. Locking rule: an unlock operation happens-before a subsequent lock operation on the same lock
    3. Volatile variable rule: a write to a volatile variable happens-before a subsequent read of that variable
    4. Transitivity rule: if operation A happens-before operation B, and operation B happens-before operation C, then operation A happens-before operation C
    5. Thread start rule: the start() method of a Thread object happens-before every action of that thread
    6. Thread interruption rule: a call to a thread's interrupt() method happens-before the interrupted thread's code detects that the interruption occurred
    7. Thread termination rule: all operations in a thread happen-before the detection of that thread's termination; we can detect that a thread has terminated by the return of Thread.join() or by Thread.isAlive() returning false
    8. Object finalization rule: the completion of an object's initialization happens-before the start of its finalize() method

Of these 8 rules, the first 4 rules are more important, and the last 4 rules are obvious.

Let's explain the first 4 rules:

    1. For the program order rule, my understanding is that the execution of the program code appears to be ordered within a single thread. Note that although this rule says "operations written earlier happen-before operations written later", this refers to the order in which the program appears to execute, because the virtual machine may reorder the program's instructions. Even though it reorders, the final result is consistent with the result of sequential execution, since only instructions without data dependencies are reordered. So the way to understand it is: within a single thread, program execution appears to be in order. In fact, this rule only guarantees the result of execution within a single thread; it says nothing about the correctness of execution across multiple threads.
    2. The second rule is also easy to understand: whether in a single thread or across threads, if the same lock is currently locked, it must be released before another lock operation on it can proceed.
    3. The third rule is the more important one, and it is the focus of the rest of this article. The intuitive reading is: if one thread first writes a volatile variable and another thread then reads it, the write happens-before the read (see the sketch after this list).
    4. The fourth rule simply states that the happens-before relation is transitive.
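
As a concrete illustration of how rules 1, 3, and 4 combine (a hedged sketch; the class and field names are invented for this example and do not come from the original article):

    // The program-order rule orders the write to data before the volatile write to ready;
    // the volatile variable rule orders the volatile write before a later volatile read;
    // the transitivity rule then chains these together, so a reader that observes
    // ready == true is guaranteed to also see data == 42, even though data is not volatile.
    public class HappensBeforeSketch {
        int data = 0;                   // plain (non-volatile) field
        volatile boolean ready = false;

        void writer() {                 // executed by one thread
            data = 42;    // (1) happens-before (2) by the program-order rule
            ready = true; // (2) volatile write
        }

        void reader() {                 // executed by another thread
            if (ready) {                // (3) volatile read: (2) happens-before (3)
                System.out.println(data);  // prints 42 by transitivity: (1) -> (2) -> (3)
            }
        }
    }
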
5. In-depth analysis of the volatile keyword

5.1 Two-layer semantics of the volatile keyword

Once a shared variable (an instance field or a static field of a class) is modified by volatile, it has two layers of semantics:

    1. Visibility of the variable is guaranteed: when one thread modifies its value, the new value is immediately visible to other threads.
    2. Instruction reordering is prohibited.

For visibility, first look at a piece of code; suppose thread 1 executes first and then thread 2 executes:

    boolean stop = false;   // shared flag

    // Thread 1
    while (!stop) {
        doSomething();
    }

    // Thread 2
    stop = true;

This is a typical piece of code that many people may use to stop a thread. But will this code always work correctly, that is, will it always stop the thread? Not necessarily. Most of the time it probably will, but it may also fail to stop the thread (the probability of this is very small, but once it happens the result is an infinite loop).

The following explains why this code may fail to stop the thread. As explained earlier, each thread has its own working memory while it runs, so when thread 1 runs it copies the value of the stop variable into its working memory.

Then, when thread 2 changes the value of the stop variable but, before writing it back to main memory, goes off to do something else, thread 1 keeps looping because it does not know that thread 2 has changed the stop variable.

But things are different once the variable is declared volatile:

    • First: using the volatile keyword forces the modified value to be written to main memory immediately;
    • Second: using the volatile keyword means that when thread 2 modifies the variable, the cache line for stop in thread 1's working memory is invalidated (at the hardware level, the corresponding cache line in the CPU's L1 or L2 cache is invalidated);
    • Third: because the cache line for stop in thread 1's working memory is invalid, thread 1 reads the value of stop from main memory the next time it reads it.

So when thread 2 modifies the value of stop (this actually involves two steps: modifying the value in thread 2's working memory and then writing the modified value back to main memory), it invalidates the cache line for stop in thread 1's working memory. When thread 1 next reads the variable and finds its cache line invalid, it waits for the corresponding main-memory address to be updated and then reads the latest value from main memory.

Then thread 1 reads the latest correct value.
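
Putting this together, a minimal sketch of the corrected pattern might look as follows (the class and method names are made up for illustration and are not from the original article):

    // Declaring the flag volatile guarantees that thread 2's write is flushed to main
    // memory and that thread 1's cached copy is invalidated, so the loop is guaranteed
    // to observe the change and terminate.
    public class StopFlagSketch {
        private volatile boolean stop = false;

        public void runWorker() {       // executed by thread 1
            while (!stop) {
                doSomething();
            }
        }

        public void requestStop() {     // executed by thread 2
            stop = true;
        }

        private void doSomething() {
            // placeholder for the actual work
        }
    }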

5.2 Does volatile guarantee atomicity?

volatile does not guarantee atomicity; see the example below.

    public class Test {
        public volatile int inc = 0;

        public void increase() {
            inc++;
        }

        public static void main(String[] args) {
            final Test test = new Test();
            for (int i = 0; i < 10; i++) {
                new Thread() {
                    public void run() {
                        for (int j = 0; j < 1000; j++)
                            test.increase();
                    }
                }.start();
            }

            while (Thread.activeCount() > 1)  // ensure all the preceding threads have finished
                Thread.yield();
            System.out.println(test.inc);
        }
    }

What do you think this program outputs? Some readers may think it is 10000. But if you actually run it, you will find that the result differs from run to run and is always a number less than 10000.

There is a misunderstanding here. It is true that the volatile keyword ensures visibility, but the program above fails on atomicity. Visibility only guarantees that the most recent value is read each time; volatile cannot guarantee that operations on the variable are atomic.

As mentioned earlier, the increment operation is not atomic: it consists of reading the variable's original value, adding 1, and writing the result back to working memory. These three sub-operations may be split up and interleaved, which can lead to the following situation:

Suppose the value of the variable inc is 10 at some moment.

Thread 1 increments the variable: it reads the original value of inc, and is then blocked;

Thread 2 then increments the variable: it also reads the original value of inc. Because thread 1 only read inc and did not modify it, the cache line for inc in thread 2's working memory is not invalidated, so thread 2 reads the value of inc directly from main memory, finds that it is 10, adds 1, writes 11 into its working memory, and finally writes it back to main memory.

Then thread 1 resumes and adds 1. Since it has already read the value of inc, note that at this point the value of inc in thread 1's working memory is still 10, so thread 1 adds 1 to inc, writes 11 into its working memory, and finally writes it back to main memory.

So, after the two threads each performed one increment, inc only increased by 1.

At this point some readers may object: doesn't volatile guarantee that when a variable is modified, the cache line is invalidated and other threads then read the new value? Yes, that is correct, and it is exactly the volatile variable rule of the happens-before rules above. But note that thread 1 was blocked after reading the variable and had not yet modified inc. So although volatile guarantees that thread 2's read of inc gets the value from main memory, thread 1 had not modified anything yet, so thread 2 could not possibly see a modified value.

The root cause is that the increment operation is not atomic, and volatile does not make arbitrary operations on a variable atomic.

Changing the code above to any of the following achieves the desired result:

Using synchronized:

    public class Test {
        public int inc = 0;

        public synchronized void increase() {
            inc++;
        }

        public static void main(String[] args) {
            final Test test = new Test();
            for (int i = 0; i < 10; i++) {
                new Thread() {
                    public void run() {
                        for (int j = 0; j < 1000; j++)
                            test.increase();
                    }
                }.start();
            }

            while (Thread.activeCount() > 1)  // ensure all the preceding threads have finished
                Thread.yield();
            System.out.println(test.inc);
        }
    }

Using Lock:

    import java.util.concurrent.locks.Lock;
    import java.util.concurrent.locks.ReentrantLock;

    public class Test {
        public int inc = 0;
        Lock lock = new ReentrantLock();

        public void increase() {
            lock.lock();
            try {
                inc++;
            } finally {
                lock.unlock();
            }
        }

        public static void main(String[] args) {
            final Test test = new Test();
            for (int i = 0; i < 10; i++) {
                new Thread() {
                    public void run() {
                        for (int j = 0; j < 1000; j++)
                            test.increase();
                    }
                }.start();
            }

            while (Thread.activeCount() > 1)  // ensure all the preceding threads have finished
                Thread.yield();
            System.out.println(test.inc);
        }
    }

Using AtomicInteger:

    import java.util.concurrent.atomic.AtomicInteger;

    public class Test {
        public AtomicInteger inc = new AtomicInteger();

        public void increase() {
            inc.getAndIncrement();
        }

        public static void main(String[] args) {
            final Test test = new Test();
            for (int i = 0; i < 10; i++) {
                new Thread() {
                    public void run() {
                        for (int j = 0; j < 1000; j++)
                            test.increase();
                    }
                }.start();
            }

            while (Thread.activeCount() > 1)  // ensure all the preceding threads have finished
                Thread.yield();
            System.out.println(test.inc);
        }
    }

Since Java 1.5, the java.util.concurrent.atomic package provides atomic operation classes that encapsulate increment (add 1), decrement (subtract 1), add (add a number), and subtract (subtract a number) operations on primitive data types, and guarantee that these operations are atomic. The atomic classes use CAS (compare-and-swap) to implement atomic operations; CAS is in turn implemented with the CMPXCHG instruction provided by the processor, and the processor executes the CMPXCHG instruction atomically. A rough sketch of such a CAS retry loop is shown below.
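
As an illustration of the CAS idea only (this is not the actual implementation of AtomicInteger; the class and method names are invented), an atomic increment can be built as a compare-and-swap retry loop:

    import java.util.concurrent.atomic.AtomicInteger;

    public class CasIncrementSketch {
        private final AtomicInteger value = new AtomicInteger(0);

        // Read the current value, try to swap in current + 1, and retry if another
        // thread changed the value in between. compareAndSet() only succeeds when the
        // stored value still equals the expected value, and it does so atomically.
        public int incrementWithCas() {
            while (true) {
                int current = value.get();
                int next = current + 1;
                if (value.compareAndSet(current, next)) {
                    return next;
                }
            }
        }
    }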

5.3 Does volatile guarantee ordering?

volatile can guarantee ordering to a certain degree.

The volatile keyword's prohibition of instruction reordering has two layers of meaning:

1) When the program reads or writes a volatile variable, all preceding operations must already have completed, and their results are visible to the subsequent operations;

2) During instruction optimization, statements that appear before an access to a volatile variable cannot be moved after it, and statements that appear after it cannot be moved before it.

As an example:

    // x and y are non-volatile variables
    // flag is a volatile variable

    x = 2;        // Statement 1
    y = 0;        // Statement 2
    flag = true;  // Statement 3
    x = 4;        // Statement 4
    y = -1;       // Statement 5

Because flag is a volatile variable, during instruction reordering statement 3 will not be moved before statement 1 or statement 2, nor will it be moved after statement 4 or statement 5. Note, however, that no ordering is guaranteed between statement 1 and statement 2, or between statement 4 and statement 5.

Moreover, the volatile keyword guarantees that by the time statement 3 executes, statement 1 and statement 2 must have completed, and their results are visible to statements 3, 4, and 5.

5.4 Principle and implementation mechanism of volatile

Here's a look at how volatile guarantees visibility and prohibits instruction reordering.

The following is an excerpt from "In-Depth Understanding of the Java Virtual Machine":

"Comparing the assembly code generated with and without the volatile keyword, adding the volatile keyword results in an extra lock-prefixed instruction."

The lock-prefixed instruction effectively acts as a memory barrier (also called a memory fence), which provides three functions:

    1. It ensures that instruction reordering does not move instructions that follow the memory barrier before it, nor instructions that precede it after it; in other words, by the time the barrier instruction executes, all operations before it have completed;
    2. It forces modifications in the cache to be written to main memory immediately;
    3. In the case of a write, it invalidates the corresponding cache line in other CPUs.
6. Scenarios for using the volatile keyword

The synchronized keyword prevents multiple threads from executing a piece of code at the same time, which can hurt execution efficiency; in some cases volatile performs better than synchronized. Note, however, that volatile cannot replace synchronized, because volatile does not guarantee atomicity. In general, the following two conditions must be met in order to use volatile:

    1. Writes to the variable do not depend on its current value (this rules out operations such as the ++ in the example above)
    2. The variable is not included in an invariant together with other variables

In effect, these conditions state that the valid values that can be written to a volatile variable are independent of any program state, including the variable's current state.

In fact, my understanding is that the two conditions above ensure that the operations involved are atomic, so that a program using the volatile keyword executes correctly under concurrency.

Here are a few scenarios for using volatile in Java.

Status flag

    volatile boolean flag = false;

    while (!flag) {
        doSomething();
    }

    public void setFlag() {
        flag = true;
    }

    volatile boolean inited = false;

    // Thread 1:
    context = loadContext();
    inited = true;

    // Thread 2:
    while (!inited) {
        sleep();
    }
    doSomethingWithConfig(context);

Double-checked locking (double check)

    class Singleton {
        private volatile static Singleton instance = null;

        private Singleton() {
        }

        public static Singleton getInstance() {
            if (instance == null) {
                synchronized (Singleton.class) {
                    if (instance == null)
                        instance = new Singleton();
                }
            }
            return instance;
        }
    }

As for why this is required, please refer to:

Double check in Java (double-check) http://blog.csdn.net/dl88250/article/details/5439024 and http://www.iteye.com/topic/652440
