Java Multithreading: volatile memory semantics

Source: Internet
Author: User
Tags: visibility, volatile


  The volatile keyword is the most lightweight synchronization mechanism the Java virtual machine provides. Because volatile is closely tied to the Java memory model, we first recap the Java memory model before introducing the keyword (it was also covered in the previous blog post).

1. Java memory model (JMM)

The JMM is a specification that defines the access rules for shared variables. It exists to solve the thread-safety problems caused by inconsistencies between each thread's local (working) memory and shared main memory, and by compiler and processor instruction reordering, thereby guaranteeing the atomicity, visibility, and ordering needed for multithreaded programming.
The JMM specifies that all variables are stored in main memory, and that each thread has its own working memory holding copies of the main-memory variables that thread uses. All of a thread's operations on a variable must be performed in its working memory; a thread cannot read or write main memory directly.
Any transfer of variable values between threads must go through main memory.

The JMM defines the interaction between main memory and working memory in terms of 8 operations:
1) lock: acts on main memory; marks a variable as exclusively owned by one thread.
2) unlock: acts on main memory; releases a variable that is in the locked state.
3) read: acts on main memory; transfers the value of a variable from main memory to the thread's working memory.
4) load: acts on working memory; puts the value obtained by read into the variable's copy in working memory.
5) use: acts on working memory; passes the value of a variable in working memory to the execution engine.
6) assign: acts on working memory; assigns a value received from the execution engine to a variable in working memory.
7) store: acts on working memory; transfers the value of a variable in working memory to main memory.
8) write: acts on main memory; puts the value obtained by store into the variable in main memory.

These 8 operations, together with the rules that constrain them, determine which memory accesses are safe under concurrency, but reasoning with them directly is cumbersome. JDK 1.5 therefore introduced the happens-before rules as a simpler way to decide whether an access is thread-safe.

The happens-before rules can be understood as the core of the JMM. Happens-before constrains the order of two operations, which may be in the same thread or in two different threads.
The happens-before rule: if one operation happens-before another, the result of the first operation is visible to the second. This does not mean the processor must actually execute them in happens-before order; as long as the execution result is unchanged, the order can be optimized arbitrarily. The happens-before rules were introduced in an earlier post, so they are not repeated here (http://www.cnblogs.com/gdy1993/p/9117331.html).

The JMM is only a specification; it is ultimately implemented by the Java virtual machine, the compiler, and the processor working together, and memory barriers are the link between them.
Java encapsulates these underlying implementations and controls, providing constructs such as synchronized, Lock, and the volatile keyword to address multithreading safety.

2. Volatile keyword

 (1) The visibility guarantee of volatile

  Before introducing the volatile keyword, let's look at a piece of code:

// Shared flag
boolean stop = false;

// Thread 1
while (!stop) {
    doSomething();
}

// Thread 2
stop = true;

There are two threads. Thread 1 keeps executing doSomething() while stop == false; thread 2 sets stop to true, expecting thread 1 to break out of the loop. Many people use this pattern to stop a thread, but it is not safe. Because stop is an ordinary variable, a modification made by thread 2 is not necessarily perceived by thread 1 immediately: thread 2's write may sit in its own working memory without being flushed to main memory, and even after it is flushed, thread 1 may keep using the stale copy in its own working memory. The thread may therefore never be interrupted. The chance of this is small, but the consequences when it happens are serious.

Declaring the variable volatile avoids this problem, which is also the first important semantic of volatile:

A volatile-modified variable is guaranteed to be visible across threads: when one thread modifies its value, the new value is immediately visible to other threads.

How volatile guarantees visibility:

When a thread modifies a volatile variable, the new value is forced to be flushed to main memory, and the copies of that variable cached in other threads' working memory are invalidated. When those threads next operate on the variable, they must reload it from main memory.
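As a minimal sketch of the fix (the class and method names here are illustrative, not from the original post), declaring the flag volatile makes the stop pattern safe:

```java
// Demonstrates the volatile fix for the stop-flag pattern above.
public class StopFlagDemo {
    // volatile: thread 2's write is flushed to main memory and the
    // worker's cached copy is invalidated, so the loop observes it.
    private static volatile boolean stop = false;

    public static boolean demo() throws InterruptedException {
        stop = false;
        Thread worker = new Thread(() -> {
            while (!stop) {
                // busy-wait; each iteration re-reads stop from main memory
            }
        });
        worker.start();
        Thread.sleep(50);  // let the worker spin for a moment
        stop = true;       // visible to the worker because stop is volatile
        worker.join();     // returns once the worker observes stop == true
        return !worker.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("worker stopped: " + demo());
    }
}
```

Without volatile, the JIT compiler is free to hoist the read of stop out of the loop, and the worker may spin forever.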

 (2) Does volatile guarantee atomicity?

 First look at this piece of code (from the book Understanding the Java Virtual Machine):

public class VolatileTest {
    public static volatile int race = 0;

    public static void increase() {
        race++;
    }

    private static final int THREAD_COUNT = 20;

    public static void main(String[] args) {
        Thread[] threads = new Thread[THREAD_COUNT];
        for (int i = 0; i < THREAD_COUNT; i++) {
            threads[i] = new Thread(new Runnable() {
                @Override
                public void run() {
                    for (int j = 0; j < 10000; j++) {
                        increase();
                    }
                }
            });
            threads[i].start();
        }
        // Wait until only the main thread is left running
        while (Thread.activeCount() > 1) {
            Thread.yield();
        }
        System.out.println(race); // almost always < 200000
    }
}

race is a volatile shared variable; 20 threads each increment it 10,000 times. If volatile guaranteed atomicity, the final value of race would always be 200,000. In practice, however, the printed value is almost always less than 200,000, which proves that volatile does not make compound operations on a shared variable atomic. The reason is as follows:

race++ is not a single operation: it is a read of race, an increment, and a write back. Suppose thread 1 reads the value of race and then its CPU time slice ends. Thread 2 then reads the same value, increments it, and flushes the result to main memory. When thread 1 resumes, it still holds the value it read earlier, which is now stale; it increments that stale value and flushes it to main memory, overwriting thread 2's update. One increment is lost. This is also why volatile can only guarantee that a read sees a relatively recent value, not that a compound update is safe.
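For comparison, here is a sketch of the standard fix (not part of the original post): replace the volatile int with java.util.concurrent.atomic.AtomicInteger, whose incrementAndGet() performs the read-modify-write as one atomic step:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Same 20-thread counter as VolatileTest, but with atomic increments.
public class AtomicTest {
    private static final AtomicInteger race = new AtomicInteger(0);
    private static final int THREAD_COUNT = 20;

    public static int run() throws InterruptedException {
        race.set(0);
        Thread[] threads = new Thread[THREAD_COUNT];
        for (int i = 0; i < THREAD_COUNT; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 10000; j++) {
                    race.incrementAndGet(); // atomic read-modify-write, no lost updates
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join(); // wait for every counter thread to finish
        }
        return race.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // always 200000
    }
}
```

Unlike the volatile version, this result is deterministic, because no thread can observe an intermediate state of another thread's increment.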

  (3) The ordering guarantee of volatile

  First look at this piece of code:

// Shared state
boolean initialized = false;

// Thread 1
context = loadContext();
initialized = true;

// Thread 2
while (!initialized) {
    sleep();
}
doSomething(context);

  Thread 2 waits until initialized is true and then uses the context variable; thread 1 loads the context and sets initialized to true once loading completes. But initialized is only an ordinary variable, and an ordinary variable only guarantees that code depending on the assignment within the same thread sees the correct value; it does not guarantee that the order of the writes observed by other threads matches program order. Thread 1's two writes may be reordered, so it is possible that when initialized becomes true, the context has not actually been stored yet, while thread 2, seeing initialized == true, goes on to execute doSomething(context). The result can be very strange.

The second semantic of volatile is the prohibition of reordering:

A write to a volatile variable is not reordered with any read or write operation that precedes it;

A read of a volatile variable is not reordered with any read or write operation that follows it.
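A classic place where these two rules matter is double-checked locking (a standard illustration, not taken from the original post): without volatile, the write `instance = new Singleton()` can be reordered so that the reference is published before the constructor's writes complete, and another thread may observe a half-constructed object.

```java
// Double-checked locking: correct only because 'instance' is volatile.
public class Singleton {
    // volatile forbids reordering the publication of the reference
    // with the writes performed inside the constructor.
    private static volatile Singleton instance;

    private Singleton() {}

    public static Singleton getInstance() {
        if (instance == null) {                 // first check, no lock
            synchronized (Singleton.class) {
                if (instance == null) {         // second check, under lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}
```

The volatile write at the end of construction and the volatile read in the first check pair up exactly as the two rules above describe.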

  (4) The underlying implementation principle of volatile

  At the bottom level, the Java virtual machine uses memory barriers to implement the volatile semantics.

 For a write to a volatile variable:
① the JVM inserts a release barrier before the operation, which forbids reordering the volatile write with any read or write that precedes it;
② the JVM inserts a store barrier after the operation, which forces the volatile write to be flushed to main memory.
For a read of a volatile variable:
③ the JVM inserts a load barrier before the operation, so that each read of the volatile variable is reloaded from main memory (flushing the processor cache);
④ the JVM inserts an acquire barrier after the operation, which forbids reordering any read or write that follows the volatile read with the read itself.
② and ③ guarantee visibility; ① and ④ guarantee ordering.
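Since JDK 9, java.lang.invoke.VarHandle exposes such barriers directly. The sketch below (illustrative only; a real program should simply declare the field volatile) mimics a volatile write and read by hand on a plain field:

```java
import java.lang.invoke.VarHandle;

// Hand-rolled volatile-style access to a plain field, using explicit fences.
public class FenceDemo {
    private static int plainField; // deliberately NOT volatile

    public static void volatileStyleWrite(int v) {
        VarHandle.releaseFence(); // ①: earlier loads/stores cannot move below
        plainField = v;
        VarHandle.fullFence();    // ②: force the store out before later operations
    }

    public static int volatileStyleRead() {
        int v = plainField;       // ③: load the field
        VarHandle.acquireFence(); // ④: later loads/stores cannot move above
        return v;
    }
}
```

The fence placement mirrors barriers ① through ④ described above; the JIT compiler emits the equivalent hardware instructions automatically for a volatile field.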

 (5) The relationship between the volatile keyword and happens-before

 The volatile rule among the happens-before rules states: a write to a volatile field happens-before every subsequent read of that field.

Consider a writer thread that executes a write() method, setting an ordinary field and then a volatile flag, while a reader thread then executes a read() method, reading the volatile flag and then the ordinary field. By the program-order rule, the ordinary write happens-before the volatile write; by the volatile rule, the volatile write (operation 2) happens-before the volatile read (operation 3); and by transitivity, the ordinary write happens-before the ordinary read. In other words, everything the writer did before updating the volatile variable is visible to the reader after it reads that variable. (The original post illustrated these relationships with a diagram: black arrows for the program-order rule, blue for the volatile rule, red for transitivity.)
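This chain can be checked in code (class and field names here are illustrative): the writer sets a plain field and then the volatile flag; once the reader observes flag == true, the volatile rule plus transitivity guarantee it also sees the plain field's value.

```java
// Safe publication through a volatile flag (happens-before chain).
public class HappensBeforeDemo {
    private static int a = 0;                     // plain field
    private static volatile boolean flag = false; // volatile flag

    public static int demo() throws InterruptedException {
        Thread writer = new Thread(() -> {
            a = 42;      // operation 1 (program order: 1 happens-before 2)
            flag = true; // operation 2 (volatile write)
        });
        writer.start();
        while (!flag) {  // operation 3 (volatile read): 2 happens-before 3
            Thread.yield();
        }
        writer.join();
        return a;        // by transitivity, 1 happens-before this read: sees 42
    }
}
```

Note that a itself is not volatile; its visibility here rides entirely on the happens-before edge created by the volatile flag.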

 
