The previous post in the "Dead Java Concurrency" series, an in-depth analysis of how volatile is implemented, described the characteristics of volatile:
- Visibility: a read of a volatile variable always sees the most recent write to that variable by any thread;
- Atomicity: volatile makes single reads and writes atomic (including 64-bit long and double), but not compound operations such as i++;
- At the bottom layer, the JVM uses memory barriers to implement volatile semantics.
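To make the atomicity caveat concrete, here is a small runnable demo (the class name and thread/iteration counts are my own, not from the original post): `count++` on a volatile field is a read-modify-write compound operation and can lose updates under contention, while `AtomicInteger` never does.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class VolatileNotAtomic {
    // volatile guarantees visibility of count, but count++ is still
    // read-modify-write, so concurrent increments can be lost.
    static volatile int count = 0;
    static final AtomicInteger atomicCount = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[8];
        for (int t = 0; t < threads.length; t++) {
            threads[t] = new Thread(() -> {
                for (int k = 0; k < 10_000; k++) {
                    count++;                       // not atomic: updates may be lost
                    atomicCount.incrementAndGet(); // atomic CAS: never loses updates
                }
            });
            threads[t].start();
        }
        for (Thread th : threads) th.join();
        System.out.println("volatile count = " + count);             // often < 80000
        System.out.println("atomic count   = " + atomicCount.get()); // always 80000
    }
}
```

Running this usually prints a volatile count below 80000, while the atomic count is always exactly 80000.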
Below, I examine volatile from two directions: the happens-before principle and volatile's memory semantics.
Volatile and Happens-before
In the blog post "Dead Java Concurrency" -- The Happens-before Principle of the Java Memory Model, I explained that happens-before is the main basis for deciding whether there is a data race and whether code is thread-safe; it guarantees visibility in a multithreaded environment. Let's use a classic example to analyze the happens-before relationships between the reads and writes of a volatile variable.
```java
public class VolatileTest {
    int i = 0;
    volatile boolean flag = false;

    // Thread A
    public void write() {
        i = 2;          // 1
        flag = true;    // 2
    }

    // Thread B
    public void read() {
        if (flag) {     // 3
            System.out.println("---i = " + i);  // 4
        }
    }
}
```
According to the happens-before principle, the following relationships hold for the program above:

- By the program order rule: 1 happens-before 2, and 3 happens-before 4;
- By the volatile rule: 2 happens-before 3;
- By transitivity: 1 happens-before 4.
Since operation 1 happens-before operation 4, the write in operation 1 must be visible to operation 4. Some readers may ask: couldn't operations 1 and 2 be reordered? If you have read my earlier post, you know that volatile not only guarantees visibility but also prohibits such reordering. So after thread A writes the volatile variable, all shared variables visible to thread A before that write become visible to thread B as soon as thread B reads the same volatile variable.
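The guarantee derived above can be exercised in a runnable sketch (the class and field names are illustrative). Once thread B's volatile read at step 3 observes true, step 4 is guaranteed to see i == 2:

```java
public class VolatileVisibilityDemo {
    static int i = 0;
    static volatile boolean flag = false;
    static int observed = -1;

    public static void main(String[] args) throws InterruptedException {
        Thread b = new Thread(() -> {
            while (!flag) { }   // 3: volatile read, spin until it sees true
            observed = i;       // 4: guaranteed to see 2, since 1 happens-before 4
        });
        b.start();
        i = 2;                  // 1: normal write
        flag = true;            // 2: volatile write publishes i
        b.join();               // join makes b's writes visible to main
        System.out.println("---i = " + observed);
    }
}
```

Note that if `flag` were not volatile, the spin loop might never terminate and the read of `i` would not be guaranteed to see 2.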
Memory semantics of volatile and their implementation
In the JMM, threads communicate through shared memory. The memory semantics of volatile are:

- When a volatile variable is written, the JMM immediately flushes the variable's value from the writing thread's local memory to main memory.
- When a volatile variable is read, the JMM invalidates the reading thread's local memory, forcing the read to go directly to main memory.

In short, the write semantics of volatile are a flush straight to main memory, and the read semantics are a read straight from main memory.
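These write/read semantics are what make the common stop-flag pattern correct; a minimal sketch, with a hypothetical class name:

```java
public class StopFlag implements Runnable {
    // The volatile write in stop() is flushed to main memory, and each loop
    // iteration performs a volatile read from main memory, so the worker is
    // guaranteed to observe the stop request and terminate.
    private volatile boolean stopped = false;
    private long iterations = 0;

    @Override
    public void run() {
        while (!stopped) {      // volatile read on every iteration
            iterations++;
        }
    }

    public void stop() {
        stopped = true;         // volatile write, visible to the worker
    }

    public static void main(String[] args) throws InterruptedException {
        StopFlag task = new StopFlag();
        Thread worker = new Thread(task);
        worker.start();
        Thread.sleep(50);
        task.stop();
        worker.join(1000);
        System.out.println("worker alive = " + worker.isAlive());
    }
}
```

Without volatile, the compiler or processor would be free to cache `stopped` in a register and the loop might spin forever.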
So how are these memory semantics implemented? Ordinary variables may be freely reordered, but volatile variables must not be reordered in ways that would change their memory semantics, so the JMM restricts reordering. The volatile reordering rules are:

| First operation | Second: normal read/write | Second: volatile read | Second: volatile write |
|---|---|---|---|
| Normal read/write | | | NO |
| Volatile read | NO | NO | NO |
| Volatile write | | NO | NO |

In words:

- If the first operation is a volatile read, it cannot be reordered regardless of what the second operation is. This ensures that operations after a volatile read are not reordered by the compiler to before it;
- If the second operation is a volatile write, it cannot be reordered regardless of what the first operation is. This ensures that operations before a volatile write are not reordered by the compiler to after it;
- If the first operation is a volatile write and the second is a volatile read, they cannot be reordered.
At the bottom layer, volatile is implemented by inserting memory barriers. However, it is almost impossible for the compiler to find an optimal placement that minimizes the total number of barriers, so the JMM adopts a conservative strategy:
- Insert a StoreStore barrier before each volatile write;
- Insert a StoreLoad barrier after each volatile write;
- Insert a LoadLoad barrier after each volatile read;
- Insert a LoadStore barrier after each volatile read.
The StoreStore barrier guarantees that all normal writes before it are flushed to main memory before the volatile write.

The StoreLoad barrier prevents the volatile write from being reordered with any volatile read or write that may follow it.

The LoadLoad barrier prevents the processor from reordering the volatile read above it with normal reads below it.

The LoadStore barrier prevents the processor from reordering the volatile read above it with normal writes below it.
Let's apply this to the VolatileTest example above:
```java
public class VolatileTest {
    int i = 0;
    volatile boolean flag = false;

    public void write() {
        i = 2;
        flag = true;
    }

    public void read() {
        if (flag) {
            System.out.println("---i = " + i);
        }
    }
}
```
The memory barriers inserted for the volatile write and read in this example are illustrated in the figure below.
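The conservative barrier placement can also be sketched as comments in the code itself (the barrier lines are explanatory comments, not real Java instructions):

```java
public class VolatileTest {
    int i = 0;
    volatile boolean flag = false;

    public void write() {
        i = 2;
        // StoreStore barrier: flush the normal write to i before the volatile write
        flag = true;
        // StoreLoad barrier: prevent reordering with any later volatile read/write
    }

    public void read() {
        boolean f = flag;
        // LoadLoad barrier: later normal reads cannot move above the volatile read
        // LoadStore barrier: later normal writes cannot move above the volatile read
        if (f) {
            System.out.println("---i = " + i);
        }
    }
}
```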
This insertion strategy is very conservative. In practice, as long as the write-read memory semantics of volatile are preserved, the compiler can optimize for the specific situation and omit unnecessary barriers. Consider the following example (excerpted from The Art of Java Concurrency Programming):
```java
public class VolatileBarrierExample {
    int a = 0;
    volatile int v1 = 1;
    volatile int v2 = 2;

    void readAndWrite() {
        int i = v1;   // volatile read
        int j = v2;   // volatile read
        a = i + j;    // normal write
        v1 = i + 1;   // volatile write
        v2 = j * 2;   // volatile write
    }
}
```
The unoptimized barrier placement is illustrated below:
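As a textual stand-in for that diagram, the conservative placement can be written out with the barriers numbered 1 through 8 (this numbering is my reading of the book's figure; the barrier comments are not real Java instructions):

```java
public class VolatileBarrierExample {
    int a = 0;
    volatile int v1 = 1;
    volatile int v2 = 2;

    void readAndWrite() {
        int i = v1;   // volatile read
        // 1: LoadLoad barrier
        // 2: LoadStore barrier
        int j = v2;   // volatile read
        // 3: LoadLoad barrier
        // 4: LoadStore barrier
        a = i + j;    // normal write
        // 5: StoreStore barrier
        v1 = i + 1;   // volatile write
        // 6: StoreLoad barrier
        // 7: StoreStore barrier
        v2 = j * 2;   // volatile write
        // 8: StoreLoad barrier
    }
}
```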
Let's analyze which of these memory barriers are superfluous:

1: Must be preserved.

2: Prohibits reordering of the volatile read above it with the normal write below it; but the normal write cannot be reordered past the second volatile read anyway, so it can never reach above the first volatile read. This barrier can be omitted.

3: No normal reads of shared variables follow, so it can be omitted.

4: Preserved.

5: Preserved.

6: A volatile write follows immediately, so it can be omitted.

7: Preserved.

8: Preserved.
So barriers 2, 3, and 6 can be omitted, and the optimized placement is as follows:
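As a stand-in for the optimized diagram, here is a sketch of the remaining barriers (the class name is mine, added to avoid clashing with the original example; barrier comments are not real Java instructions):

```java
public class VolatileBarrierExampleOptimized {
    int a = 0;
    volatile int v1 = 1;
    volatile int v2 = 2;

    void readAndWrite() {
        int i = v1;   // volatile read
        // 1: LoadLoad barrier (kept)
        int j = v2;   // volatile read
        // 4: LoadStore barrier (kept)
        a = i + j;    // normal write
        // 5: StoreStore barrier (kept)
        v1 = i + 1;   // volatile write
        // 7: StoreStore barrier (kept)
        v2 = j * 2;   // volatile write
        // 8: StoreLoad barrier (kept)
    }
}
```

The book further notes that on x86, which only reorders stores with later loads, everything except the final StoreLoad barrier can be dropped, so volatile reads there are essentially free while volatile writes pay the StoreLoad cost.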
Resources
- Fang Tengfei: The Art of Java Concurrency Programming
- "Dead Java Concurrency" -- Analysis of volatile in the Java Memory Model