1. Two-layer semantics of the volatile keyword
Once a shared variable (an instance field or static field of a class) is declared volatile, it carries two layers of semantics:
1) It guarantees visibility when different threads operate on the variable: when one thread modifies the variable's value, the new value is immediately visible to other threads.
2) It forbids instruction reordering.
Consider the following code first, where thread 1 starts executing before thread 2:

Thread 1:

boolean stop = false;
while (!stop) {
    doSomething();
}

Thread 2:

stop = true;
This is a typical pattern that many people use to interrupt a thread. But does this code always work correctly? Is the thread guaranteed to be stopped? Not necessarily. Most of the time it will stop the thread, but there is a small chance it fails to stop it (and if that happens, thread 1 loops forever).
The following explains why this code might fail to stop the thread. As explained earlier, each thread has its own working memory while it runs, so thread 1 copies the value of the stop variable into its working memory. When thread 2 changes the value of stop but has not yet written it back to main memory, and then turns to do something else, thread 1 does not know about thread 2's change and keeps looping.
But with volatile it becomes different:
First: using the volatile keyword forces the modified value to be written to main memory immediately.
Second: using the volatile keyword, when thread 2 modifies stop, the cache line holding stop in thread 1's working memory is invalidated (at the hardware level, the corresponding line in the CPU's L1 or L2 cache is invalidated).
Third: because the cache line holding stop in thread 1's working memory is invalid, thread 1 reads from main memory again the next time it reads the value of stop.
So when thread 2 modifies stop (this involves two operations: modifying the value in thread 2's working memory, then flushing the modified value to main memory), the cache line holding stop in thread 1's working memory is invalidated. When thread 1 next reads stop, it finds its cache line invalid, waits for the corresponding main-memory address to be updated, and then reads the latest value from main memory.
Thus thread 1 reads the latest, correct value.
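To make the explanation above concrete, here is a minimal runnable sketch of the stop-flag pattern with volatile; the class name StopFlagDemo, the runOnce helper, and the sleep/join timeouts are illustrative choices, not from the original text:

```java
// Demonstrates the stop-flag pattern: because `stop` is volatile, the
// worker's read always observes the main thread's write.
public class StopFlagDemo {
    private static volatile boolean stop = false;

    // Spins a worker on `stop`, then sets the flag; returns true if the
    // worker observed the write and terminated.
    static boolean runOnce() throws InterruptedException {
        stop = false;
        Thread worker = new Thread(() -> {
            while (!stop) {        // volatile read: always sees the latest value
                Thread.yield();
            }
        });
        worker.start();

        Thread.sleep(50);          // let the worker spin briefly
        stop = true;               // volatile write: flushed to main memory immediately
        worker.join(5000);         // without volatile, this join could time out
        return !worker.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("worker stopped: " + runOnce());
    }
}
```

Without the volatile modifier, the JIT is allowed to hoist the read of stop out of the loop, so the worker could spin forever even after the flag is set.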
2. Characteristics of volatile
When we declare a shared variable volatile, reads and writes of this variable become special. A good way to understand the volatile feature is to treat a single read/write of a volatile variable as if the read/write were synchronized on the same monitor lock. The sample code below illustrates this with a concrete example:
class VolatileFeaturesExample {
    volatile long vl = 0L;       // a 64-bit long declared volatile

    public void set(long l) {
        vl = l;                  // write of a single volatile variable
    }

    public void getAndIncrement() {
        vl++;                    // compound (read-modify-write) of a volatile variable
    }

    public long get() {
        return vl;               // read of a single volatile variable
    }
}
Suppose multiple threads call the three methods of the program above; it is semantically equivalent to the following program:
class VolatileFeaturesExample {
    long vl = 0L;                          // a 64-bit long normal variable

    public synchronized void set(long l) { // writes to a single normal variable synchronized on the same monitor
        vl = l;
    }

    public void getAndIncrement() {        // normal method call
        long temp = get();                 // call the synchronized read method
        temp += 1L;                        // normal write operation
        set(temp);                         // call the synchronized write method
    }

    public synchronized long get() {       // reads of a single normal variable synchronized on the same monitor
        return vl;
    }
}
As the example above shows, a single read/write of a volatile variable has the same execution effect as a read/write of a normal variable synchronized on the same monitor lock.
The happens-before rule for monitor locks guarantees memory visibility between the thread that releases the monitor and the thread that acquires it, which means that a read of a volatile variable always sees the last write (by any thread) to that volatile variable.
3. The happens-before relationship established by volatile write-read
The above concerns the characteristics of the volatile variable itself. For programmers, however, the effect of volatile on a thread's memory visibility is more important than volatile's own characteristics, and deserves our attention.
Starting with JSR-133, a write to a volatile variable followed by a read of it enables communication between threads.
From the memory-semantics point of view, volatile has the same effect as a monitor lock: a volatile write has the same memory semantics as a monitor release, and a volatile read has the same memory semantics as a monitor acquisition.
Consider the following example code using a volatile variable:
class VolatileExample {
    int a = 0;
    volatile boolean flag = false;

    public void writer() {
        a = 1;              // 1
        flag = true;        // 2
    }

    public void reader() {
        if (flag) {         // 3
            int i = a;      // 4
            // ...
        }
    }
}
Suppose thread A executes the writer() method and thread B executes the reader() method. According to the happens-before rules, the happens-before relationships established by this process are:
1) By the program-order rule, 1 happens-before 2, and 3 happens-before 4.
2) By the volatile rule, 2 happens-before 3.
3) By the transitivity of happens-before, 1 happens-before 4.
A graphical representation of these happens-before relationships is as follows:
In the figure above, each arrow links two nodes and represents one happens-before relationship. Black arrows indicate the program-order rule; orange arrows represent the volatile rule; blue arrows represent the happens-before guarantees obtained by combining these rules.
Here thread A writes a volatile variable and thread B reads the same volatile variable. All shared variables visible to thread A before it writes the volatile variable immediately become visible to thread B after it reads the same volatile variable.
4. Memory semantics of volatile write-read
The memory semantics of a volatile write are as follows:
When a volatile variable is written, the JMM flushes the shared variables in the thread's corresponding local memory to main memory.
Taking the example program VolatileExample above, suppose thread A first executes the writer() method and thread B then executes the reader() method, with flag and a initially cached in the local memory of both threads. The figure below shows the state of the shared variables after thread A performs the volatile write:
As shown in the figure above, after thread A writes the flag variable, the values of the two shared variables it updated in local memory A are flushed to main memory. At this point, the values of the shared variables in local memory A and in main memory agree.
The memory semantics of a volatile read are as follows:
When a volatile variable is read, the JMM invalidates the thread's corresponding local memory. The thread then reads the shared variables from main memory.
The following figure shows the state of the shared variables after thread B reads the same volatile variable:
As shown in the figure above, after the flag variable is read, local memory B has been invalidated. At this point, thread B must read the shared variables from main memory, and its read operation makes the values of the shared variables in local memory B consistent with those in main memory.
If we combine the volatile write and the volatile read: after reader thread B reads a volatile variable, the values of all shared variables that were visible to writer thread A before it wrote that volatile variable immediately become visible to thread B.
To summarize the memory semantics of volatile writes and reads:
Thread A writing a volatile variable is, in essence, thread A sending a message (its modifications to the shared variables) to any thread that will subsequently read that volatile variable.
Thread B reading a volatile variable is, in essence, thread B receiving the message (the modifications made to the shared variables before the volatile write) sent by some previous thread.
Thread A writing a volatile variable and thread B then reading it is, in essence, thread A sending a message to thread B through main memory.
5. Does volatile guarantee atomicity?
From the above we know that the volatile keyword guarantees the visibility of operations, but does volatile guarantee that operations on the variable are atomic?
Let's look at an example:
public class Test {
    public volatile int inc = 0;

    public void increase() {
        inc++;
    }

    public static void main(String[] args) {
        final Test test = new Test();
        for (int i = 0; i < 10; i++) {
            new Thread() {
                public void run() {
                    for (int j = 0; j < 1000; j++)
                        test.increase();
                }
            }.start();
        }

        while (Thread.activeCount() > 1)  // wait for all the threads above to finish
            Thread.yield();
        System.out.println(test.inc);
    }
}
What do you think this program outputs? Some readers may expect 10000. But in fact, running it shows that the result differs from run to run and is always a number less than 10000.
Some readers may then wonder: the code above increments the variable inc, and since volatile guarantees visibility, every thread sees the modified value after inc is incremented, so with 10 threads each performing 1000 increments, the final value of inc should be 1000*10 = 10000.
There is a misunderstanding here. The volatile keyword does guarantee visibility; the program above fails because it cannot guarantee atomicity. Visibility only guarantees that the most recent value is read each time, but volatile cannot guarantee that operations on the variable are atomic.
As mentioned earlier, the increment operation is not atomic: it includes reading the variable's original value, adding 1, and writing the result back to working memory. This means the three sub-operations of the increment may be split up and interleaved with other threads, which can produce the following scenario:
Suppose at some moment the variable inc has the value 10.
Thread 1 begins an increment: it reads the original value of inc, and is then blocked.
Thread 2 then performs an increment: it also reads the original value of inc. Because thread 1 only read inc and did not modify it, the cache line holding inc in thread 2's working memory is not invalidated, so thread 2 reads inc from main memory, finds its value is 10, adds 1, writes 11 into its working memory, and finally writes it back to main memory.
Thread 1 then resumes its increment. Since it has already read the value of inc, note that inc in thread 1's working memory is still 10, so after thread 1 adds 1, the value of inc is 11. It then writes 11 into its working memory and finally back to main memory.
In the end, the two threads each performed one increment, yet inc increased by only 1.
At this point some readers may object: didn't we say earlier that modifying a volatile variable invalidates the other threads' cache lines, so that other threads read the new value? Yes, that is correct; it is the volatile-variable rule among the happens-before rules above. But note that thread 1 was blocked right after reading the variable and had not yet modified inc. So although volatile guarantees that thread 2 reads the value of inc from main memory, thread 1 had not modified it yet, so thread 2 could not possibly see a modified value.
The root cause is that the increment is not atomic, and volatile cannot guarantee that arbitrary operations on a variable are atomic.
The code above can be changed in any of the following ways to achieve the intended effect:
Using synchronized:
public class Test {
    public int inc = 0;

    public synchronized void increase() {
        inc++;
    }

    public static void main(String[] args) {
        final Test test = new Test();
        for (int i = 0; i < 10; i++) {
            new Thread() {
                public void run() {
                    for (int j = 0; j < 1000; j++)
                        test.increase();
                }
            }.start();
        }

        while (Thread.activeCount() > 1)  // wait for all the threads above to finish
            Thread.yield();
        System.out.println(test.inc);
    }
}
Using Lock:
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class Test {
    public int inc = 0;
    Lock lock = new ReentrantLock();

    public void increase() {
        lock.lock();
        try {
            inc++;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        final Test test = new Test();
        for (int i = 0; i < 10; i++) {
            new Thread() {
                public void run() {
                    for (int j = 0; j < 1000; j++)
                        test.increase();
                }
            }.start();
        }

        while (Thread.activeCount() > 1)  // wait for all the threads above to finish
            Thread.yield();
        System.out.println(test.inc);
    }
}
Using AtomicInteger:
import java.util.concurrent.atomic.AtomicInteger;

public class Test {
    public AtomicInteger inc = new AtomicInteger();

    public void increase() {
        inc.getAndIncrement();
    }

    public static void main(String[] args) {
        final Test test = new Test();
        for (int i = 0; i < 10; i++) {
            new Thread() {
                public void run() {
                    for (int j = 0; j < 1000; j++)
                        test.increase();
                }
            }.start();
        }

        while (Thread.activeCount() > 1)  // wait for all the threads above to finish
            Thread.yield();
        System.out.println(test.inc);
    }
}
Since Java 1.5, the java.util.concurrent.atomic package has provided atomic operation classes that encapsulate increment (add 1), decrement (subtract 1), add (plus a number), and subtract (minus a number) operations on the basic data types, guaranteeing that these operations are atomic. The atomic classes implement their atomic operations with CAS (Compare-And-Swap); CAS is in turn implemented with the CMPXCHG instruction provided by the processor, and the processor executes the CMPXCHG instruction atomically.
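As a hedged sketch of the CAS retry loop described above (the class CasIncrementDemo and the incrementWithCas helper are illustrative names; AtomicInteger's own getAndIncrement uses this kind of loop internally):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of a CAS (Compare-And-Swap) retry loop, the mechanism behind
// the atomic classes' increment operations.
public class CasIncrementDemo {
    static int incrementWithCas(AtomicInteger value) {
        while (true) {
            int current = value.get();              // read the current value
            int next = current + 1;
            // compareAndSet succeeds only if `value` still equals `current`,
            // i.e. no other thread changed it since our read; otherwise retry.
            if (value.compareAndSet(current, next)) {
                return next;
            }
        }
    }

    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger(0);
        for (int i = 0; i < 5; i++) {
            incrementWithCas(counter);
        }
        System.out.println(counter.get()); // prints 5
    }
}
```

The read-compute-CAS cycle makes the increment atomic without a lock: a concurrent modification simply forces another pass through the loop.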
6. Can volatile guarantee ordering?
As mentioned earlier, the volatile keyword forbids instruction reordering, so volatile can guarantee ordering to some extent.
The volatile keyword forbids instruction reordering with two layers of meaning:
1) When the program reaches a read or write of a volatile variable, all changes made by preceding operations must already have taken place, and their results are visible to subsequent operations;
2) During instruction optimization, statements before an access to a volatile variable cannot be reordered to execute after it, nor can statements after the access be reordered to execute before it.
The above may sound convoluted, so here is a simple example:

// x and y are non-volatile variables
// flag is a volatile variable

x = 2;        // statement 1
y = 0;        // statement 2
flag = true;  // statement 3
x = 4;        // statement 4
y = -1;       // statement 5
Because flag is a volatile variable, instruction reordering will not move statement 3 before statement 1 or statement 2, nor after statement 4 or statement 5. But note that the relative order of statements 1 and 2, and of statements 4 and 5, is not guaranteed.
Moreover, the volatile keyword guarantees that by the time statement 3 executes, statements 1 and 2 have been executed, and their results are visible to statements 3, 4, and 5.
Now let's return to an earlier example:

// Thread 1:
context = loadContext();         // statement 1
inited = true;                   // statement 2

// Thread 2:
while (!inited) {
    sleep();
}
doSomethingWithConfig(context);
As mentioned earlier, statement 2 might be executed before statement 1. If so, the context may not yet be initialized when thread 2 uses it, causing the program to go wrong.
This problem does not occur if the inited variable is modified with the volatile keyword, because by the time execution reaches statement 2, the context is guaranteed to have been initialized.
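The init-flag pattern above can be sketched as a runnable example; InitFlagDemo, the String stand-in for the real context object, and the runOnce helper are assumptions for illustration:

```java
// Demonstrates the init-flag pattern: the volatile write to `inited`
// cannot be reordered before the write to `context`, so a reader that
// sees inited == true also sees the initialized context.
public class InitFlagDemo {
    static volatile boolean inited = false;
    static String context;                   // stands in for the real context object

    // Runs the writer/reader pair once and returns what the reader observed.
    static String runOnce() throws InterruptedException {
        final String[] observed = new String[1];
        Thread reader = new Thread(() -> {
            while (!inited) {                // spin until the flag becomes visible
                Thread.yield();
            }
            observed[0] = context;           // safe: happens-before via volatile inited
        });
        reader.start();

        context = "loaded-context";          // statement 1
        inited = true;                       // statement 2: volatile write, stays after 1
        reader.join();
        return observed[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("config = " + runOnce());
    }
}
```

If inited were not volatile, the two writes in the main thread could be reordered, and the reader could observe inited == true while context is still null.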