Note
Adapted from https://www.cnblogs.com/dolphin0520/p/3920373.html, with some cuts.

1. Background
Before Java 5, volatile was a controversial keyword, because using it in a program often produced unexpected results. Since Java 5, the volatile keyword has regained its vitality.
Although the volatile keyword is literally easy to understand, it is not easy to use well. Since volatile is tied to the Java memory model, before discussing the volatile keyword we first review the concepts related to the memory model, then analyze how the volatile keyword is implemented, and finally give several scenarios in which volatile is used.
Instructions are executed by the CPU, while data sits in main memory (physical memory), and the CPU reads data in order to execute. To bridge the gap between the fast CPU and the slow reads of main memory, a cache is used: the required data is first read into the cache, the CPU then works against the cache, and when the operation completes the cached data is written back to main memory.
When multiple threads run against multiple caches, cache consistency problems arise.
To address cache inconsistency, there are generally two approaches:
1) Lock the bus with a LOCK# signal
Because the CPU communicates with other components over the bus, putting a LOCK# lock on the bus blocks other CPUs from accessing those components (such as memory), so only one CPU can use the variable's memory.
2) Use a cache coherence protocol
The first approach has a problem: it is inefficient, because other CPUs cannot access memory at all while the bus is locked.
Hence cache coherence protocols. The best known is Intel's MESI protocol, which guarantees that the copies of a shared variable held in each cache stay consistent. Its core idea: when a CPU writes data and finds that the variable is shared (that is, a copy of the variable also exists in other CPUs' caches), it signals the other CPUs to mark their cache line for that variable as invalid. When another CPU later needs to read the variable, it finds that the cache line holding it is invalid and re-reads the value from main memory.
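To make the invalidation idea concrete, here is a deliberately simplified toy model in Java (not the real hardware protocol; the names ToyCpuCache, read, and write are invented for illustration): when one "CPU" writes a shared variable, the other CPUs' copies of that line are marked INVALID and must be re-read from main memory.

import java.util.HashMap;
import java.util.Map;

// MESI-style line states (simplified)
enum LineState { MODIFIED, EXCLUSIVE, SHARED, INVALID }

class ToyCpuCache {
    private final Map<String, Integer> lines = new HashMap<>();
    private final Map<String, LineState> states = new HashMap<>();

    // Read a variable: if our copy is invalid (or missing), reload it from main memory.
    int read(String var, Map<String, Integer> mainMemory) {
        if (states.getOrDefault(var, LineState.INVALID) == LineState.INVALID) {
            lines.put(var, mainMemory.get(var));
            states.put(var, LineState.SHARED);
        }
        return lines.get(var);
    }

    // Write a variable: update our copy, write back, and invalidate everyone else's copy.
    void write(String var, int value, Map<String, Integer> mainMemory, ToyCpuCache[] allCaches) {
        lines.put(var, value);
        states.put(var, LineState.MODIFIED);
        mainMemory.put(var, value);                         // simplified: write back immediately
        for (ToyCpuCache other : allCaches) {
            if (other != this) {
                other.states.put(var, LineState.INVALID);   // "invalidate" signal to other CPUs
            }
        }
    }
}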
2. Three concepts in concurrent programming
In concurrent programming we typically run into three problems: atomicity, visibility, and ordering. Let us look at each of these concepts.
1. Atomicity
Atomicity: an operation, or a group of operations, either executes completely without being interrupted by any factor, or does not execute at all.
A classic example is a bank account transfer: transferring money from account A to account B consists of two operations, subtracting the amount from A and adding it to B.
These two operations must be atomic; otherwise unexpected problems arise (for example, money is deducted from A but never arrives at B).
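A minimal sketch of the transfer example, assuming a hypothetical Account class: the point is that the two steps are separate, so an interruption between them leaves the accounts inconsistent.

class Account {
    private int balance;

    Account(int balance) { this.balance = balance; }

    void withdraw(int amount) { balance -= amount; }
    void deposit(int amount)  { balance += amount; }
}

class Transfer {
    // Not atomic: another thread can observe the state between the two steps,
    // or execution can be interrupted after step 1, leaving money "missing".
    static void transfer(Account from, Account to, int amount) {
        from.withdraw(amount);   // step 1
        to.deposit(amount);      // step 2
    }
}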
The same kind of problem appears in concurrent programming.
For the simplest example, consider what happens if assigning to a 32-bit variable is not atomic.
i = 9;
Suppose a thread executes this statement, and suppose assigning to a 32-bit variable consists of two steps: writing the low 16 bits and writing the high 16 bits.
Then it is possible that after the low 16 bits have been written the thread is suddenly interrupted, and another thread then reads i and gets a wrong, half-written value.

2. Visibility
Visibility means that when multiple threads access the same variable and one thread modifies its value, the other threads can immediately see the modified value.
For a simple example, look at the following code:
// Thread 1 executes this code
int i = 0;
i = 10;

// Thread 2 executes this code
j = i;
Suppose thread 1 runs on CPU1 and thread 2 on CPU2. From the analysis above, when thread 1 executes i = 10 it first loads the initial value of i into CPU1's cache and then assigns 10, so the value of i in CPU1's cache becomes 10, but it is not immediately written back to main memory.
At this point thread 2 executes j = i: it reads the value of i from main memory into CPU2's cache, but the value of i in main memory is still 0, so j becomes 0 instead of 10.
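A hedged, runnable sketch of this scenario (the class name VisibilityDemo and the timing are illustrative): because i is not volatile, the reader thread is not guaranteed ever to observe the writer's update, and on some JVMs the loop may spin forever.

public class VisibilityDemo {
    static int i = 0;            // deliberately NOT volatile

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            int j;
            do {
                j = i;           // may keep seeing the stale value 0
            } while (j != 10);
            System.out.println("reader finally saw j = " + j);
        });
        reader.start();

        Thread.sleep(100);       // give the reader time to start looping
        i = 10;                  // thread 1's write; may not become visible promptly
    }
}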
This is the visibility problem: after thread 1 modifies the variable i, thread 2 does not immediately see the modified value.

3. Ordering
Ordering: the program executes in the order the code is written. For a simple example, look at the following code:
int i = 0;
boolean flag = false;
i = 1;          // Statement 1
flag = true;    // Statement 2
The code above defines an int variable and a boolean variable, then assigns to each. In code order, statement 1 comes before statement 2, so when the code actually runs, will the JVM guarantee that statement 1 executes before statement 2? Not necessarily. Why? Instruction reordering may occur here.
What is instruction reordering? In general, to improve efficiency the processor may optimize the input code: it does not guarantee that the individual statements execute in the same order as in the code, but it does guarantee that the final result of the program matches the result of executing the code in order.
For example, in the code above, whether statement 1 or statement 2 executes first has no effect on the final result, so it is possible that during execution statement 2 runs first and statement 1 afterwards.
Note, however, that although the processor reorders instructions, it guarantees that the final result of the program is the same as if the code were executed sequentially. Consider the following example:
int a = 10;     // Statement 1
int r = 2;      // Statement 2
a = a + 3;      // Statement 3
r = a * a;      // Statement 4
Could the execution order be: statement 2, statement 1, statement 4, statement 3?
No, because the processor considers data dependencies between instructions when reordering: if instruction 2 must use the result of instruction 1, the processor guarantees that instruction 1 executes before instruction 2.
Although reordering does not affect the result of execution within a single thread, it does affect multithreaded programs. Let us look at an example:
// Thread 1:
context = loadContext();            // Statement 1
inited = true;                      // Statement 2

// Thread 2:
while (!inited) {
    sleep();
}
doSomethingWithConfig(context);
In the code above, because statement 1 and statement 2 have no data dependency, they may be reordered. If reordering occurs, thread 1 may execute statement 2 first; thread 2 then believes that initialization is done, jumps out of the while loop, and executes doSomethingWithConfig(context) while the context has not yet been initialized, causing a program error.
As can be seen from the above, instruction reordering does not affect execution within an individual thread, but it does affect the correctness of concurrent execution across threads.
In other words, for a concurrent program to execute correctly it must ensure atomicity, visibility, and ordering. If any one of these is not guaranteed, the program may run incorrectly.

3. Java Platform Support
The Java Virtual Machine specification attempts to define a Java memory model (Java Memory Model, JMM) that masks the memory-access differences between hardware platforms and operating systems, so that Java programs have consistent memory-access behavior on every platform. What does the Java memory model specify? It defines the access rules for variables in a program and, to a large extent, the order in which the program executes. Note that, for better performance, the Java memory model does not forbid the execution engine from using the processor's registers or caches to speed up instruction execution, nor does it forbid the compiler from reordering instructions. That is, cache consistency problems and instruction-reordering problems also exist in the Java memory model.
The Java memory model stipulates that all variables live in main memory (similar to the physical memory described earlier), and that each thread has its own working memory (similar to the cache described earlier). All of a thread's operations on a variable must be performed in its working memory rather than directly on main memory, and a thread cannot access another thread's working memory.
For a simple example: in Java, execute the following statement:
i = 10;
The executing thread must first assign the value to the cache line of variable i in its own working memory and then write it back to main memory, rather than writing the value 10 directly into main memory.
So what guarantees does the Java language itself provide for atomicity, visibility, and ordering?

1. Atomicity
In Java, reads and assignments of variables of primitive types are atomic operations; that is, these operations are uninterruptible: they either execute completely or not at all.
Consider which of the following operations are atomic:
x = 10;         // Statement 1
y = x;          // Statement 2
x++;            // Statement 3
x = x + 1;      // Statement 4
Of the four statements above, only statement 1 is atomic. Statement 2 involves two operations: reading x and then writing its value to y. Statements 3 and 4 each involve three operations: reading x, adding 1, and writing the result back to x.
That is, only simple reads and assignments (and the value assigned must be a literal number; assigning one variable to another is not atomic) are atomic operations.
One thing to note: on 32-bit platforms, reading and assigning 64-bit data (long and double) is done as two operations, so atomicity is not guaranteed. It appears, however, that on recent JDKs the JVM does ensure that reads and assignments of 64-bit data are atomic.
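A hedged sketch of the 64-bit "tearing" risk just described (the class name LongTearingDemo is illustrative): a writer alternates between two values of a non-volatile long while a reader checks for values that were never written. On most modern 64-bit JVMs you may never observe a torn read; declaring the field volatile rules it out. The loops run until the process is killed.

public class LongTearingDemo {
    static long value = 0L;      // try "static volatile long value" to forbid tearing

    public static void main(String[] args) {
        new Thread(() -> {
            while (true) {
                value = 0L;                    // writer alternates two "clean" values
                value = -1L;                   // -1L has all bits set
            }
        }).start();

        new Thread(() -> {
            while (true) {
                long v = value;
                if (v != 0L && v != -1L) {     // a torn read: half old bits, half new bits
                    System.out.println("torn read: " + Long.toHexString(v));
                }
            }
        }).start();
    }
}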
As can be seen from the above, the Java memory model only guarantees that basic reads and assignments are atomic operations; if you need atomicity over a wider range of operations, you can use synchronized and Lock. Since synchronized and Lock guarantee that only one thread executes the code block at any moment, there is naturally no atomicity problem, which guarantees atomicity.

2. Visibility
For visibility, Java provides the volatile keyword.
When a shared variable is declared volatile, a modified value is guaranteed to be flushed to main memory immediately, and when another thread needs to read it, it reads the new value from memory.
Ordinary shared variables do not guarantee visibility, because it is indeterminate when a modified ordinary variable is written back to main memory; when another thread reads it, main memory may still hold the old value, so visibility cannot be guaranteed.
In addition, visibility can also be ensured with synchronized and Lock: they guarantee that only one thread holds the lock and executes the synchronized code at a time, and that modifications to variables are flushed to main memory before the lock is released. Therefore visibility is guaranteed.
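A minimal sketch of visibility through synchronized, under the rule just described: the writer's modification is flushed before it releases the lock, and the reader re-reads shared state after acquiring the same lock.

public class SynchronizedVisibility {
    private int value = 0;

    public synchronized void setValue(int v) {   // modification is flushed before the unlock
        this.value = v;
    }

    public synchronized int getValue() {         // read happens after acquiring the same lock
        return value;
    }
}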
3. Ordering
Ordering shows up in two scenarios: within a thread and between threads.
From the point of view of the thread itself, a method executes as if it were serial ("as-if-serial"), just as in a sequential programming language.
When this thread "observes" other threads executing unsynchronized code concurrently, any code may appear to interleave. The only constraint that applies is that synchronized blocks, synchronized methods, and operations on volatile fields remain relatively ordered. The happens-before rules (a small sketch follows this list):
1. Within a single thread the program appears to execute in program order; across threads this is not guaranteed.
2. Locking guarantees serialization: an unlock happens-before a subsequent lock of the same lock.
3. For a volatile variable written by one thread and read by another, the write happens-before the subsequent read.
4. The happens-before relation is transitive.
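A small sketch of rules 3 and 4 combined (class and method names are illustrative): the ordinary write to data happens-before the volatile write to ready, which happens-before another thread's volatile read of ready, so a reader that sees ready == true is guaranteed to also see data == 42.

public class HappensBeforeDemo {
    static int data = 0;
    static volatile boolean ready = false;

    static void writer() {
        data = 42;          // ordinary write
        ready = true;       // volatile write
    }

    static void reader() {
        if (ready) {                        // volatile read
            System.out.println(data);       // guaranteed to print 42
        }
    }
}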
4. The volatile keyword explained
Much of the preceding material was groundwork for the volatile keyword; now we get to the main topic.

1. The two levels of semantics of the volatile keyword
Once a shared variable (a class's member variable or static member variable) is declared volatile, it has two levels of semantics:
1) It guarantees visibility when different threads operate on the variable: when one thread modifies the variable's value, the new value is immediately visible to other threads.
2) Instruction reordering is prohibited.
First read a piece of code; suppose thread 1 executes first and thread 2 executes afterwards:
// Thread 1
boolean stop = false;
while (!stop) {
    doSomething();
}

// Thread 2
stop = true;
As explained earlier, each thread has its own working memory while it runs, so thread 1 copies the value of the stop variable into its working memory.
So if thread 2 changes the value of the stop variable but has not yet written it back to main memory, and then turns to do something else, thread 1 does not know about thread 2's change and keeps looping.
But with volatile it becomes different:
First: using the volatile keyword forces the modified value to be written to main memory immediately.
Second: using the volatile keyword, when thread 2 modifies the variable, the cache line for stop in thread 1's working memory is invalidated (at the hardware level, the corresponding line in the CPU's L1 or L2 cache is invalidated).
Third: because the cache line for stop in thread 1's working memory is invalid, thread 1 reads the value of stop from main memory again.
So when thread 2 modifies the stop value (this involves two operations: modifying the value in thread 2's working memory, then writing the modified value back to main memory), the cache line for stop in thread 1's working memory is invalidated; when thread 1 then reads stop, it finds its cache line invalid, waits for the corresponding main-memory address to be updated, and then reads the latest value from main memory.
Thread 1 therefore reads the latest, correct value.
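A runnable version of the stop-flag example above (a sketch; the class name StopFlagDemo and the sleep are illustrative): with volatile the worker thread is guaranteed to see the update and stop; without volatile the loop might never end.

public class StopFlagDemo {
    static volatile boolean stop = false;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!stop) {
                // doSomething();
            }
            System.out.println("worker stopped");
        });
        worker.start();

        Thread.sleep(100);
        stop = true;        // thread 2's write becomes visible to the worker
        worker.join();
    }
}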
2. Does volatile guarantee atomicity?
From the above we know that the volatile keyword guarantees visibility, but does volatile guarantee that operations on the variable are atomic?
Let's look at an example:
public class Test {
    public volatile int inc = 0;

    public void increase() {
        inc++;
    }

    public static void main(String[] args) {
        final Test test = new Test();
        for (int i = 0; i < 10; i++) {
            new Thread() {
                public void run() {
                    for (int j = 0; j < 1000; j++)
                        test.increase();
                }
            }.start();
        }

        while (Thread.activeCount() > 1)   // Ensure that all previous threads have finished
            Thread.yield();
        System.out.println(test.inc);
    }
}
Think about what this program outputs. Perhaps some readers think it is 10000. In fact, if you run it you will find that the result differs each time and is always a number less than 10000.
Suppose at some moment the variable inc has the value 10.
Thread 1 increments the variable: it reads the original value of inc, and is then blocked.
Thread 2 then increments the variable: it too reads the original value of inc. Because thread 1 has only read inc and not modified it, the cache line for inc in thread 2's working memory is not invalidated, so thread 2 reads inc from main memory, finds the value 10, adds 1, writes 11 into its working memory, and finally writes it to main memory.
Thread 1 then continues its increment. Since it has already read the value of inc, note that inc in thread 1's working memory is still 10, so after thread 1 adds 1 the value of inc is 11; it writes 11 into its working memory and finally to main memory.
So after two increment operations, inc has increased by only 1.
At this point some readers may object: doesn't volatile guarantee that when the variable is modified, the cache line is invalidated and other threads then read the new value? Yes, that is correct; it is the volatile-variable rule in the happens-before rules above. But note that thread 1 was blocked after reading the variable and had not yet modified inc. So although volatile guarantees that thread 2 reads the value of inc from main memory, thread 1 had not modified it yet, so thread 2 could not possibly see a modified value.
(A side question: by the description in the previous paragraph, visibility alone cannot guarantee correctness here; it only guarantees that individual reads and writes see up-to-date values, not that the whole read-modify-write sequence is atomic.)
The root cause is that auto-increment is not an atomic operation, and volatile cannot guarantee that arbitrary operations on a variable are atomic.
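A sketch of why inc++ is not atomic: it behaves like the three separate steps below, and two threads can interleave between them, losing one update.

public class IncrementSteps {
    volatile int inc = 0;

    public void increase() {
        int tmp = inc;      // step 1: read the current value
        tmp = tmp + 1;      // step 2: add one
        inc = tmp;          // step 3: write the result back
        // Another thread may have read the same old value between steps 1 and 3.
    }
}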
The code above can be changed to any of the following to achieve the intended effect:
Using synchronized:
public class Test {
    public int inc = 0;

    public synchronized void increase() {
        inc++;
    }

    // main() is identical to the volatile example above
}
Using Lock:

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class Test {
    public int inc = 0;
    Lock lock = new ReentrantLock();

    public void increase() {
        lock.lock();
        try {
            inc++;
        } finally {
            lock.unlock();
        }
    }

    // main() is identical to the volatile example above
}
Using AtomicInteger:

import java.util.concurrent.atomic.AtomicInteger;

public class Test {
    public AtomicInteger inc = new AtomicInteger();

    public void increase() {
        inc.getAndIncrement();
    }

    // main() is identical to the volatile example above
}
Starting with Java 1.5, the java.util.concurrent.atomic package provides atomic operation classes that encapsulate increment (add 1), decrement (subtract 1), add-a-number, and subtract-a-number operations on primitive types and guarantee that these operations are atomic. The atomic classes use CAS (Compare And Swap) to implement the atomic operations; CAS is in turn implemented with the CMPXCHG instruction provided by the processor, and the processor executes CMPXCHG as an atomic operation.
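A sketch of the CAS idea using AtomicInteger's compareAndSet (the retry-loop shape is essentially what getAndIncrement does, although the exact implementation is the JDK's): read the current value, compute the new one, and only install it if the value has not changed in the meantime; otherwise retry.

import java.util.concurrent.atomic.AtomicInteger;

public class CasIncrement {
    private final AtomicInteger inc = new AtomicInteger();

    public int increase() {
        while (true) {
            int current = inc.get();
            int next = current + 1;
            if (inc.compareAndSet(current, next)) {   // atomic CMPXCHG-style update
                return next;
            }
            // CAS failed: another thread updated inc first, so retry
        }
    }
}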
3. Does volatile guarantee ordering?
As mentioned earlier, the volatile keyword can prohibit instruction reordering, so volatile can guarantee ordering to some extent.
The volatile keyword prohibits instruction reordering with two levels of meaning:
1) When the program performs a read or write of the volatile variable, all changes made by the preceding operations must already have been made, and their results are visible to the subsequent operations.
2) During instruction optimization, statements before the access to the volatile variable cannot be moved after it, nor can statements after the volatile access be moved before it.
The above may sound convoluted, so here is a simple example:
// x and y are non-volatile variables
// flag is a volatile variable
x = 2;          // Statement 1
y = 0;          // Statement 2
flag = true;    // Statement 3
x = 4;          // Statement 4
y = -1;         // Statement 5
Because the flag variable is volatile, instruction reordering will not move statement 3 before statement 1 or statement 2, nor move statement 3 after statement 4 or statement 5. Note, however, that the relative order of statements 1 and 2, and of statements 4 and 5, is not guaranteed.
Moreover, the volatile keyword guarantees that by the time execution reaches statement 3, statements 1 and 2 must have completed, and their results are visible to statements 3, 4, and 5.
Now back to one of the earlier examples:
// Thread 1:
context = loadContext();            // Statement 1
inited = true;                      // Statement 2

// Thread 2:
while (!inited) {
    sleep();
}
doSomethingWithConfig(context);
As mentioned earlier, without volatile it is possible that statement 2 executes before statement 1; then the context may not yet be initialized when thread 2 uses it, causing a program error.
If the inited variable is declared with the volatile keyword, this problem goes away, because by the time execution reaches statement 2, the context is guaranteed to have been initialized.

4. The principle and implementation mechanism of volatile
Having looked at some uses of the volatile keyword, let us see how volatile guarantees visibility and prohibits instruction reordering.
The following passage is excerpted from Deep Understanding of the Java Virtual Machine:
"Observe that the assembly code generated when adding the volatile keyword and not joining the volatile keyword is found to add a lock prefix instruction when adding the volatile keyword."
The lock-prefixed instruction is effectively a memory barrier (also called a memory fence). A memory barrier provides three features:
1) It ensures ordering by preventing the instructions before and after the barrier from being reordered across it;
2) It forces modifications in the cache to be written to main memory immediately;
3) If it is a write operation, it invalidates the corresponding cache line in other CPUs.

5. Scenarios for using the volatile keyword
The synchronized keyword prevents multiple threads from executing a piece of code at the same time, which can significantly affect execution efficiency, while the volatile keyword performs better than synchronized in some cases. Note, however, that volatile cannot replace synchronized, because volatile does not guarantee atomicity.
Generally, two conditions must hold to use volatile safely: the write to the variable must not depend on its current value, and the variable must not participate in invariants together with other variables. In fact, my understanding is that both conditions boil down to requiring that the operations involved be atomic, so that a program using the volatile keyword executes correctly under concurrency.

Summary of volatile use (additions welcome)
Volatile use scenarios fall into two categories:
1. The operations on the shared variable are themselves atomic;
2. There is no complex multithreaded read-modify-write: for example, one thread writes while the other threads only read, or threads only perform simple assignment and initialization.
Here are a few scenarios for using volatile in Java.
1. Status flag
volatile boolean flag = false;

while (!flag) {
    doSomething();
}

public void setFlag() {
    flag = true;
}
volatile boolean inited = false;

// Thread 1:
context = loadContext();
inited = true;

// Thread 2:
while (!inited) {
    sleep();
}
doSomethingWithConfig(context);
2. Double check (double-checked locking)
class Singleton {
    private volatile static Singleton instance = null;

    private Singleton() {
    }

    public static Singleton getInstance() {
        if (instance == null) {
            synchronized (Singleton.class) {
                if (instance == null)
                    instance = new Singleton();
            }
        }
        return instance;
    }
}
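A brief note on why instance must be volatile here: conceptually, instance = new Singleton() consists of roughly three steps (the helper names below are hypothetical, for illustration only), and without volatile the compiler or processor may reorder steps 2 and 3, so another thread could see a non-null but not-yet-initialized instance.

// Conceptual decomposition of: instance = new Singleton();
// memory = allocate();           // 1. allocate memory for the object
// initSingleton(memory);         // 2. run the constructor, initializing fields
// instance = memory;             // 3. publish the reference (instance != null from now on)
// Without volatile, steps 2 and 3 may be reordered; declaring instance volatile forbids this,
// so a thread that sees a non-null instance also sees a fully constructed object.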
References:
https://www.cnblogs.com/dolphin0520/p/3920373.html
Thinking in Java (Java Programming Ideas)
Deep Understanding of the Java Virtual Machine
http://jiangzhengjun.iteye.com/blog/652532
http://blog.sina.com.cn/s/blog_7bee8dd50101fu8n.html
http://ifeve.com/volatile/
http://blog.csdn.net/ccit0519/article/details/11241403
http://blog.csdn.net/ns_code/article/details/17101369
http://www.cnblogs.com/kevinwu/archive/2012/05/02/2479464.html
http://www.cppblog.com/elva/archive/2011/01/21/139019.html
http://ifeve.com/volatile-array-visiblity/
http://www.bdqn.cn/news/201312/12579.shtml
http://exploer.blog.51cto.com/7123589/1193399
http://www.cnblogs.com/Mainz/p/3556430.html