Source: http://www.cnblogs.com/dolphin0520/p/3920373.html — Java Concurrent Programming: Analysis of the volatile Keyword
Many people have probably heard of the volatile keyword, and perhaps used it as well. Before Java 5 it was a controversial keyword, because using it in a program often produced unexpected results; since Java 5, volatile has regained its usefulness.
Although the volatile keyword is easy to understand literally, it is not easy to use well. Since volatile is tied to the Java memory model, before discussing the keyword itself we first review the concepts and background of the memory model, then analyze how the volatile keyword is implemented, and finally present several scenarios in which it is used.
The following is an outline of this article:
I. Concepts related to the memory model
II. Three concepts in concurrent programming
III. The Java memory model
IV. In-depth analysis of the volatile keyword
V. Scenarios for using the volatile keyword
If there are any mistakes, please forgive me; criticism and corrections are welcome.
Please respect the author's work; when reproducing, please cite the original link:
http://www.cnblogs.com/dolphin0520/p/3920373.html

I. Concepts related to the memory model
As we all know, when a computer executes a program, each instruction is executed by the CPU, and executing instructions inevitably involves reading and writing data. Since the temporary data a running program uses is stored in main memory, there is a problem: the CPU executes instructions far faster than it can read data from, or write data to, main memory. If every operation on data had to go through main memory, instruction execution would be greatly slowed down. For this reason CPUs have internal caches.
That is, while a program runs, the data needed for its computations is copied from main memory into the CPU cache; the CPU can then read and write directly against its cache, and when the computation finishes, the cached data is flushed back to main memory. Take a simple example, such as the following code:
i = i + 1;
When a thread executes this statement, it first reads the value of i from main memory and copies it into the cache; the CPU then executes the instruction that adds 1 to i and writes the result back to the cache; finally, the latest value of i is flushed from the cache to main memory.
This code runs without problems in a single thread, but can go wrong when run by multiple threads. On a multi-core CPU, each thread may run on a different core, so each thread has its own cache while it runs (on a single-core CPU this problem still arises, just interleaved through thread scheduling). This article uses multi-core CPUs as the example.
For example, suppose 2 threads execute this code at the same time, and the initial value of i is 0. We would like the value of i to become 2 after both threads finish. But will it?
The following situation is possible: initially, both threads read the value of i into their respective CPU caches. Thread 1 performs the add-1 operation and writes the latest value of i, 1, back to memory. At this point the value of i in thread 2's cache is still 0; after its add-1 operation, i has the value 1, and thread 2 then writes its value of i, 1, to memory.
The final value of i is 1, not 2. This is the famous cache coherence problem. A variable accessed by multiple threads like this is commonly called a shared variable.
That is, if a variable is cached on multiple CPUs (which typically happens in multithreaded programming), cache inconsistency can occur.
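The lost-update effect described above can be observed directly in Java. Below is a minimal sketch (class and field names are my own): two threads each increment an unsynchronized shared counter 100,000 times, and because the read-modify-write is not atomic and each thread may work on a stale cached value, the final total is usually less than 200,000.

```java
public class LostUpdateDemo {
    static int i = 0;  // shared, unsynchronized counter

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int n = 0; n < 100_000; n++) {
                i = i + 1;  // read-modify-write: not atomic, updates can be lost
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Would be 200000 if no update were ever lost; usually prints less.
        System.out.println(i);
    }
}
```

Running this repeatedly shows a different number each time, which is exactly the cache-coherence/atomicity problem the text describes.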
To solve the cache inconsistency problem, there are generally 2 approaches:
1) adding a LOCK# lock on the bus;
2) using a cache coherence protocol.
Both approaches are provided at the hardware level.
Early CPUs solved cache inconsistency by asserting a LOCK# lock on the bus. Since the CPU communicates with other components over the bus, locking the bus blocks other CPUs from accessing those components (such as memory), so only one CPU can use the variable's memory. For example, in the case above, if a thread executing i = i + 1 asserts a LOCK# signal on the bus during the execution of this code, the other CPUs can only read the variable from the memory where i resides, and perform their own operations, after this code finishes executing. This solves the cache inconsistency problem.
However, this approach has a problem: it is inefficient, because the other CPUs cannot access memory at all while the bus is locked.
Hence cache coherence protocols. The best known is Intel's MESI protocol, which guarantees that the copies of a shared variable held in each cache are consistent. Its core idea: when a CPU writes data and discovers that the variable being operated on is a shared variable, i.e. a copy of it exists in other CPUs' caches, it signals the other CPUs to set the cache line holding that variable to the invalid state. When another CPU later needs to read the variable, it finds that the cache line holding it in its own cache is invalid, and re-reads it from main memory.

II. Three concepts in concurrent programming
In concurrent programming, we typically encounter the following three problems: atomicity, visibility, and ordering. Let's look at these three concepts in turn.

1. Atomicity
Atomicity: an operation, or a group of operations, either executes completely without being interrupted by any factor, or does not execute at all.
A classic example is the bank account transfer problem:
For example, transferring 1000 yuan from account A to account B must include 2 operations: subtract 1000 yuan from account A, and add 1000 yuan to account B.
Imagine the consequences if these 2 operations were not atomic. Suppose 1000 yuan is subtracted from account A, and then the operation suddenly stops; another operation then withdraws 500 yuan from account B, and only afterwards does the "add 1000 yuan to account B" step execute. In the meantime, account A has been debited 1000 yuan while account B has not yet received the transferred 1000 yuan, so an inconsistent state is observable.
So these 2 operations must be atomic to guarantee that no unexpected problems arise.
The same issue appears in concurrent programming.
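The transfer example can be sketched in Java. In this illustrative demo (class and field names are my own), both account updates happen inside a single synchronized block, so no thread can ever observe account A debited without account B credited, and the total is conserved:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class TransferDemo {
    static int a = 5000, b = 5000;          // account balances
    static final Object lock = new Object();

    // Both updates form one atomic unit under the lock.
    static void transfer(int amount) {
        synchronized (lock) {
            a -= amount;
            b += amount;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int n = 0; n < 1000; n++) {
            pool.submit(() -> transfer(1));
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println(a + b);   // total is conserved: prints 10000
    }
}
```

Without the synchronized block, concurrent transfers could interleave and an observer could see money "in flight" or lost entirely.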
For the simplest example, consider what could happen if assignment to a 32-bit variable were not atomic.
i = 9;
Suppose a thread executes this statement, and suppose that assigning to a 32-bit variable involved two steps: writing the low 16 bits and writing the high 16 bits.
Then it could happen that the low 16 bits are written and the thread is suddenly interrupted; if another thread then reads the value of i, it reads corrupted data.

2. Visibility
Visibility means that when multiple threads access the same variable and one thread modifies its value, the other threads can immediately see the modified value.
For a simple example, look at the following code:
// code executed by thread 1
int i = 0;
i = 10;

// code executed by thread 2
j = i;
Suppose thread 1 runs on CPU1 and thread 2 on CPU2. From the analysis above, when thread 1 executes i = 10, it first loads the initial value of i into CPU1's cache and then assigns 10, so the value of i in CPU1's cache becomes 10 but is not immediately written back to main memory.
At this point thread 2 executes j = i. It reads the value of i from main memory and loads it into CPU2's cache; note that the value of i in memory at this moment is still 0, so j gets the value 0 instead of 10.
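The scenario can be made concrete with a small demo (class and field names are my own). Here the flag is declared volatile so the demo terminates deterministically; with a plain boolean flag, the reader thread could spin forever on a stale cached value. The volatile write of `ready` also publishes the earlier ordinary write to `i`, so the reader prints 10, not 0:

```java
public class VisibilityDemo {
    static volatile boolean ready = false;
    static int i = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!ready) { }          // spin until the write becomes visible
            System.out.println("j = " + i);
        });
        reader.start();

        i = 10;          // ordinary write...
        ready = true;    // ...made visible by the volatile write
        reader.join(5000);
    }
}
```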
This is the visibility problem: after thread 1 modifies the variable i, thread 2 does not immediately see the value thread 1 wrote.

3. Ordering
Ordering: the program executes in the order the code is written. For a simple example, look at the following code:
int i = 0;
boolean flag = false;
i = 1;        // Statement 1
flag = true;  // Statement 2
The code above defines an int variable and a boolean variable, then assigns to each. In code order, statement 1 comes before statement 2, so when this code actually executes, will the JVM guarantee that statement 1 executes before statement 2? Not necessarily. Why? Instruction reordering (instruction reorder) may occur here.
To explain what instruction reordering is: in general, to improve efficiency, the processor may optimize the input code; it does not guarantee that the individual statements execute in exactly the order written in the code, but it does guarantee that the final result of execution matches the result of executing the code in order.
For example, in the code above, whether statement 1 or statement 2 executes first has no effect on the final result, so it is possible that during execution, statement 2 executes first and statement 1 after it.
Note, however, that although the processor may reorder instructions, it guarantees that the program's final result is the same as sequential execution of the code. Consider the following example:
int a = 10;  // Statement 1
int r = 2;   // Statement 2
a = a + 3;   // Statement 3
r = a * a;   // Statement 4
This code has 4 statements. A possible execution order is: Statement 2 → Statement 1 → Statement 3 → Statement 4. But could the execution order be: Statement 2 → Statement 1 → Statement 4 → Statement 3?
No, because the processor considers data dependencies between instructions when reordering: if instruction 2 must use the result of instruction 1, the processor guarantees that instruction 1 executes before instruction 2.
Although reordering does not affect the result of execution within a single thread, what about multithreading? Let's look at an example:
// thread 1:
context = loadContext();   // Statement 1
inited = true;             // Statement 2

// thread 2:
while (!inited) {
    sleep();
}
doSomethingWithConfig(context);
In the code above, since statement 1 and statement 2 have no data dependency, they may be reordered. If reordering occurs and thread 1 executes statement 2 first, thread 2 will believe the initialization work is done, jump out of the while loop, and execute the doSomethingWithConfig(context) method, when in fact the context has not been initialized, causing a program error.
As can be seen from the above, instruction reordering does not affect execution within a single thread, but it does affect the correctness of threads executing concurrently.
In other words, for a concurrent program to execute correctly, atomicity, visibility, and ordering must all be guaranteed. As long as any one of them is not guaranteed, the program may run incorrectly.

III. The Java memory model
The previous sections discussed the memory model and the problems that may arise in concurrent programming. Let's now look at the Java memory model, and examine what guarantees it provides and what methods and mechanisms Java offers to ensure correct execution in multithreaded programming.
The Java Virtual Machine specification attempts to define a Java memory model (JMM) to mask the differences in memory access among hardware platforms and operating systems, so that Java programs achieve consistent memory-access behavior on every platform. So what does the Java memory model specify? It defines the access rules for variables in a program and, to a large extent, the order in which the program executes. Note that to achieve better performance, the Java memory model does not prevent the execution engine from using the processor's registers or caches to speed up instruction execution, nor does it prevent the compiler from reordering instructions. That is, cache coherence problems and instruction reordering exist in the Java memory model as well.
The Java memory model stipulates that all variables live in main memory (analogous to the physical memory described above), and each thread has its own working memory (analogous to the cache above). All of a thread's operations on variables must be performed in working memory; it cannot operate directly on main memory, and threads cannot access each other's working memory.
For a simple example: in Java, execute the following statement:
i = 10;
The executing thread must first assign the value to the cache line for variable i in its own working memory, and only then write it to main memory, rather than writing the value 10 directly into main memory.
So what guarantees does the Java language itself provide for atomicity, visibility, and ordering?

1. Atomicity
In Java, reads and assignments of variables of the primitive data types are atomic operations; that is, these operations are uninterruptible — they either execute completely or not at all.
Although that sentence seems simple, it is not so easy to understand. Look at the following example:
Please analyze which of the following operations are atomic:
x = 10;     // Statement 1
y = x;      // Statement 2
x++;        // Statement 3
x = x + 1;  // Statement 4
At first glance, some readers may say that all 4 statements are atomic. In fact, only statement 1 is atomic; the other three are not.
Statement 1 directly assigns the value 10 to x; the executing thread writes the value 10 directly into working memory.
Statement 2 actually contains 2 operations: it first reads the value of x, then writes x's value into working memory. Although reading x and writing a value into working memory are each atomic, their combination is not.
Similarly, x++ and x = x + 1 each include 3 operations: reading the value of x, adding 1, and writing the new value.
So of the 4 statements above, only statement 1 is atomic.
That is, only simple reads and assignments (and the assignment must be of a literal number to a variable; assigning one variable to another is not atomic) are atomic operations.
One caveat: on 32-bit platforms, reads and assignments of 64-bit data are done in two operations, and their atomicity cannot be guaranteed. In recent JDKs, however, the JVM does appear to guarantee that reads and assignments of 64-bit data are also atomic.
As can be seen from the above, the Java memory model only guarantees that basic reads and assignments are atomic; to achieve atomicity over a wider range of operations, you can use synchronized or Lock. Since synchronized and Lock guarantee that only one thread executes the relevant code block at any one time, no atomicity problem arises, and atomicity is thereby guaranteed.

2. Visibility
For visibility, Java provides the volatile keyword.
When a shared variable is declared volatile, it is guaranteed that a modified value is immediately written back to main memory, and that when another thread needs to read it, it reads the new value from memory.
Ordinary shared variables cannot guarantee visibility, because when an ordinary shared variable is modified, it is indeterminate when it is written back to main memory; when another thread reads it, main memory may still hold the old value, so visibility cannot be guaranteed.
In addition, visibility can also be ensured through synchronized and Lock: they guarantee that only one thread at a time acquires the lock and executes the synchronized code, and that modifications to variables are flushed to main memory before the lock is released. Visibility is therefore guaranteed.

3. Ordering
In the Java memory model, the compiler and processor are allowed to reorder instructions; the reordering does not affect single-threaded execution, but it does affect the correctness of concurrent multithreaded execution.
In Java, you can use the volatile keyword to guarantee a certain degree of "ordering" (the specific principles are covered in the next section). Ordering can also be guaranteed with synchronized and Lock: obviously, they guarantee that only one thread at a time executes the synchronized code, which is equivalent to the threads executing that code in sequence, which naturally guarantees ordering.
In addition, the Java memory model has some innate "ordering", i.e. ordering that holds without any extra means; this is usually called the happens-before principle. If the execution order of two operations cannot be deduced from the happens-before principle, their ordering is not guaranteed, and the virtual machine may reorder them arbitrarily.
Here is a concrete introduction to the happens-before principle:
Program order rule: within a thread, according to code order, an operation written earlier happens-before an operation written later.
Monitor lock rule: an unlock operation happens-before a subsequent lock operation on the same lock.
Volatile variable rule: a write to a volatile variable happens-before a subsequent read of that variable.
Transitivity rule: if operation A happens-before operation B, and operation B happens-before operation C, then operation A happens-before operation C.
Thread start rule: the start() method of a Thread object happens-before every action in the started thread.
Thread interruption rule: a call to a thread's interrupt() method happens-before the interrupted thread's code detects the occurrence of the interrupt.
Thread termination rule: all operations in a thread happen-before the detection of that thread's termination; termination can be detected via Thread.join() returning or Thread.isAlive() returning false.
Finalizer rule: the completion of an object's initialization happens-before the start of its finalize() method.
These 8 principles are excerpted from the book *In-depth Understanding of the Java Virtual Machine*.
Of these 8 rules, the first 4 are the most important; the last 4 are relatively obvious.
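Two of the "obvious" rules — thread start and thread termination — can be demonstrated in a few lines. In this sketch (class and field names are my own), the write before start() is guaranteed visible to the new thread, and the new thread's write is guaranteed visible after join(), with no synchronization or volatile needed:

```java
public class HappensBeforeDemo {
    static int data = 0;   // deliberately not volatile

    public static void main(String[] args) throws InterruptedException {
        data = 42;  // thread-start rule: visible to t because it is written before start()
        Thread t = new Thread(() -> data = data * 2);
        t.start();
        t.join();   // thread-termination rule: t's writes are visible after join()
        System.out.println(data);  // prints 84
    }
}
```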
Now let's explain the first 4 rules:
My understanding of the program order rule is that the execution of a piece of program code appears ordered within a single thread. Note that although this rule says "an operation written earlier happens-before an operation written later", this means only that the program appears to execute in code order, because the virtual machine may reorder the program's instructions. Despite the reordering, the final result is consistent with executing the program in order, since only instructions with no data dependencies are reordered. So the key point is that within a single thread, program execution appears to be ordered. In fact, this rule guarantees the correctness of results in a single thread, but not across multiple threads.
The second rule is also fairly easy to understand: for the same lock, whether in one thread or across threads, if the lock is in the locked state, it must first be released before another lock operation can proceed.
The third rule is the more important one, and it is the focus of the text that follows. The intuitive reading: if a thread first writes a variable and some thread then reads it, the write happens-before the read.
The fourth rule embodies the transitivity of the happens-before principle.

IV. In-depth analysis of the volatile keyword
Everything above was laying the groundwork for the volatile keyword; now we come to the main topic.

1. The two levels of semantics of the volatile keyword
Once a shared variable (a class's member variable or static member variable) is declared volatile, it has two levels of semantics:
1) it guarantees visibility when different threads operate on the variable, i.e. when one thread modifies its value, the new value is immediately visible to other threads;
2) it forbids instruction reordering.
First read a piece of code; suppose thread 1 executes first and thread 2 executes afterwards:
// thread 1
boolean stop = false;
while (!stop) {
    doSomething();
}

// thread 2
stop = true;
This is a typical piece of code that many people may use to stop a thread. But will this code definitely run correctly — that is, will the thread definitely be stopped? Not necessarily. Most of the time it will stop the thread, but there is also a chance it will fail to (the probability is small, but once it happens, the result is an infinite loop).
Here is why this code might fail to stop the thread. As explained earlier, each thread has its own working memory while running, so when thread 1 runs, it copies the value of the stop variable into its working memory.
Then, when thread 2 changes the value of the stop variable but has not yet written it back to main memory, and thread 2 turns to do something else, thread 1 does not know about thread 2's change to stop, so it keeps looping.
But with volatile it becomes different:
First: using the volatile keyword forces the modified value to be written to main memory immediately.
Second: using the volatile keyword, when thread 2 makes its modification, the cache line for the cached variable stop in thread 1's working memory is invalidated (reflected at the hardware level, the corresponding cache line in the CPU's L1 or L2 cache is invalidated).
Third: because the cache line for stop in thread 1's working memory is invalid, thread 1 reads from main memory again when it reads the value of stop.
So when thread 2 modifies the stop value (this of course involves 2 operations: modifying the value in thread 2's working memory and then writing the modified value back to memory), it invalidates the cache line for stop in thread 1's working memory; when thread 1 then reads, it finds its own cache line invalid, waits for the main-memory address corresponding to that cache line to be updated, and then reads the latest value from main memory.
So thread 1 reads the latest, correct value.

2. Does volatile guarantee atomicity?
From the above we know that the volatile keyword guarantees visibility of operations, but does volatile guarantee that operations on a variable are atomic?
Let's look at an example:
public class Test {
    public volatile int inc = 0;

    public void increase() {
        inc++;
    }

    public static void main(String[] args) {
        final Test test = new Test();
        for (int i = 0; i < 10; i++) {
            new Thread() {
                public void run() {
                    for (int j = 0; j < 1000; j++)
                        test.increase();
                }
            }.start();
        }

        while (Thread.activeCount() > 1)  // ensure all worker threads have finished
            Thread.yield();
        System.out.println(test.inc);
    }
}
Think about the output of this program. Perhaps some readers think it is 10000. In fact, running it shows that the result differs each time, and is always a number less than or equal to 10000 — usually less.
Some readers may object: the code above self-increments the variable inc, and since volatile guarantees visibility, each thread sees the modified value after an increment; 10 threads each perform 1000 increments, so the final value of inc should be 1000 * 10 = 10000.
There is a misunderstanding here. It is true that volatile guarantees visibility; what the program above fails to guarantee is atomicity. Visibility only guarantees that each read sees the latest value; volatile cannot guarantee atomicity of operations on the variable.
As mentioned earlier, self-increment is not atomic: it includes reading the variable's original value, adding 1, and writing it to working memory. That means the three sub-operations of a self-increment may be split up and interleaved, which can lead to the following:
Suppose at some moment the variable inc has the value 10.
Thread 1 begins a self-increment on the variable: it reads the original value of inc, and is then blocked.
Thread 2 then performs a self-increment: it too reads the original value of inc. Because thread 1 only read inc and did not modify it, no cache line for inc in thread 2's working memory is invalidated. Thread 2 goes to main memory, reads the value 10, adds 1, writes 11 to its working memory, and finally writes it to main memory.
Thread 1 then continues its increment. Since it has already read the value of inc, note that the value of inc in thread 1's working memory is still 10; after thread 1 adds 1, inc becomes 11, which it writes to its working memory and then to main memory.
So after two self-increment operations, inc only increased by 1.
At this point, some readers may object: wasn't it said earlier that modifying a volatile variable invalidates the cache line, so other threads read the new value? Yes, that is correct — that is the volatile variable rule in the happens-before rules above. But note that thread 1 was blocked after it had already read the variable, and had not yet modified inc; so although volatile guarantees that thread 2 reads the value of inc from main memory, thread 1 had made no modification for thread 2 to see.
The root cause is that self-increment is not atomic, and volatile does not make arbitrary operations on a variable atomic.
The code above can be changed to any of the following to get the intended result:
Using synchronized:
public class Test {
    public int inc = 0;

    public synchronized void increase() {
        inc++;
    }

    public static void main(String[] args) {
        final Test test = new Test();
        for (int i = 0; i < 10; i++) {
            new Thread() {
                public void run() {
                    for (int j = 0; j < 1000; j++)
                        test.increase();
                }
            }.start();
        }

        while (Thread.activeCount() > 1)  // ensure all worker threads have finished
            Thread.yield();
        System.out.println(test.inc);
    }
}
Using Lock:
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class Test {
    public int inc = 0;
    Lock lock = new ReentrantLock();

    public void increase() {
        lock.lock();
        try {
            inc++;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        final Test test = new Test();
        for (int i = 0; i < 10; i++) {
            new Thread() {
                public void run() {
                    for (int j = 0; j < 1000; j++)
                        test.increase();
                }
            }.start();
        }

        while (Thread.activeCount() > 1)  // ensure all worker threads have finished
            Thread.yield();
        System.out.println(test.inc);
    }
}
Using AtomicInteger:
import java.util.concurrent.atomic.AtomicInteger;

public class Test {
    public AtomicInteger inc = new AtomicInteger();

    public void increase() {
        inc.getAndIncrement();
    }

    public static void main(String[] args) {
        final Test test = new Test();
        for (int i = 0; i < 10; i++) {
            new Thread() {
                public void run() {
                    for (int j = 0; j < 1000; j++)
                        test.increase();
                }
            }.start();
        }

        while (Thread.activeCount() > 1)  // ensure all worker threads have finished
            Thread.yield();
        System.out.println(test.inc);
    }
}
Since Java 1.5, the java.util.concurrent.atomic package has provided atomic operation classes, which encapsulate self-increment (add 1), self-decrement (subtract 1), addition (add a number), and subtraction (subtract a number) on the primitive data types, guaranteeing that these operations are atomic. The atomic classes use CAS (Compare-And-Swap) to implement atomic operations; CAS is in turn implemented with the CMPXCHG instruction provided by the processor, and the processor executes the CMPXCHG instruction atomically.

3. Does volatile guarantee ordering?
As mentioned earlier, the volatile keyword forbids instruction reordering, so volatile can guarantee ordering to a certain extent.
The volatile keyword's prohibition on instruction reordering has two levels of meaning:
1) when the program performs a read or write of a volatile variable, all changes made by preceding operations must already have been made, and their results are visible to subsequent operations;
2) during instruction optimization, statements that access a volatile variable cannot be moved after statements that follow them, nor can statements after the volatile access be moved before it.
The above may be somewhat convoluted; here is a simple example:
// x and y are non-volatile variables
// flag is a volatile variable

x = 2;        // Statement 1
y = 0;        // Statement 2
flag = true;  // Statement 3
x = 4;        // Statement 4
y = -1;       // Statement 5
Because flag is a volatile variable, instruction reordering will not move statement 3 before statement 1 or statement 2, nor after statement 4 or statement 5. But note that the relative order of statement 1 and statement 2, and of statement 4 and statement 5, is not guaranteed.
Moreover, the volatile keyword guarantees that by the time statement 3 executes, statement 1 and statement 2 have definitely executed, and their results are visible to statement 3, statement 4, and statement 5.
So let's go back to one of the previous examples:
// thread 1:
context = loadContext();   // Statement 1
inited = true;             // Statement 2

// thread 2:
while (!inited) {
    sleep();
}
doSomethingWithConfig(context);
As mentioned earlier, statement 2 might execute before statement 1, in which case thread 2 could use a not-yet-initialized context and cause a program error.
If the inited variable is declared with the volatile keyword, this problem goes away, because by the time execution reaches statement 2, the context is guaranteed to have been initialized.

4. The principle and implementation mechanism of volatile
Having covered some uses of the volatile keyword, let's look at how volatile actually guarantees visibility and forbids instruction reordering.
The following passage is excerpted from *In-depth Understanding of the Java Virtual Machine*:
"Observing the assembly code generated with and without the volatile keyword, it is found that with the volatile keyword an extra lock-prefixed instruction is emitted."
A lock-prefixed instruction is effectively a memory barrier (also called a memory fence), which provides 3 functions:
1) it ensures that instruction reordering neither moves subsequent instructions before the memory barrier, nor moves preceding instructions after it; that is, by the time the memory-barrier instruction executes, all operations before it have completed;
2) it forces modifications in the cache to be written to main memory immediately;
3) if it is a write operation, it invalidates the corresponding cache lines in other CPUs.

V. Scenarios for using the volatile keyword
The synchronized keyword prevents multiple threads from executing a piece of code at the same time, which can noticeably affect program efficiency, while the volatile keyword performs better than synchronized in some cases. Note, however, that the volatile keyword cannot substitute for synchronized, because volatile does not guarantee atomicity. In general, the following 2 conditions must hold to use volatile:
1) writes to the variable do not depend on its current value;
2) the variable is not included in an invariant together with other variables.
In effect, these conditions state that the valid values that can be written to a volatile variable are independent of any program state, including the variable's current state.
My understanding is that these 2 conditions ensure that the operations on the variable are atomic, so that a program using the volatile keyword executes correctly under concurrency.
Below are a couple of common scenarios for using volatile in Java.

1. Status flags
volatile boolean flag = false;

while (!flag) {
    doSomething();
}

public void setFlag() {
    flag = true;
}
volatile boolean inited = false;

// thread 1:
context = loadContext();
inited = true;

// thread 2:
while (!inited) {
    sleep();
}
doSomethingWithConfig(context);
2. Double-checked locking (double check)
class Singleton {
    private volatile static Singleton instance = null;

    private Singleton() {
    }

    public static Singleton getInstance() {
        if (instance == null) {
            synchronized (Singleton.class) {
                if (instance == null)
                    instance = new Singleton();
            }
        }
        return instance;
    }
}
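A small usage sketch of the double-checked pattern (the Singleton class below is repeated for self-containment, and the demo class name is my own): several threads call getInstance() concurrently, and every caller observes the same instance — the volatile on the instance field is what prevents a thread from seeing a partially constructed object:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class Singleton {
    private volatile static Singleton instance = null;

    private Singleton() { }

    public static Singleton getInstance() {
        if (instance == null) {                      // first check, no lock
            synchronized (Singleton.class) {
                if (instance == null)                // second check, under lock
                    instance = new Singleton();
            }
        }
        return instance;
    }
}

public class DclDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(8);
        Callable<Singleton> task = Singleton::getInstance;
        Future<Singleton> f1 = pool.submit(task);
        Future<Singleton> f2 = pool.submit(task);
        // Every caller observes the same fully constructed instance.
        System.out.println(f1.get() == f2.get());   // prints true
        pool.shutdown();
    }
}
```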
As for why it must be written this way, please refer to:
"Double check in Java (double-check)": http://blog.csdn.net/dl88250/article/details/5439024
and http://www.iteye.com/topic/652440
References:
*Thinking in Java*
*In-depth Understanding of the Java Virtual Machine*
http://jiangzhengjun.iteye.com/blog/652532
http://blog.sina.com.cn/s/blog_7bee8dd50101fu8n.html
http://ifeve.com/volatile/
http://blog.csdn.net/ccit0519/article/details/11241403
http://blog.csdn.net/ns_code/article/details/17101369
http://www.cnblogs.com/kevinwu/archive/2012/05/02/2479464.html
http://www.cppblog.com/elva/archive/2011/01/21/139019.html
http://ifeve.com/volatile-array-visiblity/
http://www.bdqn.cn/news/201312/12579.shtml
http://exploer.blog.51cto.com/7123589/1193399
http://www.cnblogs.com/Mainz/p/3556430.html
Author: Haizi
Source: http://www.cnblogs.com/dolphin0520/
The articles on this blog, unless marked as reproduced, belong to the author Haizi and the cnblogs site. Reprinting is welcome, but without the author's consent this statement must be retained and a link to the original given in an obvious position on the article page; otherwise the right to pursue legal responsibility is reserved.