Deep understanding of the volatile keyword

Many readers have probably heard of the volatile keyword, and perhaps used it. Before Java 5 it was a controversial keyword, because using it in a program often led to unexpected results. Since Java 5, the volatile keyword has regained its usefulness.

Although volatile is simple to understand literally, using it well is not an easy task. Because the volatile keyword is tied to Java's memory model, this article first reviews the concepts behind memory models, then analyzes how the volatile keyword is implemented, and finally presents several scenarios where the volatile keyword is appropriate.


I. Related concepts of the memory model

As we all know, a computer executes a program as instructions in the CPU, and executing instructions inevitably involves reading and writing data. Since a program's data is stored in main memory (physical RAM), there is a problem: the CPU executes instructions far faster than it can read data from or write data to main memory, so if every data operation had to go through main memory, instruction execution would slow down drastically. That is why the CPU has a cache.
When a program runs, the data required by an operation is copied from main memory into the CPU cache; the CPU then reads and writes that data directly in its cache, and when the operation finishes, the cached data is flushed back to main memory. Take a simple example, such as the following code:

i = i + 1;

When a thread executes this statement, it reads the value of i from main memory, copies it into the cache, executes the instruction that adds 1 to i, writes the result to the cache, and finally flushes the latest value of i back to main memory.
This code is fine when run in a single thread, but it can go wrong in multithreaded code. On a multi-core CPU, each thread may run on a different core, so each thread runs with its own cache (the same is actually true on a single-core CPU, where threads are interleaved by the scheduler). This article uses the multi-core CPU as the example.
For example, suppose 2 threads execute this code and the initial value of i is 0. We would like i to be 2 after both threads finish, but will that be the case?
The following situation can occur: initially, the two threads each read the value of i into their own CPU's cache. Thread 1 adds 1 and writes the latest value of i, 1, back to main memory. At this point the value of i in thread 2's cache is still 0; after its add-1 operation, i becomes 1, and thread 2 then writes its value of i, also 1, to main memory.
The final value of i is 1, not 2. This is the well-known cache consistency problem. A variable accessed by multiple threads in this way is commonly called a shared variable.
That is, if a variable is cached by multiple CPUs (which is usually the case in multithreaded programming), cache inconsistencies are possible.
To address the problem of cache inconsistency, there are generally 2 workarounds:
1) adding a LOCK# lock on the bus
2) using a cache coherence protocol
Both approaches are provided at the hardware level.
Early CPUs solved cache inconsistency by asserting a LOCK# signal on the bus. Because the CPUs and other components communicate over the bus, locking the bus blocks the other CPUs' access to other components (such as memory), so that only one CPU can use the variable's memory. In the i = i + 1 example above, if a LOCK# signal is asserted on the bus while that code executes, the other CPUs can read the variable i from its memory location and operate on it only after the code has finished executing completely. This solves the cache inconsistency problem.
However, this approach has a drawback: while the bus is locked, the other CPUs cannot access memory at all, which is inefficient.
Hence cache coherence protocols. The best known is Intel's MESI protocol, which guarantees that the copies of a shared variable held in each cache stay consistent. Its core idea: when a CPU writes data and finds that the variable being operated on is a shared variable, that is, a copy of the variable also exists in other CPUs' caches, it signals the other CPUs to mark their cache line for that variable as invalid; when another CPU later needs to read the variable and finds that the cache line holding it in its own cache is invalid, it re-reads the value from main memory.
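To make the invalidation idea concrete, here is a toy, single-threaded Java model of it. The class, fields, and two-state machine are invented for illustration (real MESI has four states and lives in hardware): a write by one CPU marks every other CPU's cached copy invalid, forcing their next read to go back to main memory.

```java
public class CoherenceToy {
    enum State { VALID, INVALID }

    static int mainMemory = 0;                       // the shared variable in RAM
    static final int CPUS = 2;
    static int[] cachedValue = new int[CPUS];
    static State[] state = { State.INVALID, State.INVALID };

    static int read(int cpu) {
        if (state[cpu] == State.INVALID) {           // cache miss: reload from RAM
            cachedValue[cpu] = mainMemory;
            state[cpu] = State.VALID;
        }
        return cachedValue[cpu];
    }

    static void write(int cpu, int value) {
        cachedValue[cpu] = value;
        state[cpu] = State.VALID;
        mainMemory = value;                          // write-through, for simplicity
        for (int other = 0; other < CPUS; other++)   // invalidate every other copy
            if (other != cpu) state[other] = State.INVALID;
    }

    public static void main(String[] args) {
        read(0); read(1);             // both CPUs cache the value 0
        write(0, 7);                  // CPU 0 writes: CPU 1's line becomes invalid
        System.out.println(read(1));  // CPU 1 misses and reloads from memory: 7
    }
}
```

Without the invalidation loop in write(), read(1) would keep returning the stale cached 0, which is exactly the inconsistency described above.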


II. Three concepts in concurrent programming

In concurrent programming we typically encounter the following three problems: atomicity, visibility, and ordering. Let's look at each concept in detail:
1. Atomicity
Atomicity: one operation, or a group of operations, either executes completely without being interrupted by any factor, or does not execute at all.
A classic example is the bank account transfer problem:
For example, transferring 1000 yuan from account A to account B necessarily involves 2 operations: subtract 1000 yuan from account A, then add 1000 yuan to account B.
Imagine the consequences if these 2 operations were not atomic. Suppose that after 1000 yuan is subtracted from account A, the process suddenly stops; meanwhile 500 yuan is withdrawn from account B, and only after that withdrawal does the operation adding 1000 yuan to account B execute. Interleavings like this can leave account A debited by 1000 yuan while account B never correctly receives the transferred 1000 yuan.
So these 2 operations must be atomic to guarantee that no unexpected problem occurs.
The same concern appears in concurrent programming.
For the simplest example, consider what could happen if the process of assigning to a 32-bit variable were not atomic.

i = 9;

When a thread executes this statement, suppose for the moment that a 32-bit assignment consisted of two steps: writing the low 16 bits, then writing the high 16 bits.

Then the following could happen: after the low 16 bits are written, the thread is suddenly interrupted, and another thread reads the value of i and sees a corrupt, half-written value.
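The torn value can be computed deterministically. The sketch below uses hypothetical helper names, and real JVMs do make 32-bit stores atomic; it only shows what a reader would observe if it ran after the low 16 bits were written but before the high 16 bits:

```java
public class TornWrite {
    // Value seen by a reader that interrupts a hypothetical two-step store:
    // the low 16 bits are already the new value, the high 16 bits are still old.
    static int tornRead(int oldValue, int newValue) {
        return (oldValue & 0xFFFF0000) | (newValue & 0x0000FFFF);
    }

    public static void main(String[] args) {
        int oldValue = 0x00010000;      // 65536
        int newValue = 9;
        int seen = tornRead(oldValue, newValue);
        System.out.println(seen);       // 65545: neither 65536 nor 9
    }
}
```

The observed 65545 is neither the old value nor the new one, which is precisely why a plain store must be atomic.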

2. Visibility

Visibility means that when multiple threads access the same variable, a thread modifies the value of the variable, and other threads can immediately see the modified value.

For a simple example, look at the following code:

// code executed by thread 1
int i = 0;
i = 10;

// code executed by thread 2
j = i;

Suppose thread 1 runs on CPU1 and thread 2 runs on CPU2. Following the earlier analysis, when thread 1 executes i = 10, it first loads the initial value of i into CPU1's cache and then assigns 10, so the value of i in CPU1's cache becomes 10 but is not immediately written back to main memory.
When thread 2 then executes j = i, it reads the value of i from main memory into CPU2's cache; at this point the value of i in main memory is still 0, so j gets the value 0 instead of 10.
This is the visibility problem: after thread 1 has modified the variable i, thread 2 does not immediately see the modified value.
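The following runnable sketch (names invented for illustration) shows the repaired version of this scenario. It declares the flag volatile, the keyword examined later in this article, so the reader is guaranteed to observe the writer's update:

```java
public class VisibilityDemo {
    static volatile boolean ready = false;   // the publication flag
    static int data = 0;                     // the plain variable being published

    // Spin until the writer's volatile store becomes visible, then read data.
    static int readAfterReady() {
        while (!ready) { /* spin */ }
        return data;                         // guaranteed to see 10 here
    }

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            data = 10;       // ordinary write...
            ready = true;    // ...published by the volatile write
        });
        writer.start();
        System.out.println(readAfterReady());   // prints 10
        writer.join();
    }
}
```

If ready were not volatile, the reader could spin forever on a stale cached false, which is exactly the j = 0 outcome described above.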

3. Ordering
Ordering: the program executes in the order in which the code is written. For a simple example, look at the following code:

int i = 0;
boolean flag = false;
i = 1;                // statement 1
flag = true;          // statement 2

The code above defines an int variable and a boolean variable, then assigns to each. In code order, statement 1 comes before statement 2, but will the JVM actually guarantee that statement 1 executes before statement 2? Not necessarily. Why? Instruction reordering may occur here.
What is instruction reordering? In general, to improve performance, the processor (and the compiler) may reorder the input code: it does not guarantee that the individual statements execute in the same order as they appear in the code, but it does guarantee that the final result of the program equals the result of executing the code in order.
For example, in the code above, whether statement 1 or statement 2 executes first has no effect on the final result, so it is possible that during execution statement 2 runs first and statement 1 afterwards.
Note, however, that although the processor reorders instructions, it guarantees the final result matches sequential execution. How does it guarantee that? Let's look at one more example:

int a = 10;    // statement 1
int r = 2;     // statement 2
a = a + 3;     // statement 3
r = a * a;     // statement 4

This code has 4 statements, so possible execution orders include:
statement 1, statement 2, statement 3, statement 4
statement 2, statement 1, statement 3, statement 4
statement 1, statement 3, statement 2, statement 4
But this execution order is impossible: statement 2, statement 1, statement 4, statement 3.
It is impossible because the processor considers data dependencies between instructions when reordering: if an instruction 2 must use the result of instruction 1, the processor guarantees that instruction 1 executes before instruction 2. Here statement 3 depends on statement 1 (it reads the a that statement 1 writes), and statement 4 depends on statement 3 (it reads a) and must also follow statement 2 (both write r), so statement 4 can never run before statement 3.
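This dependency argument can be checked mechanically. The sketch below (illustrative code, not a real reordering engine) enumerates all 24 orders of the 4 statements and keeps only those respecting the dependencies just listed: 1 before 3, 3 before 4, and 2 before 4:

```java
import java.util.ArrayList;
import java.util.List;

public class ReorderCheck {
    // Enumerate all permutations of statements 1..4 and keep the legal ones.
    static List<String> legalOrders() {
        List<String> legal = new ArrayList<>();
        permute(new int[] {1, 2, 3, 4}, 0, legal);
        return legal;
    }

    static void permute(int[] p, int k, List<String> legal) {
        if (k == p.length) {
            // dependencies: 3 after 1 (reads a), 4 after 3 (reads a), 4 after 2 (both write r)
            if (pos(p, 1) < pos(p, 3) && pos(p, 3) < pos(p, 4) && pos(p, 2) < pos(p, 4))
                legal.add(p[0] + "," + p[1] + "," + p[2] + "," + p[3]);
            return;
        }
        for (int i = k; i < p.length; i++) {
            swap(p, k, i);
            permute(p, k + 1, legal);
            swap(p, k, i);
        }
    }

    static int pos(int[] p, int s) {
        for (int i = 0; i < p.length; i++) if (p[i] == s) return i;
        return -1;
    }

    static void swap(int[] p, int i, int j) { int t = p[i]; p[i] = p[j]; p[j] = t; }

    public static void main(String[] args) {
        List<String> legal = legalOrders();
        System.out.println(legal);                      // the three legal orders
        System.out.println(legal.contains("2,1,3,4"));  // a legal reordering
        System.out.println(legal.contains("2,1,4,3"));  // illegal: 4 before 3
    }
}
```

Only three orders survive, and 2,1,4,3 is not among them, matching the claim in the text.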

So reordering does not affect the result of execution within a single thread, but what about multiple threads? Let's look at an example:

// thread 1:
context = loadContext();   // statement 1
inited = true;             // statement 2

// thread 2:
while (!inited) {
    sleep();
}
doSomethingWithConfig(context);
In the code above, because statements 1 and 2 have no data dependency, they may be reordered. If reordering occurs and thread 1 executes statement 2 first, thread 2 may then believe that initialization is complete, exit the while loop, and execute the doSomethingWithConfig(context) method while context has not yet been initialized, causing the program to fail.
As this shows, instruction reordering does not affect single-threaded execution, but it can affect the correctness of threads executing concurrently.
In other words, for a concurrent program to execute correctly, atomicity, visibility, and ordering must all be guaranteed. If any one of them is not guaranteed, the program may run incorrectly.


III. The Java memory model
Having discussed the problems that can arise with memory models and concurrent programming, let's now look at the Java memory model: what guarantees it provides, and what methods and mechanisms Java gives us to ensure correct execution in multithreaded programming.
The Java Virtual Machine specification attempts to define a Java memory model (JMM) that masks the memory-access differences between hardware platforms and operating systems, so that Java programs see consistent memory behavior on every platform. So what does the Java memory model prescribe? It defines the access rules for variables in a program and, to a large extent, the order in which the program executes. Note that for better performance, the Java memory model does not restrict the execution engine from using the processor's registers or caches to speed up instruction execution, nor does it restrict the compiler from reordering instructions. In other words, the Java memory model also faces cache-consistency and instruction-reordering problems.
The Java memory model stipulates that all variables live in main memory (analogous to the physical memory discussed earlier) and that each thread has its own working memory (analogous to the cache). All of a thread's operations on a variable must happen in its working memory; a thread cannot operate on main memory directly, nor can it access the working memory of other threads.
To give a simple example: in Java, execute the following statement:

i = 10;

The executing thread must first assign the value to the copy of the variable i in its own working memory, and only afterwards write it back to main memory, rather than writing the value 10 directly into main memory.

So what are the guarantees for atomicity, visibility, and ordering of the Java language itself?

1. Atomicity
In Java, reads and assignments of variables of primitive data types are atomic operations: such an operation cannot be interrupted; it either executes completely or does not execute at all.
Although that sentence seems simple, it is not so easy to understand. Take a look at the following example:
Please analyze which of the following actions are atomic operations:

x = 10;        // statement 1
y = x;         // statement 2
x++;           // statement 3
x = x + 1;     // statement 4
At first glance, some readers may say that all 4 statements above are atomic operations. In fact, only statement 1 is an atomic operation; the other three are not.
Statement 1 assigns the value 10 directly to x: the executing thread writes the value 10 straight into its working memory.
Statement 2 actually contains 2 operations: it first reads the value of x, then writes that value into the working memory for y. Although each of the two steps is atomic, together they do not form one atomic operation.
Similarly, x++ and x = x + 1 each consist of 3 operations: read the value of x, add 1, and write the new value back.
So of the 4 statements above, only the operation in statement 1 is atomic.
That is, only simple reads and assignments (and the assignment must be of a literal number to a variable; assigning one variable to another is not atomic) are atomic operations.
One caveat: on 32-bit platforms, reads and assignments of 64-bit data (long and double) may be performed as two separate operations and are not guaranteed to be atomic. In recent JDKs, however, the JVM does treat reads and assignments of 64-bit data as atomic.
As the above shows, the Java memory model only guarantees that these basic reads and assignments are atomic. To get atomicity over a wider range of operations, you can use synchronized or Lock. Since synchronized and Lock guarantee that only one thread executes the protected block at any moment, there is no atomicity problem within it, and atomicity is assured.
2. Visibility
For visibility, Java provides the volatile keyword.
When a shared variable is declared volatile, any modification to it is guaranteed to be flushed immediately to main memory, and when other threads need to read it, they read the fresh value from main memory.
An ordinary shared variable carries no such visibility guarantee: when it is modified, there is no telling when the new value will be written to main memory, so another thread reading it may still see the original old value.
Visibility can also be ensured with synchronized and Lock: they guarantee that only one thread at a time acquires the lock and executes the synchronized code, and that the thread's modifications are flushed to main memory before the lock is released. Visibility is therefore guaranteed.
3. Ordering
In the Java memory model, the compiler and processor are allowed to reorder instructions; the reordering does not affect single-threaded execution, but it can affect the correctness of multithreaded execution.
In Java, you can use the volatile keyword to guarantee a degree of "ordering" (the mechanism is described in the next section). Ordering can also be maintained with synchronized and Lock: they ensure that only one thread at a time executes the synchronized code, which amounts to making threads execute that code in sequence, naturally guaranteeing ordering.
In addition, the Java memory model has some innate "ordering" that holds without any special measures; this is usually called the happens-before principle. If the ordering of two operations cannot be deduced from the happens-before principle, their ordering is not guaranteed, and the virtual machine may reorder them arbitrarily.
The happens-before rules are introduced in detail below:
Program order rule: within a thread, in code order, operations written earlier happen-before operations written later
Monitor lock rule: an unlock operation happens-before every subsequent lock operation on the same lock
Volatile variable rule: a write to a volatile variable happens-before every subsequent read of that variable
Transitivity rule: if operation A happens-before operation B, and operation B happens-before operation C, then operation A happens-before operation C
Thread start rule: the start() method of a Thread object happens-before every action of that thread
Thread interruption rule: the call to a thread's interrupt() method happens-before the interrupted thread's code detects that the interrupt occurred
Thread termination rule: all operations in a thread happen-before the detection of that thread's termination; termination can be detected through Thread.join() returning or Thread.isAlive() returning false
Object finalization rule: the completion of an object's initialization happens-before the start of its finalize() method
These 8 rules are excerpted from "In-depth Understanding of the Java Virtual Machine".
Of the 8 rules, the first 4 are the most important; the last 4 are fairly self-evident.
Let's go through the first 4 rules:
For the program order rule, my understanding is that the execution of a piece of code appears ordered within a single thread. Note that although the rule says operations written earlier happen-before operations written later, this means the program appears to execute in code order; the virtual machine may still reorder the code. Despite the reordering, the final result is consistent with sequential execution, because only instructions without data dependencies may be reordered. So within a single thread, program execution appears to proceed in an orderly manner. In effect, this rule guarantees the result of execution within a single thread; it does not guarantee that the program behaves correctly across multiple threads.
The second rule is also easy to understand: whether in one thread or several, if the same lock is currently locked, it must be released before another lock operation on it can proceed.
The third rule is the important one, and it is the focus of the rest of this article. Intuitively: if one thread first writes a volatile variable and another thread then reads it, the write happens-before the read.
The fourth rule expresses the transitivity of the happens-before principle.
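Two of the "self-evident" rules can also be seen in a short runnable sketch (names invented for illustration): the thread start rule makes a write performed before start() visible inside the new thread, and the thread termination rule makes the thread's writes visible after join() returns:

```java
public class HappensBeforeDemo {
    static int before;   // written before start(): visible inside the thread
    static int inside;   // written inside the thread: visible after join()

    static int demo() {
        before = 1;
        Thread t = new Thread(() -> inside = before + 1);  // sees before == 1
        t.start();
        try {
            t.join();     // after join(), everything t did is visible here
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return inside;    // guaranteed to be 2
    }

    public static void main(String[] args) {
        System.out.println(demo());   // prints 2
    }
}
```

No volatile or synchronized is needed here: start() and join() themselves establish the happens-before edges.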


IV. In-depth analysis of the volatile keyword
Everything above was groundwork for the volatile keyword; now we come to the topic itself.
1. The two layers of semantics of the volatile keyword
Once a shared variable (an instance field or a static field of a class) is declared volatile, it carries two layers of semantics:
1) it guarantees visibility when different threads operate on the variable: when one thread modifies its value, the new value is immediately visible to other threads.
2) it forbids instruction reordering.
Look at a piece of code first; suppose thread 1 executes first, then thread 2:

// thread 1
boolean stop = false;
while (!stop) {
    doSomething();
}

// thread 2
stop = true;

This is a typical piece of code that many people use when interrupting a thread. But will this code really run correctly in every case, that is, will it always stop the thread? Not necessarily. Most of the time it will stop the thread, but there is a chance it fails to, and the loop never exits (the chance is tiny, but once it happens the result is an infinite loop).
Here is why the code can fail to stop the thread. As explained earlier, every thread has its own working memory while it runs, so when thread 1 runs, it copies the value of the stop variable into its working memory.
When thread 2 changes the value of the stop variable but has not yet written it to main memory, and thread 2 then turns to doing other things, thread 1 keeps looping because it does not know that thread 2 has changed the stop variable.
But things change once stop is declared volatile:
First: using the volatile keyword forces the modified value to be written to main memory immediately;
Second: with the volatile keyword, thread 2's modification invalidates the cache line for stop in thread 1's working memory (at the hardware level, the corresponding cache line in the CPU's L1 or L2 cache becomes invalid);
Third: because the cache line for stop in thread 1's working memory is invalid, thread 1 reads stop from main memory the next time it needs the value.
So when thread 2 modifies the value of stop (this involves 2 steps: modifying the value in thread 2's working memory, then writing the modified value to main memory), the cache line for stop in thread 1's working memory is invalidated; when thread 1 next reads the variable, it finds its cache line invalid, waits for the main memory address corresponding to the cache line to be updated, and then reads the latest value from main memory.
Thread 1 therefore reads the latest, correct value.
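Here is the stop-flag example assembled into a runnable sketch (class and method names invented): with stop declared volatile, the worker thread is guaranteed to observe the update and exit:

```java
public class StopDemo {
    static volatile boolean stop = false;
    static long iterations = 0;

    // Start a worker looping on !stop, then set the flag and wait for it to exit.
    static boolean runWorkerAndStop() {
        Thread worker = new Thread(() -> {
            while (!stop) {
                iterations++;     // stands in for doSomething()
            }
        });
        worker.start();
        stop = true;              // thread 2's role: request the stop
        try {
            worker.join(1000);    // give the worker up to a second to notice
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return !worker.isAlive(); // true: the worker saw the flag and exited
    }

    public static void main(String[] args) {
        System.out.println(runWorkerAndStop());  // prints true
    }
}
```

Dropping the volatile modifier makes the loop *able* to spin forever on a stale cached false, though on many JVMs it will still often terminate, which is exactly why this bug is so hard to catch.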
2. Does volatile guarantee atomicity?
We saw above that the volatile keyword guarantees visibility of operations, but does volatile guarantee that operations on the variable are atomic?
Let's look at an example:
public class Test {
    public volatile int inc = 0;

    public void increase() {
        inc++;
    }

    public static void main(String[] args) {
        final Test test = new Test();
        for (int i = 0; i < 10; i++) {
            new Thread() {
                public void run() {
                    for (int j = 0; j < 1000; j++)
                        test.increase();
                }
            }.start();
        }

        while (Thread.activeCount() > 1)  // wait until the worker threads finish
            Thread.yield();
        System.out.println(test.inc);
    }
}

What will this program print? Perhaps some readers think 10000. But in fact, running it repeatedly yields inconsistent results, always a number less than (or at most equal to) 10000.
Some readers may object: the code increments the volatile variable inc, and since volatile guarantees visibility, after each thread's increment the other threads can see the modified value; with 10 threads each performing 1000 increments, the final value of inc should be 1000 * 10 = 10000.
The misunderstanding here: the volatile keyword does guarantee visibility, but the program above cannot guarantee atomicity. Visibility only guarantees that each read sees the latest value; volatile cannot make compound operations on the variable atomic.
As mentioned earlier, the increment operation is not atomic; it comprises reading the variable's original value, adding 1, and writing to working memory. The three sub-operations of an increment may be split up and interleaved, so the following can happen:
Suppose the value of the variable inc is 10 at some moment.
Thread 1 begins an increment: it reads the original value of inc, and is then blocked;
Thread 2 then performs an increment: it also reads the original value of inc. Because thread 1 only read inc and never modified it, thread 2's cache line for inc was never invalidated, so thread 2 reads inc from main memory, finds the value 10, adds 1, writes 11 to its working memory, and finally writes it to main memory.
Thread 1 then resumes its add-1. Since it has already read the value of inc, and note that at this point inc in thread 1's working memory is still 10, thread 1 adds 1 to get 11, writes 11 to its working memory, and finally to main memory.
So after the two threads each performed one increment, inc only grew by 1.
Some readers may still have doubts: didn't we say above that when a volatile variable is modified, the cache line is invalidated, so the other threads read the new value? Yes, that is true; it is the volatile variable rule of the happens-before rules above. But note that thread 1 was blocked after reading the value; it had not yet modified inc. So although volatile ensures that thread 2's read of inc fetches the value from main memory, thread 1 had not modified anything, so there was no new value for thread 2 to see.
The root cause is that the increment operation is not atomic, and volatile does not make arbitrary operations on a variable atomic.
Changing the code in any of the following ways achieves the desired effect:
Using synchronized:

public class Test {
    public int inc = 0;

    public synchronized void increase() {
        inc++;
    }

    public static void main(String[] args) {
        final Test test = new Test();
        for (int i = 0; i < 10; i++) {
            new Thread() {
                public void run() {
                    for (int j = 0; j < 1000; j++)
                        test.increase();
                }
            }.start();
        }

        while (Thread.activeCount() > 1)  // wait until the worker threads finish
            Thread.yield();
        System.out.println(test.inc);
    }
}
Using Lock:

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class Test {
    public int inc = 0;
    Lock lock = new ReentrantLock();

    public void increase() {
        lock.lock();
        try {
            inc++;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        final Test test = new Test();
        for (int i = 0; i < 10; i++) {
            new Thread() {
                public void run() {
                    for (int j = 0; j < 1000; j++)
                        test.increase();
                }
            }.start();
        }

        while (Thread.activeCount() > 1)  // wait until the worker threads finish
            Thread.yield();
        System.out.println(test.inc);
    }
}
Using AtomicInteger:

import java.util.concurrent.atomic.AtomicInteger;

public class Test {
    public AtomicInteger inc = new AtomicInteger();

    public void increase() {
        inc.getAndIncrement();
    }

    public static void main(String[] args) {
        final Test test = new Test();
        for (int i = 0; i < 10; i++) {
            new Thread() {
                public void run() {
                    for (int j = 0; j < 1000; j++)
                        test.increase();
                }
            }.start();
        }

        while (Thread.activeCount() > 1)  // wait until the worker threads finish
            Thread.yield();
        System.out.println(test.inc);
    }
}

Since Java 1.5, the java.util.concurrent.atomic package provides atomic operation classes that encapsulate increment (add 1), decrement (subtract 1), addition (plus a number), and subtraction (minus a number) operations on primitive types and guarantee that these operations are atomic. The atomic classes use CAS (Compare And Swap) to implement atomic operations; CAS is in turn implemented with the cmpxchg instruction provided by the processor, and executing cmpxchg is an atomic operation.
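The retry loop that the atomic classes build on CAS can be sketched as follows. This is an illustrative re-implementation of an increment using AtomicInteger's public compareAndSet method, not the JDK's internal code: read the current value, try to swap in current + 1, and retry if another thread won the race in between.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasIncrement {
    // Atomically increment the counter via a CAS retry loop; return the new value.
    static int casIncrement(AtomicInteger counter) {
        int current;
        do {
            current = counter.get();                            // read
        } while (!counter.compareAndSet(current, current + 1)); // swap, retry on conflict
        return current + 1;
    }

    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger(0);
        casIncrement(counter);
        casIncrement(counter);
        System.out.println(counter.get());   // prints 2
    }
}
```

The loop is lock-free: a thread that loses the race does not block, it simply re-reads and tries again, which is why this tends to outperform a lock under low contention.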
3. Does volatile guarantee ordering?
It was mentioned above that the volatile keyword forbids instruction reordering, so volatile can guarantee ordering to a certain degree.
The prohibition of instruction reordering by volatile has two layers of meaning:
1) when the program performs a read or write of a volatile variable, all changes made by the operations before it must already have happened, and their results are already visible to the operations after it;
2) during instruction optimization, statements before the access to the volatile variable must not be moved after it, and statements after the access must not be moved before it.
Perhaps this is still abstract; here is a simple example:

// x and y are non-volatile variables
// flag is a volatile variable

x = 2;        // statement 1
y = 0;        // statement 2
flag = true;  // statement 3
x = 4;        // statement 4
y = -1;       // statement 5

Because flag is a volatile variable, instruction reordering will not move statement 3 before statement 1 or statement 2, nor move statement 3 after statement 4 or statement 5. Note, however, that the relative order of statements 1 and 2, and of statements 4 and 5, is not guaranteed.
And the volatile keyword guarantees that by the time statement 3 executes, statements 1 and 2 have completed, and their results are visible to statements 3, 4, and 5.
So let's go back to one of the examples above:

// thread 1:
context = loadContext();   // statement 1
inited = true;             // statement 2

// thread 2:
while (!inited) {
    sleep();
}
doSomethingWithConfig(context);

As mentioned in the earlier discussion of this example, without volatile, statement 2 may execute before statement 1, in which case the context may not yet be initialized while thread 2 uses it, causing a program error.
If the inited variable is declared with the volatile keyword, this problem cannot occur, because when execution reaches statement 2, the context is guaranteed to have been initialized.

4. Principle and implementation mechanism of volatile
We have described the semantics of the volatile keyword; now let's look at how volatile guarantees visibility and forbids instruction reordering.
The following is an excerpt from "In-depth Understanding of the Java Virtual Machine":
"Comparing the assembly code generated with and without the volatile keyword, we observe that with volatile an extra lock-prefixed instruction is emitted."
A lock-prefixed instruction acts as a memory barrier (also called a memory fence), which provides 3 functions:
1) it ensures that instruction reordering neither moves later instructions before the barrier nor moves earlier instructions after it, i.e. by the time the barrier instruction executes, all operations before it have completed;
2) it forces modifications in the cache to be written to main memory immediately;
3) if it is a write operation, it invalidates the corresponding cache line in other CPUs.
V. Scenarios for using the volatile keyword
The synchronized keyword prevents multiple threads from executing a piece of code at the same time, which can hurt execution efficiency; in some cases volatile performs better than synchronized. Note, however, that the volatile keyword cannot replace the synchronized keyword, because volatile does not guarantee atomicity. In general, using volatile requires these 2 conditions:
1) writes to the variable do not depend on its current value
2) the variable is not part of an invariant together with other variables
In effect, these conditions say that the valid values that can be written to the volatile variable are independent of any program state, including the variable's own current state.
In fact, my understanding is that the above 2 conditions ensure that the operations on the variable are atomic, so that a program using only the volatile keyword still executes correctly under concurrency.
Below are a few common scenarios for using volatile in Java.
1. Status flag

volatile boolean flag = false;

while (!flag) {
    doSomething();
}

public void setFlag() {
    flag = true;
}

volatile boolean inited = false;

// thread 1:
context = loadContext();
inited = true;

// thread 2:
while (!inited) {
    sleep();
}
doSomethingWithConfig(context);


2. Double-checked locking

class Singleton {
    private volatile static Singleton instance = null;

    private Singleton() {

    }

    public static Singleton getInstance() {
        if (instance == null) {
            synchronized (Singleton.class) {
                if (instance == null)
                    instance = new Singleton();
            }
        }
        return instance;
    }
}

As for why volatile is required here, please refer to:
Double-checked locking in Java: http://blog.csdn.net/dl88250/article/details/5439024 and http://www.iteye.com/topic/652440
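As a quick sanity check of the pattern, the sketch below restates the double-checked singleton in a self-contained class (LazySingleton is an invented name) and lets several threads race on getInstance(); every thread must end up with the same instance:

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public class LazySingleton {
    private static volatile LazySingleton instance = null;

    private LazySingleton() { }

    public static LazySingleton getInstance() {
        if (instance == null) {                        // first check, no lock
            synchronized (LazySingleton.class) {
                if (instance == null)                  // second check, under the lock
                    instance = new LazySingleton();
            }
        }
        return instance;
    }

    public static void main(String[] args) throws InterruptedException {
        Set<LazySingleton> seen = Collections.synchronizedSet(new HashSet<>());
        Thread[] threads = new Thread[8];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> seen.add(getInstance()));
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        System.out.println(seen.size());   // 1: all threads observed one instance
    }
}
```

The volatile modifier on instance is what prevents a thread from observing a reference to a half-constructed object published by the reordered write inside the synchronized block.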
Resources:
"In-depth Understanding of the Java Virtual Machine"
