volatile definition
The Java programming language allows threads to access shared variables. To ensure that shared variables are updated accurately and consistently, a thread ordinarily has to acquire an exclusive lock around each access. The Java language also provides volatile, which in some situations is more convenient than locking: if a field is declared volatile, the Java memory model ensures that all threads see a consistent value for that variable.

The role of volatile
Let's first talk about what the volatile keyword does. It guarantees the "visibility" of shared variables in multiprocessor development: when one thread modifies a shared variable, another thread can read the modified value. Used appropriately, a volatile variable is cheaper to use and execute than synchronized, because it does not cause thread context switching and scheduling.

Volatile code example: the singleton pattern (and reordering)
public class Singleton {

    private static volatile Singleton singleton;

    /**
     * Constructor is private
     */
    private Singleton() {}

    /**
     * Singleton instance accessor
     * @author Fuyuwei
     * May 14, 2017, 10:07 AM
     * @return the single instance
     */
    public static Singleton getInstance() {
        if (singleton == null) {
            synchronized (Singleton.class) {
                if (singleton == null) {
                    singleton = new Singleton();
                }
            }
        }
        return singleton;
    }
}
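As an aside, when lazy initialization is the only goal, the JVM's class-initialization guarantees give a lock-free alternative to double-checked locking: the initialization-on-demand holder idiom. This is a sketch, not part of the original article, and the class names are chosen for illustration:

```java
public class HolderSingleton {

    private HolderSingleton() {}

    // The JVM guarantees class initialization is both lazy and thread-safe:
    // Holder is not initialized until getInstance() first touches it, and
    // at most one thread runs the static initializer.
    private static class Holder {
        static final HolderSingleton INSTANCE = new HolderSingleton();
    }

    public static HolderSingleton getInstance() {
        return Holder.INSTANCE;
    }
}
```

No volatile or synchronized appears in the user code; the reordering problem discussed below simply cannot arise here.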
We know that instantiating an object involves three steps: allocate memory, initialize the object, and assign the memory address to the reference. In the singleton above, without volatile, the compiler and CPU are free to reorder this into: allocate memory, assign the address to the reference, initialize the object. Suppose thread A enters getInstance(), finds singleton == null, takes the lock, and starts new Singleton(). If the reference is published before initialization completes, thread B entering getInstance() can observe a non-null singleton and return an object that is not yet fully constructed. (And if A's write is not yet visible to other threads after the lock is released, thread B may see singleton == null and attempt to instantiate again.) Declaring the field volatile forbids this reordering and guarantees visibility of the write.

Visibility
A visibility problem occurs when one thread modifies a shared variable's value and another thread cannot see the change. The main cause is that each thread has its own cache area, its thread working memory. The volatile keyword effectively solves this problem.
public class Volatile {

    int m = 0;
    int n = 1;

    public void set() {
        m = 6;
        n = m;
    }

    public void print() {
        System.out.println("m:" + m + ",n:" + n);
    }

    public static void main(String[] args) {
        while (true) {
            final Volatile v = new Volatile();
            new Thread(new Runnable() {
                @Override
                public void run() {
                    try {
                        Thread.sleep(1000);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    v.set();
                }
            }).start();
            new Thread(new Runnable() {
                @Override
                public void run() {
                    try {
                        Thread.sleep(1000);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    v.print();
                }
            }).start();
        }
    }
}
Normally we would expect either m:0,n:1 (print before set) or m:6,n:6 (print after set). But by running the program we also observe m:0,n:6 (it can take a long time to appear):
m:6,n:6
m:6,n:6
m:6,n:6
m:6,n:6
m:6,n:6
m:6,n:6
m:6,n:6
m:6,n:6
m:6,n:6
m:6,n:6
m:0,n:1
m:6,n:6
m:6,n:6
m:6,n:6
m:6,n:6
m:6,n:6
m:6,n:6
m:6,n:6
m:6,n:6
m:6,n:6
m:0,n:6
m:6,n:6
m:6,n:6
m:6,n:6
m:0,n:1
m:0,n:1
m:0,n:1
m:6,n:6
m:0,n:1
m:0,n:1
m:0,n:1
m:6,n:6
m:6,n:6
m:6,n:6
m:0,n:1
m:6,n:6
m:6,n:6
There are two main differences between writes to volatile variables and writes to ordinary variables:
(1) Modifying a volatile variable forces the modified value to be flushed to main memory.
(2) Modifying a volatile variable invalidates the corresponding value in the working memory of other threads, so those threads must re-read the variable from main memory.

volatile does not guarantee atomicity
package com.swk.thread;

public class Volatile {

    private volatile int m = 0;

    public void incr() {
        m++;
    }

    public static void main(String[] args) {
        final Volatile v = new Volatile();
        for (int i = 0; i < 1000; i++) {
            new Thread(new Runnable() {
                @Override
                public void run() {
                    try {
                        Thread.sleep(1000);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    v.incr();
                }
            }).start();
        }
        try {
            Thread.sleep(10000); // give all 1000 threads time to finish
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println(v.m);
    }
}
Output: for example 950, not the 1000 we expected. If we make incr() synchronized, the output is 1000.
The reason is simple: m++ is actually a compound operation with three steps:
(1) Read the value of m.
(2) Add 1 to m.
(3) Write the value of m back to memory.
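Because the three steps above are not atomic, a thread-safe counter needs an atomic read-modify-write. A minimal sketch using java.util.concurrent.atomic.AtomicInteger (the class and method names here are mine, for illustration):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounter {

    private final AtomicInteger m = new AtomicInteger(0);

    public void incr() {
        m.incrementAndGet(); // atomic read-modify-write: no lost updates
    }

    public int get() {
        return m.get();
    }

    public static void main(String[] args) throws InterruptedException {
        final AtomicCounter c = new AtomicCounter();
        Thread[] threads = new Thread[1000];
        for (int i = 0; i < 1000; i++) {
            threads[i] = new Thread(new Runnable() {
                @Override
                public void run() {
                    c.incr();
                }
            });
            threads[i].start();
        }
        // join() instead of sleeping: we know all increments have completed
        for (Thread t : threads) {
            t.join();
        }
        System.out.println(c.get()); // always 1000
    }
}
```

Joining the threads also removes the fragile Thread.sleep(10000) from the earlier demo.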
volatile cannot guarantee that these three steps execute atomically; we can guarantee the atomicity of the increment with AtomicInteger or synchronized.

The underlying implementation of volatile
Before we look at how volatile is implemented, let's review the CPU terminology and instructions related to its implementation:
| Term | English word | Description |
| --- | --- | --- |
| memory barrier | memory barriers | a set of processor instructions that enforce ordering restrictions on memory operations |
| cache line | cache line | the smallest unit of storage that can be allocated in the cache; when the processor fills a cache line it loads the entire line, which may require multiple main-memory cycles |
| atomic operation | atomic operations | an operation, or series of operations, that cannot be interrupted |
| cache line fill | cache line fill | when the processor recognizes that a memory read operand is cacheable, it reads the entire cache line into the appropriate cache |
| cache hit | cache hit | if the memory location of a cache line fill is still the address of the next processor access, the processor reads the operand from the cache instead of from memory |
| write hit | write hit | when the processor writes an operand back to a memory cache area, it first checks whether that memory address is in a cache line; if a valid cache line exists, the processor writes the operand back to the cache instead of to memory |
| write miss | write misses the cache | a valid cache line is written to a memory region that is not currently cached |
How does volatile ensure visibility? Let's look at the assembler instructions generated by the JIT compiler on an x86 processor to see what the CPU does when writing a volatile variable. The Java code is as follows:
private volatile Singleton instance = new Singleton();
This compiles into the following assembly:

0x01a3de1d: movb $0x0,0x1104800(%esi)
0x01a3de24: lock addl $0x0,(%esp)
The extra instruction is lock addl $0x0,(%esp). When writing a variable decorated with volatile, this lock-prefixed instruction causes two things on a multi-core processor:
(1) The data in the current processor's cache line is written back to system memory.
(2) This write-back invalidates any copy of that memory address cached by other CPUs.
To improve processing speed, the processor does not communicate with memory directly; it first reads system memory into its internal caches (L1, L2, or others) and operates there, and it is not known when the result will be written back to memory. When you write to a variable declared volatile, however, the JVM emits a lock-prefixed instruction, which writes the data in the variable's cache line back to system memory. But even after the write-back, other processors' caches may still hold the old value, and computing with a stale value would be a problem. Therefore, under multiprocessing, a cache coherence protocol keeps each processor's cache consistent: each processor sniffs the data propagated on the bus to check whether its cached values are stale, and when a processor finds that the memory address backing one of its cache lines has been modified, it marks that cache line invalid; the next time it operates on that data, it re-reads it from system memory into its cache.

volatile usage scenarios

One thread writes, multiple threads read
volatile boolean shutdownRequested;
...
public void shutdown() {
    shutdownRequested = true;
}

public void doWork() {
    while (!shutdownRequested) {
        // do stuff
    }
}
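A self-contained, runnable version of this shutdown-flag pattern might look like the sketch below (the Worker class name and timings are mine, not from the original). The single volatile read per loop iteration is what guarantees the worker eventually observes the flag set by another thread:

```java
public class Worker implements Runnable {

    private volatile boolean shutdownRequested;

    public void shutdown() {
        // One writer thread; the volatile write is immediately
        // visible to the worker without any locking.
        shutdownRequested = true;
    }

    @Override
    public void run() {
        while (!shutdownRequested) {
            // do stuff; each iteration re-reads the volatile flag
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Worker w = new Worker();
        Thread t = new Thread(w);
        t.start();
        Thread.sleep(100);
        w.shutdown();   // without volatile, t could spin forever on a stale value
        t.join(5000);
        System.out.println("worker stopped: " + !t.isAlive());
    }
}
```

If the field were a plain boolean, the JIT would be free to hoist the read out of the loop, and the worker might never terminate.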
Implementing a "low-cost read-write lock" with volatile and synchronized
volatile allows multiple threads to perform reads concurrently, so when you use volatile to guard the read code path you get a higher degree of sharing than executing every code path under a lock, much like a read-write lock.
public class CheesyCounter {

    private volatile int value;

    public int getValue() {
        return value;
    }

    public synchronized int increment() {
        return value++;
    }
}
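To see the pattern under contention, a small harness can hammer increment() from many threads while getValue() stays lock-free. This is a sketch; the ExecutorService setup and the demo class name are mine, and the counter is restated inline so the example is self-contained:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class CheesyCounterDemo {

    // Same shape as CheesyCounter above: volatile for lock-free reads,
    // synchronized for the compound read-modify-write.
    static class CheesyCounter {
        private volatile int value;

        public int getValue() {
            return value; // no lock needed: volatile read sees the latest write
        }

        public synchronized int increment() {
            return value++;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        final CheesyCounter counter = new CheesyCounter();
        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int i = 0; i < 10000; i++) {
            pool.execute(new Runnable() {
                @Override
                public void run() {
                    counter.increment();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);
        System.out.println(counter.getValue()); // 10000: writes are serialized
    }
}
```

Because increment() is synchronized, no updates are lost (unlike the earlier volatile-only counter), while readers never block.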