A plain long assignment in Java is not guaranteed to be atomic, because on a 32-bit JVM it may be performed as two 32-bit writes, yet an AtomicLong assignment is atomic. Why? And why can volatile sometimes replace a simple lock even though it does not guarantee atomicity? Both questions come down to volatile, one of the trickier keywords in Java and one that, in my view, has never been explained very clearly in the Java specification. Sun's official JDK documentation describes volatile as follows:
The Java programming language provides a second mechanism, volatile fields, that is more convenient than locking for some purposes. A field may be declared volatile, in which case the Java Memory Model ensures that all threads see a consistent value for the variable.
In other words, marking a variable volatile tells the compiler and the JVM's memory model that the variable is shared across threads: every read sees the most recently written value, and every write is made visible to all other threads on all CPU cores. So volatile sometimes looks like a substitute for a simple lock, as if adding the keyword saves you the locking. Yet it is also said that volatile does not guarantee atomicity (Java programmers know the phrase: volatile only ensures that the variable is visible to all threads, it does not provide atomicity). Isn't that a contradiction?
The short answer: do not use volatile for get-and-operate (read-modify-write) situations, which are not atomic and need a lock; volatile is only suitable for plain set or get scenarios.
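As a quick illustration of a set/get-only use of volatile, here is a minimal sketch (class and field names are mine, not from the original post) of the common stop-flag idiom: one thread only writes the flag and the worker only reads it, so no read-modify-write is involved.

// Minimal sketch of a set/get-only volatile use: a stop flag.
// One thread writes the flag, the worker only reads it; no read-modify-write.
public class StopFlagDemo {
    private static volatile boolean running = true;   // plain get/set only

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {          // volatile read: always sees the latest write
                // do some work
            }
            System.out.println("worker stopped");
        });
        worker.start();

        Thread.sleep(100);
        running = false;               // volatile write: becomes visible to the worker
        worker.join();
    }
}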
Example that volatile is not atomic: incrementing a volatile integer
For example, incrementing a volatile integer (i++) actually breaks down into three steps: 1) read the value of the volatile variable into a local register; 2) increment the local copy; 3) write the local value back so that it is visible to other threads. The JVM compiles these three steps into the following instructions:
mov    0xc(%r10), %r8d     ; Load
inc    %r8d                ; Increment
mov    %r8d, 0xc(%r10)     ; Store
lock addl $0x0, (%rsp)     ; StoreLoad Barrier
Note the last step is a memory barrier.
What is a memory barrier?
A memory barrier is a CPU instruction. Essentially it does two things: a) it enforces the order in which certain operations are performed, and b) it affects the visibility of certain data (typically the results of earlier instructions). Compilers and CPUs are free to reorder instructions, as long as the observable result stays the same, in order to optimize performance; inserting a memory barrier tells the CPU and the compiler that whatever comes before the barrier must complete before anything that comes after it. Another effect of a memory barrier is to force the CPU caches to be brought up to date: for example, a write barrier flushes out the data written before the barrier, so that any thread that later reads that data gets the latest value, regardless of which CPU core or CPU it is running on.
What is the relationship between memory barriers and volatile? As the virtual-machine instructions above show, if a field is volatile, the Java memory model inserts a write barrier after each write to it and a read barrier before each read of it. This means that when you write to a volatile field you are guaranteed that: 1) once the write completes, any thread that reads the field will see the most recent value; and 2) everything that happened before the write is also visible to such a reader, because the memory barrier flushes all previous writes as well.
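A minimal sketch of that second guarantee (class and field names are mine): the write to the plain field data happens before the volatile write to ready, so a reader that sees ready == true is also guaranteed to see data == 42.

// Sketch of the "everything before the volatile write is visible" guarantee.
public class PublishDemo {
    private static int data = 0;                 // plain, non-volatile field
    private static volatile boolean ready = false;

    static void writer() {
        data = 42;        // 1) plain write
        ready = true;     // 2) volatile write: also publishes the write to data
    }

    static void reader() {
        if (ready) {                             // volatile read
            System.out.println(data);            // guaranteed to print 42, never 0
        }
    }
}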
Why is volatile not atomic?
With memory barriers understood, go back to the JVM instructions above: load, increment, store, memory barrier, four steps in total. Only the last step makes the new value visible to all other CPU cores; the steps in between (from the load to the store) are not protected. If another CPU modifies the value during that window, that modification is lost. The following test code demonstrates that incrementing a volatile variable is not atomic:
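The original post's test code is collapsed behind a "View Code" link; the following is a minimal sketch of the same idea (class and field names are mine): many threads perform ++ on a volatile int, and the final count comes up short.

// Sketch of a test showing that ++ on a volatile field loses updates.
public class VolatileIncrementTest {
    private static volatile int counter = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[10];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) {
                    counter++;                   // read-modify-write: NOT atomic
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        // Expected 100000, but usually prints a smaller number.
        System.out.println(counter);
    }
}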
Another example that volatile is not atomic: a singleton implementation
The following is a thread-unsafe singleton implementation, even though it uses volatile:
public class wrongsingleton {
    private static volatile wrongsingleton _instance = null;

    private wrongsingleton() {}

    public static wrongsingleton getInstance() {
        if (_instance == null) {
            _instance = new wrongsingleton();
        }
        return _instance;
    }
}
The following test code shows that it is not thread-safe:
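Again, the original test code is collapsed; here is a minimal sketch of such a test (the harness names are mine): many threads call getInstance() at the same time, and more than one instance may get constructed.

// Sketch of a test showing that wrongsingleton.getInstance() can create
// more than one instance when called from many threads at once.
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class WrongSingletonTest {
    static class wrongsingleton {
        static final AtomicInteger constructed = new AtomicInteger();
        private static volatile wrongsingleton _instance = null;
        private wrongsingleton() { constructed.incrementAndGet(); }
        public static wrongsingleton getInstance() {
            if (_instance == null) {
                _instance = new wrongsingleton();   // check-then-act: not atomic
            }
            return _instance;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch start = new CountDownLatch(1);
        Thread[] threads = new Thread[50];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                try { start.await(); } catch (InterruptedException ignored) {}
                wrongsingleton.getInstance();
            });
            threads[i].start();
        }
        start.countDown();                          // release all threads at once
        for (Thread t : threads) t.join();
        // Occasionally prints a number greater than 1.
        System.out.println("instances constructed: " + wrongsingleton.constructed.get());
    }
}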
The reason is naturally the same as in the example above: volatile guarantees the variable's visibility to all threads, but it does not guarantee atomicity, and the check-then-act in getInstance() is not atomic.
Attached: correct thread-safe singleton implementations.
@ThreadSafe
public class SafeLazyInitialization {
    private static Resource resource;

    public synchronized static Resource getInstance() {
        if (resource == null)
            resource = new Resource();
        return resource;
    }
}
Another approach, eager initialization:
@ThreadSafe
public class EagerInitialization {
    private static Resource resource = new Resource();

    public static Resource getResource() {
        return resource;
    }
}
And the lazy initialization holder class idiom:
@ThreadSafe
public class ResourceFactory {
    private static class ResourceHolder {
        public static Resource resource = new Resource();
    }

    public static Resource getResource() {
        return ResourceHolder.resource;
    }
}
Double-checked locking (historically an anti-pattern; correct only when the field is volatile):
public class SingletonDemo {
    // Note: volatile is required here
    private static volatile SingletonDemo instance = null;

    private SingletonDemo() { }

    public static SingletonDemo getInstance() {
        if (instance == null) {
            // Second check: more efficient than always taking the exclusive lock
            synchronized (SingletonDemo.class) {
                if (instance == null) {
                    instance = new SingletonDemo();
                }
            }
        }
        return instance;
    }
}
Why do the AtomicXXX classes have both atomicity and visibility?
Take AtomicLong as an example: it solves both the atomicity problem that volatile cannot solve and the visibility problem. How? With the CAS (compare-and-swap) instruction used in non-blocking synchronization and lock-free algorithms. In fact, the source of AtomicLong also uses volatile, but only for the plain reads and writes of the value; see the source:
public class AtomicLong extends Number implements java.io.Serializable {
    private volatile long value;

    /**
     * Creates a new AtomicLong with the given initial value.
     *
     * @param initialValue the initial value
     */
    public AtomicLong(long initialValue) {
        value = initialValue;
    }

    /**
     * Creates a new AtomicLong with initial value {@code 0}.
     */
    public AtomicLong() {
    }
    // ...
The core of CAS, in pseudo-code, is:
int compare_and_swap(int* reg, int oldval, int newval)
{
    ATOMIC();
    int old_reg_val = *reg;
    if (old_reg_val == oldval)
        *reg = newval;
    END_ATOMIC();
    return old_reg_val;
}
The corresponding machine instructions are:
mov    0xc(%r11), %eax        ; Load
mov    %eax, %r8d
inc    %r8d                   ; Increment
lock cmpxchg %r8d, 0xc(%r11)  ; Compare and exchange
Because CAS is based on optimistic locking: when writing back, if the value in memory no longer equals the old value that was read, some other CPU must have modified it in the meantime, so the operation simply retries until it succeeds. That retry loop around a single atomic compare-and-exchange is what guarantees the atomicity of the operation.
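Here is a minimal sketch of that retry loop, written against AtomicLong's public API (get and compareAndSet); the real JDK implementation achieves the same effect with lower-level intrinsics.

// Sketch of the CAS retry loop behind an atomic increment,
// using only AtomicLong's public API.
import java.util.concurrent.atomic.AtomicLong;

public class CasRetryDemo {
    private static final AtomicLong counter = new AtomicLong();

    static long incrementWithCas() {
        while (true) {
            long current = counter.get();            // 1) load the current value
            long next = current + 1;                 // 2) compute the new value locally
            if (counter.compareAndSet(current, next)) {
                return next;                         // 3) CAS succeeded: nobody interfered
            }
            // CAS failed: another thread changed the value first; loop and retry
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[10];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) incrementWithCas();
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        System.out.println(counter.get());           // always prints 100000
    }
}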
Reference link: Why is the volatile keyword in Java not atomic — http://www.cnblogs.com/Mainz/p/3556430.html