Overview
The main goal of the Java Memory Model (JMM) is to define the access rules for variables in a program, that is, the low-level details of how the virtual machine stores variables into memory and reads them back out. The variables in question are shared between threads and are therefore subject to contention.
The JMM stipulates that all variables are stored in main memory, and that each thread also has its own working memory, which holds copies of the variables that thread uses. All operations on a variable (read, assign, and so on) must be performed in working memory; a thread may not read or write main-memory variables directly. (Note: the working memory, or local memory, mentioned here is an abstraction; physically it covers registers, caches, and other buffers.)
Threads have no direct access to each other's working memory; variable values are passed between threads through main memory. The interaction among threads, working memory, and main memory is shown in the figure:
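The round trip through main memory can be sketched as follows (class and method names here are illustrative, not from the original text). The writer thread's assignment travels from its working memory to main memory; joining the thread guarantees that transfer has completed before the main thread reads:

```java
public class MainMemoryDemo {
    static int shared = 0;  // conceptually lives in main memory

    // Hypothetical helper: pass a value from one thread to another via main memory.
    static int passThroughMainMemory() {
        Thread writer = new Thread(() -> shared = 42); // writer's working copy -> main memory
        writer.start();
        try {
            writer.join(); // join guarantees the write has reached main memory before we read
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return shared;     // main memory -> our working copy
    }
}
```

Without the join (or some other synchronization), the main thread could legally keep seeing its stale working-memory copy.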
How to interact
So how do main memory and working memory interact? The JMM defines eight operations to accomplish this, and the virtual machine guarantees that each of them is atomic.
lock: acts on a main-memory variable; it marks the variable as exclusively owned by one thread.
unlock: acts on a main-memory variable; it releases a locked variable so that it can then be locked by other threads.
read: acts on a main-memory variable; it transfers the variable's value from main memory into the thread's working memory.
load: acts on a working-memory variable; it puts the value obtained by the read operation into the working-memory copy of the variable.
use: acts on a working-memory variable; it passes the value of the working-memory copy to the execution engine, performed whenever the virtual machine encounters a bytecode instruction that uses the variable's value.
assign: acts on a working-memory variable; it assigns a value received from the execution engine to the working-memory copy, performed whenever the virtual machine encounters a bytecode instruction that assigns to the variable.
store: acts on a working-memory variable; it transfers the value of the working-memory copy to main memory.
write: acts on a main-memory variable; it puts the value delivered by the store operation into the main-memory variable.
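As a concrete illustration, a simple increment of a shared variable decomposes (conceptually) into six of the eight operations; the class below is a hypothetical sketch with the decomposition spelled out in comments:

```java
public class EightOpsTrace {
    static int i = 0;

    // Conceptual trace of `i = i + 1` in terms of the JMM operations:
    //   read(i)   - value transferred from main memory toward the thread
    //   load(i)   - value placed into the working-memory copy
    //   use(i)    - copy handed to the execution engine for the `+`
    //   assign(i) - engine result written back to the working-memory copy
    //   store(i)  - new value transferred toward main memory
    //   write(i)  - value installed into the main-memory variable
    static void increment() {
        i = i + 1; // lock/unlock appear only when a monitor is involved
    }
}
```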
Reordering
From the above you can roughly see how threads communicate. When a program executes, however, the compiler and the processor often reorder instructions, so the actual execution order is not necessarily the order in which the program was written. This is usually done to improve performance and increase parallelism, but in multithreaded programs it can produce inconsistent results, so we need to synchronize with synchronized or volatile.
Memory Barrier
Because of reordering, in order to guarantee memory visibility, the Java compiler inserts memory-barrier instructions into the generated instruction sequence to prevent particular kinds of reordering. The JMM divides memory-barrier instructions into the following four categories:
| Barrier type | Instruction example | Description |
| --- | --- | --- |
| LoadLoad barrier | Load1; LoadLoad; Load2 | Ensures that Load1's data is loaded before Load2 and all subsequent load instructions. |
| StoreStore barrier | Store1; StoreStore; Store2 | Ensures that Store1's data is visible to other processors (flushed to memory) before Store2 and all subsequent store instructions. |
| LoadStore barrier | Load1; LoadStore; Store2 | Ensures that Load1's data is loaded before Store2 and all subsequent store instructions are flushed to memory. |
| StoreLoad barrier | Store1; StoreLoad; Load2 | Ensures that Store1's data becomes visible to other processors (flushed to memory) before Load2 and all subsequent load instructions. A StoreLoad barrier makes all memory accesses before it (stores and loads) complete before any memory access after it. |
Happens-before
In the JMM, if the result of one operation must be visible to another, there must be a happens-before relationship between the two operations, whether they are in the same thread or in different threads.
The Happens-before rules are as follows:
1. Program order rule: every operation in a thread happens-before any subsequent operation in that thread.
2. Monitor lock rule: an unlock of a monitor happens-before every subsequent lock of that same monitor.
3. Volatile rule: a write to a volatile field happens-before every subsequent read of that volatile field.
4. Transitivity: if A happens-before B and B happens-before C, then A happens-before C.
Happens-before does not mean the two operations must execute in that order in time; it guarantees that the result of the first operation is visible to the second and is ordered before it.
Data Dependencies
If two operations access the same variable and at least one of them is a write, there is a data dependency between the two operations.
Data dependencies fall into three categories:
Read after write:
a = 1; b = a;
Write after write:
a = 1; a = 2;
Write after read:
b = a; a = 1;
A data dependency means that reordering the two operations would change the result; therefore, when reordering, neither the compiler nor the processor changes the execution order of two operations that have a data dependency.
as-if-serial semantics:
No matter how instructions are reordered, the execution result of a (single-threaded) program must not change. To honor this semantics, compilers and processors do not reorder operations that have data dependencies.
Control dependencies:
int i = 0;
if (flag)      // 1
    i = a * a; // 2
There is a control dependency between operation 1 and operation 2. In this situation the compiler and the processor may perform speculative execution:
the processor can compute a * a in advance, hold the result in a hardware cache called the reorder buffer, and write it to variable i once the condition in operation 1 turns out to be true. The two operations have, in effect, been reordered, which can break the semantics of a multithreaded program.
For a single thread, reordering operations that have a control dependency does not change the result; across multiple threads, however, it may.
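The situation above can be made concrete with a hedged sketch (class and field names are illustrative). Thread A publishes a and then raises flag; thread B's read of a is control-dependent on the guard. Declaring flag volatile forbids the problematic reordering, which is why the result below is deterministic; with a plain boolean it would not be:

```java
public class ControlDepDemo {
    static int a = 0;
    static volatile boolean flag = false; // drop volatile and the guarantee below disappears

    static int run() {
        final int[] result = new int[1];
        Thread b = new Thread(() -> {
            while (!flag) { }    // operation 1: the guard (a volatile read)
            result[0] = a * a;   // operation 2: control-dependent on the guard
        });
        b.start();
        a = 2;
        flag = true;             // volatile write: the write to `a` is visible before it
        try { b.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return result[0];
    }
}
```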
Sequential Consistency:
When a program is not synchronized, data races can occur, and the results may vary from run to run.
When synchronization is used correctly, the program is free of data races, and its execution is sequentially consistent: the program's result is exactly the same as it would be under the sequential consistency model.
The sequential consistency model is a theoretical reference model with two main characteristics:
all operations in a thread must be performed in program order;
whether or not the program is synchronized, all threads see a single order of execution; in the sequential consistency model every operation is atomic and immediately visible to all other threads.
Note:
The JMM does not guarantee that reads and writes of 64-bit long and double variables (without volatile) are atomic, whereas the sequential consistency model guarantees atomicity for all reads and writes.
Because atomic reads and writes of 64-bit values carry significant overhead on some 32-bit processors, Java does not insist that they be atomic. When the JVM runs on such processors, a 64-bit long/double write may be split into two 32-bit writes that are not atomic as a pair, so a concurrent reader may observe a torn value: half old, half new.
volatile
volatile can be regarded as a weaker form of synchronized: a single read or write of a volatile variable behaves as if that individual operation were synchronized with a single shared lock. Let's look at the effect of volatile. Using volatile:
class VolatileFeaturesExample {
    // use volatile to declare a 64-bit long variable
    volatile long vl = 0L;

    public void set(long l) {
        vl = l;    // a write of a single volatile variable
    }

    public void getAndIncrement() {
        vl++;      // a composite (read-modify-write) operation on a volatile variable
    }

    public long get() {
        return vl; // a read of a single volatile variable
    }
}
Replacing volatile with synchronized:
class VolatileFeaturesExample {
    long vl = 0L;  // 64-bit long ordinary variable

    // write to the single ordinary variable, synchronized on the same lock
    public synchronized void set(long l) {
        vl = l;
    }

    public void getAndIncrement() {  // ordinary method call
        long temp = get();  // call the synchronized read method
        temp += 1L;         // ordinary write operation
        set(temp);          // call the synchronized write method
    }

    public synchronized long get() {
        // read the single ordinary variable, synchronized on the same lock
        return vl;
    }
}
As we can see, a single read or write of a volatile variable has the same effect as reading or writing an ordinary variable under a single shared lock.
We also find that even for 64-bit long and double, declaring the variable volatile makes its individual reads and writes atomic; vl++, however, is a composite operation and is therefore not atomic.
The nature of volatile (visibility and atomicity)
Visibility: a read of a volatile variable always sees the last write to that variable by any thread; in other words, every read returns the latest value.
Atomicity: any single read or write of a volatile variable is atomic.
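A short sketch (hypothetical class name) of the difference: two threads hammer a volatile counter with vl++ and an AtomicLong side by side. The CAS-based counter always ends at the exact total, while the volatile read-modify-write may lose increments:

```java
import java.util.concurrent.atomic.AtomicLong;

public class VolatileIncrementDemo {
    static volatile long vl = 0L;                  // single reads/writes are atomic
    static final AtomicLong al = new AtomicLong(); // CAS-based atomic counter

    static void run() {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                vl++;                  // read-modify-write: NOT atomic, increments can be lost
                al.incrementAndGet();  // atomic read-modify-write
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        try { t1.join(); t2.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```

After two threads of 100,000 increments each, al.get() is exactly 200,000, while vl may be anything up to 200,000.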
From visibility we can see that volatile establishes a happens-before relationship: a volatile write happens-before any subsequent read of the same variable.
Execution Process
When thread A writes a volatile variable, the JMM flushes the variable's value from the thread's local memory to main memory, so the local value and the main-memory value agree.
When thread B reads a volatile variable, the JMM invalidates the thread's local memory and reads directly from main memory, so the value read is the one just written.
In other words, when thread B reads the volatile variable, everything thread A did before writing that variable becomes visible to thread B.
Put another way:
when thread A writes a volatile variable, it in effect sends a message to every thread B that will later read that variable; when thread B reads the volatile variable, it receives the message from thread A.
So thread A writing a volatile variable and thread B reading it can be thought of as thread A sending a message to thread B through main memory.
How it is implemented
We know about instruction reordering, so can accesses to a volatile variable be reordered? The JMM restricts the reordering of such accesses as follows:
| First operation \ Second operation | Ordinary read/write | Volatile read | Volatile write |
| --- | --- | --- | --- |
| Ordinary read/write | | | NO |
| Volatile read | NO | NO | NO |
| Volatile write | | NO | NO |
From the table above we can see that:
If the second operation is a volatile write, it will not be reordered by the compiler or the processor no matter what the first operation is. This ensures that operations before a volatile write are not moved after it.
If the first operation is a volatile read, it will not be reordered no matter what the second operation is.
If the first operation is a volatile write and the second is a volatile read, the two cannot be reordered.
To implement these rules, when generating bytecode the compiler inserts memory barriers before and after volatile accesses to prevent particular kinds of reordering.
The rules are as follows:
Insert a StoreStore barrier before each volatile write
==> forbids reordering the preceding ordinary writes with the volatile write
Insert a StoreLoad barrier after each volatile write
==> forbids reordering the volatile write with the following ordinary reads and writes
Insert a LoadLoad barrier after each volatile read
==> forbids reordering the volatile read with the following ordinary reads
Insert a LoadStore barrier after each volatile read
==> forbids reordering the volatile read with the following ordinary writes
In practice some of these barriers can be omitted, and which barriers are emitted depends heavily on the processor; x86, for example, needs only the StoreLoad barrier, because write-read is the only reordering x86 allows.
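The four insertion rules can be visualized as comments around a volatile access (the bracketed barrier names are conceptual; the JIT emits real fences only where the target processor requires them):

```java
public class BarrierPlacement {
    static volatile int v;

    static void write(int x) {
        // [StoreStore] -- prior ordinary writes cannot sink below the volatile write
        v = x;
        // [StoreLoad]  -- the volatile write cannot be reordered with later reads/writes
    }

    static int read() {
        int r = v;
        // [LoadLoad]   -- later ordinary reads cannot float above the volatile read
        // [LoadStore]  -- later ordinary writes cannot float above the volatile read
        return r;
    }
}
```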
Lock
As mentioned above, locks guarantee mutual exclusion and memory visibility.
Like volatile, locks establish happens-before relationships.
For example, thread B can acquire a lock only after thread A releases it, so the modifications thread A made to shared variables before releasing the lock are visible to thread B after it acquires the lock.
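A minimal sketch (illustrative names) of lock-based visibility: each increment acquires the monitor, which invalidates the thread's local copy on entry and flushes it on exit, so the final count is exact:

```java
public class LockVisibilityDemo {
    static int count = 0;

    // Entering the monitor reads the latest main-memory value;
    // leaving it flushes the update back, so no increments are lost.
    static synchronized void inc() { count++; }

    static int run() {
        Runnable task = () -> { for (int i = 0; i < 100_000; i++) inc(); };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        try { t1.join(); t2.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return count;
    }
}
```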
Lock and volatile
volatile guarantees atomicity only for reads and writes of a single volatile variable, while a lock guarantees that the entire critical section executes atomically. The lock is therefore more powerful, but volatile has better scalability and performance.
Memory Semantics
When thread A releases a lock, the JMM flushes the shared variables in the thread's local memory to main memory.
When thread B acquires the lock, the JMM invalidates the thread's local memory, so shared variables are read directly from main memory.
Like volatile, a lock has its own memory semantics: by releasing the lock, thread A sends a message to the thread that will acquire the lock next.
By acquiring the lock, thread B receives the message sent by thread A.
This process is, in effect, the two threads communicating through main memory.
Lock implementation
Locking is usually done through the lock method of ReentrantLock. This lock comes in fair and non-fair variants, and the default is the non-fair lock.
A fair lock ends its release by writing the volatile variable state and begins its acquisition by reading that volatile variable, so by the volatile happens-before rule, the writes made by the releasing thread are visible to the acquiring thread.
A non-fair lock acquires by calling compareAndSetState (CAS), which eventually reaches a native method whose intrinsic guarantees give CAS the memory semantics of both a volatile read and a volatile write.
Summary of fair and non-fair locks:
both fair and non-fair locks end their release by writing the volatile variable state;
a fair lock begins its acquisition by reading that volatile variable;
a non-fair lock instead first updates the volatile variable with CAS, which carries the memory semantics of both a volatile read and a volatile write.
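A hedged sketch using ReentrantLock (class and method names are illustrative): whether the lock is fair or non-fair, release ends with a volatile write of state and acquisition begins with a volatile read or a CAS, so both variants transfer memory the same way; fairness only changes queueing order:

```java
import java.util.concurrent.locks.ReentrantLock;

public class ReentrantLockDemo {
    static final ReentrantLock FAIR = new ReentrantLock(true); // fair lock
    static final ReentrantLock UNFAIR = new ReentrantLock();   // default: non-fair
    static int count = 0;

    static int run(ReentrantLock lock) {
        count = 0;
        Runnable task = () -> {
            for (int i = 0; i < 50_000; i++) {
                lock.lock();       // fair: volatile read of state first; non-fair: CAS on state
                try { count++; } finally {
                    lock.unlock(); // both variants end by writing the volatile state
                }
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        try { t1.join(); t2.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return count;
    }
}
```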
Final
For reads and writes of final fields, compilers and processors must follow two reordering rules:
1. A write to a final field inside the constructor and the subsequent assignment of the constructed object's reference to another variable cannot be reordered.
2. A first read of a reference to an object containing a final field and the subsequent read of that final field cannot be reordered.
The write reordering rule for final fields prevents the write of a final field from being reordered outside the constructor.
After the final field is written, the compiler inserts a StoreStore barrier before the constructor returns, which prohibits the final-field write from being reordered out of the constructor.
This guarantees that before any other thread holds a reference to the object, the object's final fields have been correctly initialized (its ordinary fields may not have been).
For the read rule, the compiler inserts a LoadLoad barrier before each read of a final field.
This ensures that the reference to the object containing the final field is read before the final field itself; if that reference is not null, the final field of the referenced object has already been correctly initialized.
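These two rules are usually illustrated with the classic example from the JSR-133 materials, lightly adapted here (field values are illustrative):

```java
public class FinalFieldExample {
    final int x;   // final field: its write is fenced inside the constructor
    int y;         // ordinary field: no such guarantee
    static FinalFieldExample obj;

    public FinalFieldExample() {
        x = 3;     // final-field write: cannot be reordered past publication (rule 1)
        y = 4;     // ordinary write: another thread might still see y == 0
    }

    public static void writer() {
        obj = new FinalFieldExample();  // publish the object
    }

    public static int reader() {
        FinalFieldExample o = obj;      // read the reference first (rule 2)
        return (o != null) ? o.x : -1;  // if o is visible, o.x is guaranteed to be 3
    }
}
```

In reader, if o is non-null, rules 1 and 2 together guarantee that o.x is 3, while o.y may still be observed as 0 by a concurrently running thread.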
If the final field is a reference type, one more constraint is added:
a write inside the constructor to a member field of the object that the final field references, and the assignment of the constructed object's reference to a variable outside the constructor, cannot be reordered.
The write reordering rule for final fields can guarantee that before a reference variable becomes visible to another thread, the object it points to has been correctly initialized in the constructor. But this also requires a further guarantee: the object's reference must not become visible to other threads before the constructor returns; that is, the reference must not escape from the constructor.
Let's look at an example:
public class FinalReferenceEscapeExample {
    final int i;
    static FinalReferenceEscapeExample obj;

    public FinalReferenceEscapeExample() {
        i = 1;       // 1: write the final field
        obj = this;  // 2: the "this" reference escapes
    }

    public static void writer() {
        new FinalReferenceEscapeExample();
    }

    public static void reader() {
        if (obj != null) {     // 3
            int temp = obj.i;  // 4
        }
    }
}
Thread A executes the writer method and thread B executes the reader method.
Operation 2 makes the object visible to thread B before construction completes, so operations 1 and 2 may be reordered, and thread B may then fail to read the correct value of the final field.
How final is implemented
Summing up what was said above:
The write reordering rule for final fields requires the compiler to insert a StoreStore barrier after the final-field write, before the constructor returns.