Overview
The main goal of the Java memory model (JMM) is to define the access rules for every variable in a program, that is, the low-level details of how the virtual machine stores variables into memory and reads them back out. The variables in question are shared between threads, so there is potential contention over them.
The JMM specifies that all variables are stored in main memory, and that each thread has its own working memory holding copies of the variables the thread uses. All of a thread's operations on a variable (reads, assignments, and so on) must be performed in working memory; a thread cannot read or write variables in main memory directly. (Note: the "working memory" or "local memory" here is an abstraction; physically it covers registers, caches, and write buffers.)
Threads cannot access variables in each other's working memory directly, so passing a variable's value from one thread to another must go through main memory. That is the interaction between the three parties: thread, working memory, and main memory.
How to Interact
So how do working memory and main memory interact? The JMM defines eight operations for this, and the virtual machine guarantees that each of them is atomic.
Lock: acts on a variable in main memory; marks the variable as exclusively owned by one thread.
Unlock: acts on a variable in main memory; releases a variable that is in the locked state, so that it can then be locked by other threads.
Read: acts on a variable in main memory; transfers the variable's value from main memory to the thread's working memory.
Load: acts on a variable in working memory; puts the value obtained by the read operation into the working-memory copy of the variable.
Use: acts on a variable in working memory; passes the variable's value from working memory to the execution engine. Performed whenever the virtual machine encounters a bytecode instruction that needs the variable's value.
Assign: acts on a variable in working memory; assigns a value received from the execution engine to the variable in working memory. Performed whenever the virtual machine encounters a bytecode instruction that assigns to the variable.
Store: acts on a variable in working memory; transfers the variable's value from working memory to main memory.
Write: acts on a variable in main memory; puts the value obtained by the store operation into the variable in main memory.
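To make the decomposition concrete, here is a small sketch (the class and method names are made up): an unsynchronized `count++` is not one JMM operation but a read/load, use, assign, store/write sequence per thread, so concurrent increments can be lost:

```java
// Demonstrates lost updates: count++ decomposes into
// read/load -> use -> assign -> store/write per thread, so two threads
// can each load the same value and one increment is lost.
public class LostUpdateDemo {
    static int count; // shared, no volatile, no lock

    // Runs two unsynchronized incrementing threads and returns the final count.
    static int runOnce() {
        count = 0;
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                count++; // read/load count, use it, assign count+1, store/write back
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        try {
            t1.join(); t2.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return count;
    }

    public static void main(String[] args) {
        // With perfect interleaving the result is 200000, but lost updates
        // frequently make it smaller.
        System.out.println("count = " + runOnce());
    }
}
```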
Reordering
The flow of communication between threads can be broadly seen from the above, but when a program executes, the compiler and the processor tend to reorder instructions, so the program is not necessarily executed exactly as written. This is done to improve performance and increase instruction-level parallelism. For multi-threaded programs, however, reordering can change the program's results, so we need to synchronize through synchronized, volatile, and similar means.
Memory barrier
Because of reordering, the Java compiler inserts memory barrier instructions into the generated instruction sequence to suppress particular types of processor reordering and so ensure memory visibility. The JMM divides memory barrier instructions into the following four classes:
| Barrier type | Instruction example | Description |
| --- | --- | --- |
| LoadLoad barrier | Load1; LoadLoad; Load2 | Ensures that Load1's data is loaded before Load2 and all subsequent load instructions. |
| StoreStore barrier | Store1; StoreStore; Store2 | Ensures that Store1's data is visible to other processors (flushed to memory) before Store2 and all subsequent store instructions. |
| LoadStore barrier | Load1; LoadStore; Store2 | Ensures that Load1's data is loaded before Store2 and all subsequent store instructions are flushed to memory. |
| StoreLoad barrier | Store1; StoreLoad; Load2 | Ensures that Store1's data becomes visible to other processors (flushed to memory) before Load2 and all subsequent load instructions. A StoreLoad barrier makes all memory-access instructions (loads and stores) before the barrier complete before any memory-access instruction after it. |
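These barriers are not exposed in the Java language itself, but since Java 9 the `java.lang.invoke.VarHandle` class provides fence methods that correspond roughly to these categories (a sketch; how each fence maps to hardware instructions is up to the JIT, and the class name `FenceDemo` is made up):

```java
import java.lang.invoke.VarHandle;

public class FenceDemo {
    static int a;
    static int b;

    // Performs two stores separated by a StoreStore fence and two loads
    // separated by a LoadLoad fence, mirroring the table's instruction examples.
    static String run() {
        a = 1;
        VarHandle.storeStoreFence(); // Store1; StoreStore; Store2
        b = 1;

        int r1 = b;
        VarHandle.loadLoadFence();   // Load1; LoadLoad; Load2
        int r2 = a;

        VarHandle.fullFence();       // strongest fence; includes StoreLoad ordering
        return r1 + " " + r2;
    }

    public static void main(String[] args) {
        System.out.println(run()); // prints "1 1"
    }
}
```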
Happens-before
In the JMM, if the result of one operation needs to be visible to another operation, there must be a happens-before relationship between them; the two operations may be in the same thread or in different threads.
The happens-before rules are as follows:
1. Program order rule: each action in a thread happens-before every subsequent action in that thread.
2. Monitor lock rule: an unlock of a monitor happens-before every subsequent lock of that same monitor.
3. Volatile rule: a write to a volatile field happens-before every subsequent read of that volatile field.
4. Transitivity: if A happens-before B, and B happens-before C, then A happens-before C.
Happens-before does not mean that the first operation must execute earlier in wall-clock time; it means that the result of the first operation is visible to the second, and that the outcome is as if the two executed in that order.
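A minimal sketch combining rules 1, 3, and 4 (the class and field names are invented): the write to `a` happens-before the volatile write to `flag`, which happens-before the volatile read of `flag`, which happens-before the read of `a`, so by transitivity the reader is guaranteed to see `a == 1`:

```java
public class HappensBeforeDemo {
    static int a = 0;
    static volatile boolean flag = false;

    static void writer() {
        a = 1;       // 1: ordinary write
        flag = true; // 2: volatile write; happens-before any later volatile read of flag
    }

    static int reader() {
        while (!flag) {          // 3: volatile read; spin until the write is visible
            Thread.onSpinWait();
        }
        return a;                // 4: guaranteed to see a == 1 by transitivity
    }

    public static void main(String[] args) {
        new Thread(HappensBeforeDemo::writer).start();
        System.out.println(reader()); // always prints 1
    }
}
```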
Data dependencies
If two operations access the same variable and at least one of them is a write, there is a data dependency between the two operations.
There are three types of data dependency:
Write then read: a = 1; b = a;  (read a variable after writing it)
Write then write: a = 1; a = 2;  (write a variable, then write it again)
Read then write: b = a; a = 1;  (write a variable after reading it)
Data dependencies matter because reordering the two operations would change the result; therefore, when reordering, the compiler and the processor do not change the execution order of two operations that have a data dependency.
As-if-serial semantics:
No matter how instructions are reordered, the execution result of a (single-threaded) program must not change. To obey this semantics, the compiler and processor do not reorder operations that have data dependencies.
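A tiny illustration of as-if-serial (a hypothetical method): the first two statements have no data dependency and may be reordered with each other, but both have a dependency with the final statement, so the observable result cannot change:

```java
public class AsIfSerialDemo {
    static int compute() {
        int a = 1;    // no dependency with the next statement; may be reordered with it
        int b = 2;    // no dependency with the previous statement
        return a + b; // depends on both writes above, so it cannot move before them
    }

    public static void main(String[] args) {
        System.out.println(compute()); // always 3, however the first two lines are ordered
    }
}
```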
Control dependencies:
if (flag)//---1
int i= a*a; -----2
There is a control dependency between operations 1 and 2, so in a multithreaded program, the compiler and the processor start guessing execution
The processor can calculate the result of the a*a in advance, and then put it into a reordered buffered hardware cache, and when the condition of operation 1 is true, writes the result of the calculation to the variable i. So we'll find that two operations are reordered here, so the semantics of multi-threaded threads are broken.
However, for single-threaded purposes, reordering has control-dependent operations that do not alter the execution results, but in multi-threading, reordering has control-dependent operations that may alter the execution result.
Sequential consistency:
When a program uses no synchronization, data races can occur, and results may vary from run to run.
A program that uses synchronization correctly is free of data races, and the JMM guarantees it sequential consistency: the program executes with exactly the same results as it would in the sequential consistency model.
The sequential consistency model is a theoretical reference model with two major characteristics:
All operations in a thread must execute in program order;
Whether or not the program is synchronized, all threads see only a single order of execution. In the sequential consistency model, every operation executes atomically and is immediately visible to all other threads.
Note:
The JMM does not guarantee atomicity for reads and writes of 64-bit long and double variables (without the volatile modifier), while the sequential consistency model guarantees atomicity for all reads and writes.
The reason: on some 32-bit processors, making 64-bit long and double reads and writes atomic would cost considerable overhead, so Java does not insist on it. When the JVM runs on such processors, a 64-bit long/double write may be split into two 32-bit writes, at which point atomicity is no longer guaranteed, and a concurrent read may see a torn value made of half of each write.
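As a hedged sketch (the class name is made up): declaring the field volatile restores atomicity for 64-bit accesses, so a reader can only ever observe one of the values actually written, never a torn mixture of two 32-bit halves:

```java
public class LongTearingDemo {
    // volatile guarantees atomic 64-bit reads and writes even on JVMs that
    // would otherwise split a long write into two 32-bit halves.
    static volatile long value = 0L;

    // Writer alternates between two values whose 32-bit halves all differ;
    // returns true if the reader never observes a torn (mixed) value.
    static boolean run() {
        Thread writer = new Thread(() -> {
            for (int i = 0; i < 100_000; i++) {
                value = (i % 2 == 0) ? 0L : -1L; // 0x0000...0000 vs 0xFFFF...FFFF
            }
        });
        writer.start();
        boolean torn = false;
        for (int i = 0; i < 100_000; i++) {
            long v = value; // atomic read because value is volatile
            if (v != 0L && v != -1L) {
                torn = true; // would indicate a torn 64-bit read
            }
        }
        try {
            writer.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return !torn;
    }

    public static void main(String[] args) {
        System.out.println(run() ? "no torn reads observed" : "torn read!");
    }
}
```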
Volatile
Volatile can be seen as a lightweight form of synchronized: a single read or write of a volatile variable behaves as if that individual operation were synchronized with the same lock.
Let's look at the effects of volatile:
Use volatile:
```java
class VolatileFeaturesExample {
    // declare a 64-bit long variable as volatile
    volatile long vl = 0L;

    public void set(long l) {
        vl = l;    // write of a single volatile variable
    }

    public void getAndIncrement() {
        vl++;      // compound (multiple) volatile read/write
    }

    public long get() {
        return vl; // read of a single volatile variable
    }
}
```
Replacing volatile with synchronized
```java
class VolatileFeaturesExample {
    long vl = 0L; // 64-bit long, ordinary variable

    // write of a single ordinary variable, synchronized on the same lock
    public synchronized void set(long l) {
        vl = l;
    }

    public void getAndIncrement() { // ordinary method call
        long temp = get();          // call the synchronized read method
        temp += 1L;                 // ordinary write operation
        set(temp);                  // call the synchronized write method
    }

    // read of a single ordinary variable, synchronized on the same lock
    public synchronized long get() {
        return vl;
    }
}
```
As we can see, a single read or write of a volatile variable has the same effect as a read or write of an ordinary variable synchronized with one common lock. Notice also that even for 64-bit long and double, as long as the variable is volatile, single reads and writes of it are atomic; but volatile vl++ is a compound operation, so it has no atomicity guarantee.
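Because vl++ is a compound read-modify-write, making the increment atomic requires either the synchronized version above or, more idiomatically, java.util.concurrent.atomic (a sketch with an invented class name):

```java
import java.util.concurrent.atomic.AtomicLong;

public class AtomicIncrementDemo {
    static final AtomicLong vl = new AtomicLong(0L);

    // Two threads each perform 100_000 atomic increments; nothing is lost.
    static long runOnce() {
        vl.set(0L);
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                vl.incrementAndGet(); // atomic read-modify-write, unlike volatile vl++
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        try {
            t1.join(); t2.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return vl.get();
    }

    public static void main(String[] args) {
        System.out.println(runOnce()); // always 200000
    }
}
```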
The nature of volatile (visibility and atomicity)
Visibility: a read of a volatile variable always sees the last write to that variable by any thread; every read returns the latest value.
Atomicity: any single read or write of a volatile variable is atomic (compound operations such as volatile++ are not).
From the visibility guarantee, volatile establishes a happens-before relationship in a real sense: a write to a volatile variable takes effect before any subsequent read of it.
Execution process
When thread A writes a volatile variable, the JMM flushes the variable's value from A's local memory to main memory, so local memory and main memory agree.
When thread B reads that volatile variable, the JMM invalidates B's local memory copy and reads directly from main memory, so the read returns the value just written.
In other words, when thread B reads the volatile variable, everything thread A did before writing it becomes visible to thread B.
Put differently: when thread A writes a volatile variable, it effectively sends a message to any thread B that will subsequently read that variable; when thread B reads the volatile variable, it receives thread A's message. Thread A writing volatile and thread B reading volatile can thus be seen as thread A sending a message to thread B through main memory.
How It Is Implemented
We know about reordering, so the JMM restricts how accesses around a volatile variable may be reordered. The rules are:

| First operation \ Second operation | Ordinary read/write | Volatile read | Volatile write |
| --- | --- | --- | --- |
| Ordinary read/write | | | NO |
| Volatile read | NO | NO | NO |
| Volatile write | | NO | NO |
From the table above:
If the second operation is a volatile write, then no matter what the first operation is, it will not be reordered by the compiler or the processor. This ensures operations before a volatile write are not moved after it.
If the first operation is a volatile read, then no matter what the second operation is, it will not be reordered. This ensures operations after a volatile read are not moved before it.
If the first operation is a volatile write and the second is a volatile read, they cannot be reordered.
To implement these rules, the compiler inserts memory barriers into the instruction sequence around volatile accesses when generating code, suppressing particular kinds of processor reordering. The placement rules are as follows:
A StoreStore barrier is inserted before each volatile write
==> forbids reordering of preceding ordinary writes with the volatile write
A StoreLoad barrier is inserted after each volatile write
==> forbids reordering of the volatile write with subsequent reads/writes
A LoadLoad barrier is inserted after each volatile read
==> forbids reordering of subsequent ordinary reads with the volatile read
A LoadStore barrier is inserted after each volatile read
==> forbids reordering of subsequent ordinary writes with the volatile read
In practice some of these barriers can be omitted; which ones depends heavily on the processor. For example, x86 needs only the StoreLoad barrier, because it only allows write-read (store-load) reordering.
Lock
As mentioned above, locks guarantee mutual exclusion and synchronization.
Locks, like volatile, also establish happens-before relationships.
For example, only after thread A releases a lock can thread B acquire that lock; therefore, the changes thread A made to shared variables before releasing the lock are visible to thread B once it acquires the lock.
Lock and volatile
Volatile only makes reads and writes of a single volatile variable atomic, while a lock can make the execution of an entire critical section atomic. Locks are thus more powerful, whereas volatile has advantages in scalability and performance.
Memory semantics
When thread A releases the lock, the JMM flushes the shared variables in A's local memory to main memory.
When thread B acquires the lock, the JMM invalidates B's local memory and makes it read the shared variables directly from main memory.
Like volatile, a lock has message-passing semantics: when thread A releases the lock, it effectively sends a message to whichever thread will acquire the lock next; when thread B acquires the lock, it receives the message thread A sent. The whole process is the two threads communicating through main memory.
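A minimal sketch of this release/acquire communication using ReentrantLock (the class name is invented): each unlock() by one thread happens-before the next lock() by another, so no counter update is lost:

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockCounterDemo {
    static final ReentrantLock lock = new ReentrantLock();
    static int count;

    // Two threads each perform 100_000 lock-protected increments.
    static int runOnce() {
        count = 0;
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                lock.lock();       // acquire: re-reads shared state from main memory
                try {
                    count++;
                } finally {
                    lock.unlock(); // release: flushes the update to main memory
                }
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        try {
            t1.join(); t2.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(runOnce()); // always 200000
    }
}
```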
Implementation of locks
Locking is usually done through the lock method of ReentrantLock. This lock comes in fair and non-fair variants; the default is non-fair.
A fair lock writes the volatile variable state at the end of releasing the lock, and reads that volatile variable first when acquiring the lock. By the volatile happens-before rule, changes made by the releasing thread before the release are visible to the thread that acquires the lock.
A non-fair lock acquires by calling compareAndSetState (a CAS), which ultimately goes down to a native method; the CAS carries the memory semantics of both a volatile read and a volatile write.
Fair lock and non-fair lock:
Both the fair lock and the non-fair lock write the volatile variable state at the end of the release.
The fair lock reads this volatile variable first when acquiring.
The non-fair lock, when acquiring, first updates the volatile variable with a CAS, which carries the memory semantics of both a volatile read and a volatile write.
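The choice between the two variants is made at construction time (a small sketch; ReentrantLock.isFair() reports which variant was chosen):

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {
    // Returns the fairness of a default lock and an explicitly fair lock.
    static String run() {
        ReentrantLock unfair = new ReentrantLock();   // default: non-fair, acquires via CAS on state
        ReentrantLock fair = new ReentrantLock(true); // fair: FIFO ordering of waiting threads
        return unfair.isFair() + " " + fair.isFair();
    }

    public static void main(String[] args) {
        System.out.println(run()); // prints "false true"
    }
}
```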
Final
For reads and writes of final fields, the compiler and the processor must follow two reordering rules:
1. Writing a final field inside a constructor, and subsequently assigning the reference of the constructed object to another reference, cannot be reordered.
2. Reading a reference to an object that contains a final field for the first time, and subsequently reading that final field, cannot be reordered.
The write rule prevents a final-field write from being reordered out of the constructor: the compiler inserts a StoreStore barrier after the write to the final field, before the constructor returns. This guarantees that whenever any other thread obtains a reference to the object, the object's final fields have already been correctly initialized (its ordinary fields may not be).
For reads, the compiler inserts a LoadLoad barrier before the read of the final field. This ensures that the reference to the object containing the final field is read before the final field itself, and that if the reference is not null, the final fields of the referenced object have been correctly initialized.
If the final field is a reference type, one more constraint is added: writing, inside the constructor, to a member field of the object that the final field references, and assigning the reference of the constructed object to a reference variable outside the constructor, cannot be reordered.
These rules ensure that before a reference variable makes the object visible to other threads, the object's final fields have been correctly initialized in the constructor. This holds only with one additional guarantee: the reference to the object under construction must not be visible to any other thread before the constructor returns. In other words, the object reference must not escape from the constructor.
Let's look at an example:
```java
public class FinalReferenceEscapeExample {
    final int i;
    static FinalReferenceEscapeExample obj;

    public FinalReferenceEscapeExample() {
        i = 1;      // 1: write the final field
        obj = this; // 2: the "this" reference escapes here
    }

    public static void writer() {
        new FinalReferenceEscapeExample();
    }

    public static void reader() {
        if (obj != null) {    // 3
            int temp = obj.i; // 4
        }
    }
}
```
Suppose thread A executes the writer method and thread B executes the reader method. Operation 2 makes the object visible to thread B before construction has finished, and operations 1 and 2 may be reordered, so thread B may fail to read the correct value of the final field.
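A hedged fix, using a hypothetical variant class: publish the reference only after the constructor returns, so operations 1 and 2 can no longer be reordered relative to the publication:

```java
public class FinalSafePublicationExample {
    final int i;
    static FinalSafePublicationExample obj;

    public FinalSafePublicationExample() {
        i = 1; // write the final field; no "this" escape here
    }

    public static void writer() {
        // The reference is assigned only after the constructor has finished,
        // so the StoreStore barrier before the return guarantees i is visible.
        obj = new FinalSafePublicationExample();
    }

    public static void main(String[] args) {
        writer();
        System.out.println(obj.i); // any thread that sees obj sees i == 1
    }
}
```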
How final is implemented
Summarizing the above:
The reordering rule for writing a final field requires the compiler to insert a StoreStore barrier after the write to the final field, before the constructor returns.
The reordering rule for reading a final field requires the compiler to insert a LoadLoad barrier before the read of the final field.
Summary
With respect to happens-before, the JMM divides reorderings into two kinds:
1. Reorderings that would alter the results of program execution: the JMM requires the compiler and the processor to forbid them.
2. Reorderings that do not alter the results of program execution: the JMM places no requirements on the compiler and the processor (such reorderings are allowed).
It can also be said that the JMM follows one principle: as long as the execution results of the program are not changed (for single-threaded programs and correctly synchronized multi-threaded programs), the compiler and the processor may optimize as they see fit.