Suppose a thread assigns a value to a variable: variable = 3;
The memory model must answer one question: "Under what conditions will a thread that reads variable see the value 3?"
The answer may seem obvious, but in the absence of synchronization there are many reasons why a thread might fail to see another thread's result immediately, or ever. For example:
1. The compiler may generate instructions in an order different from the order in the source code, and may keep variables in registers instead of memory;
2. The processor may execute instructions out of order or in parallel;
3. Caches may change the order in which writes to variables are committed to main memory;
4. Values held in a processor's local cache may not be visible to other processors.
These factors can prevent a thread from seeing the most up-to-date value of a variable, and can make memory operations performed by other threads appear to execute out of order.
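To make this concrete, here is a minimal sketch (the class and field names are illustrative, not taken from the text above): without synchronization, the reader thread may never observe the writer's update to the flag, or may see the flag set but still read a stale value of the number.

public class StaleReadDemo {
    static boolean ready;   // deliberately not volatile, no synchronization
    static int number;

    public static void main(String[] args) {
        Thread reader = new Thread(new Runnable() {
            public void run() {
                while (!ready) {
                    Thread.yield();           // may loop forever: the write to ready may never become visible
                }
                System.out.println(number);   // may print 0 instead of 42: the writes may appear out of order
            }
        });
        reader.start();
        number = 42;
        ready = true;
    }
}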
The Java Language Specification requires the JVM to maintain within-thread as-if-serial semantics: all of the above optimizations are allowed as long as the program produces the same result as it would if executed in a strictly serial environment.
This is a good thing, because much of the performance improvement of computers in recent years comes precisely from these reordering techniques.
In a single-threaded environment these low-level techniques are invisible; their only observable effect is to make the program run faster.
In a multithreaded environment, however, enforcing full serialization would incur a significant performance cost. Threads in a concurrent application spend most of their time working on their own tasks, so forcing coordination between them would only slow the application down for no benefit. Only when threads share data must their actions be coordinated, and the JVM relies on synchronization operations to tell it where that coordination is required.
The Java memory model specifies a minimal set of guarantees about when writes to variables become visible to other threads. It was designed as a tradeoff between predictability and ease of programming on one hand, and the ability to implement high-performance JVMs on a wide range of mainstream processor architectures on the other.
The memory model of the platform
In a shared-memory multiprocessor architecture, each processor has its own cache, which is periodically reconciled with main memory.
Different processor architectures provide different degrees of cache coherence; some provide only minimal guarantees and allow different processors to see different values for the same memory location at essentially any time.
Ensuring that every processor knows what every other processor is doing at all times would be very expensive. Most of the time that information is not needed, so processors relax their memory consistency guarantees in exchange for better performance.
An architecture's memory model tells programs what guarantees they can expect from the memory system, and defines special instructions (called memory fences, or barriers) that provide additional memory coordination guarantees when data is shared. So that Java developers need not concern themselves with the differences between memory models on different architectures, Java provides its own memory model, and the JVM bridges the gap between the Java memory model and the underlying platform's memory model by inserting memory fences at the appropriate places.
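Java code almost never places fences by hand: synchronized, volatile, and the java.util.concurrent classes cause the JVM to emit them. Purely to illustrate what a hand-placed fence looks like, the sketch below (not from the text above; the class and method names are made up) uses the explicit fence methods exposed by java.lang.invoke.VarHandle in Java 9 and later.

import java.lang.invoke.VarHandle;

public class FenceSketch {
    static int data;
    static boolean ready;

    static void writer() {
        data = 42;
        VarHandle.releaseFence();   // keeps the write to data from being reordered after the write to ready
        ready = true;
    }

    static void reader() {
        if (ready) {
            VarHandle.acquireFence();      // keeps the read of data from being reordered before the read of ready
            System.out.println(data);
        }
    }
}

In practice, declaring ready as volatile achieves the same ordering and visibility effect without explicit fences.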
Imagine that there were a single, global order in which all operations in the program execute, regardless of which processor runs them, and that every read of a variable returned the value most recently written to it in that order, no matter which processor performed the write.
This idealized model is called sequential consistency, and developers often mistakenly assume it holds; but no modern multiprocessor architecture provides sequential consistency, and neither does the JVM.
On shared-memory multiprocessors, and with most compilers, surprising things can happen when data is shared across threads, unless memory fences are used to prevent them. In a Java program, however, you never need to specify where the fences go; you only need to identify where shared state is accessed and use synchronization correctly at those points.
Reordering

/**
 * @author 83921
 * In the absence of proper synchronization it is hard to reason about even the simplest concurrent program. In this example,
 * it is easy to imagine how the output could be (1,0), (0,1), or (1,1): T1 could run to completion before T2 starts, T2 could finish before T1 starts, or their actions could be interleaved.
 * Strangely, though, the program can also print (0,0). Because there is no data-flow dependency between the two actions within each thread, those actions may be executed out of order;
 * and even if they execute in order, the timing of cache flushes to main memory can have the same effect, so that T1's assignments appear to happen in the opposite order from T2's point of view.
 * One such apparent execution order, as seen from T2, is [x = b, b = 1, y = a, a = 1].
 * Even for this trivial example it is difficult to enumerate all possible results; reordering at the memory level makes the program's behavior hard to predict.
 * Fortunately, it is much easier to ensure that synchronization is used correctly: synchronization restricts how the compiler, the runtime, and the hardware may reorder memory operations, so that reordering cannot break the visibility guarantees provided by the JVM.
 */
public class Demo {
    static int x = 0, y = 0;
    static int a = 0, b = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(new Runnable() {
            public void run() {
                a = 1;
                x = b;
            }
        });
        Thread t2 = new Thread(new Runnable() {
            public void run() {
                b = 1;
                y = a;
            }
        });
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(x + "--" + y);
    }
}

The Java memory model
The Java memory model is defined in terms of actions, which include reads and writes of variables, locking and unlocking of monitors, and starting and joining threads.
The Java memory model defines a partial ordering over all actions in a program, called happens-before. To guarantee that the thread executing action B sees the result of action A (whether or not A and B occur in the same thread), there must be a happens-before relationship between A and B. If it is absent, the JVM is free to reorder them arbitrarily.
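As a minimal sketch of how such an edge is commonly established (the class and field names here are illustrative): a write to a volatile field happens-before every subsequent read of that field, so a reader that observes the flag is also guaranteed to see the ordinary write that preceded it in the writer thread.

public class HappensBeforeSketch {
    static int data;
    static volatile boolean ready;

    static void writer() {
        data = 42;      // ordinary write
        ready = true;   // volatile write: happens-before any later read that sees true
    }

    static void reader() {
        if (ready) {                      // volatile read
            System.out.println(data);     // guaranteed to print 42
        }
    }
}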