This article introduces the JMM and its abstract model. It is not just an introduction: it also explains why the JMM's memory model can be abstracted the way it is. It covers:
First, the concept of the JMM;
Second, how the JMM abstraction divides memory into thread-private local memory and main memory shared by all threads;
Third, how the JMM abstract model gives rise to the memory-visibility problem for shared variables in concurrent programming: why does the problem arise, what are the benefits of choosing this abstraction, and what are the ways to deal with the problem?
I. JMM
JMM literally translates to "Java Memory Model". A fuller description: "The JMM is a language-level memory model. By shielding the differences between system platforms, it presents programmers with a consistent memory model across platforms. It describes the abstract process of accessing shared variables in memory, and it specifies when the result of one thread's operation on a shared variable becomes visible to other threads."
II. The JMM abstraction
Java's concurrency model is based on shared memory: threads communicate through shared state in a common memory, which at bottom means read and write operations on shared variables.
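A minimal sketch of this communication style (the class and field names are my own, not from the article): one thread writes a shared field, and the main thread reads it after the worker finishes.

```java
// Threads communicating implicitly through shared memory: the worker writes
// a shared field, and the main thread later reads that same field.
// Thread.join() establishes a happens-before edge, so the write is visible.
public class SharedStateDemo {
    static int sharedCounter = 0; // shared state lives on the heap (main memory)

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> sharedCounter = 10); // write shared state
        worker.start();
        worker.join(); // after join(), the worker's write is guaranteed visible
        System.out.println(sharedCounter); // prints 10
    }
}
```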
Beyond the concurrency model just described, two clues lead to the Java memory model's abstraction. The first is the "speed clue": from fast to slow, the hierarchy runs CPU > registers > cache > main memory > disk cache > disk. The cache exists to mitigate the speed gap between registers and main memory, just as the disk cache mitigates the gap between main memory and disk. The second is the "bus transaction clue": in a multi-CPU system, at any given moment only one CPU can operate on memory; the competition and communication between the CPUs and memory are described in terms of bus transactions (including read transactions and write transactions).
Based on these two clues (the details are covered in the next section), the abstraction of the Java memory model can be described as follows: each running thread has its own private local memory, and all threads share main memory. A thread reads a shared variable from its local memory first; if it is not there, the thread loads it from main memory into local memory. A thread writes to its local memory first, and the value is written back to main memory at an appropriate time. This is what the JMM abstract model looks like.
The above derives the JMM abstract model from the shared-memory concurrency model, the speed clue, and the bus transaction clue. Some points are still unclear, though, and need the more detailed explanations below.
III. More details
Why does the JMM abstraction divide memory into thread-private local memory and main memory shared by all threads?
1. Correspondence between the system model and the JMM
Before going into more detail, let us map the hardware hierarchy depicted in the speed clue onto the JMM abstract model; in fact, the JMM abstraction is grounded in exactly this correspondence. Some background in computer organization helps in understanding this part.
2. Concurrency issues caused by local memory
The first premise to make clear: Java's concurrency model communicates by reading and writing shared variables in memory, so threads can communicate with one another only through main memory. In other words, the JMM provides programmers with memory-visibility guarantees by controlling the interaction between each thread's local memory and main memory. Given this, couldn't the JMM skip local memory entirely? Having threads operate directly on main memory would make communication simpler and more reliable.

The introduction of local memory can indeed lead to bad results. Consider threads A and B operating concurrently on an object with two shared variables, a and b (in practice, externally accessible instance variables). When the object is initialized, a and b get their default values, i.e. a = b = 0, in heap memory (main memory). Before the operations start, each thread takes copies of a and b into its local memory. When thread A executes a = 1, the 1 is assigned to a in A's local memory, not to a in main memory; likewise b = 2 only assigns 2 to b in B's local memory and is not written back to main memory. If at this point the assignments to x and y fetch their values from main memory, one possible execution result is a = 1, b = 2, x = 0, y = 0.
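The scenario above can be sketched in code. This is a hypothetical demo class (names are my own); thread A runs a = 1 then x = b, thread B runs b = 2 then y = a. Without synchronization, the JMM permits both threads to read the stale value 0, so x = 0, y = 0 is a legal outcome, though on a given run you may well observe other permitted outcomes instead.

```java
// The classic visibility example from the text: each thread writes one shared
// variable (initially only to its local memory, per the JMM abstraction) and
// then reads the variable written by the other thread. With no synchronization
// the JMM allows x == 0 and y == 0.
public class VisibilityDemo {
    static int a = 0, b = 0; // shared variables, default-initialized in main memory
    static int x, y;         // what each thread observed

    public static void main(String[] args) throws InterruptedException {
        Thread threadA = new Thread(() -> { a = 1; x = b; });
        Thread threadB = new Thread(() -> { b = 2; y = a; });
        threadA.start();
        threadB.start();
        threadA.join();
        threadB.join();
        // join() gives the main thread happens-before edges to both writes,
        // so a == 1 and b == 2 are guaranteed HERE; but x may be 0 or 2 and
        // y may be 0 or 1, depending on what each thread saw mid-run.
        System.out.println("a=" + a + " b=" + b + " x=" + x + " y=" + y);
    }
}
```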
If threads operated directly on main memory, this result could not occur: at any moment only one CPU holds the bus transaction, which means only one thread at a time can manipulate shared variables. But who could endure the execution speed of code built on such a model? To see why, recall from the "speed clue" that the cache alleviates the speed mismatch between main memory and registers. The "mitigation" works by keeping content destined for main memory in the cache and flushing it to memory in batches once enough has accumulated (at which point it becomes visible to other threads). The benefit is analogous to the speedup that buffered streams give over unbuffered ones in traditional Java IO. In terms of the "bus transaction clue", flushing to main memory in batches reduces the number of competing bus transactions and the time the bus is occupied, and thereby yields higher performance.
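The buffered-stream analogy the text draws can be shown directly (a small illustration of my own, not from the article): bytes written to a BufferedOutputStream sit in its buffer, invisible to the underlying sink, until a flush writes them through in one batch, much like cached writes becoming visible in main memory.

```java
import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

// Buffered IO as an analogy for the cache: writes accumulate in the buffer
// ("cache") and only reach the sink ("main memory") on flush.
public class BufferAnalogy {
    static int[] run() throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();          // "main memory"
        BufferedOutputStream buffered = new BufferedOutputStream(sink, 1024); // "cache"

        buffered.write("hello".getBytes());
        int before = sink.size(); // 0: the 5 bytes still sit in the buffer
        buffered.flush();
        int after = sink.size();  // 5: the flush wrote them through in one batch
        return new int[] { before, after };
    }

    public static void main(String[] args) throws IOException {
        int[] sizes = run();
        System.out.println("before flush: " + sizes[0]); // prints 0
        System.out.println("after flush: " + sizes[1]);  // prints 5
    }
}
```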
3. Problem solving
How do we solve it? That is the point of synchronization in multithreading. What does synchronization do? It guarantees atomicity, mutual exclusion, memory visibility, and ordering (no reordering). There is a lot to say about synchronization beyond the simple use of synchronized or Lock; what matters more is understanding the memory semantics that synchronized and locks express. That part will take a separate article.
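As a small taste of the memory-visibility guarantee mentioned above (a sketch of my own, not the article's example), one tool is the volatile keyword: a volatile write is flushed to main memory, a volatile read goes to main memory, and everything written before the volatile write becomes visible to a thread that observes it.

```java
// One way to restore visibility: a volatile flag. The write to 'value'
// happens-before the volatile write to 'ready', so once the reader sees
// ready == true it is guaranteed to see value == 42.
public class VolatileFlag {
    static volatile boolean ready = false;
    static int value = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!ready) { /* spin until the writer's flush becomes visible */ }
            System.out.println(value); // guaranteed to print 42
        });
        reader.start();
        value = 42;   // ordinary write, published by the volatile write below
        ready = true; // volatile write: flushes to main memory
        reader.join();
    }
}
```

Note that synchronized blocks and java.util.concurrent locks provide the same visibility guarantee plus mutual exclusion; volatile alone gives visibility and ordering but not atomicity for compound actions.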
IV. Supplementary
This article covers only some simple introductory material on Java concurrent programming; deeper topics are not touched on here and will be filled in gradually over time.
Note:
If there are mistakes in this article, please do not hesitate to point them out. Thank you!
Copyright notice: this is the blogger's original article; please do not reproduce it without the blogger's permission.
"Concurrent Programming" Jmm:java memory model abstraction