What we talk about when we talk about the JMM (Java Memory Model)

Tags: volatile

In the previous few posts we discussed the usage and the underlying implementation of synchronized, final, and volatile, all without touching the topic that underlies them: the Java memory model (JMM). The Java memory model is the foundation of thread safety in Java; it mainly describes the atomicity, visibility, and ordering constraints that synchronization actions place on a program when different threads access shared variables.
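As a minimal, hedged illustration of why atomicity and visibility matter (the class below is invented for this post, not taken from the JMM specification): an unsynchronized count++ is a read-modify-write with no atomicity or visibility guarantee, so two threads usually lose updates.

    public class CounterRaceSketch {
        static int count = 0; // shared field with no synchronization at all

        public static void main(String[] args) throws InterruptedException {
            Runnable task = () -> {
                for (int i = 0; i < 100_000; i++) {
                    count++; // read-modify-write: not atomic, and the write may
                             // not be promptly visible to the other thread
                }
            };
            Thread t1 = new Thread(task);
            Thread t2 = new Thread(task);
            t1.start();
            t2.start();
            t1.join();
            t2.join();
            // Typically prints less than 200000 because increments are lost to the race.
            System.out.println(count);
        }
    }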

1. Definition

  Wikipedia defines it as follows: "The Java memory model describes how threads in the Java programming language interact through memory. Together with the description of single-threaded execution of code, the memory model provides the semantics of the Java programming language."

The main idea is that the Java memory model describes how multiple Java threads interact through memory; together with the semantics of single-threaded execution, it gives multithreaded Java programs a reasonable, well-defined meaning.

The JSR-133 specification was developed by the JSR-133 Expert Group and was first implemented in Java 5.0. It describes the semantics of multithreading and memory interaction in detail, became part of the Java specification, corrected the errors and ambiguities in the original semantics, and preserved Java's cross-platform guarantees.

JSR-133 defines the memory model as follows:

Given a program and a sequence of execution traces for that program, the memory model describes whether each trace is a legal execution of the program. For Java, the memory model examines each read in the trace and checks, according to certain rules, that the write observed by that read is legal. The memory model describes the possible behaviors of a program; a JVM implementation is free to generate whatever code it likes, as long as every final result of executing the program can be predicted by the memory model. This gives implementations ample freedom for a large number of code transformations, including reordering of actions and removal of unnecessary synchronization.
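As one hedged example of "removal of unnecessary synchronization" (whether the lock is actually removed depends on the JIT compiler, for instance HotSpot's escape analysis; the method here is made up for illustration):

    public class LockElisionSketch {
        // The StringBuffer never escapes this method, so its monitor can never be
        // contended. A JIT compiler that proves this is allowed by the memory model
        // to drop the synchronization entirely: the observable results are unchanged.
        static String concat(String a, String b) {
            StringBuffer sb = new StringBuffer(); // StringBuffer methods are synchronized
            sb.append(a);                         // each append() locks sb
            sb.append(b);                         // ...but no other thread can ever see sb
            return sb.toString();
        }

        public static void main(String[] args) {
            System.out.println(concat("Hello, ", "JMM"));
        }
    }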

An informal, high-level view of the memory model is that it is a set of rules specifying when one thread's write becomes visible to another thread. In plain terms, a read R may see the value written by a write W provided that W does not happen after R and that W does not appear, from R's point of view, to have been overwritten by another write W'.
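A minimal sketch of that rule (the class and field names are invented for illustration): with no synchronization there is no happens-before edge from either write to the read, so the read below may legally observe 0, 1, or 2, but never a value that was not written.

    public class VisibilitySketch {
        static int x = 0; // plain (non-volatile) shared field

        public static void main(String[] args) {
            Thread writer = new Thread(() -> {
                x = 1;   // write W
                x = 2;   // write W', which overwrites W in the writer's program order
            });
            Thread reader = new Thread(() -> {
                int r = x; // read R: no synchronization, so the JMM allows r to be
                           // 0, 1, or 2 -- and nothing else
                System.out.println("r = " + r);
            });
            writer.start();
            reader.start();
        }
    }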

When the word read is used in this specification, it refers only to actions that read a field or an array element. The semantics of other operations, such as reading the length of an array, performing a checked cast, or invoking a virtual method, are not directly affected by data races. It is the JVM's responsibility to ensure that a data race cannot cause incorrect behavior such as returning the wrong length for an array or having a virtual method invocation cause a segmentation fault.

The memory semantics determine the values that can be read at each point in the program. The actions of each individual thread must appear to be governed by that thread's own semantics, except that the values seen by its reads are determined by the memory model. When we refer to this, we say that the program obeys intra-thread semantics.
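A hedged sketch of what intra-thread semantics do and do not promise (class and field names are invented): the writer thread can never observe a difference if its two independent stores are reordered, but an unsynchronized observer in another thread might.

    public class IntraThreadSketch {
        static int a = 0;
        static int b = 0;

        public static void main(String[] args) {
            Thread writer = new Thread(() -> {
                a = 1; // two independent stores: within this thread the outcome is the
                b = 1; // same in either order, so intra-thread semantics are preserved
                       // even if the compiler or CPU reorders them
            });
            Thread observer = new Thread(() -> {
                int rb = b;
                int ra = a;
                // With no synchronization, (rb == 1 && ra == 0) is a legal outcome:
                // a reordering invisible inside the writer can be visible from outside.
                System.out.println("ra=" + ra + " rb=" + rb);
            });
            writer.start();
            observer.start();
        }
    }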

2. An approximate model of the JMM

To make the JMM easier to understand, JSR-133 proposes an approximate model, the happens-before memory model, which states conditions that are necessary but not sufficient compared with the precise formal definition. First we need to define synchronization actions: locking and unlocking a monitor, reads and writes of volatile variables, actions that start a thread, and actions that detect whether a thread has terminated. Synchronization actions give rise to synchronizes-with edges, which can be understood as ordering barriers between the actions they connect. The rules are the following (a code sketch follows the list):

    1. An unlock action on a monitor m synchronizes-with all subsequent lock actions on m (where "subsequent" is defined by the synchronization order).
    2. A write to a volatile variable v synchronizes-with all subsequent reads of v by any thread (again, "subsequent" according to the synchronization order).
    3. The action that starts a thread synchronizes-with the first action of the newly started thread.
    4. The final action of a thread T1 synchronizes-with any action in another thread T2 that detects that T1 has terminated. T2 may do this by calling T1.isAlive() or by performing a join on T1.
    5. If thread T1 interrupts thread T2, the interrupt by T1 synchronizes-with any point at which any other thread (including T2) determines that T2 has been interrupted. This may be observed by an InterruptedException being thrown or by invoking Thread.interrupted or Thread.isInterrupted.
    6. The write of the default value (0, false, or null) to each variable synchronizes-with the first action in every thread.
    7. Although it may seem strange to write a default value to a variable before the object that contains it is allocated, conceptually every object is created at the start of the program with its default initial values. Consequently, the default initialization of any object happens-before any other action in the program (other than default-value writes).
    8. When the finalize method of an object is invoked, the object's reference is implicitly read. There is a happens-before edge from the end of the object's constructor to that read. Note that all freezes of the object's final fields happen-before the start of that happens-before edge.
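A small sketch of rules 2, 3, and 4 (the class and field names here are invented): the volatile write in the writer synchronizes-with the later volatile read in the main thread, and start()/join() contribute the edges of rules 3 and 4, so once the loop sees ready == true it is guaranteed to see data == 42.

    public class SynchronizesWithSketch {
        static int data = 0;                   // plain field, published via the volatile flag
        static volatile boolean ready = false;

        public static void main(String[] args) throws InterruptedException {
            Thread writer = new Thread(() -> {
                data = 42;      // ordinary write
                ready = true;   // volatile write: synchronizes-with later reads of ready (rule 2)
            });

            writer.start();     // start() synchronizes-with the writer's first action (rule 3)

            while (!ready) {
                // spin: volatile read of ready; once it returns true, the earlier
                // write to data happens-before this point and must be visible
            }
            System.out.println(data);   // always prints 42

            writer.join();      // the writer's last action synchronizes-with this join (rule 4)
        }
    }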

Let us elaborate on these rules. The first says that locking and unlocking a monitor (the mechanism underlying synchronized) form a synchronization relationship. Two points follow: the same thread may acquire a lock it already holds on a monitor again, as long as it releases it the same number of times (reentrancy), and a thread that tries to lock a monitor held by another thread blocks and is suspended. The second says that a write to a volatile variable is immediately reflected in subsequent reads of that variable by other threads; this is generally implemented through a cache-coherence protocol: the write flushes the value from the writing core's cache to main memory and invalidates the corresponding cache lines of other cores, so their next read fetches the latest value. The implementation of these two rules was described in more detail in the earlier articles. The third and fourth say that everything a thread does is ordered after the action that started it, and that a thread's last action is ordered before any action that detects its termination (such as a join); these orderings must not be broken by reordering. The fifth relates one thread's interrupt of another thread to any detection of that interrupt, so the interrupt and its detection cannot appear to race past each other. The sixth and seventh place the write of a variable's default value before the first action of every thread and before any other action on the object; in practice a compiler may optimize the default-value write away, as long as the default value is never observed before the variable is assigned. The eighth concerns fields initialized in the constructor before the object reference is returned: only final fields are guaranteed to have their initializing writes visible by the time the reference is published; non-final fields carry no such guarantee.
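A hedged sketch of that last point about final fields (the class is invented for illustration): even if Holder is published through a data race, another thread that sees the reference is guaranteed to see f == 42, whereas it may legally observe nf == 0.

    public class FinalFieldSketch {
        static class Holder {
            final int f;   // final field: its constructor write is frozen before the
            int nf;        // reference is published; the non-final field is not

            Holder() {
                f = 42;
                nf = 42;
            }
        }

        static Holder holder; // published with no synchronization (a data race)

        public static void main(String[] args) {
            new Thread(() -> holder = new Holder()).start();

            new Thread(() -> {
                Holder h = holder;
                if (h != null) {
                    // h.f is guaranteed to be 42; h.nf may legally be observed as 0.
                    System.out.println("f=" + h.f + " nf=" + h.nf);
                }
            }).start();
        }
    }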

The total order over all synchronization actions is called the synchronization order, and the synchronizes-with edges together with program order form the happens-before order; this is the happens-before memory model. The Java memory model is stricter: its legal executions are a subset of those allowed by the happens-before memory model, because the happens-before model on its own can violate causality, and its most serious weakness is that it allows values to appear "out of thin air". For the formal specification of the Java memory model, refer to chapter 7 of JSR-133 (which I also find heavy going).
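The classic textbook illustration of an out-of-thin-air value, sketched below (the class name is invented; the forbidden outcome described in the comments is hypothetical, not something any correct JVM will produce):

    public class OutOfThinAirSketch {
        static int x = 0;
        static int y = 0;

        public static void main(String[] args) throws InterruptedException {
            Thread t1 = new Thread(() -> {
                int r1 = x;   // read x
                y = r1;       // copy it into y
            });
            Thread t2 = new Thread(() -> {
                int r2 = y;   // read y
                x = r2;       // copy it into x
            });
            t1.start();
            t2.start();
            t1.join();
            t2.join();
            // Under the bare happens-before model, r1 == r2 == 42 could be "justified"
            // circularly: each read sees a value that was written only because of what
            // that read itself saw. Since 42 appears nowhere in the program, it would
            // come out of thin air. The JMM's causality requirement forbids such
            // executions, so the only value either thread can read here is 0.
            System.out.println("x=" + x + " y=" + y);
        }
    }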

3. Summary

In general, Java was the first mainstream language to define a formal memory model, which has played a significant role in keeping Java's semantics complete and robust; C++ and other languages have since adopted similar memory models for their lock-based multithreaded concurrency. Understanding the JMM gives us a deeper grasp of how Java threads execute in parallel and interact with memory, and thus helps us write robust and efficient concurrent programs.

  
