Java Concurrency Programming (iv) The Java memory model

Source: Internet
Author: User
Tags: volatile

Related articles
Java Concurrent Programming (i) Thread definitions, states, and properties
Java concurrent Programming (ii) synchronization
Java Concurrency Programming (iii) the volatile keyword

Preface

In the previous articles we covered threads, synchronization, and the volatile keyword. Before going further, we need to understand the Java memory model, because communication between Java threads is completely transparent to the engineer, and memory visibility problems can easily confuse engineers. This article mainly discusses the concepts of the Java memory model.

1. Shared memory and message delivery

There are two communication mechanisms between threads: shared memory and message passing. In the shared-memory concurrency model, the program's common state is shared between threads, and threads communicate implicitly by writing and reading that common state in memory. In the message-passing concurrency model, there is no common state between threads; threads must communicate explicitly by sending messages.
Synchronization refers to the mechanism a program uses to control the relative order of operations between different threads. In the shared-memory concurrency model, synchronization is explicit: the engineer must explicitly specify that a method or piece of code requires mutually exclusive execution between threads. In the message-passing concurrency model, synchronization is implicit, because a message must be sent before it can be received.
Java concurrency employs the shared-memory model: communication between Java threads is always implicit, and the entire communication process is completely transparent to the engineer.
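For contrast, the message-passing style described above can be sketched in Java using the standard library's `java.util.concurrent.BlockingQueue` as the channel. This is an illustrative sketch (the class and method names `MessagePassingDemo` and `roundTrip` are hypothetical): no variables are shared between the two threads, only the message itself, and synchronization is implicit because the send must precede the receive.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class MessagePassingDemo {
    // One round trip over a channel; no state is shared between the
    // two threads, only the message itself is transferred.
    static int roundTrip() throws InterruptedException {
        BlockingQueue<Integer> channel = new ArrayBlockingQueue<>(1);
        Thread sender = new Thread(() -> {
            try {
                channel.put(42);              // explicit send
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        sender.start();
        int received = channel.take();        // blocks until the message arrives:
        sender.join();                        // synchronization is implicit
        return received;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(roundTrip());
    }
}
```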

2. Abstraction of the Java memory model

In Java, all instance fields, static fields, and array elements are stored in heap memory, and heap memory is shared between threads (this article uses the term "shared variables" to refer to instance fields, static fields, and array elements). Local variables, method parameters, and exception handler parameters are not shared between threads; they have no memory visibility issues and are not affected by the memory model.
Communication between Java threads is controlled by the Java memory model (JMM for short). JMM determines when one thread's write to a shared variable becomes visible to another thread. From an abstract point of view, JMM defines the relationship between threads and main memory: shared variables are stored in main memory, and each thread has a private local memory that holds a copy of the shared variables the thread reads and writes. Local memory is an abstract concept of JMM and does not physically exist; it covers caches, write buffers, registers, and other hardware and compiler optimizations. The Java memory model is abstracted as follows:

From this abstract view, for thread A and thread B to communicate, the following 2 steps must occur:

    1. Thread A flushes the updated shared variables in local memory A to main memory.
    2. Thread B reads from main memory the shared variables that thread A has updated.
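The two steps above can be made reliable with a volatile flag: the volatile write forces step 1 (flush to main memory) and the volatile read forces step 2 (read from main memory). A minimal sketch, with hypothetical class and field names:

```java
public class MessageExchange {
    static int payload;                 // ordinary shared variable
    static volatile boolean ready;      // volatile flag guarding the payload

    // Runs the two-step exchange once and returns what thread B observed.
    static int exchange() throws InterruptedException {
        payload = 0;
        ready = false;
        final int[] seen = new int[1];
        Thread a = new Thread(() -> {
            payload = 42;   // write in local memory A
            ready = true;   // volatile write: flushes local memory A to main memory
        });
        Thread b = new Thread(() -> {
            while (!ready) { }  // volatile read: forces a load from main memory
            seen[0] = payload;  // guaranteed by JMM to observe 42
        });
        b.start();
        a.start();
        a.join();
        b.join();
        return seen[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(exchange()); // prints 42
    }
}
```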
3. Reordering from source code to instruction sequence

In order to improve performance when executing programs, the compiler and processor often reorder instructions. There are three types of reordering:

    1. Compiler-optimization reordering. The compiler can rearrange the execution order of statements without changing the semantics of a single-threaded program.
    2. Instruction-level parallelism reordering. Modern processors use instruction-level parallelism to overlap the execution of multiple instructions. If there is no data dependence, the processor can change the order in which the machine instructions corresponding to the statements execute.
    3. Memory-system reordering. Because the processor uses caches and read/write buffers, load and store operations can appear to execute out of order.
      From the Java source code to the instruction sequence that finally executes, the code goes through these three kinds of reordering:

Item 1 above is compiler reordering; items 2 and 3 are processor reordering. These reorderings can cause memory visibility problems in multithreaded programs. For compiler reordering, JMM's compiler reordering rules prohibit particular types of compiler reordering (not all compiler reordering is prohibited). For processor reordering, JMM's processor reordering rules require the Java compiler to insert memory barrier instructions of particular types when generating the instruction sequence, using those barriers to prohibit particular types of processor reordering (not all processor reordering is prohibited).
JMM is a language-level memory model. By prohibiting certain types of compiler and processor reordering, it guarantees programmers consistent memory visibility across different compilers and different processor platforms.
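The classic reordering hazard can be illustrated with a small sketch (the class name is illustrative). Here the two methods run in a single thread, so the result is deterministic; under true concurrency, the two independent writes in writer() could be reordered, letting a reader in another thread observe flag == true while a is still 0:

```java
public class ReorderExample {
    int a = 0;
    boolean flag = false;

    void writer() {
        a = 1;          // statement 1
        flag = true;    // statement 2: no data dependence on statement 1,
                        // so the compiler/processor may reorder 1 and 2
    }

    int reader() {
        if (flag) {
            return a;   // under concurrency, may observe 0 if writer() was reordered
        }
        return -1;      // flag not yet visible
    }

    public static void main(String[] args) {
        ReorderExample ex = new ReorderExample();
        ex.writer();                      // single-threaded here, so the
        System.out.println(ex.reader());  // result is deterministically 1
    }
}
```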

4. Introduction to happens-before

Happens-before is the core concept of JMM; for Java engineers, understanding happens-before is the key to understanding JMM.

The design intent of JMM

There are two key factors to consider in the design of JMM:

    1. Engineers who use the memory model want it to be easy to understand and program against; they want to write code based on a strong memory model.
    2. Compiler and processor implementers want the memory model to constrain them as little as possible; they want to implement a weak memory model.

These two factors contradict each other, so the JSR-133 expert group had to find a good balance: on the one hand, provide engineers with sufficient memory visibility guarantees; on the other hand, keep restrictions on the compiler and processor as loose as possible.

Let's give an example:

    int a = 10;     // A
    int b = 20;     // B
    int c = a * b;  // C

The above is a simple multiplication; it contains three happens-before relationships:

    1. A Happens-before B
    2. B Happens-before C
    3. A Happens-before C

Of these three happens-before relationships, 2 and 3 are required, but 1 is unnecessary. Accordingly, JMM divides the reorderings prohibited by happens-before requirements into two categories:

    1. Reordering that will change the program's execution result.
    2. Reordering that will not change the program's execution result.

JMM adopts different strategies for these two kinds of reordering:

    1. For reordering that changes the program's execution result, JMM requires the compiler and processor to prohibit it.
    2. For reordering that does not change the program's execution result, JMM does not require the compiler and processor to prohibit it (such reordering is allowed).
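A small sketch of why reordering A and B is harmless (the class and method names are illustrative): neither write depends on the other, and C still sees both, so both orderings yield the same result. This is exactly the kind of reordering JMM permits.

```java
public class HappensBeforeDemo {
    // Program order: A, B, C.
    static int programOrder() {
        int a = 10;     // A
        int b = 20;     // B
        return a * b;   // C
    }

    // A and B swapped: legal, because neither depends on the other,
    // and relationships 2 and 3 (B before C, A before C) still hold.
    static int reordered() {
        int b = 20;     // B
        int a = 10;     // A
        return a * b;   // C
    }

    public static void main(String[] args) {
        System.out.println(programOrder() == reordered()); // both yield 200
    }
}
```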
The definition and rules of happens-before

JSR-133 uses the concept of happens-before to specify the execution order between two operations; the two operations can be within one thread or in different threads. Through happens-before relationships, JMM provides engineers with cross-thread memory visibility guarantees.

The Happens-before rules are as follows:
1. Program order rule: each action in a thread happens-before any subsequent action in that thread.
2. Monitor lock rule: unlocking a monitor lock happens-before every subsequent locking of that same monitor lock.
3. Volatile variable rule: a write to a volatile field happens-before every subsequent read of that volatile field.
4. Transitivity: if A happens-before B, and B happens-before C, then A happens-before C.
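The monitor lock rule is what makes a plain synchronized counter safe: each unlock happens-before the next lock acquisition, so every thread observes the latest count. A minimal sketch (the class name is illustrative):

```java
public class MonitorLockRule {
    private int count = 0;

    // The unlock at the end of each synchronized method happens-before
    // the next lock acquisition, so increments are never lost and
    // every thread sees the latest value of count.
    public synchronized void increment() { count++; }
    public synchronized int get() { return count; }

    public static void main(String[] args) throws InterruptedException {
        MonitorLockRule c = new MonitorLockRule();
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) c.increment();
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        System.out.println(c.get()); // 4000: no lost updates
    }
}
```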

5. Sequential consistency

The sequentially consistent memory model is a theoretical reference model; when designing a processor memory model or a programming-language memory model, the sequentially consistent memory model is used as a reference.

Data races and sequential consistency

Data races exist when a program is not correctly synchronized. A data race means that one thread writes a variable, another thread reads the same variable, and the write and the read are not ordered by synchronization.
When code contains data races, program execution often produces counterintuitive results. If a multithreaded program is correctly synchronized, it is free of data races.
JMM provides the following guarantee of memory consistency for correctly synchronized multithreaded programs:
If a program is correctly synchronized, its execution will be sequentially consistent; that is, the program's execution result is the same as its execution result in the sequentially consistent memory model. Synchronization here means synchronization in the broad sense, including the correct use of common synchronization primitives (synchronized, volatile, and final).

Sequential Consistency Model

The sequentially consistent memory model is a theoretical reference model idealized by computer scientists; it provides programmers with a very strong guarantee of memory visibility. The sequentially consistent memory model has two major characteristics:

    1. All operations in a thread must execute in program order.
    2. (Regardless of whether the program is synchronized,) all threads see a single, total order of operation execution. In the sequentially consistent memory model, every operation must execute atomically and be immediately visible to all threads.

The sequentially consistent memory model presents the programmer with the following view:

Conceptually, the sequentially consistent model has a single global memory that is connected to each thread by a switch that can swing left and right. At any moment, at most one thread can be connected to memory, and each thread must perform its memory read/write operations in program order. When multiple threads execute concurrently, the switch serializes all memory read/write operations of all threads.

In the sequentially consistent memory model, every operation must be immediately visible to every thread, but JMM provides no such guarantee. In JMM, not only is the overall execution order of an unsynchronized program unordered, but the order in which different threads see operations execute may also differ. For example, when the current thread has cached written data in its local memory and has not yet flushed it to main memory, the write is visible only to the current thread; from the perspective of other threads, the write appears not to have executed at all. Only after the current thread flushes the data from local memory to main memory does the write become visible to other threads. In this case, the current thread and other threads see different orders of operation execution.

Sequential consistency of synchronized programs

Let's take a look at how correctly synchronized programs achieve sequential consistency.

    class SynchronizedExample {
        int a = 0;
        boolean flag = false;

        public synchronized void writer() {
            a = 1;
            flag = true;
        }

        public synchronized void reader() {
            if (flag) {
                int i = a;
                ……
            }
        }
    }

In the example code above, suppose thread A executes the writer() method and thread B executes the reader() method. This is a correctly synchronized multithreaded program. According to the JMM specification, the program's execution result will be the same as its execution result in the sequentially consistent model. Here is a comparison of the program's execution timing in the two memory models:

In the sequentially consistent model, all operations execute serially in program order. In JMM, the code inside a critical section can be reordered (but JMM does not allow code inside the critical section to "escape" beyond it, which would break the semantics of the monitor). JMM performs special handling at the two key points of exiting and entering the monitor, so that at those two points the thread has the same memory view as in the sequentially consistent model. Although thread A is reordered within the critical section, thread B cannot "observe" thread A's reordering there because of the monitor's mutual exclusion. This reordering improves execution efficiency without changing the program's execution result.
From here we can see JMM's basic policy in its concrete implementation: without changing the execution result of a (correctly synchronized) program, open the door as far as possible for compiler and processor optimizations.
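The guarantee for correctly synchronized programs can be exercised with a small harness (the class and method names are illustrative) that runs the writer and reader from the example concurrently many times and checks that the reader never observes flag set while a is still 0:

```java
public class SeqConsistencyCheck {
    int a = 0;
    boolean flag = false;

    synchronized void writer() { a = 1; flag = true; }

    // Returns a if flag was already set, -1 if the write has not happened yet.
    synchronized int reader() { return flag ? a : -1; }

    // Runs writer and reader concurrently n times; returns true if the
    // reader ever saw flag set while a was still 0 (it never should,
    // because the program is correctly synchronized).
    static boolean sawTornState(int n) throws InterruptedException {
        boolean violation = false;
        for (int round = 0; round < n; round++) {
            SeqConsistencyCheck ex = new SeqConsistencyCheck();
            int[] seen = new int[1];
            Thread w = new Thread(ex::writer);
            Thread r = new Thread(() -> seen[0] = ex.reader());
            w.start();
            r.start();
            w.join();
            r.join();
            if (seen[0] == 0) violation = true; // flag set but a == 0
        }
        return violation;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(sawTornState(1000) ? "violation" : "ok"); // prints "ok"
    }
}
```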

Sequential consistency of unsynchronized programs

JMM does not guarantee that the execution result of an unsynchronized program is consistent with its execution result in the sequentially consistent model. Even executing in the sequentially consistent model, an unsynchronized program is unordered overall and its execution result is unpredictable, so guaranteeing that the results match across the two models would be meaningless.
Just as in the sequentially consistent model, an unsynchronized program executing in JMM is unordered overall and its execution result is unpredictable.
At the same time, there are several differences in the execution characteristics of the unsynchronized program in the two models:

    1. The sequentially consistent model guarantees that operations within a single thread execute in program order, while JMM does not (for example, the reordering within the critical section in the correctly synchronized program above).
    2. The sequentially consistent model guarantees that all threads see a single consistent order of operation execution, while JMM does not.
    3. JMM does not guarantee that read/write operations on 64-bit long and double variables are atomic, while the sequentially consistent model guarantees atomicity for all memory read/write operations.

Regarding the third difference: on some 32-bit processors, requiring a read/write of 64-bit data to be atomic carries significant overhead. To accommodate such processors, the Java language specification encourages but does not require the JVM to make reads/writes of 64-bit long and double variables atomic. When the JVM runs on such a processor, it may split a read/write of a 64-bit long/double into two 32-bit read/write operations. These two 32-bit operations may be assigned to different bus transactions, in which case the read/write of the 64-bit variable is not atomic.
When a single memory operation is not atomic, it can have unintended consequences. Please see below:

As shown, suppose processor A writes a long variable and processor B reads that long variable. The 64-bit write on processor A is split into two 32-bit writes, and the two writes are assigned to different write transactions. Meanwhile, the 64-bit read on processor B is split into two 32-bit reads assigned to the same read transaction. When processors A and B execute in the timing shown, processor B sees an invalid value that processor A has only half written.
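Reproducing such a torn read requires a 32-bit JVM, but the commonly used remedy can be sketched (the class and method names are illustrative): declaring the field volatile obliges the JVM to treat 64-bit reads and writes as atomic even on 32-bit platforms.

```java
public class AtomicLongAccess {
    // A plain long: on some 32-bit JVMs, a write may be split into two
    // 32-bit halves, so a concurrent reader could observe a "torn" value
    // (one half old, one half new).
    static long plain;

    // Declaring the field volatile forces reads and writes of long and
    // double to be atomic, even on 32-bit platforms.
    static volatile long safe;

    static long writeAndRead(long value) {
        safe = value;       // a single atomic 64-bit write
        return safe;        // a single atomic 64-bit read
    }

    public static void main(String[] args) {
        System.out.println(Long.toHexString(writeAndRead(0xCAFEBABE_DEADBEEFL)));
    }
}
```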

Resources:
The Art of Java Concurrent Programming
In-depth Understanding of the Java Memory Model (i): Basics

