Java Memory Model Analysis


1. Multithreading Basics

Thread communication refers to the mechanism by which threads exchange information. There are two communication mechanisms: shared memory and message passing. With shared memory, threads communicate implicitly by writing and reading common state in memory (the approach Java uses); with message passing, there is no shared state between threads, and threads must communicate explicitly by sending messages (the approach used by Erlang).

Synchronization is the mechanism a program uses to control the relative order of operations between different threads. Under shared memory, synchronization is explicit; under message passing, synchronization is implicit (a message must be sent before it can be received).
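
As a minimal sketch of the shared-memory style (the class and method names here, such as SharedCounter and increment, are illustrative and not from the original text), the two threads below communicate implicitly by writing and reading a shared field, while the synchronized keyword supplies the explicit synchronization:

    public class SharedCounter {
        private int count = 0;

        public synchronized void increment() {    // explicit synchronization: mutual exclusion + visibility
            count++;
        }

        public synchronized int get() {
            return count;
        }

        public static void main(String[] args) throws InterruptedException {
            SharedCounter counter = new SharedCounter();
            Runnable worker = () -> {
                for (int i = 0; i < 10_000; i++) {
                    counter.increment();           // communicate by writing shared state
                }
            };
            Thread t1 = new Thread(worker);
            Thread t2 = new Thread(worker);
            t1.start(); t2.start();
            t1.join();  t2.join();
            System.out.println(counter.get());     // always 20000, because every access is synchronized
        }
    }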

2. Java Memory Model Abstraction

In Java, all instance fields, static fields, and array elements are stored in heap memory and shared among threads. Local variables, method parameters, and exception-handler parameters are not shared between threads, so they have no memory visibility problems and are not affected by the memory model.
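
A brief sketch of which variables are shared and which are thread-private (the class and field names are illustrative):

    public class SharedVsLocal {
        static int staticField;               // static field: lives on the heap, shared by all threads
        int instanceField;                    // instance field: lives on the heap, shared once the object escapes
        int[] arrayElements = new int[8];     // array elements: live on the heap, shared with the array reference

        void compute(int parameter) {              // method parameter: private to the executing thread
            int localVariable = parameter + 1;     // local variable: lives on the thread's stack, never shared
            instanceField = localVariable;         // only this write to shared state raises visibility questions
        }
    }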

The Java Memory Model (JMM) determines when a write to a shared variable by one thread becomes visible to another thread; in other words, it governs communication between Java threads. JMM defines an abstract relationship between threads and main memory: shared variables are stored in main memory, and each thread has a private local memory (local memory is an abstraction of JMM and does not physically exist; it covers caches, write buffers, registers, and other hardware and compiler optimizations). A thread's local memory holds copies of the shared variables that the thread reads and writes. By controlling the interaction between main memory and each thread's local memory, JMM provides Java programmers with memory visibility guarantees.
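
The practical consequence of this abstraction is that, without synchronization, a reader thread may keep using the stale copy of a shared variable held in its local memory. A minimal sketch (the names VisibilityProblem and stop are illustrative) of a loop that may never terminate:

    public class VisibilityProblem {
        private static boolean stop = false;   // plain (non-volatile) shared variable

        public static void main(String[] args) throws InterruptedException {
            Thread spinner = new Thread(() -> {
                while (!stop) {
                    // busy-wait; the JIT may hoist the read of 'stop' out of the loop
                }
                System.out.println("spinner observed stop == true");
            });
            spinner.start();

            Thread.sleep(1000);
            stop = true;                        // this write may never become visible to 'spinner'
            System.out.println("main set stop = true");
        }
    }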

 

Modern processors use a write buffer to temporarily hold data written to memory. The write buffer keeps the instruction pipeline running continuously by avoiding stalls while the processor waits for writes to reach memory. It also allows writes to be flushed in batches, and multiple writes to the same memory address can be merged in the buffer, reducing traffic on the memory bus. Despite these advantages, the write buffer on each processor is visible only to that processor. This has an important consequence for the ordering of memory operations: the order in which a processor performs its read/write operations is not necessarily the order in which those operations actually take effect in memory!
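
The store-buffering effect can be illustrated in Java as well. In the sketch below (the names are illustrative), each thread writes one plain shared variable and then reads the other; because each write may still be sitting in the writing processor's buffer (and because the JMM also permits the corresponding reorderings), the outcome r1 == 0 and r2 == 0 is allowed, even though no simple interleaving of the two threads' program orders produces it:

    public class StoreBuffering {
        static int a = 0, b = 0;
        static int r1, r2;

        public static void main(String[] args) throws InterruptedException {
            Thread t1 = new Thread(() -> { a = 1; r1 = b; });   // processor A: write a, then read b
            Thread t2 = new Thread(() -> { b = 1; r2 = a; });   // processor B: write b, then read a
            t1.start(); t2.start();
            t1.join();  t2.join();
            System.out.println("r1=" + r1 + ", r2=" + r2);      // r1=0, r2=0 is a legal (if rare) result
        }
    }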

3. Reordering

To improve performance, compilers and processors often reorder instructions when executing a program. There are three types of reordering:

A) Compiler reordering. The compiler can rearrange the execution order of statements as long as it does not change the semantics of a single-threaded program.

B) Instruction-level parallel reordering. Modern processors use instruction-level parallelism (ILP) to execute multiple instructions in an overlapping manner. If there is no data dependence, the processor can change the order in which the machine instructions corresponding to statements are executed.

C) Memory-system reordering. Because processors use caches and read/write buffers, loads and stores may appear to execute out of order.

JMM is a language-level memory model. By requiring that specific types of compiler reorderings and processor reorderings be prohibited, it provides programmers with consistent memory visibility guarantees across different compilers and processor platforms.

In a single-threaded program, reordering operations that have a control dependence does not change the execution result (this is why as-if-serial semantics allows such operations to be reordered); in a multi-threaded program, however, reordering operations with a control dependence may change the program's result.
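
A sketch in the spirit of the classic reordering example (the class and method names are illustrative): in writer() the two writes have no data dependence and may be reordered, and in reader() the use of a is only control-dependent on the read of flag, so the processor may speculatively load a too early. Neither reordering changes a single-threaded result, but with two threads reader() can see flag == true while a is still 0:

    public class ReorderExample {
        int a = 0;
        boolean flag = false;

        public void writer() {
            a = 1;                      // 1: may be reordered after 2
            flag = true;                // 2
        }

        public void reader() {
            if (flag) {                 // 3: the next line is only control-dependent on this read
                int i = a * a;          // 4: may use a speculatively loaded, stale value of a
                System.out.println(i);  // can legally print 0
            }
        }
    }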

4. Sequential Consistency Memory Model

Sequential consistency provides strong memory visibility guarantees to programmers: ① all operations in a thread must execute in program order; ② (whether or not the program is synchronized) all threads see a single, global order of operation execution. In the sequentially consistent memory model, every operation must execute atomically and be immediately visible to all threads.

 

Conceptually, the sequentially consistent model has a single global memory that can be connected to any one thread at a time through a switch that swings left and right, while each thread executes its memory read/write operations in program order. At any point in time, at most one thread can be connected to the memory. When multiple threads run concurrently, this switch serializes all memory read/write operations of all threads.
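
Note that JMM does not guarantee sequential consistency for unsynchronized programs: the store-buffering example in section 2 shows an outcome a sequentially consistent model would forbid. For correctly synchronized programs, however, execution appears sequentially consistent. One way to rule out that outcome (a sketch with illustrative names) is to declare both shared variables volatile, since the JMM places all volatile accesses in a single total order consistent with each thread's program order:

    public class NoStoreBuffering {
        static volatile int a = 0, b = 0;   // volatile accesses are totally ordered by the JMM
        static int r1, r2;

        public static void main(String[] args) throws InterruptedException {
            Thread t1 = new Thread(() -> { a = 1; r1 = b; });
            Thread t2 = new Thread(() -> { b = 1; r2 = a; });
            t1.start(); t2.start();
            t1.join();  t2.join();
            // r1 == 0 && r2 == 0 is now impossible: whichever volatile write comes first
            // in the total order is visible to the other thread's subsequent volatile read.
            System.out.println("r1=" + r1 + ", r2=" + r2);
        }
    }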

5. volatile

When a shared variable is declared volatile, reads and writes of that variable become special. A good way to understand volatile is to treat each individual read or write of a volatile variable as if it were synchronized on the same monitor lock. When a volatile variable is written, JMM flushes the shared variables in the writing thread's local memory to main memory. When a volatile variable is read, JMM invalidates the reading thread's local memory, so the thread then reads the shared variable from main memory. Volatile can therefore be used to solve memory visibility problems.
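
Applied to the non-terminating loop sketched in section 2, declaring the flag volatile restores visibility (the names are again illustrative):

    public class VolatileFlag {
        private static volatile boolean stop = false;

        public static void main(String[] args) throws InterruptedException {
            Thread spinner = new Thread(() -> {
                while (!stop) {
                    // busy-wait; each iteration re-reads 'stop' from main memory
                }
                System.out.println("spinner observed stop == true");
            });
            spinner.start();

            Thread.sleep(1000);
            stop = true;          // volatile write: guaranteed to become visible to the spinner
        }
    }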

6. About Locks

Locks are the most important synchronization mechanism in Java concurrent programming. Besides making execution of the critical section mutually exclusive, a thread that releases a lock effectively sends a message to the threads that later acquire the same lock. When a thread releases a lock, JMM flushes the shared variables in that thread's local memory to main memory. When a thread acquires a lock, JMM invalidates that thread's local memory, so the code in the critical section protected by the monitor must read shared variables from main memory. A lock therefore not only ensures atomicity of the code it protects but also solves memory visibility problems.
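
A minimal sketch (with illustrative names) of the release/acquire message described above: the unlock at the end of writer() flushes the writing thread's local memory, and the lock at the start of reader() invalidates the reading thread's local memory, so a reader that acquires the lock after the writer released it is guaranteed to see value == 42:

    public class LockVisibility {
        private final Object lock = new Object();
        private int value = 0;

        public void writer() {
            synchronized (lock) {
                value = 42;              // write shared state while holding the lock
            }                            // unlock: flush local memory to main memory
        }

        public void reader() {
            synchronized (lock) {        // lock: invalidate local memory, re-read from main memory
                System.out.println(value);   // sees 42 if writer() released the lock first
            }
        }
    }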
