Deep Analysis of the Java Memory Model

Source: Internet
Author: User

Outline:

• Java memory model
• The volatile keyword
• Special rules for long and double variables
• Atomicity, visibility and ordering
• The happens-before principle
• Java and threads

1. Java memory model

The Java Virtual Machine specification attempts to define a Java Memory Model (JMM) that masks the memory-access differences between various hardware and operating systems, so that Java programs achieve consistent memory-access behavior across all platforms.

Main memory vs. working memory:

The main goal of the JMM is to define the access rules for the variables in a program: the low-level details of how variables are stored into memory and read back out of memory by the virtual machine.

Note: "variables" here differ from variables in the Java-language sense. They include instance fields, static fields, and the elements that make up array objects, but exclude local variables and method parameters, which are thread-private and never shared.

The Java memory model specifies that all variables are stored in main memory. Each thread also has its own working memory, which holds a copy of the main-memory variables used by that thread. All of a thread's operations on a variable must be performed in working memory; a thread may not read or write main-memory variables directly. Different threads cannot access each other's working memory, so variable values are transferred between threads through main memory.

The main memory and working memory here are not the same level of memory division as the heap, stack, method area, and other Java runtime data areas.

Inter-memory interaction operations:

The Java memory model defines 8 operations to accomplish the interaction between main memory and working memory, and a virtual machine implementation must ensure that each of these operations is atomic and indivisible (for variables of type double and long, the load, store, read, and write operations are allowed exceptions on some platforms).

Inter-memory interaction – the 8 basic operations: lock, unlock, read, load, use, assign, store, and write.

Rules governing these operations:

The pairs read/load and store/write must appear together; neither operation of a pair is allowed to appear alone;

A thread is not allowed to discard its most recent assign operation (a variable changed in working memory must be synchronized back to main memory);

A thread is not allowed to synchronize data from its working memory back to main memory for no reason (i.e., without an assign operation having occurred);

A new variable can only be "born" in main memory; working memory may not use an uninitialized variable directly;

A variable may be locked by only one thread at a time, but the same thread may perform the lock operation on it repeatedly; after locking multiple times, the variable is unlocked only after the same number of unlock operations have been performed;

Performing a lock operation on a variable clears that variable's value in working memory; before the execution engine can use the variable, a load or assign operation must be re-executed to initialize its value;

If a variable has not been locked by a lock operation beforehand, it is not allowed to perform an unlock operation on it, nor may a thread unlock a variable locked by another thread;

Before performing an unlock operation on a variable, the variable must be synchronized back to main memory (by performing the store and write operations).
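The operations and pairing rules above can be seen in an ordinary synchronized increment. The comments below are a sketch that maps source code to the JMM's conceptual operations, not to the actual bytecode the compiler emits:

```java
public class JmmOperationsSketch {
    private static int count = 0; // the shared variable lives in main memory
    private static final Object monitor = new Object();

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                synchronized (monitor) { // lock: exclusive access; clears the working copy
                    // read + load: transfer count from main memory into working memory
                    int local = count;
                    // use + assign: hand the value to the execution engine, write back the result
                    count = local + 1;
                } // unlock: store + write must first sync count back to main memory
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(count); // always 2000: every unlock forced a store/write
    }
}
```

Because the unlock rule forces store/write before the lock is released, neither thread can ever observe a stale copy of count inside the synchronized block.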

2. The volatile keyword

When a variable is defined as volatile it has two characteristics. The first is that it guarantees the variable's visibility to all threads ("visibility" here means that when one thread modifies the value of the variable, the new value is immediately known to other threads).

Conditions for using volatile safely:

The result of the operation does not depend on the current value of the variable, or you can ensure that only a single thread ever modifies the value;

The variable does not need to participate in invariant constraints together with other state variables.

Only in scenarios that satisfy both rules can a volatile variable be used without locking; otherwise, atomicity must still be guaranteed by locking.
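A canonical scenario that satisfies both rules is a single-writer shutdown flag: only one thread ever writes it, and it participates in no invariant with other state. A minimal sketch (class and field names are illustrative):

```java
public class VolatileShutdownFlag {
    // volatile guarantees the worker sees the write immediately;
    // without it, the worker could spin forever on a stale cached value
    private static volatile boolean shutdownRequested = false;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            long iterations = 0;
            while (!shutdownRequested) {   // re-reads the flag on every check
                iterations++;              // simulated work
            }
            System.out.println("worker stopped");
        });
        worker.start();
        Thread.sleep(50);          // let the worker run briefly
        shutdownRequested = true;  // single writer; no lock needed
        worker.join();             // guaranteed to return, thanks to volatile
        System.out.println("done");
    }
}
```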

The second semantic of volatile variables is to prohibit instruction reordering. (The reordering-suppression semantics of volatile were not completely fixed until JDK 1.5; in earlier JDKs, declaring a variable volatile still could not fully avoid problems caused by reordering, mainly the reordering of code before and after the volatile variable. This is also why double-checked locking (DCL) could not be used safely for singletons in Java before JDK 1.5.)

Here is a simple example.

This article uses the singleton pattern as a brief illustration.
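The code being discussed does not survive in the text; a standard double-checked-locking singleton presumably looks like this (class name illustrative), with a small main added here to make the sketch runnable:

```java
public class Singleton {
    // volatile forbids reordering "publish the reference" before "run the constructor"
    private static volatile Singleton instance;

    private Singleton() { }

    public static Singleton getInstance() {
        if (instance == null) {                 // first check, without the lock
            synchronized (Singleton.class) {
                if (instance == null) {         // second check, under the lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }

    public static void main(String[] args) {
        // every caller gets the same instance
        System.out.println(getInstance() == getInstance());
    }
}
```

Without volatile, the first unsynchronized check can observe a non-null reference to an object whose constructor has not yet finished running.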

This is the most common version, double-checked locking; but why must the volatile keyword be added to instance?

This involves how an object is created. Creating an object can be roughly divided into three steps: allocating memory on the heap, initializing the object, and pointing the reference at the allocated memory. The new operation is not a single instruction, and the individual instructions are not guaranteed to execute in that order. If the reference is made to point at the allocated heap memory before initialization completes, the reference is already non-null, yet the object it points to is not guaranteed to be initialized.

(Note: Comparing the generated code, the key change for a volatile-modified variable is that after the assignment (the preceding mov %eax,0x150(%esi) is the assignment operation) an extra "lock addl $0x0,(%esp)" instruction is executed. This instruction acts as a memory barrier. A memory barrier is not needed when only one CPU accesses the memory, but when two or more CPUs access the same memory and one may observe another, a barrier is required to ensure consistency. "addl $0x0,(%esp)" by itself is clearly a no-op; the key is the lock prefix. Consulting the IA-32 manual, its effect is to flush this CPU's cache to memory, and that write also causes other CPUs to invalidate their caches. This "empty" operation therefore makes the preceding modification of the volatile variable immediately visible to other CPUs.)

Within a single thread, the program-order rule of Java's happens-before principles (described below) ensures that code appears to execute sequentially, following the flow of the program. Across threads, however, no such ordering is guaranteed. In the singleton above, a thread evaluating instance == null cannot tell whether the new operation has completed: the reference may already point to allocated heap memory whose initialization has not yet run. So a thread performing the unsynchronized check may see a non-null but not-yet-initialized instance. With the volatile keyword, it is guaranteed that other threads can only observe instance after the new operation has fully completed.

3. Special rules for long and double variables

The JMM allows a virtual machine implementation to choose not to guarantee the atomicity of the four operations load, store, read, and write for 64-bit data types; this is the "non-atomic treatment" of long and double. (Note: in real development, commercial virtual machines on current mainstream platforms almost all treat 64-bit reads and writes as atomic operations, so there is normally no need to declare long and double variables volatile specifically for this reason.)
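On a 32-bit JVM that exercises this permission, a plain long write may be split into two 32-bit halves, so a concurrent reader could observe a "half-written" value; declaring the field volatile restores atomic 64-bit access on every platform. A small sketch (the two bit patterns are chosen so that any torn read would produce a value outside the set):

```java
public class VolatileLongDemo {
    // volatile guarantees atomic 64-bit reads and writes even on 32-bit JVMs
    private static volatile long value = 0L;
    private static final long A = 0x0000000000000000L;
    private static final long B = 0xFFFFFFFFFFFFFFFFL; // i.e. -1L

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            for (int i = 0; i < 100_000; i++) {
                value = (i % 2 == 0) ? A : B; // alternate the two bit patterns
            }
        });
        boolean torn = false;
        writer.start();
        for (int i = 0; i < 100_000; i++) {
            long v = value;            // atomic read: must be exactly A or B
            if (v != A && v != B) {
                torn = true;           // would indicate a torn (half) read
            }
        }
        writer.join();
        System.out.println(torn ? "torn read observed" : "no torn reads");
    }
}
```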

Atomicity, Visibility and ordering:

Atomicity: reads and writes of basic data types are atomic (exceptions: long and double, as above).

Visibility: When a thread modifies the value of a shared variable, other threads can immediately know the change.

Ordering: within a single thread, all operations appear ordered; observed from another thread, all operations are unordered (because of instruction reordering and the delay in synchronizing working memory with main memory).
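Note that the atomicity of individual reads and writes does not make compound operations such as i++ atomic: i++ is a read-use-assign-write sequence that can interleave between threads. Besides locking, the java.util.concurrent.atomic package provides atomic compound operations; a short sketch:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicIncrementDemo {
    private static final AtomicInteger atomicCount = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                // one indivisible read-modify-write (implemented with CAS)
                atomicCount.incrementAndGet();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(atomicCount.get()); // 200000: no updates are lost
    }
}
```

With a plain int and `count++` instead, some increments would typically be lost to interleaving.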

4. The happens-before principle

Happens-before is the main basis for determining whether a data race exists and whether code is thread-safe.

1. Program order rule: within a single thread, each operation happens-before the operations that come after it in program order.

2. Monitor lock rule: an unlock operation happens-before a later (in time order, likewise below) lock operation on the same lock.

3. Volatile variable rule: a write to a volatile variable happens-before subsequent reads of that variable.

4. Thread start rule: the start() method of a Thread object happens-before every action of the started thread.

5. Thread termination rule: all operations of a thread happen-before the detection of that thread's termination; termination can be detected via the return of Thread.join(), the return value of Thread.isAlive(), and so on.

6. Thread interruption rule: the call to a thread's interrupt() method happens-before the point where the interrupted thread's code detects the interruption.

7. Object finalization rule: the completion of an object's initialization (the end of its constructor's execution) happens-before the start of its finalize() method.

8. Transitivity: if operation A happens-before operation B, and operation B happens-before operation C, then A happens-before C.
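The thread-start and thread-termination rules are what make the following pattern safe, even though the shared fields are not volatile and no lock is used. A minimal sketch:

```java
public class HappensBeforeStartJoin {
    private static int input;   // plain fields: visibility comes from happens-before,
    private static int result;  // not from volatile or synchronization

    public static void main(String[] args) throws InterruptedException {
        input = 21;                         // (1) written before start()
        Thread worker = new Thread(() -> {
            result = input * 2;             // start rule: (1) happens-before this read
        });
        worker.start();
        worker.join();                      // termination rule: the worker's write to
        System.out.println(result);         // result happens-before this read -> 42
    }
}
```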

5. Java and threads

A thread is a lighter-weight scheduling unit than a process, and in the Java API all of the key methods of the Thread class are declared native. There are three main ways to implement threads: kernel threads, user threads, and a hybrid of user threads and lightweight processes.

Kernel thread implementation: a kernel-level thread (KLT) is a thread supported directly by the operating system kernel. The kernel performs thread switching, schedules threads through the scheduler, and is responsible for mapping thread work onto processors. Each kernel thread can be regarded as one clone of the kernel, which gives the operating system the ability to handle several things at once. Because of kernel-thread support, each lightweight process (LWP) becomes an independent scheduling unit; even if one lightweight process blocks in a system call, the process as a whole can continue working. But because the implementation is kernel-based, the various thread operations (creation, destruction, synchronization) require system calls, which are expensive and require switching back and forth between user mode and kernel mode. Each LWP also consumes a certain amount of kernel memory, so the number of LWPs a system can support is limited.

User thread implementation: broadly, any thread that is not a kernel thread can be considered a user thread (UT). The advantage of user threads is that they need no kernel support; the disadvantage is precisely that there is no kernel support, so all thread operations must be handled by the user program itself. Apart from early multithreaded programs on operating systems without multithreading support (such as DOS) and a few programs with special needs, few programs use this model today.

Hybrid implementation (user threads plus lightweight processes): besides purely kernel-based and purely user-program implementations, there is an implementation that uses kernel threads together with user threads. Creating, switching, and destroying user threads remains inexpensive and can support large-scale concurrency, while an LWP provided by the operating system serves as a bridge between the UT and the KLT: thread scheduling and processor mapping provided by the kernel can be used, and the user thread's system calls go through the LWP, greatly reducing the risk of the whole process being blocked.

Java thread implementation: before JDK 1.2, Java threads were implemented with "green threads", a user-thread implementation. In current JDK versions, the threading model supported by the operating system largely determines how the threads of a Java virtual machine are mapped. Both the Windows and the Linux versions use the one-to-one threading model: one Java thread maps to one lightweight process. On the Solaris platform, whose threading features support both one-to-one and many-to-many models, the Solaris version of the JDK provides proprietary virtual machine parameters (-XX:+UseLWPSynchronization and -XX:+UseBoundThreads) to explicitly specify which threading model the virtual machine uses.

Java thread scheduling: thread scheduling is the process of assigning processor time to threads. There are two main approaches: cooperative thread scheduling and preemptive thread scheduling. In a cooperatively scheduled multithreaded system, a thread's execution time is controlled by the thread itself; after finishing its work, the thread actively notifies the system to switch to another thread. In a preemptively scheduled system, each thread is allocated execution time by the system, and thread switching is not decided by the thread itself.

The thread-scheduling method used by Java is preemptive scheduling. The Java language defines six thread states; at any point in time a thread can be in exactly one of them: New, Runnable, Waiting, Timed Waiting, Blocked, and Terminated.

Thread safety: an object is thread-safe when multiple threads can access it and, regardless of how those threads are scheduled and interleaved by the run-time environment, callers need no additional synchronization or any other coordination for calls on the object to produce correct results.

In the Java language (after JDK 1.5, that is, after the Java memory model was revised), immutable objects are always thread-safe; neither the object's method implementations nor their callers need any additional thread-safety measures. Absolute thread safety requires that a class, regardless of the run-time environment, never needs any additional synchronization by callers. Relative thread safety is thread safety in the usual sense: it guarantees that individual operations on the object are thread-safe, so no extra safeguards are needed for single calls, but for sequences of calls in a particular order, additional synchronization at the call site may be required to guarantee correctness. Thread compatibility means the object itself is not thread-safe, but it can be used safely in a concurrent environment by applying synchronization at the caller's side.
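The thread states listed above can be observed through Thread.getState(); a short sketch that drives one thread through three of them:

```java
public class ThreadStatesDemo {
    public static void main(String[] args) throws InterruptedException {
        Object lock = new Object();
        Thread t = new Thread(() -> {
            synchronized (lock) {
                // nothing to do; exits once it finally acquires the lock
            }
        });
        System.out.println(t.getState());      // NEW: created but not started

        synchronized (lock) {
            t.start();
            // wait until t is blocked trying to enter the monitor we hold
            while (t.getState() != Thread.State.BLOCKED) {
                Thread.sleep(1);
            }
            System.out.println(t.getState());  // BLOCKED: waiting for the monitor
        }
        t.join();
        System.out.println(t.getState());      // TERMINATED: run() has finished
    }
}
```

Waiting and Timed Waiting can be reached similarly via Object.wait() and Thread.sleep(); Runnable is what a started, unblocked thread reports.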
