Java Memory Model (1)

Main memory and working memory

While executing a Java program, the Java Virtual Machine divides the memory it manages into several data areas: the method area, the heap, the virtual machine stacks, the native method stacks, and the program counters. The method area stores class information, constants, bytecode and similar data; the heap stores all created objects. The method area and the heap are shared by all threads, whereas each thread has its own virtual machine stack: every time a thread invokes a method, a stack frame is pushed onto that thread's virtual machine stack. A stack frame holds, among other things, the method's local variable table and operand stack.

The Java memory model divides memory into main memory and working memory. Main memory roughly corresponds to the heap in the Java runtime data areas, while working memory corresponds to the virtual machine stacks and native method stacks. All threads share main memory, but each thread has its own working memory. All objects are allocated in main memory, and a thread cannot operate on main memory directly: it must first load the contents of main memory into its working memory before it can use them, and modifications made in working memory become visible to other threads only after they are synchronized back to main memory. In this model, every thread interacts with main memory only through its own working memory.

A reference to a field

A thread cannot directly manipulate main memory (the heap), so it cannot read a field's value directly. When a thread needs to reference a field, three steps take place:

    • 1) read: copies the value of the field from main memory to the working memory area (the virtual machine stack)
    • 2) load: places the value obtained by the read operation into a variable copy in working memory
    • 3) use: passes the value of the variable copy to the execution engine

When the same thread references the same field again, it may either reuse the variable copy it has already made in working memory (performing only use), or copy the value again from main memory (the heap) into working memory and then into the variable copy before using it (performing read -> load -> use once more). Which of the two happens is decided by the Java Virtual Machine's execution subsystem.
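The practical consequence is that a reader thread may keep using its cached working copy and never observe an update made by another thread. Below is a minimal sketch of this visibility hazard; the class and field names are invented for illustration, and whether the loop actually hangs depends on the JVM and JIT in use:

import java.util.concurrent.TimeUnit;

public class VisibilityDemo {
    // NOTE: not volatile, so the reader may keep using its working copy
    private static boolean stopRequested = false;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            // The reader may perform read/load once and afterwards only "use"
            // its working copy, so it can loop forever even after the writer
            // has set the flag in its own working memory.
            while (!stopRequested) {
                // busy wait
            }
            System.out.println("Reader observed stopRequested == true");
        });
        reader.start();

        TimeUnit.SECONDS.sleep(1);
        stopRequested = true; // assign happens on the writer's working copy first
    }
}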

Assignment of a field

A thread cannot manipulate main memory directly, so it cannot assign a value directly to a field in main memory. When a thread needs to assign to a field, three steps are required:

    • 1) assign: assigns a value to the working copy located in working memory
    • 2) store: transfers the value of the working copy from working memory to main memory
    • 3) write: puts the value obtained by the store operation into the corresponding variable in main memory

When the same thread assigns to the same field repeatedly, it may assign only to the working copy (performing only assign) and copy just the final result back to main memory (a single store -> write at the end), or it may copy the value back to main memory after every assignment (performing assign -> store -> write each time). Which of the two happens is again decided by the Java Virtual Machine's execution subsystem.
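Seen from the writer's side, the timing of the store/write steps is therefore unspecified. A small hedged sketch (names invented for illustration):

public class WriterDemo {
    private static int progress = 0; // shared field, not volatile

    public static void main(String[] args) {
        Thread writer = new Thread(() -> {
            for (int i = 1; i <= 100; i++) {
                // assign updates the writer's working copy; store/write back
                // to main memory may happen after each assignment, only at the
                // end, or at some other point chosen by the virtual machine
                progress = i;
            }
            // Without volatile or synchronized, there is no guarantee when these
            // writes become visible to threads that do not otherwise synchronize
            // with the writer (for example via join()).
        });
        writer.start();
    }
}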

Inter-memory interaction operations

The Java memory model defines eight operations to carry out the interaction between main memory and working memory. Each of these operations is atomic and indivisible (with an exception allowed for the long and double types, discussed below). The eight operations are:

    • 1) lock: acts on a variable in main memory; it marks the variable as exclusively owned by one thread
    • 2) unlock: acts on a variable in main memory; it releases a locked variable so that it can be locked by other threads
    • 3) read: acts on a variable in main memory; it transfers the value of the variable from main memory to the thread's working memory for the subsequent load action to use
    • 4) load: acts on a variable in working memory; it puts the value obtained by the read operation into the variable copy in working memory
    • 5) use: acts on a variable in working memory; it passes the value of the variable copy to the execution engine. This operation is performed whenever the virtual machine encounters a bytecode instruction that needs the value of the variable.
    • 6) assign: acts on a variable in working memory; it assigns a value received from the execution engine to the working copy. This operation is performed whenever the virtual machine encounters a bytecode instruction that assigns a value to the variable.
    • 7) store: acts on a variable in working memory; it transfers the value of the working copy to main memory for the subsequent write operation to use
    • 8) write: acts on a variable in main memory; it puts the value obtained by the store operation into the variable in main memory

To copy a variable from main memory into working memory, the read and load operations must be performed in that order; to synchronize a variable from working memory back to main memory, the store and write operations must be performed in that order. Note that the Java memory model only requires the two operations to be performed in order, not consecutively: other instructions may be inserted between read and load, or between store and write. For example, when accessing variables a and b in main memory, one possible order is read a, read b, load b, load a. In addition, the Java memory model requires that the eight basic operations obey the following rules:

    • 1) One of the pair read/load, or of the pair store/write, is not allowed to appear alone; that is, a variable must not be read from main memory without being accepted by working memory, nor may a write be initiated from working memory without being accepted by main memory
    • 2) A thread is not allowed to discard its most recent assign operation; a variable that has been changed in working memory must be synchronized back to main memory
    • 3) A thread is not allowed to synchronize data from its working memory back to main memory for no reason (that is, without any assign having occurred)
    • 4) A new variable can only be "born" in main memory; working memory must not use an uninitialized variable directly. In other words, load or assign must be performed on a variable before use or store may be performed on it
    • 5) A variable may be locked by only one thread at a time, but the same thread may perform the lock operation on it repeatedly; after locking it multiple times, the variable is unlocked only when the same number of unlock operations have been performed
    • 6) Performing a lock operation on a variable clears its value from working memory; before the execution engine uses the variable, a load or assign operation must be performed again to initialize its value. (The synchronized keyword is not directly tied to the lock primitive; see "Two functions of synchronized" below.)
    • 7) A variable that has not been locked by a lock operation must not be unlocked, nor may a thread unlock a variable locked by another thread
    • 8) Before performing unlock on a variable, the variable must first be synchronized back to main memory (by performing store and write). (The synchronized keyword is not directly tied to the unlock primitive either; see "Two functions of synchronized" below.)
Two functions of synchronized

Synchronized has two functions: thread synchronization and memory synchronization.

Thread synchronization means using synchronized to create a critical section in which only one thread may operate at a time. After a thread enters the critical section, other threads must wait at its entrance; only when the thread inside finishes and exits do the waiting threads compete for entry, and whichever wins gets to enter the critical section and perform the operation. The scope delimited by synchronized thus controls how threads interleave; this is thread synchronization.
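For example, a synchronized block around a shared counter update forms such a critical section. This is a minimal sketch; the class and field names are invented for illustration:

public class Counter {
    private int count = 0;
    private final Object lock = new Object();

    public void increment() {
        synchronized (lock) { // only one thread at a time may be inside this block
            count++;          // the read-modify-write is safe here because of mutual exclusion
        }
    }

    public int get() {
        synchronized (lock) { // entering synchronized also refreshes the working copy
            return count;
        }
    }
}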

Memory synchronization refers to synchronizing working memory with main memory. The points below apply both to synchronized methods and to synchronized blocks.

    • 1) When entering synchronized

      When entering a synchronized region, if working memory holds working copies that have not yet been reflected to main memory, their values are forcibly written to main memory and become visible to other threads. Then all working copies in working memory are discarded, so any value the thread wants to use afterwards must first be copied from main memory into working memory again.

      In short, the contents of working memory are synchronized with the contents of main memory.

    • 2) When exiting synchronized

      When exiting a synchronized region, if working memory holds working copies that have not yet been reflected to main memory, their values are forcibly written to main memory. However, the working copies in working memory are not discarded on exit, which means they may continue to be used directly afterwards (both behaviors are illustrated in the sketch after this list).
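This memory-synchronization effect is what makes a simple hand-off through a shared field work: the writer's exit from synchronized flushes its working copy, and the reader's entry discards stale copies. A hedged sketch with invented names:

public class Handoff {
    private String message;               // shared field, published under the lock
    private final Object lock = new Object();

    public void publish(String m) {
        synchronized (lock) {
            message = m;                  // flushed to main memory when the block is exited
        }
    }

    public String readMessage() {
        synchronized (lock) {             // entry discards stale working copies,
            return message;               // so this sees the most recently published value
        }
    }
}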

Note that the synchronized keyword is not directly tied to the lock and unlock operations of the memory-interaction model; whether lock and unlock are performed when entering and exiting a synchronized block depends on how the virtual machine is implemented.

A synchronized method compiles to a method whose bytecode carries the ACC_SYNCHRONIZED flag, whereas a synchronized block compiles to bytecode containing the monitorenter and monitorexit instructions. How these two instructions and the ACC_SYNCHRONIZED flag are implemented is up to the virtual machine, within the constraints of the Java Virtual Machine Specification and the Java Language Specification.
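The two compiled forms correspond to the two ways of writing the same synchronization in source code. A small sketch (the class name is invented; the comments describe the bytecode the compiler is expected to emit):

public class SyncForms {
    // Compiles to a method flagged ACC_SYNCHRONIZED in the class file;
    // the monitor used is the instance itself (this).
    public synchronized void syncMethod() {
        // critical section
    }

    // Compiles to bytecode that brackets the block with
    // monitorenter / monitorexit instructions on the same monitor.
    public void syncBlock() {
        synchronized (this) {
            // critical section
        }
    }
}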

Two features of volatile

Volatile provides only memory synchronization, not thread synchronization. When a thread references a volatile field, the value is normally copied from main memory to the working copy; when a thread assigns to a volatile field, the value is copied from working memory back to main memory.

The Java memory model requires the eight operations lock, unlock, read, load, assign, use, store and write to be atomic, but for the 64-bit data types (long and double) it defines a relaxed rule: a virtual machine is allowed to split the read and write of a 64-bit value that is not declared volatile into two 32-bit operations, i.e. it need not guarantee the atomicity of the load, store, read and write operations for such values. This is the non-atomic treatment of long and double. If multiple threads share a long or double variable that is not declared volatile, and both read and modify it, some threads may observe a "half-written" value that is neither the original value nor the value written by another thread.

In practice, current commercial virtual machines on the common platforms already treat reads and writes of 64-bit data as atomic operations, so we generally do not need to declare long and double variables volatile just for this reason.
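A typical use of volatile is a stop flag: the write becomes visible to the reader, but volatile does not provide mutual exclusion, so compound operations still need synchronized. A hedged sketch with invented names:

public class VolatileDemo {
    private volatile boolean running = true; // memory synchronization: writes become visible
    private int hits = 0;                    // NOT protected: volatile gives no thread synchronization

    public void stop() {
        running = false;                     // written back to main memory; the reader will see it
    }

    public void loop() {
        while (running) {                    // re-reads main memory instead of a stale working copy
            hits++;                          // still a data race if several threads call loop()
        }
    }
}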

The danger of the Double-Checked Locking pattern

Early implementations of the built-in lock (synchronized) performed relatively poorly, so lazy singletons were often implemented with the Double-Checked Locking pattern, which improves performance by avoiding the built-in lock as much as possible, as shown in the following code:

import java.util.Date;

public class MySystem {
    private static MySystem instance = null;
    private Date date = new Date();

    private MySystem() {
    }

    public static MySystem getInstance() {
        if (instance == null) {                    // (a) first test
            synchronized (MySystem.class) {        // (b) enter synchronized block
                if (instance == null) {            // (c) second test
                    instance = new MySystem();     // (d) set
                }
            }                                      // (e) exit synchronized block
        }
        return instance;                           // (f)
    }

    public Date getDate() {
        return date;
    }
}

At the condition check (a) (the first test), if instance equals null, the thread enters the synchronized block at (b); the lock acquired there belongs to MySystem.class, i.e. the class object of MySystem.

The condition check at (a) lies outside the critical section. When the condition is checked again at (c) (the second test), synchronized ensures that if another thread has already created the instance, this thread can see it; only if instance still equals null is a MySystem instance created at (d). Because (d) is executed inside the critical section (b)~(e), two MySystem instances can never be produced.

The synchronized block at (b) is entered only when instance equals null at the condition test (a). Therefore the second and subsequent calls to the getInstance method will almost never enter the synchronized block, so the performance cost of the built-in lock is not a concern.

On the surface, the Double-Checked Locking pattern perfectly solves the performance problem introduced by synchronized: once the instance has been created, the synchronized block is never entered again. However, the pattern introduces a new problem: while the singleton object is not yet fully constructed, a call to getInstance from another thread may return it, and some fields of that singleton may still be empty. Taking the program above as an example, a thread calling MySystem.getInstance().getDate() may get null. That looks a bit odd, so let us analyze it, assuming the threads execute in the following order (just one possibility):

Thread A:
    (A-1) at (a), finds instance == null
    (A-2) at (b), enters the synchronized block
    (A-3) at (c), finds instance == null
    (A-4) at (d), creates a MySystem instance and assigns it to the instance field

(The threads switch here: thread A's instance field has been copied from working memory to main memory, so thread B can see that instance is not null.)

Thread B:
    (B-1) at (a), finds instance != null
    (B-2) at (f), returns the value of instance as getInstance's return value
    (B-3) calls the getDate method on the value returned by getInstance

When the MySystem instance is created, the value of new Date() is assigned to the instance's date field, but at first this is only thread A's assignment to the working copy of date in its working memory. When thread A exits the synchronized block, the value of the date field is guaranteed to be written to main memory, but before the exit there is no such guarantee. The same applies to instance = new MySystem(): before exiting synchronized, the value of the instance field may or may not have been written to main memory. In the hypothetical execution order above, the instance field has already been written to main memory before the synchronized block is exited, while the date field has not; this is permitted by the Java Virtual Machine Specification.

At this point thread B, at (B-1), finds instance != null, so it does not enter the synchronized block and immediately returns the value of instance as getInstance's return value at (f). Thread B then calls getDate on that return value at (B-3); getDate returns the value of the date field, so thread B references the date field in its own working memory. The date field is not yet in its working memory, so it is copied from main memory into working memory, and because the value of the date field in main memory is still empty, thread B's call to MySystem.getInstance().getDate() returns null.

Example code for a safe implementation of the lazy singleton pattern:

import java.util.Date;

public class MySystem {
    private Date date = new Date();

    private MySystem() {
    }

    private static class MySystemHolder {
        private static MySystem instance = new MySystem();
    }

    public static MySystem getInstance() {
        return MySystemHolder.instance;
    }

    public Date getDate() {
        return date;
    }
}

Implementation principle: the Java Virtual Machine loads and initializes a class only when it is first used. Calling MySystem.getInstance() executes return MySystemHolder.instance, and only at that point is the MySystemHolder class loaded; the Java Virtual Machine guarantees that class loading and initialization are thread safe, so the code above is thread safe.
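For completeness, another widely used fix, not shown in the original article, keeps the Double-Checked Locking structure but declares the instance field volatile; under the Java 5+ memory model this guarantees that a thread which sees a non-null instance also sees the fields written by the constructor (such as date). A hedged sketch:

import java.util.Date;

public class MySystemVolatile {
    // volatile ensures that once another thread sees a non-null instance,
    // it also sees the writes made by the constructor (such as the date field)
    private static volatile MySystemVolatile instance = null;
    private final Date date = new Date();

    private MySystemVolatile() {
    }

    public static MySystemVolatile getInstance() {
        if (instance == null) {                          // first test
            synchronized (MySystemVolatile.class) {
                if (instance == null) {                  // second test
                    instance = new MySystemVolatile();
                }
            }
        }
        return instance;
    }

    public Date getDate() {
        return date;
    }
}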

Reprinted from http://www.cloudchou.com/softdesign/post-631.html
