Java Memory Model and Threads

Source: Internet
Author: User
Tags: visibility

1. The efficiency and consistency of the hardware

Because there is a gap of several orders of magnitude between the speed of a computer's storage devices and the speed of its processor, and because most computation involves interacting with memory, modern computer systems insert a layer of cache between main memory and the processor whose read/write speed is as close as possible to the processor's. The data an operation needs is copied into the cache so the computation can run quickly, and when the computation finishes the result is written back from the cache to memory; this way the processor does not have to wait on slow memory reads and writes.

Cache-based storage interaction resolves the speed mismatch between the processor and memory, but it also adds complexity to the computer system, because it introduces a new problem: cache coherence. In a multiprocessor system each processor has its own cache, while all processors share the same main memory. To keep the caches consistent, each processor must follow certain protocols when reading and writing its cache; examples of such protocols are MSI, MESI, MOSI, Synapse, Firefly, and Dragon. Physical machines of different architectures can have different memory models, and Java has its own memory model (a memory model can be understood as an abstraction of the process of reading and writing a particular kind of memory or cache under a particular operating protocol).

Besides adding caches, the processor may also apply out-of-order execution optimization to the input code so that its execution units are used as fully as possible; it then reorders the results of out-of-order execution so that the outcome is the same as if the instructions had run sequentially. Consequently, if one computation depends on the intermediate result of another, their ordering cannot simply be assumed from the order of the code. A minimal illustration follows.
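
For instance, a small sketch (hypothetical class name, purely illustrative) of the dependence rule described above:

public class DependencyOrdering {
    static int compute() {
        int a = 1;      // (1)
        int b = 2;      // (2) independent of (1): the hardware may evaluate (1) and (2) in either order
        return a + b;   // (3) depends on both (1) and (2), so it is never computed before them;
                        // within this single thread the result always looks as if execution were sequential
    }
}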

2. The Java memory model

Java defines a memory model to shield programs from the memory-access differences of various hardware and operating systems, so that Java programs behave consistently on every platform. Defining the Java memory model is not easy: the model must be rigorous enough that concurrent memory access in Java is unambiguous, yet loose enough that virtual machine implementations have room to exploit hardware features (registers, caches, and so on) for better execution speed. After a long period of validation and patching, the Java memory model matured with the release of JDK 1.5.

① Main memory and working memory

The main goal of the Java memory model is to define the access rules for variables in a program, that is, the low-level details of how the virtual machine stores variables into memory and retrieves them from memory. The variables here are not quite the same as the variables of Java programming: they include instance fields, static fields, and the elements that make up array objects, but not local variables and method parameters, which are thread-private and never shared. The sketch below illustrates the distinction.
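
As a rough sketch (hypothetical class, purely illustrative) of which variables the model does and does not cover:

public class SharedVsPrivate {
    static int staticField;              // shared: covered by the Java memory model
    int instanceField;                   // shared: covered by the Java memory model
    int[] elements = new int[8];         // the array elements are shared as well

    void compute(int param) {            // method parameter: thread-private
        int local = param + 1;           // local variable: thread-private, lives on the thread's stack
        elements[0] = local;             // only this write to a shared array element is governed by the model
    }
}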

The Java memory model specifies that all variables are stored in main memory (the name matches main memory in physical hardware, but here it refers only to a portion of virtual machine memory), and that each thread has its own working memory (analogous to the processor cache discussed earlier). A thread's working memory holds copies of the main-memory variables that the thread uses, and all of the thread's operations on variables (reads, assignments, and so on) must be performed in working memory rather than directly on main memory. Different threads cannot access each other's working memory directly; passing variable values between threads must go through main memory. The interaction between threads, working memory, and main memory closely resembles the processor, cache, and main memory relationship described above. The sketch after this paragraph shows the kind of visibility problem this copy-based scheme can cause.
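
A minimal sketch of that visibility problem (hypothetical class name; the behavior is JVM- and JIT-dependent, so the worker may or may not hang, and declaring stop volatile removes the problem):

public class StaleFlag {
    private static boolean stop = false;   // plain field: each thread operates on its own working copy

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!stop) {
                // busy loop: without synchronization the worker may keep reading a stale
                // working-memory copy of stop and never observe the main thread's update
            }
            System.out.println("worker saw stop == true");
        });
        worker.start();
        Thread.sleep(1000);
        stop = true;      // updated in the main thread's working memory, then written back to main memory
        worker.join();    // may block forever on some JVMs; volatile (or synchronization) fixes it
    }
}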

Note that main memory and working memory here are not the same level of memory partitioning as the Java heap, stack, and method area of the Java runtime data areas.

② Inter-memory interaction operations

For the specific protocol by which main memory and working memory interact, that is, the details of how a variable is copied from main memory into working memory and how it is synchronized from working memory back to main memory, the Java memory model defines the following eight operations, and every virtual machine implementation must guarantee that each of them is atomic (the sketch after the list maps them onto a simple increment of a shared field):

    • lock: acts on a main-memory variable; marks the variable as exclusively owned by one thread.
    • unlock: acts on a main-memory variable; releases a variable that is in the locked state so that other threads can lock it.
    • read: acts on a main-memory variable; transfers the variable's value from main memory into the thread's working memory for the subsequent load operation to use.
    • load: acts on a working-memory variable; puts the value obtained by the read operation into the working-memory copy of the variable.
    • use: acts on a working-memory variable; passes the variable's value in working memory to the execution engine. It is performed whenever the virtual machine encounters a bytecode instruction that needs the variable's value.
    • assign: acts on a working-memory variable; assigns a value received from the execution engine to the variable in working memory. It is performed whenever the virtual machine encounters a bytecode instruction that assigns to the variable.
    • store: acts on a working-memory variable; transfers the variable's value from working memory to main memory for the subsequent write operation to use.
    • write: acts on a main-memory variable; puts the value obtained by the store operation from working memory into the main-memory variable.
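
As a rough, purely conceptual sketch (the mapping is illustrative only; a real JIT compiler works on machine code rather than on these abstract operations), here is how an increment of a shared field could be described in terms of the eight operations:

public class OperationMapping {
    private int sum;

    void increment() {
        // sum = sum + 1 on the shared field roughly corresponds to:
        //   read   - transfer the value of sum from main memory toward this thread's working memory
        //   load   - place the transferred value into the working-memory copy of sum
        //   use    - hand the working copy to the execution engine for the addition
        //   assign - put the result from the execution engine back into the working copy
        //   store  - transfer the working copy's value toward main memory
        //   write  - update sum in main memory with the stored value
        sum = sum + 1;
    }
}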

To copy a variable from main memory into working memory, the read and load operations must be performed in order; to synchronize a variable from working memory back to main memory, the store and write operations must be performed in order. The Java memory model only requires that these operations be performed sequentially, not consecutively: other instructions may be inserted between read and load, or between store and write. For example, when accessing variables a and b in main memory, a possible order is read a, read b, load b, load a. The Java memory model also stipulates that the following rules must be obeyed when performing the eight basic operations above:

    • Neither read and load nor store and write may appear on its own: a value read from main memory must be accepted by working memory, and a value stored from working memory must be accepted by main memory.
    • A thread is not allowed to discard its most recent assign operation; that is, after a variable has been changed in working memory it must be synchronized back to main memory.
    • A thread is not allowed to synchronize data from working memory back to main memory for no reason (without any assign having occurred).
    • A new variable can only be created in main memory; working memory is not allowed to use a variable that has not been initialized (by load or assign). In other words, assign or load must have been performed on a variable before use or store is performed on it.
    • A variable may be locked by only one thread at a time, and lock and unlock must appear in pairs.
    • Performing a lock operation on a variable clears the working-memory copy of that variable, so before the execution engine uses the variable, a load or assign operation must be performed again to re-initialize its value.
    • A variable that has not been locked by a lock operation may not be unlocked, and a thread may not unlock a variable locked by another thread.
    • Before performing unlock on a variable, the variable must first be synchronized back to main memory (by performing store and write).

③ Special rules for volatile variables

volatile is the lightest-weight synchronization mechanism the Java virtual machine provides. When a variable is declared volatile it gains two properties. The first is visibility to all threads: when one thread modifies the variable's value, the new value is immediately visible to other threads. Ordinary variables cannot do this; their values are passed between threads through main memory. For example, if thread A modifies an ordinary variable and writes it back to main memory, thread B will only see the new value after A's write-back has completed and B has re-read the variable from main memory. However, visibility does not make compound operations on a volatile variable atomic, so volatile alone cannot guarantee correctness when the operation is not atomic (a sketch of this pitfall follows the singleton example below). You should use this keyword only in the following cases:

    • The write to the variable does not depend on its current value (for example, in a = 0; a = a + 1; the new value of a is computed from its current value, so this does not qualify), or you can ensure that only a single thread ever modifies the variable.
    • The variable is not part of an invariant together with other state variables. (volatile can guarantee safe access when the variable stands on its own, as in the double-checked singleton pattern below, but once it is entangled with other state variables, concurrent behavior becomes unpredictable and correctness is no longer guaranteed.)

/**
 * Singleton based on double-checked locking
 */
public class Singleton {
    private volatile static Singleton instance;

    public static Singleton getInstance() {
        if (instance == null) {
            synchronized (Singleton.class) {
                if (instance == null) {
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }

    public static void main(String[] args) {
        Singleton.getInstance();
    }
}
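
By contrast, here is a minimal sketch (hypothetical class name) of the case volatile cannot handle: a compound read-modify-write such as ++ is not atomic, so increments can be lost even on a volatile field, while an AtomicInteger (or a synchronized block) keeps the count correct:

import java.util.concurrent.atomic.AtomicInteger;

public class VolatileNotAtomic {
    private static volatile int volatileCount = 0;              // visible to all threads, but ++ is not atomic
    private static final AtomicInteger atomicCount = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[10];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 10000; j++) {
                    volatileCount++;                             // read-modify-write: interleavings lose updates
                    atomicCount.incrementAndGet();               // atomic increment
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        // volatileCount is frequently less than 100000; atomicCount is always exactly 100000
        System.out.println(volatileCount + " vs " + atomicCount.get());
    }
}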

The other property of volatile is that it forbids instruction reordering optimization. An ordinary variable only guarantees that every place which depends on an assignment's result obtains the correct value during the execution of the method; it does not guarantee that the order of variable assignments matches the order in which they appear in the program code. Here is an example:

Map configOptions;
char[] configText;
// This variable must be declared volatile
volatile boolean initialized = false;

// Suppose the following code runs in thread A: it reads the configuration, and once
// reading is complete it sets initialized to true to tell other threads that the
// configuration can be used
configOptions = new HashMap();
configText = readConfigFile(fileName);
processConfigOptions(configText, configOptions);
initialized = true;

// Suppose the following code runs in thread B: it waits until initialized is true,
// which means thread A has finished initializing the configuration
while (!initialized) {
    sleep();
}
// Use the configuration information initialized by thread A
doSomethingWithConfig();

If initialized were not declared volatile, the last statement in thread A, "initialized = true", might be executed early because of instruction reordering, and the code in thread B that uses the configuration could then run on uninitialized data and fail.

So the strength of volatile is precisely that it prevents this: it sacrifices a little performance but greatly improves the program's reliability. But do not lean on volatile blindly; use it only when the conditions above are met. If they are not, fall back to the traditional synchronized keyword to synchronize access to shared variables and guarantee correctness (the performance of synchronized keeps improving as the JVM improves and will gradually approach that of volatile).

The Java memory model defines the following special rules for volatile variables:

    • Each time a thread uses a volatile variable, it must refresh its working-memory copy with the latest value from main memory, which guarantees that it sees modifications made to the variable by other threads.
    • Each time a thread modifies a volatile variable in working memory, it must immediately synchronize the new value back to main memory, which guarantees that other threads can see its modification.
    • A volatile variable is not subject to instruction reordering optimization, which guarantees that the code executes in the same order as the program source.

④ Special rules for long and double variables

For the 64-bit data types long and double, the model defines a deliberately relaxed rule: a virtual machine is allowed to split reads and writes of 64-bit data that is not declared volatile into two 32-bit operations, which means it is not required to guarantee the atomicity of the load, store, read, and write operations for these types. A sketch of what this permits follows.
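
A minimal sketch of the relaxed rule (hypothetical class name; on mainstream 64-bit JVMs you are very unlikely to observe tearing, and declaring the field volatile rules it out entirely):

public class TornLongRead {
    // Not volatile: a 32-bit virtual machine is allowed to write the two halves separately
    private static long value = 0L;

    public static void main(String[] args) {
        Thread writerA = new Thread(() -> { while (true) { value = 0L; } });
        Thread writerB = new Thread(() -> { while (true) { value = -1L; } });
        writerA.setDaemon(true);
        writerB.setDaemon(true);
        writerA.start();
        writerB.start();

        long deadline = System.currentTimeMillis() + 3000;
        while (System.currentTimeMillis() < deadline) {
            long observed = value;
            if (observed != 0L && observed != -1L) {
                // a "torn" value: the high half of one write combined with the low half of the other
                System.out.println("torn read observed: " + Long.toHexString(observed));
                return;
            }
        }
        System.out.println("no torn read observed (the usual outcome on 64-bit JVMs)");
    }
}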

⑤ Atomicity, visibility, and ordering
    • Atomicity: the variable operations whose atomicity is guaranteed directly by the Java memory model are read, load, assign, use, store, and write, so we can roughly assume that reads and writes of the basic data types are atomic (long and double excepted, as above). For larger-grained atomicity, Java provides synchronized blocks via the synchronized keyword, so operations inside synchronized blocks are also atomic; internally this is implemented with the bytecode instructions monitorenter and monitorexit (see the sketch after this list).

    • Visibility: when one thread modifies the value of a shared variable, other threads can immediately learn of the change. The Java memory model achieves visibility by synchronizing the new value back to main memory after the variable is modified and refreshing the value from main memory before the variable is read. The keywords synchronized and final also guarantee visibility. For synchronized, this is because a shared variable must be synchronized back to main memory before the unlock operation is performed on it. For final, a field's value becomes visible to other threads once the constructor has initialized it, provided the constructor does not let the this reference escape.

    • Ordering: the synchronized and volatile keywords are used to guarantee the ordering of operations between threads. volatile itself carries the semantics of forbidding instruction reordering, while synchronized works because a variable allows only one thread to lock it at any moment, which means two synchronized blocks holding the same lock can only be entered serially.
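
A minimal sketch (hypothetical class name) of the synchronized block mentioned above: the compiler wraps the block in monitorenter and monitorexit instructions, which makes the compound increment atomic with respect to other threads contending for the same lock:

public class SynchronizedCounter {
    private static int count = 0;
    private static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 10000; i++) {
                synchronized (lock) {   // compiled to monitorenter ... monitorexit around the block
                    count++;            // the read-modify-write is now atomic with respect to the other thread
                }
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(count);      // always prints 20000
    }
}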

⑥ The happens-before principle

If all ordering in the Java memory model had to be established with volatile and synchronized alone, many operations would become very cumbersome. Fortunately there is another key principle in the Java memory model, the happens-before principle, which serves as the basis for judging whether a data race exists and whether code is thread-safe. Its rules are listed below, followed by a small sketch of the thread start and thread termination rules.

    • Program order rule: within a single thread, an operation that comes earlier in program order happens-before an operation that comes later (more precisely, in control-flow order, since branches and loops must be taken into account).
    • Monitor lock rule: an unlock operation on a lock happens-before a subsequent lock operation on the same lock.
    • volatile variable rule: a write to a volatile variable happens-before a subsequent read of that variable.
    • Thread start rule: the Thread.start() call on a thread happens-before every action of the started thread.
    • Thread termination rule: all operations in a thread happen-before the detection of that thread's termination; termination can be detected by Thread.join() returning or by Thread.isAlive() returning false.
    • Thread interruption rule: the call to a thread's interrupt() method happens-before the interrupted thread's code detecting the interruption.
    • Object finalization rule: the completion of an object's initialization (the end of its constructor) happens-before the start of its finalize() method.
    • Transitivity: if A happens-before B and B happens-before C, then A happens-before C.
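
A small sketch (hypothetical class name) of the thread start and thread termination rules: the write performed before start() is visible inside the started thread, and the write performed inside the thread is visible after join() returns, even though the field is neither volatile nor synchronized:

public class StartJoinHappensBefore {
    private static int data = 0;   // deliberately not volatile and not synchronized

    public static void main(String[] args) throws InterruptedException {
        data = 42;                 // happens-before every action of worker, by the thread start rule
        Thread worker = new Thread(() -> {
            System.out.println("worker sees " + data);   // guaranteed to print 42
            data = 99;
        });
        worker.start();
        worker.join();             // worker's writes happen-before join() returning, by the termination rule
        System.out.println("main sees " + data);          // guaranteed to print 99
    }
}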

Next, let's look at the difference between "happening earlier in time" and "happens-before" with the following example.

private int value = 0;

public void setValue(int value) {
    this.value = value;
}

public int getValue() {
    return value;
}

Suppose threads A and B exist, thread A calls setValue(1) first (earlier in time), and thread B then calls getValue() on the same object. What value does thread B get back?

Analysis: there is no synchronized block, so the monitor lock rule does not apply; value is not declared volatile, so the volatile variable rule does not apply; and the thread start, termination, and interruption rules and the object finalization rule are irrelevant here, leaving transitivity nothing to chain together. We therefore cannot establish a happens-before relationship between the two operations even though A ran first in time, and the code is not thread-safe.

How do we fix it? We can declare the getter and setter as synchronized methods, which brings the monitor lock rule into play, or declare value as a volatile variable, which is acceptable because the setter's modification of value does not depend on its previous value and thus satisfies the conditions for using volatile. Both options are sketched below.
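
Both fixes as a minimal sketch (hypothetical class names):

// Option 1: volatile is sufficient here because setValue() does not depend on the old value
public class VolatileValueHolder {
    private volatile int value = 0;

    public void setValue(int value) { this.value = value; }
    public int getValue()           { return value; }
}

// Option 2: synchronized getter and setter, relying on the monitor lock rule instead
class SynchronizedValueHolder {
    private int value = 0;

    public synchronized void setValue(int value) { this.value = value; }
    public synchronized int getValue()           { return value; }
}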

3. Java and threads

A thread, sometimes called a lightweight process, is the basic unit of scheduling in most modern operating systems. Multiple threads in the same process share the process's memory space, so adequate synchronization is required for correct access, while each thread has its own program counter, stack, and local variables. Java uses preemptive thread scheduling: execution time is allocated by the operating system and cannot be determined by the thread itself (Java only provides Thread.yield() to give up the current time slice; there is no operation for a thread to actively claim execution time). Although scheduling is performed by the system, we can still "suggest" that the operating system give certain threads more execution time by setting thread priorities, though this by no means guarantees that high-priority threads run first. A small sketch follows.
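
A minimal sketch (hypothetical class name) of the two scheduling hints mentioned above; neither gives any guarantee about when or for how long the thread actually runs:

public class SchedulingHints {
    public static void main(String[] args) {
        Thread worker = new Thread(() -> {
            for (int i = 0; i < 5; i++) {
                System.out.println("worker step " + i);
                Thread.yield();                       // hint: willing to give up the rest of the time slice
            }
        });
        worker.setPriority(Thread.MAX_PRIORITY);      // only a suggestion to the operating system scheduler
        worker.start();
    }
}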

Java defines six thread states, and a thread can be in exactly one of them at any given moment (they correspond to the java.lang.Thread.State enum): New (created but not yet started), Runnable (executing or waiting for the operating system to allocate CPU time), Waiting (waiting indefinitely to be woken by another thread), Timed Waiting (waiting, but woken automatically after a timeout), Blocked (waiting to acquire an exclusive lock), and Terminated (finished executing).
