(Chapter 3) Java Memory Model (Part 2)

Source: Internet
Author: User
Tags: visibility

1. Volatile memory semantics

  1.1 Characteristics of volatile

A good way to understand volatile is to treat a single read or write of a volatile variable as if that individual read or write were synchronized with the same lock. The code below gives a concrete example:

    class VolatileFeaturesExample {
        volatile long vl = 0L;        // declare a 64-bit long variable as volatile

        public void set(long l) {
            vl = l;                   // write of a single volatile variable
        }

        public void getAndIncrement() {
            vl++;                     // compound (multiple) read/write of a volatile variable
        }

        public long get() {
            return vl;                // read of a single volatile variable
        }
    }

Assume that multiple threads call the three methods of the program above. This program is semantically equivalent to the following one.

    class VolatileFeaturesExample {
        long vl = 0L;                            // 64-bit long ordinary variable

        public synchronized void set(long l) {   // write of a single ordinary variable, synchronized on the same lock
            vl = l;
        }

        public void getAndIncrement() {          // ordinary method invocation
            long temp = get();                   // call the synchronized read method
            temp += 1L;                          // ordinary write operation
            set(temp);                           // call the synchronized write method
        }

        public synchronized long get() {         // read of a single ordinary variable, synchronized on the same lock
            return vl;
        }
    }

As the example programs show, a single read or write of a volatile variable has the same effect as a read or write of an ordinary variable synchronized with the same lock.

The happens-before rule for locks guarantees memory visibility between the thread that releases a lock and the thread that acquires it. This means that a read of a volatile variable always sees (from any thread) the last write to that volatile variable.

The semantics of locks determine that the execution of critical-section code is atomic. This means that even for a 64-bit long or double variable, as long as it is declared volatile, reads and writes of that variable are atomic. For multiple volatile operations, or a compound operation such as volatile++, however, the operation as a whole is not atomic.

In short, the volatile variable itself has the following characteristics:

Visibility: a read of a volatile variable always sees (from any thread) the last write to that volatile variable.

Atomicity: a read or write of any single volatile variable is atomic, but a compound operation such as volatile++ is not atomic.
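The non-atomicity of volatile++ can be demonstrated directly. In the sketch below (class and variable names are illustrative, not from the original text), ten threads each increment a volatile counter 10,000 times; because ++ is a read-modify-write sequence, increments can be lost. An AtomicLong performing the same increments is always exact.

```java
import java.util.concurrent.atomic.AtomicLong;

public class VolatileIncrementDemo {
    // volatile guarantees visibility of each read/write, but ++ is still
    // three steps (read, add, write) and threads can interleave between them
    static volatile long volatileCount = 0L;
    static final AtomicLong atomicCount = new AtomicLong();

    public static void main(String[] args) throws InterruptedException {
        final int THREADS = 10, ITERS = 10_000;
        Thread[] workers = new Thread[THREADS];
        for (int t = 0; t < THREADS; t++) {
            workers[t] = new Thread(() -> {
                for (int i = 0; i < ITERS; i++) {
                    volatileCount++;               // compound operation: not atomic
                    atomicCount.incrementAndGet(); // CAS-based atomic increment
                }
            });
            workers[t].start();
        }
        for (Thread w : workers) w.join();

        System.out.println("volatile total: " + volatileCount + " (may be < 100000)");
        System.out.println("atomic total:   " + atomicCount.get());
    }
}
```

The atomic total is always 100,000; the volatile total is often smaller, making the lost updates visible.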

  1.2 The happens-before relationship established by volatile write-read

The above are the characteristics of volatile itself. For programmers, however, the effect of volatile on the memory visibility between threads matters more than those characteristics, and deserves more of our attention.

Starting with JSR-133 (that is, with JDK 5), the write and subsequent read of a volatile variable can be used for communication between threads.

From the perspective of memory semantics, volatile write-read has the same memory effect as lock release-acquire: a volatile write has the same memory semantics as a lock release, and a volatile read has the same memory semantics as a lock acquisition.

Sample code that uses volatile variables:

    class VolatileExample {
        int a = 0;
        volatile boolean flag = false;

        public void writer() {
            a = 1;              // 1
            flag = true;        // 2
        }

        public void reader() {
            if (flag) {         // 3
                int i = a * a;  // 4
            }
        }
    }

Assume thread A executes the writer() method and thread B executes the reader() method. According to the happens-before rules, the happens-before relationships established in this process fall into three categories:

1. By the program-order rule, 1 happens-before 2, and 3 happens-before 4.

2. By the volatile rule, 2 happens-before 3.

3. By the transitivity rule for happens-before, 1 happens-before 4.

A graphical representation of the happens-before relationships above:

[Figure: happens-before relationships among operations 1-4]

Here thread A writes a volatile variable and thread B reads the same volatile variable. All shared variables that were visible to thread A before it wrote the volatile variable become visible to thread B immediately after thread B reads that volatile variable.
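A minimal driver (assumed here, not part of the original text) makes this guarantee observable. Thread B spins in reader() until it sees flag == true; by the volatile rule (2 happens-before 3) plus transitivity (1 happens-before 4), it is then guaranteed to see a == 1.

```java
public class VolatileExampleDemo {
    int a = 0;
    volatile boolean flag = false;
    static int result = -1;   // written by thread B, checked after join()

    public void writer() {
        a = 1;                // 1: ordinary write
        flag = true;          // 2: volatile write publishes a = 1
    }

    public int reader() {
        while (!flag) { }     // 3: volatile read; spin until the write is visible
        return a * a;         // 4: guaranteed to see a == 1, so returns 1
    }

    public static void main(String[] args) throws InterruptedException {
        VolatileExampleDemo ex = new VolatileExampleDemo();
        Thread threadB = new Thread(() -> result = ex.reader());
        Thread threadA = new Thread(ex::writer);
        threadB.start();
        threadA.start();
        threadA.join();
        threadB.join();
        System.out.println("thread B computed a * a = " + result);
    }
}
```

Note that without volatile on flag, the spin loop in reader() could legally run forever, since nothing would force the write of flag to become visible to thread B.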

  1.3 Memory semantics of volatile writes and reads

The memory semantics of a volatile write are as follows: when a volatile variable is written, the JMM flushes the shared variables in the thread's local memory to main memory.

Take the VolatileExample program above. Assume thread A first executes the writer() method and thread B then executes the reader() method, with flag and a in both threads' local memory initially in their starting state. The figure below shows the state of the shared variables after thread A performs the volatile write:

[Figure: state of the shared variables after thread A writes the volatile variable flag]

After thread A writes the flag variable, the values of the two shared variables that thread A updated in local memory A are flushed to main memory. At this point the values of the shared variables in local memory A and in main memory agree.

The memory semantics of a volatile read are as follows:

When a volatile variable is read, the JMM marks the thread's local memory as invalid, and the thread next reads the shared variables from main memory.

The figure below shows the state of the shared variables after thread B reads the same volatile variable:

[Figure: state of the shared variables after thread B reads the volatile variable flag]

As the figure shows, the values held in local memory B are set to invalid after the flag variable is read. Thread B must then read the shared variables from main memory, and thread B's read causes local memory B to be updated with the values of the shared variables in main memory.

Combining the volatile write and volatile read steps: after reading thread B reads a volatile variable, the values of all shared variables that were visible to writing thread A before it wrote the volatile variable immediately become visible to thread B.

The following is a summary of the memory semantics of volatile writes and volatile reads:

> Thread A writing a volatile variable is, in essence, thread A sending a message (its modifications to shared variables) to any thread that will subsequently read that volatile variable.

> Thread B reading a volatile variable is, in essence, thread B receiving the message (the modifications made to shared variables before the volatile write) sent by some previous thread.

> Thread A writing a volatile variable and thread B then reading that volatile variable is, in essence, thread A sending a message to thread B through main memory.

  1.4 Implementation of volatile memory semantics

Reordering divides into compiler reordering and processor reordering. To implement volatile memory semantics, the JMM restricts both kinds of reordering with the following rules:

1. When the second operation is a volatile write, no matter what the first operation is, it cannot be reordered. This rule ensures that operations before a volatile write are not reordered by the compiler to after the volatile write.

2. When the first operation is a volatile read, no matter what the second operation is, it cannot be reordered. This rule ensures that operations after a volatile read are not reordered by the compiler to before the volatile read.

3. When the first operation is a volatile write and the second operation is a volatile read, they cannot be reordered.

To implement volatile memory semantics, when generating bytecode the compiler inserts memory barriers into the instruction sequence to forbid particular kinds of processor reordering. It is nearly impossible for the compiler to find an optimal placement that minimizes the total number of barriers, so the JMM adopts a conservative strategy. The conservative JMM barrier-insertion strategy is as follows:

Insert a StoreStore barrier before each volatile write operation.

Insert a StoreLoad barrier after each volatile write operation.

Insert a LoadLoad barrier after each volatile read operation.

Insert a LoadStore barrier after each volatile read operation.
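The conservative strategy can be sketched in source form. The barrier comments below mark conceptual positions only; Java source contains no explicit barriers, and this is not actual compiler output.

```java
public class VolatileBarrierSketch {
    int x = 0;                 // ordinary variable
    volatile int v = 0;        // volatile variable

    public void write() {
        x = 1;                 // ordinary write
                               // -- StoreStore barrier: the store to x completes before the store to v
        v = 2;                 // volatile write
                               // -- StoreLoad barrier: the store to v completes before any later load
    }

    public int read() {
        int i = v;             // volatile read
                               // -- LoadLoad barrier: later loads cannot move above the read of v
                               // -- LoadStore barrier: later stores cannot move above the read of v
        int j = x;             // ordinary read: sees x == 1 whenever i == 2
        return i + j;
    }

    public static void main(String[] args) {
        VolatileBarrierSketch s = new VolatileBarrierSketch();
        s.write();
        System.out.println(s.read());  // single-threaded driver: prints 3
    }
}
```

The StoreLoad barrier is the most expensive on most processors, which is one reason the placement can be optimized per platform, as discussed below.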

The barrier-insertion strategy described above is very conservative, but it guarantees correct volatile memory semantics on any processor platform, for any program.

Because different processors have memory models of different "strength", the insertion of memory barriers can be further optimized for a specific processor's memory model.

  1.5 Why JSR-133 strengthened volatile memory semantics

In the old Java memory model before JSR-133, although reordering between volatile variables themselves was not allowed, the old model did allow reordering between volatile variables and ordinary variables.

In the old memory model, volatile write-read did not have the memory semantics of lock release-acquire. To provide an inter-thread communication mechanism more lightweight than locks, the JSR-133 expert group decided to strengthen the memory semantics of volatile: strictly restrict compiler and processor reordering between volatile variables and ordinary variables, ensuring that volatile write-read has the same memory semantics as lock release-acquire. Under the compiler reordering rules and the processor barrier-insertion strategy, any reordering between a volatile variable and an ordinary variable that could break volatile memory semantics is forbidden.

Note that volatile only guarantees that reads and writes of a single volatile variable are atomic, whereas the mutual exclusion of a lock guarantees that the entire critical section executes atomically. In functionality, locks are more powerful than volatile; in scalability and execution performance, volatile has the advantage. But caution is necessary if you want to replace a lock with volatile in your program.
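One case where volatile can safely replace a lock is a single-writer status flag with no compound operations (the pattern below is an illustrative sketch, not taken from the original text): the worker only reads the flag, and the controlling thread performs a single one-step volatile write.

```java
public class VolatileShutdownFlag {
    // A single volatile variable, written in single steps only, so the
    // atomicity limitation of volatile does not apply here.
    private volatile boolean running = true;

    public void requestStop() {
        running = false;         // one atomic volatile write
    }

    public long workUntilStopped() {
        long iterations = 0;
        while (running) {        // volatile read: the stop request becomes visible promptly
            iterations++;        // thread-local state only; no shared compound operation
        }
        return iterations;
    }

    public static void main(String[] args) throws InterruptedException {
        VolatileShutdownFlag f = new VolatileShutdownFlag();
        Thread worker = new Thread(
            () -> System.out.println("iterations: " + f.workUntilStopped()));
        worker.start();
        Thread.sleep(50);        // let the worker run briefly
        f.requestStop();
        worker.join();           // terminates: the volatile read sees running == false
    }
}
```

If the flag also had to be checked and then updated as one step (test-and-set), volatile would no longer suffice and a lock or an atomic class would be required.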
