(Repost) Java Threads and the Java Memory Model: A Summary

Source: Internet
Author: User
Tags: flushes

Java concurrency uses the shared-memory model (not the message-passing model): the shared state of the program is held in memory common to all threads, and threads communicate implicitly by writing and reading that shared state. There is no direct data exchange between threads; they can interact only through shared variables.

Synchronization is performed explicitly: the programmer must explicitly mark the methods or blocks of code that require mutually exclusive execution between threads.
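As a minimal sketch (class and method names here are illustrative, not from the original text), explicit synchronization in Java is expressed with the synchronized keyword, either on a whole method or on just a block:

```java
public class Counter {
    private int value = 0;
    private final Object lock = new Object();

    // Explicitly marks the whole method as mutually exclusive
    // on this Counter instance's monitor.
    public synchronized void increment() {
        value++;
    }

    // Equivalent explicit marking of just the critical section.
    public void decrement() {
        synchronized (lock) {
            value--;
        }
    }

    public synchronized int get() {
        return value;
    }

    public static void main(String[] args) {
        Counter c = new Counter();
        c.increment();
        c.increment();
        c.decrement();
        System.out.println(c.get()); // prints 1
    }
}
```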

1. Multi-threaded communication

1.1 Memory model

Communication between Java threads is governed by the Java Memory Model (JMM), which determines when one thread's write to a shared variable becomes visible to another thread.

From an abstract point of view, the JMM defines the relationship between threads and main memory: shared variables are stored in main memory, and each thread has a private local memory holding a copy of the shared variables that the thread reads and writes. Local memory is an abstraction of the JMM, not something real; it covers caches, write buffers, registers, and other hardware and compiler optimizations. The Java memory model is abstracted as follows:

Steps for inter-thread communication:

    1. First, thread A flushes the updated shared variables in its local memory A to main memory.
    2. Then, thread B goes to main memory to read the shared variables that thread A has updated.

    • Local memories A and B each hold a copy of the shared variable x in main memory.
    • Suppose that initially the value of x in all three memories is 0. As thread A executes, it temporarily stores the updated value of x (say, 1) in its own local memory A.
    • When thread A and thread B need to communicate (triggered implicitly), thread A first flushes the modified value of x from its local memory into main memory, so the value of x in main memory becomes 1.
    • Then thread B goes to main memory to read the value of x updated by thread A, at which point the value of x in thread B's local memory also becomes 1.

Overall, these two steps essentially amount to thread A sending a message to thread B, with the communication passing through main memory. By controlling the interaction between main memory and each thread's local memory, the JMM provides Java programmers with memory-visibility guarantees.

1.2 Visibility and ordering

For example, suppose an object of a Count class is shared among multiple threads. The object is created in main memory (heap memory), and each thread has its own local memory (its thread stack) holding a copy of the main-memory count object. When a thread operates on the Count object, it first copies the object from main memory into its working memory, then executes count.count(), which changes the num value, and finally flushes the working-memory count back to main memory.

When an object has replicas in more than one memory, if one thread modifies the shared variable, the other threads should be able to see the modified value; this is visibility.

An assignment is not an atomic operation. When multiple threads execute, the CPU schedules them arbitrarily, and we cannot know at which step the current thread will be switched out for the next. The classic example is the bank-account problem: an account holds 100 yuan; one person withdraws 10 yuan from the account while, at the same time, another person deposits 10 yuan, so the balance should still be 100. The following can happen: thread A handles the withdrawal and thread B the deposit. A reads 100 from main memory, and B also reads 100 from main memory. A subtracts 10 and flushes the result, so main memory holds 100 - 10 = 90. B then adds 10 to its stale copy and flushes, so main memory ends up holding 100 + 10 = 110. This is obviously a serious problem. We want threads A and B to execute in order (first the withdrawal and then the deposit, or first the deposit and then the withdrawal); this is ordering.
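The lost-update interleaving described above can be forced deterministically by performing both stale reads before either write, as in this illustrative sketch (class and variable names are hypothetical):

```java
public class LostUpdateDemo {
    static int balance = 100; // the shared account balance in "main memory"

    public static void main(String[] args) {
        // Both threads read the balance into their "local memory" first...
        int copyA = balance; // thread A (withdrawal) reads 100
        int copyB = balance; // thread B (deposit) reads 100 - a stale copy

        // ...then each flushes its own result back, B overwriting A's update.
        balance = copyA - 10; // A flushes: 100 - 10 = 90
        balance = copyB + 10; // B flushes: 100 + 10 = 110, A's withdrawal is lost

        System.out.println(balance); // prints 110, not the correct 100
    }
}
```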

1.3 Synchronized and volatile

A thread executes mutually exclusive code as follows:

    1. Acquire the synchronization lock;
    2. Clear working memory;
    3. Copy the object from main memory into working memory;
    4. Execute the code (computation, output, etc.);
    5. Flush the updated data back to main memory;
    6. Release the synchronization lock.

Therefore, synchronized guarantees not only the ordering of concurrent threads but also their memory visibility.
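Applying this to the earlier account example, here is a sketch (class and method names are illustrative) in which both operations are mutually exclusive on the same monitor, so the lost update cannot occur:

```java
public class Account {
    private int balance = 100;

    // Each thread must acquire the monitor, so the read-modify-write
    // executes as a unit and is flushed before the lock is released.
    public synchronized void withdraw(int amount) {
        balance -= amount;
    }

    public synchronized void deposit(int amount) {
        balance += amount;
    }

    public synchronized int balance() {
        return balance;
    }

    public static void main(String[] args) throws InterruptedException {
        Account account = new Account();
        Thread a = new Thread(() -> account.withdraw(10));
        Thread b = new Thread(() -> account.deposit(10));
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(account.balance()); // always 100, in either order
    }
}
```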

volatile is the second synchronization mechanism in Java multithreading. According to the JLS, a variable may be declared volatile, in which case the memory model ensures that all threads see a consistent value for that variable.

    class Test {
        static volatile int i = 0, j = 0;

        static void one() {
            i++;
            j++;
        }

        static void two() {
            System.out.println("i=" + i + " j=" + j);
        }
    }

With volatile, changes to the shared variables i and j are reflected to main memory immediately, so every read sees an up-to-date value; but we cannot guarantee at which point during the increments another thread reads i and j, so the printing method may still observe them mid-update. volatile therefore guarantees memory visibility, but it does not guarantee ordering of concurrent execution.

If there were no volatile, the code would execute as follows:

    1. Copy the variable i from main memory to working memory;
    2. Change the value of i;
    3. Flush i back to main memory;
    4. Copy the variable j from main memory to working memory;
    5. Change the value of j;
    6. Flush j back to main memory.

2. Reordering

The JMM is a language-level memory model that guarantees programmers consistent memory visibility across different compilers and different processor platforms by prohibiting certain types of compiler and processor reordering.

For compiler reordering, the JMM's compiler reordering rules prohibit specific types of compiler reordering (not all compiler reorderings are prohibited).

For processor reordering, the JMM's processor reordering rules require the Java compiler, when generating instruction sequences, to insert specific types of memory barrier instructions (Intel calls them fences). A memory barrier instruction prevents specific types of processor reordering (not all processor reorderings are prohibited).

Extended:

To improve performance, compilers and processors often reorder instructions when executing a program. There are three kinds of reordering:

    1. Compiler-optimization reordering. The compiler can rearrange the execution order of statements without changing the semantics of a single-threaded program.
    2. Instruction-level parallelism reordering. Modern processors use instruction-level parallelism (Instruction-Level Parallelism, ILP) to overlap the execution of multiple instructions. If there is no data dependency, the processor can change the order in which the machine instructions corresponding to statements execute.
    3. Memory-system reordering. Because the processor uses caches and read/write buffers, load and store operations can appear to execute out of order.


Item 1 above is compiler reordering; items 2 and 3 are processor reordering. All of these reorderings can cause memory-visibility problems in multithreaded programs.

2.1 Data Dependencies

If two operations access the same variable and at least one of them is a write, there is a data dependency between the two operations. Data dependencies fall into the following three types:

    Name               Code example    Description
    Read after write   a = 1; b = a;   After writing a variable, read it.
    Write after write  a = 1; a = 2;   After writing a variable, write it again.
    Write after read   a = b; b = 1;   After reading a variable, write it.

In all three cases, reordering the execution order of the two operations changes the result of the program.

As mentioned earlier, the compiler and the processor may reorder operations. When they do, they observe data dependencies: the compiler and processor do not change the execution order of two operations that have a data dependency.

Note that the data dependencies discussed here apply only to instruction sequences executed on a single processor and operations performed in a single thread; data dependencies between different processors or between different threads are not considered by the compiler and processor.

2.2 As-if-serial semantics

As-if-serial semantics means: no matter how the compiler and processor reorder (to improve parallelism), the execution result of a (single-threaded) program must not change. Compilers, runtimes, and processors must all adhere to as-if-serial semantics.

Example

    double pi = 3.14;          // A
    double r = 1.0;            // B
    double area = pi * r * r;  // C
The data dependencies for the above three operations are as follows:

As shown, there is a data dependency between A and C, and a data dependency between B and C. So in the final instruction sequence, C cannot be reordered before A or B (moving C before A and B would change the program's result). However, there is no data dependency between A and B, so the compiler and processor can reorder the execution order of A and B. The program therefore has two possible execution orders:

As-if-serial semantics protects single-threaded programs. Compilers, runtimes, and processors that adhere to as-if-serial semantics together create an illusion for programmers writing single-threaded programs: the program executes in program order. As-if-serial semantics frees single-threaded programmers from having to worry about reordering or memory-visibility problems.
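A runnable sketch of the circle-area example (variable names follow the snippet above; the "reordered" version is written out by hand for illustration): either legal order of A and B produces the same result, so the reordering is invisible to the single-threaded program:

```java
public class AsIfSerialDemo {
    public static void main(String[] args) {
        // Program order: A, then B, then C.
        double pi = 3.14;          // A
        double r  = 1.0;           // B
        double area = pi * r * r;  // C depends on both A and B

        // The legal reordered execution B -> A -> C gives the same result,
        // so as-if-serial semantics allow the compiler/processor to pick it.
        double r2  = 1.0;            // B first
        double pi2 = 3.14;           // then A
        double area2 = pi2 * r2 * r2;

        System.out.println(area == area2); // prints "true"
    }
}
```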

2.3 Happens-before

Starting with JDK 5, Java uses the new JSR-133 memory model. JSR-133 introduces the concept of happens-before to describe memory visibility between operations: if the result of one operation must be visible to another, there must be a happens-before relationship between the two operations. The two operations can be within one thread or in different threads. The happens-before rules most relevant to programmers are:

    • Program-order rule: each action in a thread happens-before every subsequent action in that thread.
    • Monitor-lock rule: an unlock of a monitor lock happens-before every subsequent lock of that same monitor.
    • Volatile variable rule: a write to a volatile field happens-before every subsequent read of that field.
    • Transitivity: if A happens-before B and B happens-before C, then A happens-before C.

Note that a happens-before relationship between two operations does not mean the first operation must execute before the second! Happens-before only requires that the first operation (its execution result) be visible to the second, and that the first be ordered before the second (the first is visible to, and ordered before, the second). The definition of happens-before is subtle; a later section explains why it is defined this way.

"Example" according to the program order rules of Happens-before, there are three happens-before relationships in the sample code that calculates the area of a circle:

    1. A Happens-before B;
    2. B Happens-before C;
    3. A Happens-before C;

The 3rd Happens-before relationship here is derived from the transitive nature of Happens-before.

Here A happens-before B, but in the actual execution B may run before A (see the reordered execution orders above). A happens-before B does not require the JMM to execute A before B; the JMM only requires that A's result be visible to B and that A be ordered before B. Here the result of operation A does not actually need to be visible to operation B, and the result of executing A and B reordered is identical to the result of executing them in happens-before order. In this case, the JMM considers the reordering not illegal and allows it.

In computing, software and hardware techniques share a common goal: to exploit as much parallelism as possible without changing the result of program execution. Compilers and processors follow this goal, and from the definition of happens-before we can see that the JMM follows it too.

2.4 The effect of reordering on multithreading

Now let's see whether reordering changes the execution result of a multithreaded program. Example:

    class ReorderExample {
        int a = 0;
        boolean flag = false;

        public void writer() {
            a = 1;          // 1
            flag = true;    // 2
        }

        public void reader() {
            if (flag) {           // 3
                int i = a * a;    // 4
                // ...
            }
        }
    }

The flag variable is a token marking whether the variable a has been written. Assume two threads, A and B: thread A first executes the writer() method, and thread B then executes the reader() method. When thread B performs operation 4, can it see thread A's write to the shared variable a in operation 1?

The answer is: not necessarily.

Because operations 1 and 2 have no data dependency, the compiler and processor can reorder them; similarly, operations 3 and 4 have no data dependency, so the compiler and processor can also reorder them. Let's first look at what can happen when operations 1 and 2 are reordered. Consider the following execution timing diagram:

As shown, operations 1 and 2 are reordered. When the program executes, thread A first writes the flag variable, and thread B then reads it. Because the condition is true, thread B reads the variable a. At this point, variable a has not yet been written by thread A at all: the semantics of the multithreaded program are broken!

Now let's look at what happens when operations 3 and 4 are reordered (this reordering also lets us discuss control dependencies). The following is the execution timing diagram after operations 3 and 4 are reordered:

In the program, operations 3 and 4 have a control dependency. When code contains a control dependency, it limits the parallelism with which the instruction sequence can execute. To cope, the compiler and processor use speculative execution to overcome the effect of control dependencies on parallelism. Taking the processor's speculative execution as an example, the processor executing thread B can read a and compute a*a in advance, temporarily saving the result in a hardware cache called the reorder buffer (ROB). When the condition in operation 3 is later judged true, the result is written to the variable i.

As we can see, speculative execution essentially reorders operations 3 and 4, and here the reordering breaks the semantics of the multithreaded program!

In a single-threaded program, reordering operations with a control dependency does not change the execution result (which is why as-if-serial semantics allow such reordering); but in a multithreaded program, reordering operations with a control dependency can change the program's execution result.

3. Sequential consistency

3.1 Data races

A data race exists when a program is not correctly synchronized. The Java Memory Model specification defines a data race as follows:

    • a write to a variable in one thread,
    • a read of the same variable in another thread,
    • and the write and read are not ordered by synchronization.

When code contains data races, execution often produces counterintuitive results (as in the previous chapter). If a multithreaded program is correctly synchronized, it is free of data races.
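All three conditions of the definition appear in this illustrative sketch (class and variable names are hypothetical): one thread writes, another thread reads the same variables, and nothing orders the write before the read, so the program contains a data race and may print 42 or nothing at all:

```java
public class DataRaceDemo {
    static int shared = 0;       // written and read with no synchronization
    static boolean done = false; // also unsynchronized - part of the race

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            shared = 42;
            done = true;
        });
        Thread reader = new Thread(() -> {
            // Nothing orders this read after the write: the JMM even allows
            // this thread to see done == true while shared is still 0.
            if (done) {
                System.out.println(shared);
            }
        });
        writer.start();
        reader.start();
        writer.join();
        reader.join();
    }
}
```

Because the outcome depends on scheduling, the run may produce no output, print 42, or (under the JMM) even print a stale 0.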

The JMM makes the following guarantee about memory consistency for correctly synchronized multithreaded programs:

    • If a program is correctly synchronized, its execution has sequential consistency (it is sequentially consistent): the result of executing the program is identical to its result in the sequentially consistent memory model. Synchronization here means synchronization in the broad sense, including correct use of the common synchronization primitives (lock, volatile, and final).

3.2 The sequentially consistent memory model

The sequential consistent memory model has two major features:

    • All operations within a thread must execute in program order.
    • (Whether or not the program is synchronized,) all threads see a single order of operation execution. In the sequentially consistent memory model, every operation must execute atomically and be immediately visible to all threads.

The sequentially consistent memory model presents the programmer with the following view: conceptually, the model has a single global memory connected to each thread through a switch that can swing to any thread, and each thread must perform its memory read/write operations in program order. At any point in time, at most one thread can be connected to memory. When multiple threads execute concurrently, the switch serializes all memory read/write operations of all threads.

For a better understanding, let's use the two features of the sequential consistency model to illustrate further.

Assume two threads, A and B, execute concurrently. Thread A has three operations whose program order is A1 -> A2 -> A3; thread B also has three operations, whose program order is B1 -> B2 -> B3.

Suppose the two threads use a monitor to synchronize correctly: thread A releases the monitor after its three operations execute, and thread B then acquires the same monitor. The program's execution in the sequential consistency model then looks like this:

Now suppose the two threads do not synchronize. The following is an execution of this unsynchronized program in the sequential consistency model:

For an unsynchronized program in the sequential consistency model, although the overall execution order is out of order, all threads still see one consistent overall execution order, for example B1 -> A1 -> A2 -> B2 -> A3 -> B3. This guarantee holds because every operation in the sequentially consistent memory model must be immediately visible to every thread.

However, the JMM makes no such guarantee. In the JMM, not only is the overall execution order of an unsynchronized program out of order, but the orders in which different threads see operations execute may also be inconsistent. For example, when the current thread caches written data in local memory without flushing it to main memory, the write is visible only to the current thread; from the perspective of other threads, the write has not happened at all. Only after the thread flushes the data in local memory to main memory can the write become visible to other threads. In this situation, the current thread and other threads see inconsistent orders of operation execution.

3.3 Execution characteristics of synchronized programs

Example

    class SynchronizedExample {
        int a = 0;
        boolean flag = false;

        public synchronized void writer() {
            a = 1;
            flag = true;
        }

        public synchronized void reader() {
            if (flag) {
                int i = a;
                // ...
            }
        }
    }



In the sequential consistency model, all operations execute serially in program order. In the JMM, the code within a critical section can be reordered.

3.4 Execution characteristics of an unsynchronized program

For multithreaded programs that are unsynchronized or incorrectly synchronized, the JMM provides only minimal safety: the value a thread reads is either a value some thread previously wrote or the default value (0, null, false). The JMM guarantees that a read never returns a value out of thin air.

To provide minimal safety, when the JVM allocates an object on the heap, it first zeroes the memory space and then allocates the object (the JVM synchronizes these two operations internally). Allocating objects in pre-zeroed memory thus completes the default initialization of their fields.

The JMM does not guarantee that an unsynchronized program's execution result matches its result in the sequential consistency model, because even in the sequential consistency model an unsynchronized program executes out of order overall and its result is unpredictable; guaranteeing consistent results across the two models for such a program would be meaningless.

As in the sequential consistency model, an unsynchronized program in the JMM executes out of order overall and its result is unpredictable. At the same time, the execution characteristics of unsynchronized programs differ between the two models in several ways:

    1. The sequential consistency model guarantees that operations within a single thread execute in program order, while the JMM does not (for example, the reordering inside the critical section of the correctly synchronized program above). (Described earlier.)
    2. The sequential consistency model guarantees that all threads see a single consistent order of operation execution, while the JMM does not. (Described earlier.)
    3. The JMM does not guarantee that reads/writes of 64-bit long and double variables are atomic, while the sequential consistency model guarantees atomicity for all memory reads/writes.

About the third difference:

The third difference is closely tied to the working mechanism of the processor bus. In a computer, data passes between the processor and memory over a bus. Each data transfer between processor and memory is completed through a series of steps collectively called a bus transaction. Bus transactions comprise read transactions and write transactions: a read transaction moves data from memory to the processor, a write transaction moves data from the processor to memory, and each transaction reads/writes one or more physically contiguous words of memory. Crucially, the bus serializes transactions that attempt to use it concurrently: while one processor executes a bus transaction, the bus prevents all other processors and I/O devices from reading or writing memory.

On some 32-bit processors, requiring a read/write of 64-bit data to be atomic carries significant overhead. To accommodate such processors, the Java language specification encourages but does not require the JVM to make reads/writes of 64-bit long and double variables atomic. When the JVM runs on such a processor, it may split a 64-bit long/double read/write into two 32-bit operations, and the two 32-bit operations may be assigned to different bus transactions; in that case the read/write of the 64-bit variable is not atomic.
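A small sketch (class and field names are illustrative): declaring the 64-bit field volatile is how a program opts in to atomic long/double access on such processors, so a reader can never observe a half-written value:

```java
// Sketch: a shared 64-bit field whose reads/writes must not tear.
class Ticker {
    // Without volatile, a 32-bit JVM may split this read/write into
    // two 32-bit operations; volatile guarantees atomic access.
    volatile long timestamp = 0L;
}

public class TearingDemo {
    public static void main(String[] args) throws InterruptedException {
        Ticker ticker = new Ticker();
        long value = 0x1_0000_0001L; // high and low 32-bit halves both nonzero

        Thread writer = new Thread(() -> ticker.timestamp = value);
        writer.start();
        writer.join(); // writer finished; its volatile write is visible here

        System.out.println(ticker.timestamp == value); // prints "true"
    }
}
```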

When a single memory operation is not atomic, it can have unintended consequences. Consider the following:

As shown, suppose processor A writes a long variable and processor B reads it. The 64-bit write on processor A is split into two 32-bit writes, which are assigned to different write transactions. Meanwhile, the 64-bit read on processor B is split into two 32-bit reads that are assigned to the same read transaction. When processors A and B execute with this timing, processor B sees an invalid value that processor A has only half-written.


4. Volatile

A single read/write of a volatile variable can be viewed as a single read/write synchronized using the same monitor lock: a read of a volatile variable always sees the last write (by any thread) to that volatile variable.

This means that even 64-bit long and double variables have atomic reads and writes as long as they are declared volatile. However, multiple volatile operations taken together, or a compound operation such as volatile++, are not atomic as a whole.

In short, the volatile variable itself has the following characteristics:

    • Visibility: a read of a volatile variable always sees the last write (by any thread) to that volatile variable.
    • Atomicity: a single read/write of a volatile variable is atomic, but a compound operation such as volatile++ is not.
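A sketch of the volatile++ pitfall and the usual remedy (class and field names are illustrative): volatile keeps each read fresh, but ++ is a three-step read-modify-write, so concurrent increments can be lost; java.util.concurrent.atomic.AtomicInteger makes the whole increment atomic:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicDemo {
    static volatile int plain = 0;  // volatile: visible, but ++ is not atomic
    static final AtomicInteger atomic = new AtomicInteger(); // atomic read-modify-write

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int k = 0; k < 100_000; k++) {
                plain++;                  // three steps: read, add, write - updates can be lost
                atomic.incrementAndGet(); // a single atomic operation
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();

        System.out.println("atomic = " + atomic.get());              // always 200000
        System.out.println("plain <= 200000: " + (plain <= 200_000)); // plain is often below 200000
    }
}
```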
4.1 The happens-before relationship established by volatile write-read

Starting with JSR-133, a write of a volatile variable followed by a read of the same variable can be used for communication between threads.

In terms of memory semantics, volatile has the same effect as a monitor lock: a volatile write has the same memory semantics as releasing a monitor, and a volatile read has the same memory semantics as acquiring a monitor.

    class VolatileExample {
        int a = 0;
        volatile boolean flag = false;

        public void writer() {
            a = 1;          // 1
            flag = true;    // 2
        }

        public void reader() {
            if (flag) {       // 3
                int i = a;    // 4
                // ...
            }
        }
    }

Assume thread A executes the writer() method and thread B executes the reader() method. According to the happens-before rules, the happens-before relationships established by this process are:

    1. By the program-order rule, 1 happens-before 2, and 3 happens-before 4.
    2. By the volatile variable rule, 2 happens-before 3.
    3. By the transitivity of happens-before, 1 happens-before 4.

In the accompanying figure, each arrow links two nodes and represents a happens-before relationship: the black arrows represent the program-order rule, the orange arrows the volatile variable rule, and the blue arrows the happens-before guarantee obtained by combining these rules.

Here thread A writes a volatile variable and thread B reads the same volatile variable. All shared variables that were visible to thread A before it wrote the volatile variable become visible to thread B immediately after thread B reads that same volatile variable.

4.2 Volatile write-read memory semantics

The memory semantics of a volatile write are as follows:

    • When a volatile variable is written, the JMM flushes the shared variables in the thread's local memory to main memory.

Take the example program VolatileExample above: assume thread A first executes the writer() method and thread B then executes the reader() method; initially, the flag and a in both threads' local memories are in the initial state.

The figure shows the state of the shared variables after thread A performs the volatile write. After thread A writes the flag variable, the values of both shared variables updated by thread A in local memory A are flushed to main memory. At this point, the shared-variable values in local memory A and in main memory are consistent.

The memory semantics for volatile reads are as follows:

    • When a volatile variable is read, the JMM invalidates the thread's local memory. The thread then reads the shared variables from main memory.

The following is the state of the shared variables after thread B reads the same volatile variable. After reading the flag variable, local memory B has been invalidated, so thread B must read the shared variables from main memory; the read makes the shared-variable values in local memory B and main memory consistent.

Combining the volatile write and volatile read steps: after reader thread B reads a volatile variable, the values of all shared variables that were visible to writer thread A before it wrote the volatile variable immediately become visible to thread B.

The following is a summary of the memory semantics of volatile writes and volatile reads:

    • Thread A writing a volatile variable is, in essence, thread A sending a message (its modifications to shared variables) to any thread that will next read the volatile variable.
    • Thread B reading a volatile variable is, in essence, thread B receiving the message (the modifications to shared variables made before the volatile write) sent by some previous thread.
    • Thread A writing a volatile variable and thread B then reading it is, in essence, thread A sending a message to thread B through main memory.
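The three points above can be exercised with a runnable variant of VolatileExample (a sketch; a spin loop is added so the reader reliably waits for the volatile write):

```java
public class VolatileMessageDemo {
    static int a = 0;
    static volatile boolean flag = false;

    public static void main(String[] args) throws InterruptedException {
        Thread writerThread = new Thread(() -> {
            a = 1;        // 1: ordinary write
            flag = true;  // 2: volatile write - "sends the message"
        });
        Thread readerThread = new Thread(() -> {
            while (!flag) { }      // 3: volatile read - spins until the message arrives
            System.out.println(a); // 4: guaranteed to print 1, since 1 happens-before 4
        });
        readerThread.start();
        writerThread.start();
        writerThread.join();
        readerThread.join();
    }
}
```

Because 2 happens-before 3 (volatile rule) and the rules compose transitively, the read at 4 is guaranteed to see a == 1.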
4.3 Implementation of volatile memory semantics

To implement volatile's memory semantics, the JMM restricts both compiler reordering and processor reordering. The following table shows the volatile reordering rules the JMM specifies for compilers:

Can the two operations be reordered? (rows: first operation; columns: second operation)

    First operation     Normal read/write   Volatile read   Volatile write
    Normal read/write   -                   -               NO
    Volatile read       NO                  NO              NO
    Volatile write      -                   NO              NO

For example, the last cell of the normal read/write row means: in program order, when the first operation is a read or write of an ordinary variable and the second operation is a volatile write, the compiler cannot reorder these two operations.

From the table above we can see:

    • When the second operation is a volatile write, no matter what the first operation is, it cannot be reordered. This rule ensures that operations before a volatile write are not reordered by the compiler to after the volatile write.
    • When the first operation is a volatile read, no matter what the second operation is, it cannot be reordered. This rule ensures that operations after a volatile read are not reordered by the compiler to before the volatile read.
    • When the first operation is a volatile write and the second operation is a volatile read, the two cannot be reordered.
