Java Basics --- Memory Analysis


Java concurrency uses a shared-memory model (rather than a message-passing model): the common state of the program is shared between threads, and threads communicate implicitly by writing and then reading shared state in memory. Data cannot be handed directly from one thread to another; threads can only interact through shared variables.

Synchronization, by contrast, is explicit: the programmer must explicitly mark the method or code block that is to be executed mutually exclusively between threads.
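For instance, the following minimal sketch (the class and field names are illustrative, not part of the original text) shows both ideas at once: two threads communicate implicitly through the shared field ready, while the synchronized keyword explicitly marks the methods that must run mutually exclusively:

class SharedFlag {
    private boolean ready = false;              // shared state, the implicit "message"

    public synchronized void markReady() {      // explicit mutual exclusion
        ready = true;                           // writer thread publishes the message
    }

    public synchronized boolean isReady() {     // reader thread picks the message up
        return ready;
    }
}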

1. Multi-thread Communication

1.1 Memory Model

Communication between Java threads is governed by the Java Memory Model (JMM). The JMM determines when a write to a shared variable by one thread becomes visible to another thread.

From an abstract point of view, the JMM defines an abstract relationship between threads and main memory: shared variables are stored in main memory, and each thread has a private local memory that holds copies of the shared variables the thread reads and writes. Local memory is an abstraction of the JMM and does not physically exist; it covers caches, write buffers, registers, and other hardware and compiler optimizations. The Java memory model is abstracted as shown in the figure below:

 

Steps for inter-thread communication:

 

  • Local memory A and local memory B hold copies of the shared variable x that lives in main memory.
  • Assume all three copies of x are 0 at the start. While executing, thread A updates x (say, to 1) and temporarily keeps the new value in its local memory A.
  • When thread A and thread B need to communicate (how this is triggered is implicit), thread A first flushes the modified x from its local memory to main memory; the value of x in main memory becomes 1.
  • Thread B then reads the updated x from main memory, and the copy of x in thread B's local memory also becomes 1.

Taken as a whole, these steps amount to thread A sending a message to thread B, and the communication must pass through main memory. By controlling the interaction between main memory and each thread's local memory, the JMM provides memory visibility guarantees to Java programmers.

1.2 Visibility and Orderliness

For example, suppose an object of a Count class is shared among multiple threads. The object itself is created in main memory (heap memory), while each thread has its own local memory (its thread stack) holding a copy of the Count object. When a thread operates on the Count object, it first copies the object from main memory into its working memory, then executes count.count(), which changes the num value, and finally flushes the working-memory copy of Count back to main memory.

When an object has copies in several memories and one thread modifies the shared variable, the other threads should be able to see the modified value. This is visibility.

 

An assignment is not an atomic operation. When multiple threads run, the CPU schedules them arbitrarily, and we do not know at which point the current thread will be switched out. The classic example is the bank transfer problem. A bank account holds a balance of 100; one person withdraws 10 from the account while another deposits 10 into it, so the balance should remain 100. But the following can happen: thread A handles the withdrawal and thread B the deposit. Thread A reads 100 from main memory, and thread B also reads 100 from main memory. A subtracts 10 and flushes its result, so main memory holds 100 - 10 = 90; then B flushes its result, so main memory holds 100 + 10 = 110. This is clearly a serious problem. We must ensure that thread A and thread B execute in some order: first withdraw and then deposit, or first deposit and then withdraw. This is orderliness.
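A minimal sketch of the scenario just described (the Account class and its methods are illustrative, not from the original text); because the read-modify-write of balance is neither atomic nor ordered, the final balance can be 90, 100, or 110:

// Sketch of the lost-update problem described above.
class Account {
    private int balance = 100;

    // Not synchronized: the read-modify-write is not atomic, so a
    // withdrawal and a deposit that interleave can lose one update.
    public void withdraw(int amount) { balance = balance - amount; }
    public void deposit(int amount)  { balance = balance + amount; }

    public int getBalance() { return balance; }

    public static void main(String[] args) throws InterruptedException {
        Account account = new Account();
        Thread a = new Thread(() -> account.withdraw(10));
        Thread b = new Thread(() -> account.deposit(10));
        a.start(); b.start();
        a.join(); b.join();
        // Usually prints 100, but 90 or 110 are possible outcomes.
        System.out.println(account.getBalance());
    }
}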

1.3 synchronized and volatile

The execution of a thread's mutually exclusive (synchronized) code proceeds as shown in the figure below:

So synchronized not only guarantees ordered execution among multiple threads, it also guarantees memory visibility between them.
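As a sketch (building on the illustrative Account example above, not on code from the original text), declaring the methods synchronized makes the withdrawal and the deposit mutually exclusive, and the monitor release/acquire makes each thread's update visible to the next lock holder:

// Sketch: a thread-safe version of the earlier illustrative Account.
class SafeAccount {
    private int balance = 100;

    public synchronized void withdraw(int amount) { balance -= amount; }
    public synchronized void deposit(int amount)  { balance += amount; }
    public synchronized int getBalance()          { return balance; }
}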

Volatile is Java's second multi-thread synchronization mechanism. According to the JLS, a variable may be declared volatile, in which case the memory model ensures that all threads see a consistent value for that variable.

class Test {
    static volatile int i = 0, j = 0;

    static void one() {
        i++;
        j++;
    }

    static void two() {
        System.out.println("i=" + i + " j=" + j);
    }
}

 

With volatile, changes to the shared variables i and j are written straight back to main memory, which keeps the values of i and j in main memory up to date; however, we cannot control at which point the thread executing two() observes i and j -- for example, it may read them after i has been incremented but before j has. Volatile guarantees memory visibility, but it does not guarantee ordering across concurrent threads.

 

If there is no volatile, the code execution process is as follows:

 

2. Reordering

The JMM is a language-level memory model. Across different compilers and different processor platforms, it prohibits specific types of compiler reordering and processor reordering, and thereby provides programmers with consistent memory visibility guarantees.

For compiler reordering, the JMM's compiler reordering rules forbid specific types of compiler reordering (not all compiler reordering is forbidden).

For processor reordering, the JMM's processor reordering rules require the Java compiler to insert specific types of memory barrier (Intel calls them memory fence) instructions into the generated instruction sequence; these memory barriers prohibit specific types of processor reordering (not all processor reordering is prohibited).

 

Extended:

To improve performance, compilers and processors often reorder instructions when executing a program. There are three types of reordering:

  • Compiler-optimization reordering: the compiler may rearrange the execution order of statements without changing the semantics of a single-threaded program.
  • Instruction-level-parallelism reordering: modern processors overlap the execution of instructions and, where there is no data dependency, may change the order in which machine instructions execute.
  • Memory-system reordering: because processors use caches and read/write buffers, loads and stores can appear to execute out of order.

Type 1 above is compiler reordering; types 2 and 3 are processor reordering. These reorderings can cause memory visibility problems in multi-threaded programs.

2.1 Data Dependency

If two operations access the same variable and at least one of them is a write, there is a data dependency between the two operations. Data dependencies come in three types:

 

Name                Sample code      Description
Read after write    a = 1; b = a;    Write a variable, then read that variable.
Write after write   a = 1; a = 2;    Write a variable, then write that variable again.
Write after read    a = b; b = 1;    Read a variable, then write that variable.

 

 

 

 

In all three cases, reordering the execution order of the two operations changes the result of the program.

 

As mentioned above, the compiler and the processor may reorder operations. However, both respect data dependencies when reordering: they will not change the execution order of two operations that have a data dependency on each other.

Note that data dependency here refers only to the instruction sequence executed on a single processor and the operations executed within a single thread; data dependencies between different processors or between different threads are not considered by the compiler or the processor.

2.2 as-if-serial Semantics

The as-if-serial semantics means that no matter how the compiler and the processor reorder (to improve parallelism), the execution result of a single-threaded program must not change. The compiler, the runtime, and the processor must all comply with as-if-serial semantics.

[Example]

double pi = 3.14;          // A
double r = 1.0;            // B
double area = pi * r * r;  // C

The data dependencies among these three operations are shown in the figure below:

 

As shown, there is a data dependency between A and C and between B and C. Therefore, in the final instruction sequence, C cannot be reordered before A or B (doing so would change the result of the program). However, there is no data dependency between A and B, so the compiler and processor may reorder the execution of A and B. The figure below shows the two possible execution orders of the program:

As-if-serial semantics protects single-threaded programs. A compiler, runtime, and processor that comply with as-if-serial semantics jointly create an illusion for programmers writing single-threaded code: the program appears to execute in program order. As-if-serial semantics frees single-thread programmers from worrying about reordering interfering with them, and from worrying about memory visibility problems.

2.3 happens-before

Java now uses the JSR-133 memory model. JSR-133 introduces the concept of happens-before, with which we can describe memory visibility between operations. If the result of one operation must be visible to another operation, there must be a happens-before relationship between the two operations. The two operations may be within one thread or in different threads. The rules most relevant to programmers are:

  • Program order rule: every action in a thread happens-before each subsequent action in that thread.
  • Monitor lock rule: unlocking a monitor lock happens-before every subsequent lock of that same monitor lock.
  • Volatile variable rule: a write to a volatile field happens-before every subsequent read of that volatile field.
  • Transitivity: if A happens-before B and B happens-before C, then A happens-before C.

Note that a happens-before relationship between two operations does not mean that the first operation must actually execute before the second! Happens-before only requires that the first operation (its result) be visible to the second operation, and that the first be ordered before the second (the first is visible to, and ordered before, the second). The definition of happens-before is subtle; later we will explain why it is defined this way.

 

[Example] According to the program order rule of happens-before, the circle-area sample code above contains three happens-before relationships:

  • A happens-before B
  • B happens-before C
  • A happens-before C

The third happens-before relationship is derived from the transitivity of happens-before.

Here A happens-before B, but in the actual execution B may be executed before A (see the reordered execution order above). A happens-before B does not require the JMM to execute A before B; the JMM only requires that the first operation (its result) be visible to the second, and that the first be ordered before the second. Here, the result of operation A does not need to be visible to operation B, and the result of executing A and B after reordering is the same as the result of executing them in happens-before order. In such a case, the JMM considers this kind of reordering not illegal, and the JMM permits it.

In computer systems, software and hardware techniques share a common goal: to exploit as much parallelism as possible without changing the execution result of the program. The compiler and the processor follow this goal, and from the definition of happens-before we can see that the JMM follows it as well.
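As a hedged sketch of how the rules above combine (the class and field names are illustrative, not from the original text): the program order rule, the monitor lock rule, and transitivity together guarantee that a thread which enters the synchronized read() after another thread has left write() must see data == 42:

class MonitorHappensBefore {
    private int data = 0;

    public synchronized void write() {   // thread A acquires the lock
        data = 42;                       // (1) ordinary write
    }                                    // (2) unlock: (1) happens-before (2) by program order

    public synchronized int read() {     // (3) lock: (2) happens-before (3) by the monitor lock rule
        return data;                     // (4) read: (3) happens-before (4) by program order,
    }                                    //     so (1) happens-before (4) by transitivity
}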

2.4 Influence of Reordering on Multithreading

Now let us look at whether reordering changes the execution result of a multi-threaded program. [Example]:

class ReorderExample {
    int a = 0;
    boolean flag = false;

    public void writer() {
        a = 1;              // 1
        flag = true;        // 2
    }

    public void reader() {
        if (flag) {         // 3
            int i = a * a;  // 4
            // ……
        }
    }
}

The flag variable is a flag marking whether the variable a has been written. Assume there are two threads A and B: A first executes the writer() method, and then thread B executes the reader() method. When thread B executes operation 4, can it see the write to the shared variable a that thread A performed in operation 1?

The answer is: not necessarily visible.

Since there is no data dependency between operation 1 and operation 2, the compiler and processor may reorder these two operations. Similarly, there is no data dependency between operation 3 and operation 4 (although there is a control dependency), so the compiler and processor may reorder them as well. Let us first look at what can happen when operations 1 and 2 are reordered. See the following program execution sequence diagram:

As shown, operations 1 and 2 are reordered. During execution, thread A first writes the flag variable, and then thread B reads it. Because the condition is true, thread B goes on to read variable a. At this point, variable a has not yet been written by thread A. The semantics of the multi-threaded program has been broken by the reordering!

Next, let us look at what happens when operations 3 and 4 are reordered (this reordering also lets us explain control dependencies along the way). The following figure shows the execution sequence after operations 3 and 4 are reordered:

In the program, operations 3 and 4 have a control dependency. When the code contains a control dependency, it limits the parallelism of the instruction sequence. To overcome this, the compiler and the processor use speculative execution. Taking processor speculative execution as an example, the processor running thread B can read a and compute a * a in advance, temporarily saving the result in a hardware cache called a reorder buffer (ROB). When the condition in operation 3 later turns out to be true, the result is written to variable i.

As we can see, speculative execution essentially reorders operations 3 and 4, and here the semantics of the multi-threaded program is broken!

In a single-threaded program, reordering operations that have a control dependency does not change the execution result (which is why as-if-serial semantics allows such reordering); but in a multi-threaded program, reordering operations with control dependencies may change the program's execution result.

3. Sequential Consistency

3.1 Data Races

 

When a program is not correctly synchronized, data races occur. The Java Memory Model specification defines a data race as follows:

  • a write to a variable in one thread,
  • a read of the same variable in another thread,
  • and the write and the read are not ordered by synchronization.

When code contains data races, executing the program often produces counter-intuitive results (as in the example in the previous chapter). A multi-threaded program that is correctly synchronized is a data-race-free program.
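The following sketch (names are illustrative, not from the original text) contains a data race in exactly this sense: one thread writes stop, another thread reads it, and nothing orders the two accesses; declaring the field volatile, or guarding the accesses with a common lock, would make the program correctly synchronized:

// Sketch of a data race: the write in main() and the read in the worker
// thread touch the same variable and are not ordered by synchronization.
class DataRaceExample {
    static boolean stop = false;   // declaring this volatile would remove the race

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!stop) {        // read, not ordered with the write below
                // busy-wait; may never observe stop == true
            }
        });
        worker.start();
        Thread.sleep(100);
        stop = true;               // write from another thread
        worker.join();             // may hang if the write never becomes visible
    }
}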

The JMM provides the following memory-consistency guarantee for correctly synchronized multi-threaded programs:

  • If the program is correctly synchronized, its execution will be sequentially consistent -- that is, the execution result of the program is the same as the result of executing it in the sequentially consistent memory model. Synchronization here means synchronization in the broad sense, including the correct use of the common synchronization primitives (lock, volatile, and final).

3.2 Sequentially Consistent Memory Model

The sequential consistency memory model has two features:

  • All operations in a thread must execute in program order.
  • (Whether or not the program is synchronized) all threads see a single, total order of operation execution. In the sequentially consistent memory model, every operation must execute atomically and be immediately visible to all threads.

 

The sequentially consistent memory model presents the following view to the programmer. Conceptually, the model has a single global memory, connected to any one thread at a time through a switch that swings left and right. Each thread must perform its memory read and write operations in program order, and at any point in time only one thread can be connected to memory. When multiple threads run concurrently, the switch in the figure serializes all memory read/write operations of all threads.

For a better understanding, we describe the characteristics of the sequential consistency model further with two cases.

Assume two threads A and B execute concurrently. Thread A has three operations whose program order is A1 -> A2 -> A3, and thread B also has three operations whose program order is B1 -> B2 -> B3.

First, assume the two threads use a monitor for correct synchronization: after thread A's three operations finish, it releases the monitor, and thread B then acquires the same monitor. The execution of this program in the sequential consistency model is shown in the figure below:

Now assume the two threads are not synchronized. The following figure shows the execution of the unsynchronized program in the sequential consistency model:

 

In the sequential consistency model, although the overall execution order of the unsynchronized program is arbitrary, all threads see one and the same execution order. For example, threads A and B both see the execution order B1 -> A1 -> A2 -> B2 -> A3 -> B3. This guarantee holds because every operation in the sequentially consistent memory model must be immediately visible to every thread.

However, the JMM does not provide this guarantee. In the JMM, not only is the overall execution order of an unsynchronized program arbitrary, but the order of operations seen by different threads may also differ. For example, before the current thread flushes data it has cached in local memory to main memory, the write is visible only to the current thread; from the perspective of other threads, the write appears not to have happened yet. Only after the current thread flushes the written data from local memory to main memory does the write become visible to other threads. In this situation, the execution order seen by the current thread and by other threads is inconsistent.

3.3 Execution Characteristics of Synchronized Programs

[Example]

class SynchronizedExample {
    int a = 0;
    boolean flag = false;

    public synchronized void writer() {
        a = 1;
        flag = true;
    }

    public synchronized void reader() {
        if (flag) {
            int i = a;
            // ……
        }
    }
}

 

A 64-bit write in processor A may be split into two 32-bit writes, and those two 32-bit writes may be assigned to different write transactions. At the same time, a 64-bit read in processor B may be split into two 32-bit reads assigned to the same read transaction. When processor A and processor B interleave in this way, processor B can see an invalid value that has only been "half written" by processor A.

4. volatile

Think of each individual read or write of a volatile variable as being synchronized with the same monitor lock. A read of a volatile variable always sees the last write (by any thread) to that volatile variable.

This means that even for 64-bit long and double variables, as long as they are declared volatile, reads and writes of the variable are atomic. Sequences of multiple volatile operations, or compound operations such as volatile++, are not atomic as a whole.
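A hedged sketch of the 64-bit case (names are illustrative): the JMM permits a plain long write to be performed as two 32-bit halves, so in principle (mainly on some 32-bit JVMs) a concurrent reader may see a torn value; declaring the field volatile makes the single read and the single write atomic:

// Sketch: word tearing of a non-volatile long is permitted by the JMM
// (it is only likely on some 32-bit JVMs). Declaring the field volatile
// makes single reads and writes of the long atomic.
class TornLong {
    long value = 0L;              // "volatile long value = 0L;" would be atomic

    void writer() {
        value = -1L;              // all 64 bits set
    }

    void reader() {
        long v = value;           // without volatile, v could in theory be a mix
        System.out.println(v);    // of the old and new 32-bit halves
    }
}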

In short, the volatile variable has the following features:

  • Visibility: a read of a volatile variable always sees the last write (by any thread) to that volatile variable.
  • Atomicity: a single read or write of any volatile variable is atomic, but compound operations such as volatile++ are not atomic (a sketch follows below).
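A sketch of the second point (the class is illustrative, not from the original text): two threads incrementing a volatile counter can lose updates, because ++ is a read-modify-write of three separate steps, while java.util.concurrent.atomic.AtomicInteger (or a synchronized block) performs the increment atomically:

import java.util.concurrent.atomic.AtomicInteger;

// Sketch: volatile guarantees visibility but not atomicity of ++.
class VolatileIncrement {
    static volatile int volatileCount = 0;                  // ++ on this can lose updates
    static final AtomicInteger atomicCount = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int k = 0; k < 10_000; k++) {
                volatileCount++;                             // read-modify-write, not atomic
                atomicCount.incrementAndGet();               // atomic read-modify-write
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // volatileCount is often less than 20000; atomicCount is always 20000.
        System.out.println("volatile=" + volatileCount + " atomic=" + atomicCount.get());
    }
}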
4.1 The happens-before Relationship Established by a Volatile Write-Read

Starting with JSR-133, a write and a subsequent read of a volatile variable can be used to communicate between threads.

In terms of memory semantics, volatile behaves like a monitor lock: a volatile write has the same memory semantics as a monitor release, and a volatile read has the same memory semantics as a monitor acquire.

class VolatileExample {
    int a = 0;
    volatile boolean flag = false;

    public void writer() {
        a = 1;              // 1
        flag = true;        // 2
    }

    public void reader() {
        if (flag) {         // 3
            int i = a;      // 4
            // ……
        }
    }
}

 

Assume thread A executes the writer() method and thread B executes the reader() method. According to the happens-before rules, the happens-before relationships established in this process are:

  • By the program order rule: 1 happens-before 2, and 3 happens-before 4.
  • By the volatile variable rule: 2 happens-before 3.
  • By transitivity: 1 happens-before 4.

In the figure, each arrow links two operations that have a happens-before relationship: black arrows represent the program order rule, orange arrows represent the volatile variable rule, and blue arrows represent the happens-before guarantee obtained by combining these rules.

Here, after thread A writes the volatile variable and thread B then reads the same volatile variable, all shared variables that were visible to thread A before the volatile write immediately become visible to thread B after the volatile read.

4.2 Memory Semantics of Volatile Writes and Reads

The memory semantics of a volatile write is as follows:

  • When a volatile variable is written, the JMM flushes the shared variables in the thread's local memory to main memory.

Taking the sample program VolatileExample above as an example, assume thread A executes the writer() method first and thread B then executes the reader() method; initially, flag and a in both threads' local memories hold their initial values.

The figure below shows the state of the shared variables after thread A performs the volatile write. After thread A writes the flag variable, the values of both shared variables that thread A updated in local memory A are flushed to main memory, so local memory A and main memory are now consistent.

 

The memory semantics of a volatile read is as follows:

  • When a volatile variable is read, the JMM invalidates the thread's local memory; the thread then reads the shared variables from main memory.

The figure below shows the state of the shared variables after thread B reads the same volatile variable. After reading the flag variable, local memory B has been invalidated, so thread B must read the shared variables from main memory. Thread B's read makes the shared variables in local memory B consistent with main memory.

Combining the volatile write and volatile read steps: after reading thread B reads a volatile variable, all shared variables that were visible to writing thread A before the volatile write immediately become visible to reading thread B.

The following is a summary of the memory semantics of volatile write and volatile read:

  • Thread A writes a volatile variable. In essence, thread A sends a message (its modifications to the shared variables) to any thread that will later read that volatile variable.
  • Thread B reads a volatile variable. In essence, thread B receives the message (the modifications made to the shared variables before the volatile write) sent by an earlier thread.
  • Thread A writes a volatile variable and thread B then reads that volatile variable. This process is essentially thread A sending a message to thread B through main memory.
4.3 Implementing Volatile Memory Semantics

To implement volatile memory semantics, the JMM restricts both compiler reordering and processor reordering. The following table shows the volatile reordering rules that the JMM specifies for compilers:

 

Can they be reordered?                     Second operation
First operation          Normal read/write    Volatile read    Volatile write
Normal read/write        yes                  yes              NO
Volatile read            NO                   NO               NO
Volatile write           yes                  NO               NO

For example, the last cell in the "Normal read/write" row means: in program order, when the first operation is a read or write of an ordinary variable and the second operation is a volatile write, the compiler cannot reorder these two operations.

From the table above, we can see that:

    • When the second operation is a volatile write, the first operation cannot be reordered with it, whatever it is. This rule ensures that operations before a volatile write are never reordered by the compiler to after the volatile write (as sketched below).
    • When the first operation is a volatile read, the second operation cannot be reordered with it, whatever it is. This rule ensures that operations after a volatile read are never reordered by the compiler to before the volatile read.
    • When the first operation is a volatile write and the second operation is a volatile read, they cannot be reordered.
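A hedged sketch of why the first rule matters (class and field names are illustrative, not from the original text): because the ordinary write to config cannot be reordered to after the volatile write to ready, a thread that reads ready as true is guaranteed to also see the fully written config:

// Sketch: the "no reordering above a volatile write" rule enables safe
// publication of an ordinarily written field through a volatile flag.
class Publisher {
    static String config;                  // ordinary (non-volatile) field
    static volatile boolean ready = false; // volatile flag

    static void publish() {
        config = "loaded";   // ordinary write: may NOT move below the volatile write
        ready = true;        // volatile write
    }

    static void consume() {
        if (ready) {                        // volatile read
            System.out.println(config);     // guaranteed to print "loaded"
        }
    }
}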
