Thread safety: visibility and ordering

Tags: semaphore, volatile

What is the memory model of Java?

Shared variables: a variable that is used by multiple threads is a shared variable for those threads. The Java memory model (JMM), which describes how threads interact through memory, defines the access rules for shared variables in Java programs and the low-level details of how the JVM stores variables into memory and reads them back out. In particular, the JVM has a main memory area (roughly, the Java heap) that is shared by all threads, but a thread cannot manipulate the variables in main memory directly; instead, each thread has its own independent working memory, which holds copies of the variables the thread uses (copies of the variables in main memory).

Rules: threads must read and write shared variables in their own working memory, not directly in main memory. Different threads cannot directly access variables in each other's working memory; transferring a variable's value between threads must go through main memory as a bridge.

What is memory visibility? Visibility: when one thread modifies a shared variable, other threads can see the modification promptly.

The principle of thread visibility: for thread 1's change to a shared variable to be seen by thread 2, the following two steps must occur:
① the updated shared variable in working memory 1 is flushed to main memory;
② the latest value of the shared variable in main memory is loaded into working memory 2.

Instruction reordering: the order in which the code is written differs from the order in which it is actually executed. This is an optimization the compiler or processor makes to improve program performance. There are three kinds:

1. Compiler reordering (compiler optimization)

2. Instruction-level parallelism reordering (processor optimization)

3. Memory-system reordering (processor optimization)

Is it possible to rearrange the execution order of all statements? The answer is no. To sort this out, we need another concept: data dependency. What is a data dependency?

If two operations access the same variable and at least one of the two is a write, there is a data dependency between them. The three types of data dependency are:

Name               Code example     Description
Read after write   a = 1; b = a;    After writing a variable, read it.
Write after write  a = 1; a = 2;    After writing a variable, write it again.
Write after read   a = b; b = 1;    After reading a variable, write it.

In all three cases, reordering the two operations changes the result of the program. Therefore, compilers and processors respect data dependencies when reordering: they do not change the relative order of two operations that have a data dependency between them. In other words, in a single-threaded environment, the final effect of instruction execution must be consistent with the effect of sequential execution; otherwise the optimization would be meaningless. The technical term for this is as-if-serial semantics.

int num1 = 1;          // first line
int num2 = 2;          // second line
int sum = num1 + num2; // third line

    • Single thread: the first and second lines can be reordered, but the third line cannot (it depends on both).
    • For a single thread, reordering does not cause memory visibility problems.
    • When a program's threads interleave, reordering may cause memory visibility problems.
Visibility Analysis:

Causes of shared variables not being visible between threads:

    1. Interleaved execution of threads
    2. Reordering combined with interleaved execution
    3. A shared variable's updated value not being synchronized between working memory and main memory in time
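The two-step handoff through main memory described above can be sketched with a volatile flag (a minimal sketch; the class and field names here are made up for illustration, and the volatile flag is what forces the flush/reload steps):

```java
public class VisibilityHandoff {
    static int data = 0;                   // ordinary shared variable
    static volatile boolean ready = false; // volatile marker

    // Starts a reader thread that spins on the flag, then returns what it saw.
    static int awaitData() throws InterruptedException {
        final int[] seen = new int[1];
        Thread reader = new Thread(() -> {
            while (!ready) { }  // step ②: each read of `ready` reloads from main memory
            seen[0] = data;     // guaranteed to see 42: data write happens-before the flag write
        });
        reader.start();
        data = 42;    // ordinary write, made visible by the volatile write below
        ready = true; // step ①: volatile write flushes to main memory
        reader.join();
        return seen[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(awaitData()); // prints 42
    }
}
```

Without volatile on `ready`, the reader could spin forever or read a stale `data`; with it, both steps of the handoff are forced.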

The effect of reordering on multithreading:

class ReorderExample {
    int a = 0;
    boolean flag = false;

    public void writer() {
        a = 1;          // 1
        flag = true;    // 2
    }

    public void reader() {
        if (flag) {         // 3
            int i = a * a;  // 4
        }
    }
}

The flag variable is a marker used to indicate whether the variable a has been written. Suppose two threads A and B: thread A first executes the writer() method, and then thread B executes the reader() method. When thread B performs operation 4, can it see the write that thread A made to the shared variable a in operation 1?

The answer is: not necessarily seen.

Because operations 1 and 2 have no data dependency, the compiler and processor may reorder them; similarly, operations 3 and 4 have no data dependency, so they may be reordered as well. Let's first look at what can happen when operations 1 and 2 are reordered.

One possible execution order is: 2, 3, 4, 1 (a perfectly real and reasonable order; if this is unclear, first look at how the CPU allocates time slices among multiple threads). In that order, thread B reads a before thread A has written it, so i is computed from the stale value 0.

Now consider operations 3 and 4 being reordered: there is a control dependency between operation 3 and operation 4. When code contains a control dependency, it limits the degree of parallelism with which the instruction sequence can execute. To cope with this, compilers and processors use speculative execution to overcome the effect of control dependencies on parallelism. Taking the processor's speculative execution as an example, the processor running thread B can read a and compute a * a in advance, temporarily saving the result in a hardware cache called the reorder buffer (ROB). When the condition of operation 3 is later judged true, the computed result is written to the variable i.

As we can see, speculative execution essentially reorders operations 3 and 4, and this reordering breaks the semantics of the multi-threaded program!

Synchronization means that while one thread is accessing data, other threads must not access the same data; that is, only one thread can access the data at a time, and other threads may access it only after that thread has finished. The following counter is made thread-safe by a synchronized add() method:
package com.xidian.count;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import com.xidian.annotations.ThreadSafe;
import lombok.extern.slf4j.Slf4j;

@Slf4j
@ThreadSafe
public class CountExample3 {
    // Total number of requests (numeric literal garbled in the source; 5000 assumed)
    public static int clientTotal = 5000;
    // Number of concurrently executing threads (garbled in the source; 200 assumed)
    public static int threadTotal = 200;

    public static int count = 0;

    public static void main(String[] args) throws Exception {
        ExecutorService executorService = Executors.newCachedThreadPool();
        final Semaphore semaphore = new Semaphore(threadTotal);
        final CountDownLatch countDownLatch = new CountDownLatch(clientTotal);
        for (int i = 0; i < clientTotal; i++) {
            executorService.execute(() -> {
                try {
                    semaphore.acquire();
                    add();
                    semaphore.release();
                } catch (Exception e) {
                    log.error("exception", e);
                }
                countDownLatch.countDown();
            });
        }
        countDownLatch.await();
        executorService.shutdown();
        log.info("count:{}", count);
    }

    private static synchronized void add() {
        count++;
    }
}
volatile and visibility: each time a thread accesses a volatile variable, it is forced to read the variable's value from main memory; and when the thread changes the variable, it is forced to flush the latest value back to main memory immediately. As a result, different threads always see the variable's latest value. The volatile keyword:
    • Can guarantee the visibility of volatile variables
    • Cannot guarantee the atomicity of operations on volatile variables
In depth: volatile is implemented by inserting memory barriers and preventing reordering optimizations.
    • When a write operation is performed on a volatile variable, a store barrier instruction is added after the write

      • The store barrier forces the latest value to be flushed to main memory after the write, and also prevents the CPU from reordering the code. This guarantees that the value in main memory is up to date.
    • When a read operation is performed on a volatile variable, a load barrier instruction is added before the read

      • The load barrier invalidates the value in the working-memory cache before the read, so the most recent value is then read from main memory.

package com.xidian.count;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import com.xidian.annotations.NotThreadSafe;
import lombok.extern.slf4j.Slf4j;

@Slf4j
@NotThreadSafe
public class CountExample4 {
    // Total number of requests (numeric literal garbled in the source; 5000 assumed)
    public static int clientTotal = 5000;
    // Number of concurrently executing threads (garbled in the source; 200 assumed)
    public static int threadTotal = 200;

    public static volatile int count = 0;

    public static void main(String[] args) throws Exception {
        ExecutorService executorService = Executors.newCachedThreadPool();
        final Semaphore semaphore = new Semaphore(threadTotal);
        final CountDownLatch countDownLatch = new CountDownLatch(clientTotal);
        for (int i = 0; i < clientTotal; i++) {
            executorService.execute(() -> {
                try {
                    semaphore.acquire();
                    add();
                    semaphore.release();
                } catch (Exception e) {
                    log.error("exception", e);
                }
                countDownLatch.countDown();
            });
        }
        countDownLatch.await();
        executorService.shutdown();
        log.info("count:{}", count);
    }

    private static void add() {
        count++;
        // 1. read the value of count from main memory
        // 2. perform +1 in working memory
        // 3. write the value of count back to main memory
        // Even though count is volatile and every read fetches the latest value from
        // main memory, multiple threads can read the same value at the same time, each
        // perform +1, and then overwrite each other's result when flushing back,
        // losing some of the +1 operations.
    }
}
The program's output shows that thread safety is not guaranteed even when the variable is declared volatile. Why is that?

volatile provides memory visibility for a shared variable only on one condition: the operation on the shared variable must be atomic. For example, num = 10; is atomic, but num++ and num-- are each composed of three steps and are not atomic, so volatile alone does not work.

Suppose num = 5. Thread A reads the value of num from main memory and performs ++, but before it writes the modification back to main memory, thread B also reads num and performs ++. One modification is lost: ++ executed twice, yet num increased by only 1.
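The lost update above can be traced deterministically by simulating the interleaving with explicit local copies standing in for each thread's working memory (a sketch; the variable names are illustrative):

```java
public class LostUpdate {
    // Simulates the interleaving described above: both threads read num == 5
    // before either writes back, so one of the two ++ operations is lost.
    static int simulate() {
        int num = 5;
        int copyA = num;  // thread A reads 5 into its working memory
        int copyB = num;  // thread B reads 5 before A writes back
        num = copyA + 1;  // thread A writes 6
        num = copyB + 1;  // thread B also writes 6 — A's increment is overwritten
        return num;       // 6, not 7: ++ ran twice but num grew by only 1
    }

    public static void main(String[] args) {
        System.out.println(simulate()); // prints 6
    }
}
```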

volatile has no atomicity, so it is not suited to counting scenarios. What scenarios does it suit, then? Conditions for using volatile:
    1. Writes to the variable do not depend on its current value

      • Not satisfied: number++, count = count * 5
      • Satisfied: boolean flags, a variable recording temperature changes, etc.
    2. The variable is not part of an invariant together with other variables

      • Not satisfied: an invariant such as low < up

In conclusion, volatile is particularly suitable for thread status flags, such as a boolean shutdown marker.
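Such a status flag can be sketched as follows (a minimal sketch; the class and method names are made up for illustration). The write `running = false` never depends on the current value, so volatile alone is enough:

```java
public class FlagExample {
    private volatile boolean running = true; // the write never depends on the current value

    // Worker loop: exits promptly once another thread clears the flag,
    // because every read of `running` fetches the latest value from main memory.
    int runUntilStopped() {
        int iterations = 0;
        while (running) {
            iterations++;
        }
        return iterations;
    }

    void stop() {
        running = false; // safe with volatile alone: a pure status write
    }

    public static void main(String[] args) throws InterruptedException {
        FlagExample f = new FlagExample();
        Thread worker = new Thread(f::runUntilStopped);
        worker.start();
        Thread.sleep(10);
        f.stop();
        worker.join(); // terminates: the worker's volatile read sees the update
        System.out.println("stopped");
    }
}
```

Without volatile, the JIT could hoist the read of `running` out of the loop and the worker might spin forever.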

Comparison of synchronized and volatile:
    • synchronized protects both the variable and the operations on it; volatile acts only on the variable itself, and the variable's new value must not depend on its current value. volatile is a lightweight form of synchronization.
    • volatile requires no locking, is lighter-weight than synchronized, and does not block threads.
    • From the standpoint of memory visibility, a volatile read is equivalent to acquiring a lock, and a volatile write is equivalent to releasing a lock.
    • synchronized guarantees both visibility and atomicity, while volatile guarantees only visibility, not atomicity.
Note: since volatile is lighter-weight than synchronized, it naturally executes more efficiently. When the operation is guaranteed to be atomic, prefer volatile; when the operation's atomicity cannot be guaranteed, consider synchronized.

Ordering

The happens-before principle describes "innate" ordering, i.e. ordering that is guaranteed without any additional synchronization code. The Java memory model lists eight happens-before rules; if the ordering of two operations cannot be derived from these eight rules, their ordering is not guaranteed.

※ Program order rule: within a thread, operations written earlier in the code happen before operations written later.
※ Monitor lock rule: an unlock operation happens before a subsequent lock operation on the same lock.
※ volatile variable rule: a write to a volatile variable happens before a subsequent read of that variable.
※ Transitivity rule: if operation A happens before operation B, and operation B happens before operation C, then operation A happens before operation C.
※ Thread start rule: the start() method of a Thread object happens before every action of the started thread.
※ Thread interruption rule: a call to a thread's interrupt() method happens before the interrupted thread's code detects the interrupt.
※ Thread termination rule: all operations in a thread happen before the detection of that thread's termination; we can detect that a thread has terminated by Thread.join() returning or by the return value of Thread.isAlive().
※ Object finalization rule: the completion of an object's initialization happens before the start of its finalize() method.

The first rule needs careful understanding: it only means the program's results appear as if the code executed sequentially. As long as the result stays the same, the JVM may still reorder operations that have no data dependency between them. This rule therefore guarantees ordering only within a single thread, not across threads.
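The thread start and thread termination rules can be sketched together (a minimal sketch; the class and field names are made up for illustration): a write made before start() is visible inside the new thread, and a write made inside the thread is visible after join() returns, with no locks or volatile needed.

```java
public class HappensBeforeDemo {
    static int before = 0; // written before start(): the start rule makes it visible in the thread
    static int inside = 0; // written inside the thread: join() makes it visible afterwards

    static int demo() throws InterruptedException {
        before = 1;
        Thread t = new Thread(() -> {
            // Thread start rule: guaranteed to see before == 1 here.
            inside = before + 1;
        });
        t.start();
        t.join();
        // Thread termination rule: after join() returns, inside == 2 is visible here.
        return inside;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo()); // prints 2
    }
}
```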

