Java Concurrent Programming: the use of volatile and an analysis of its principles


Java Concurrent Programming series (in progress):

Java Concurrent Programming: Core Theory

Java Concurrent Programming: synchronized and Its Implementation Principle

Java Concurrent Programming: Underlying Optimizations of synchronized (Lightweight Locks, Biased Locks)

Java Concurrent Programming: Collaboration Between Threads (wait/notify/sleep/yield/join)

Java Concurrent Programming: The Use and Principle of volatile

I. The role of volatile

In the article "Java Concurrent Programming: Core Theory", we discussed visibility, ordering, and atomicity. These problems can usually be solved with the synchronized keyword, but anyone who knows how synchronized works also knows that it is a relatively heavyweight operation with a noticeable impact on system performance, so when other solutions exist we generally avoid it. The volatile keyword is the other solution Java provides for visibility and ordering problems. As for atomicity, there is a point that is easy to misunderstand: a single read or write of a volatile variable is guaranteed to be atomic (which matters for long and double variables), but volatile does not make i++ atomic, because i++ is essentially two operations, a read followed by a write.

II. The use of volatile

Let's use several examples to illustrate how volatile is used and in which scenarios.

1. Prevent reordering

Let's analyze the reordering problem with one of the most classic examples: the singleton pattern, which everyone should be familiar with. In a concurrent environment, a singleton is usually implemented with double-checked locking (DCL). The source code is as follows:

package com.paddx.test.concurrent;

public class Singleton {
    public static volatile Singleton singleton;

    /**
     * Constructor is private to prohibit external instantiation.
     */
    private Singleton() {}

    public static Singleton getInstance() {
        if (singleton == null) {
            synchronized (Singleton.class) {
                if (singleton == null) {
                    singleton = new Singleton();
                }
            }
        }
        return singleton;
    }
}

Now let's analyze why the singleton variable needs the volatile keyword. To understand the problem, we first need to understand how an object is constructed. Instantiating an object can actually be divided into three steps:

(1) Allocate the memory space.

(2) Initialize the object.

(3) Assign the address of the memory space to the corresponding reference.

However, because the compiler and processor may reorder instructions, the above procedure can also become the following:

(1) Allocate the memory space.

(2) Assign the address of the memory space to the corresponding reference.

(3) Initialize the object.

With this ordering, a multithreaded environment may expose a reference to an object that has not yet been initialized, leading to unpredictable results. To prevent this reordering, we need to declare the variable as volatile.
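To make the three steps concrete, here is a minimal hand-written sketch (the Config class and field names are mine, purely for illustration). In the sketch the steps are written out explicitly; in the real single statement the JIT compiler or CPU is free to swap steps (2) and (3) unless the reference is volatile:

public class ConstructionSketch {
    static class Config {
        int value = 42; // field set during initialization
    }

    static Config config; // deliberately NOT volatile in this sketch

    static void create() {
        // A hand-written expansion of the single statement `config = new Config()`:
        Config allocated = new Config(); // (1) allocate memory, (2) initialize fields
        config = allocated;              // (3) publish the address to the reference
        // In the real single-statement form, the compiler or CPU may effectively
        // perform (3) before (2). A second thread could then see `config != null`
        // but read `config.value` as 0 instead of 42. Declaring the reference
        // volatile forbids moving the publish before the initialization.
    }
}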

2. Achieve visibility

The visibility problem occurs when one thread modifies the value of a shared variable but another thread does not see the new value. The main cause is that each thread has its own cache area, the thread's working memory. The volatile keyword effectively solves this problem. The following example shows its effect:

package com.paddx.test.concurrent;

public class VolatileTest {
    int a = 1;
    int b = 2;

    public void change() {
        a = 3;
        b = a;
    }

    public void print() {
        System.out.println("b=" + b + ";a=" + a);
    }

    public static void main(String[] args) {
        while (true) {
            final VolatileTest test = new VolatileTest();
            new Thread(new Runnable() {
                @Override public void run() {
                    try { Thread.sleep(10); } catch (InterruptedException e) { e.printStackTrace(); }
                    test.change();
                }
            }).start();
            new Thread(new Runnable() {
                @Override public void run() {
                    try { Thread.sleep(10); } catch (InterruptedException e) { e.printStackTrace(); }
                    test.print();
                }
            }).start();
        }
    }
}

Intuitively, this code should only produce two kinds of output: b=3;a=3 or b=2;a=1. But if you run the code above (it may take a while), you will find a third result besides these two:

......
b=2;a=1
b=2;a=1
b=3;a=3
b=3;a=3
b=3;a=1
b=3;a=3
b=2;a=1
b=3;a=3
b=3;a=3
......

Why does the result b=3;a=1 appear? Normally, if the change method runs first and then the print method, the output should be b=3;a=3. Conversely, if the print method runs first and then the change method, the result should be b=2;a=1. So where does b=3;a=1 come from? The reason is that the first thread has already modified a to 3, but that modification is not yet visible to the second thread. If both a and b are changed to volatile variables, the result b=3;a=1 will never occur again (see the sketch below).
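A minimal sketch of the fix, assuming the rest of the class stays exactly as above; only the two field declarations change:

public class VolatileTest {
    volatile int a = 1;
    volatile int b = 2;

    // change() writes a = 3 before the volatile write b = a, and print()
    // reads the volatile b before reading a. By the volatile rule, the write
    // to b happens-before any later read of b, so a thread that observes
    // b == 3 must also observe a == 3; the output b=3;a=1 can no longer occur.
}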

3. Guarantee atomicity

Atomicity was already touched on above: volatile only guarantees that a single read or write is atomic. The relevant description can be found in the JLS:

17.7 Non-Atomic Treatment of double and long

For the purposes of the Java programming language memory model, a single write to a non-volatile long or double value is treated as two separate writes: one to each 32-bit half. This can result in a situation where a thread sees the first 32 bits of a 64-bit value from one write, and the second 32 bits from another write.

Writes and reads of volatile long and double values are always atomic.

Writes to and reads of references are always atomic, regardless of whether they are implemented as 32-bit or 64-bit values.

Some implementations may find it convenient to divide a single write action on a 64-bit long or double value into two write actions on adjacent 32-bit values. For efficiency's sake, this behavior is implementation-specific; an implementation of the Java Virtual Machine is free to perform writes to long and double values atomically or in two parts.

Implementations of the Java Virtual Machine are encouraged to avoid splitting 64-bit values where possible. Programmers are encouraged to declare shared 64-bit values as volatile or synchronize their programs correctly to avoid possible complications.

This passage says roughly what I described earlier: because operations on the long and double data types may be split into a high 32-bit part and a low 32-bit part, an ordinary long or double read or write may not be atomic. Therefore, you are encouraged to declare shared long and double variables as volatile, which guarantees that a single read or write of a long or double is atomic in all cases.
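A minimal sketch of that recommendation; the class and field names are mine and only illustrative:

public class SharedTimestamps {
    // Without volatile, a writer thread updating this 64-bit value could, on
    // some 32-bit JVMs, be interleaved so that a reader sees the high half of
    // one write and the low half of another. Declaring the field volatile
    // guarantees each single read and write is atomic.
    private volatile long lastUpdateMillis;

    public void touch() { lastUpdateMillis = System.currentTimeMillis(); }

    public long lastUpdate() { return lastUpdateMillis; }
}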

Regarding volatile and atomicity, there is one point that is easily misunderstood. The following program demonstrates the problem:

package com.paddx.test.concurrent;

public class VolatileTest01 {
    volatile int i;

    public void addI() {
        i++;
    }

    public static void main(String[] args) throws InterruptedException {
        final VolatileTest01 test01 = new VolatileTest01();
        for (int n = 0; n < 1000; n++) {
            new Thread(new Runnable() {
                @Override public void run() {
                    try {
                        Thread.sleep(10);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    test01.addI();
                }
            }).start();
        }

        Thread.sleep(10000); // wait 10 seconds to make sure the threads above have finished

        System.out.println(test01.i);
    }
}
You might mistakenly assume that adding the volatile keyword to the variable i makes this program thread-safe. Try running the program yourself; in my local runs the printed value was usually less than 1000.

Your result may well differ from mine, but it should show that volatile does not guarantee atomicity (otherwise the result would always be 1000). The reason is simple: i++ is actually a compound operation consisting of three steps:

(1) Read the value of i.

(2) Add 1 to i.

(3) Write the value of i back to memory.

volatile cannot make these three operations atomic as a whole; to guarantee the atomicity of the +1 operation we can use AtomicInteger or synchronized, as sketched below.
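A minimal sketch of both fixes, assuming the same 1000-thread setup as above; either variant makes the final count reliably 1000:

import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounter {
    // Variant 1: AtomicInteger performs the read-modify-write as a single
    // atomic operation (compare-and-swap under the hood).
    private final AtomicInteger atomicI = new AtomicInteger(0);

    public void addAtomic() {
        atomicI.incrementAndGet();
    }

    // Variant 2: synchronized makes the three steps of i++ appear as one
    // indivisible block to other threads.
    private int i;

    public synchronized void addSynchronized() {
        i++;
    }
}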

Note: Thread.sleep() is called in several places in the example programs above only to increase the likelihood that the concurrency problem shows up; it has no other effect.

III. The principle of volatile

From the examples above we should now know what volatile is and how to use it. Next, let's look at how volatile is implemented at the lower levels.

1. Implementation of visibility

As mentioned in the previous article, a thread does not operate on main memory directly; it performs its operations through its own working memory. This is the fundamental cause of data being invisible between threads. So to give volatile variables their visibility, the JVM starts exactly there. A write to a volatile variable differs from a write to an ordinary variable in two main ways:

(1) Modifying a volatile variable forces the new value to be flushed to main memory.

(2) Modifying a volatile variable invalidates the copy of that variable in other threads' working memory, so those threads must re-read the value from main memory.

Together these two actions solve the visibility problem for volatile variables.
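A classic sketch of this effect (the class and field names are mine, not from the original article): a worker thread polls a flag that another thread sets, and the two actions above guarantee the worker sees the update:

public class StopFlagDemo {
    // Without volatile, the worker thread could keep reading a stale copy of
    // `running` from its working memory and never stop.
    private volatile boolean running = true;

    public void work() {
        while (running) {
            // do some work
        }
        System.out.println("worker stopped");
    }

    public void stop() {
        running = false; // flushed to main memory; other threads' copies are invalidated
    }

    public static void main(String[] args) throws InterruptedException {
        final StopFlagDemo demo = new StopFlagDemo();
        new Thread(new Runnable() {
            @Override public void run() { demo.work(); }
        }).start();
        Thread.sleep(100);
        demo.stop();
    }
}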

2. Implementation of ordering

Before explaining this, let's look at the happens-before rules in Java. JSR 133 defines happens-before as follows:

Two actions can be ordered by a happens-before relationship. If one action happens-before another, then the first is visible to and ordered before the second.

In plain terms: if A happens-before B, then everything A did is visible to B. (Keep this in mind, because "happens-before" is easily misread as merely describing which event occurred earlier in time.) Now let's look at the happens-before rules defined in JSR 133:

• Each action in a thread happens before every subsequent action in that thread.
• An unlock on a monitor happens before every subsequent lock on that monitor.
• A write to a volatile field happens before every subsequent read of that volatile.
• A call to start() on a thread happens before any actions in the started thread.
• All actions in a thread happen before any other thread successfully returns from a join() on that thread.
• If an action a happens before an action b, and b happens before an action c, then a happens before c.

Translated to:

• Within a single thread, each operation happens-before every subsequent operation. (That is, a single thread executes in code order. However, the compiler and processor may still reorder instructions as long as the result in a single-threaded environment is unaffected; in other words, this rule does not forbid compiler or instruction reordering.)

• An unlock operation on a monitor happens-before every subsequent lock operation on that monitor. (The synchronized rule.)

• A write to a volatile variable happens-before every subsequent read of that variable. (The volatile rule.)

• A call to a thread's start() method happens-before every action of the started thread. (The thread start rule.)

• All actions of a thread happen-before another thread's successful return from join() on that thread.

• If A happens-before B and B happens-before C, then A happens-before C (transitivity).

Here we focus mainly on the third rule: the volatile rule that guarantees ordering. The article "Java Concurrent Programming: Core Theory" mentioned that reordering is divided into compiler reordering and processor reordering. To implement the memory semantics of volatile, the JMM restricts both kinds of reordering for volatile variables. The following table shows the JMM reordering rules for volatile variables:

Can reorder? (rows: 1st operation, columns: 2nd operation; "-" means the reordering is permitted)

                     Normal Load/Store    Volatile Load    Volatile Store
Normal Load/Store    -                    -                No
Volatile Load        No                   No               No
Volatile Store       -                    No               No
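The practical consequence of the "No" entries is safe publication: an ordinary write placed before a volatile write cannot be moved after it, and an ordinary read placed after a volatile read cannot be moved before it. A minimal sketch (the class and field names are mine, for illustration only):

public class Publication {
    private int data;               // ordinary field
    private volatile boolean ready; // volatile flag

    public void writer() {
        data = 42;    // (1) ordinary store
        ready = true; // (2) volatile store: (1) cannot be reordered after (2)
    }

    public void reader() {
        if (ready) {  // (3) volatile load
            // (4) ordinary load: cannot be reordered before (3), so if
            // `ready` is true here, `data` is guaranteed to be 42.
            System.out.println(data);
        }
    }
}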

3. Memory barrier

To implement the visibility and happens-before semantics of volatile, the JVM relies underneath on something called a "memory barrier". A memory barrier, also known as a memory fence, is a set of processor instructions used to limit the order of memory operations. The following table shows the memory barriers required to implement the rules above:

Required barriers (rows: 1st operation, columns: 2nd operation; "-" means no barrier is required)

                     Normal Load    Normal Store    Volatile Load    Volatile Store
Normal Load          -              -               -                LoadStore
Normal Store         -              -               -                StoreStore
Volatile Load        LoadLoad       LoadStore       LoadLoad         LoadStore
Volatile Store       -              -               StoreLoad        StoreStore

(1) LoadLoad barrier

Order of execution: Load1 -> LoadLoad -> Load2
Ensures that the data of Load1 is loaded before Load2 and all subsequent load instructions load their data.

(2) StoreStore barrier

Order of execution: Store1 -> StoreStore -> Store2
Ensures that the data of Store1 is visible to other processors before Store2 and all subsequent store instructions execute.

(3) LoadStore barrier

Order of execution: Load1 -> LoadStore -> Store2
Ensures that the data of Load1 is loaded before Store2 and all subsequent store instructions are flushed to memory.

(4) StoreLoad barrier

Order of execution: Store1 -> StoreLoad -> Load2
Ensures that the data of Store1 is visible to other processors before Load2 and all subsequent load instructions read their data.

Finally, an example illustrates where the JVM inserts memory barriers:

package com.paddx.test.concurrent;

public class MemoryBarrier {
    int a, b;
    volatile int v, u;

    void f() {
        int i, j;

        i = a;
        j = b;
        i = v;
        // LoadLoad
        j = u;
        // LoadStore
        a = i;
        b = j;
        // StoreStore
        v = i;
        // StoreStore
        u = j;
        // StoreLoad
        i = u;
        // LoadLoad
        // LoadStore
        j = b;
        a = i;
    }
}

IV. Summary

Generally speaking, volatile is not easy to understand. If you do not fully grasp it yet, there is no need to rush; complete understanding takes time, and later articles in this series will revisit scenarios where volatile is used. For now, a basic understanding of what volatile does and how it works is enough. Broadly speaking, volatile is an optimization in concurrent programming and can replace synchronized in some scenarios, but it cannot completely take the place of synchronized; it is only applicable in certain special situations. In general, the following two conditions must both be met for volatile to ensure thread safety in a concurrent environment (contrasted in the sketch after this list):

(1) Writes to the variable do not depend on its current value.

(2) The variable is not part of an invariant involving other variables.
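A minimal sketch of the contrast, with illustrative names of my own: the counter violates condition (1) because each write depends on the value just read, while the flag satisfies both conditions:

public class VolatileSuitability {
    // NOT safe: counter++ reads the current value and then writes,
    // so the write depends on the current value (condition 1 violated).
    private volatile int counter;

    // Safe: the write is a plain assignment and no invariant ties it
    // to other fields, so volatile alone is enough.
    private volatile boolean shutdownRequested;

    public void unsafeIncrement() {
        counter++; // still a lost-update race despite volatile
    }

    public void requestShutdown() {
        shutdownRequested = true;
    }

    public boolean isShutdownRequested() {
        return shutdownRequested;
    }
}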

