Java concurrent programming: the usage and principle of volatile


Java concurrent programming series (unfinished):

  • Java concurrent programming: core Theory

  • Java concurrent programming: Synchronized and its implementation principle

  • Java concurrent programming: underlying Optimization of Synchronized (lightweight lock and biased lock)

  • Java concurrent programming: collaboration between threads (wait/notify/sleep/yield/join)

  • Java concurrent programming: Use of volatile and Its Principle

 I. Role of volatile

In the article Java concurrent programming: core theory (http://www.cnblogs.com/paddx/p/5374810.html), we discussed visibility, ordering, and atomicity. In general, all three problems can be solved with the Synchronized keyword. However, if you understand how Synchronized works, you know that it is a heavyweight operation with a significant impact on system performance, so we usually avoid it whenever another solution exists. The volatile keyword is Java's other answer to the visibility and ordering problems. One common misunderstanding concerns atomicity: volatile does guarantee that a single read or write of a volatile variable is atomic (including long and double variables), but it cannot make a compound operation such as i++ atomic, because i++ is essentially a read followed by a write.

II. Use of volatile

The usage of volatile can be illustrated through several examples.

1. Preventing reordering

We will analyze the reordering problem through a classic example: the singleton pattern. In a concurrent environment, the singleton can be implemented with double-checked locking (DCL). The source code is as follows:

package com.paddx.test.concurrent;

public class Singleton {
    public static volatile Singleton singleton;

    /**
     * Private constructor: disable external instantiation.
     */
    private Singleton() {}

    public static Singleton getInstance() {
        if (singleton == null) {                 // first check, without locking
            synchronized (Singleton.class) {     // lock on the class, not on the (possibly null) instance
                if (singleton == null) {         // second check, inside the lock
                    singleton = new Singleton();
                }
            }
        }
        return singleton;
    }
}

Now let's analyze why the volatile keyword is needed on the singleton field. To understand this, we must first understand the object construction process. Instantiating an object can be divided into three steps:

(1) Allocate memory space.

(2) Initialize the object.

(3) Assign the address of the memory space to the corresponding reference.

However, because the compiler and processor may reorder instructions, the actual process can become:

(1) Allocate memory space.

(2) Assign the address of the memory space to the corresponding reference.

(3) Initialize the object.

If steps (2) and (3) are swapped in this way, a reference to an object that has not yet been initialized can be exposed to other threads, leading to unpredictable results. To prevent this reordering, the singleton field must therefore be declared volatile.

2. Visibility

Visibility problems refer to the situation where one thread modifies a shared variable but another thread does not see the new value. The main cause is that each thread has its own working memory, a local cache of shared variables. The volatile keyword effectively solves this problem. Consider the following example:

package com.paddx.test.concurrent;

public class VolatileTest {
    int a = 1;
    int b = 2;

    public void change() {
        a = 3;
        b = a;
    }

    public void print() {
        System.out.println("b=" + b + ";a=" + a);
    }

    public static void main(String[] args) {
        while (true) {
            final VolatileTest test = new VolatileTest();
            new Thread(new Runnable() {
                @Override
                public void run() {
                    try {
                        Thread.sleep(10);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    test.change();
                }
            }).start();

            new Thread(new Runnable() {
                @Override
                public void run() {
                    try {
                        Thread.sleep(10);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    test.print();
                }
            }).start();
        }
    }
}

Intuitively, this code has only two possible outputs: b=3;a=3 or b=2;a=1. However, if you run the code (it may take a little while), you will find a third result in addition to the two above:

......
b=2;a=1
b=2;a=1
b=3;a=3
b=3;a=3
b=3;a=1
b=3;a=3
b=2;a=1
b=3;a=3
b=3;a=3
......

Why does b=3;a=1 appear? Normally, if change() executes before print(), the output should be b=3;a=3; conversely, if print() executes first, the output should be b=2;a=1. So where does b=3;a=1 come from? It appears because the first thread's write a = 3 is not yet visible to the second thread when it reads a, even though the later write to b is. If you declare both a and b as volatile and run the program again, b=3;a=1 no longer appears.
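A minimal sketch of the fix, assuming main() stays exactly as in the example above: declare both fields volatile, as the text suggests.

public class VolatileTest {
    volatile int a = 1;
    volatile int b = 2;

    public void change() {
        a = 3;
        b = a;   // volatile write of b: happens-before any later volatile read of b
    }

    public void print() {
        // b is read before a here; once a thread sees b == 3, the earlier write a = 3
        // must also be visible to it, so b=3;a=1 can no longer occur
        System.out.println("b=" + b + ";a=" + a);
    }
}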

3. Guarantee atomicity

The atomicity issue has already been touched on above: volatile only guarantees the atomicity of a single read or write. On this question, see the description in the JLS:

17.7 Non-Atomic Treatment of double and long

For the purposes of the Java programming language memory model, a single write to a non-volatile long or double value is treated as two separate writes: one to each 32-bit half. This can result in a situation where a thread sees the first 32 bits of a 64-bit value from one write, and the second 32 bits from another write.

Writes and reads of volatile long and double values are always atomic.

Writes to and reads of references are always atomic, regardless of whether they are implemented as 32-bit or 64-bit values.

Some implementations may find it convenient to divide a single write action on a 64-bit long or double value into two write actions on adjacent 32-bit values. For efficiency's sake, this behavior is implementation-specific; an implementation of the Java Virtual Machine is free to perform writes to long and double values atomically or in two parts.

Implementations of the Java Virtual Machine are encouraged to avoid splitting 64-bit values where possible. Programmers are encouraged to declare shared 64-bit values as volatile or synchronize their programs correctly to avoid possible complications.

This passage says essentially what was described above: because a long or double can be handled as a high 32-bit half and a low 32-bit half, an ordinary (non-volatile) long or double read or write may not be atomic. The JLS therefore encourages declaring shared long and double variables as volatile, which guarantees that single reads and writes of them are atomic on every implementation.
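As a small illustration (the class and field names here are made up for this sketch):

public class LastSeen {
    // On some 32-bit JVMs, a plain long write may be split into two 32-bit writes,
    // so a reader could observe half of an old value and half of a new one.
    // Declaring the field volatile guarantees that each single read and write is atomic.
    // It does NOT make compound operations such as timestamp++ atomic.
    private volatile long timestamp;

    public void update(long now) { timestamp = now; }  // single atomic write
    public long read()           { return timestamp; } // single atomic read
}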

The real pitfall is that volatile variables are easily misunderstood to guarantee atomicity for compound operations as well. The following program demonstrates the problem:

package com.paddx.test.concurrent;

public class VolatileTest01 {
    volatile int i;

    public void addI() {
        i++;
    }

    public static void main(String[] args) throws InterruptedException {
        final VolatileTest01 test01 = new VolatileTest01();
        for (int n = 0; n < 1000; n++) {
            new Thread(new Runnable() {
                @Override
                public void run() {
                    try {
                        Thread.sleep(10);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    test01.addI();
                }
            }).start();
        }

        Thread.sleep(10000); // wait 10 seconds to ensure that the threads above have finished

        System.out.println(test01.i);
    }
}

You may mistakenly think that this program is thread-safe once the keyword volatile is added to the variable i. Try running it yourself: the printed value will typically be less than 1000, precisely because i++ is not atomic even on a volatile field.

Note: Thread.sleep() is called in several places in the code above only to increase the chance of the concurrency problem showing up; it has no other effect.
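If you need the increment itself to be atomic, one option (a sketch, not part of the original example) is to replace the volatile int with java.util.concurrent.atomic.AtomicInteger, whose incrementAndGet() performs the read-modify-write as a single atomic operation:

package com.paddx.test.concurrent;

import java.util.concurrent.atomic.AtomicInteger;

public class AtomicTest01 {
    private final AtomicInteger i = new AtomicInteger();

    public void addI() {
        i.incrementAndGet(); // atomic read-modify-write, unlike i++ on a volatile int
    }

    public int getI() {
        return i.get();
    }
}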

III. volatile Principle

Through the examples above, we should have a basic idea of what volatile is and how to use it. Now let's look at how volatile is implemented underneath.

  1. Visibility implementation:

As mentioned earlier, a thread does not operate on main memory directly; it performs its operations through its own working memory. This is the root cause of variables being invisible across threads, so the visibility of volatile variables must be implemented at exactly this level. A write to a volatile variable differs from a write to a normal variable in two ways:

(1) Writing a volatile variable forces the modified value to be flushed to main memory.

(2) Writing a volatile variable also invalidates the cached copy of that variable in other threads' working memory, so when those threads next read the variable they must reload it from main memory.

Together, these two behaviors solve the visibility problem for volatile variables.
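The classic illustration is a volatile stop flag: without volatile, the worker thread may keep spinning on a stale cached value of running; with volatile, the main thread's write is guaranteed to become visible. (This small demo class is my own sketch, not from the original article.)

public class StopFlagDemo {
    // Without volatile, the worker might never observe the write from main().
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(new Runnable() {
            @Override
            public void run() {
                while (running) {
                    // busy work; each volatile read of 'running' loads a fresh value
                }
                System.out.println("worker stopped");
            }
        });
        worker.start();

        Thread.sleep(1000);
        running = false; // volatile write: flushed to main memory, visible to the worker
        worker.join();
    }
}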

 2. Implementation of orderliness:

Before explaining this, let's first look at the happens-before rules in Java. JSR 133 defines happens-before as follows:

Two actions can be ordered by a happens-before relationship. If one action happens before another, then the first is visible to and ordered before the second.

In other words, if action A happens-before action B, then everything A did is visible to B and ordered before B. (Keep this definition in mind: happens-before is often misread as merely meaning "happens earlier in time".) JSR 133 then defines the following happens-before rules:

• Each action in a thread happens before every subsequent action in that thread.
• An unlock on a monitor happens before every subsequent lock on that monitor.
• A write to a volatile field happens before every subsequent read of that volatile.
• A call to start () on a thread happens before any actions in the started thread.
• All actions in a thread happen before any other thread successfully returns from a join () on that thread.
• If an action a happens before an action B, and B happens before an action c, then a happens before c.

Translated:

  • Within a single thread, each action happens-before every subsequent action in that thread (program-order rule). This only means the code must behave as if it ran in program order within that thread; the compiler and processor may still reorder as long as the single-threaded result is unchanged, so this rule by itself does not forbid reordering.
  • An unlock on a monitor happens-before every subsequent lock on that monitor (monitor/Synchronized rule).
  • A write to a volatile field happens-before every subsequent read of that field (volatile rule; see the sketch after this list).
  • A call to a thread's start() method happens-before every action in the started thread (thread-start rule).
  • All actions in a thread happen-before another thread successfully returns from join() on that thread (thread-join rule).
  • If A happens-before B and B happens-before C, then A happens-before C (transitivity).
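The volatile rule is what makes the common "write the data, then set a flag" idiom safe. Below is a minimal sketch (the class and field names are my own) combining the program-order rule, the volatile rule, and transitivity: the write to data happens-before the volatile write to ready, which happens-before the volatile read of ready, which happens-before the read of data, so a reader that sees ready == true is guaranteed to see 42.

public class SafePublication {
    private int data = 0;
    private volatile boolean ready = false;

    // Called by the writer thread.
    public void write() {
        data = 42;     // (1) normal write
        ready = true;  // (2) volatile write: happens-before any later volatile read of 'ready'
    }

    // Called by the reader thread.
    public void read() {
        if (ready) {                  // (3) volatile read
            System.out.println(data); // (4) guaranteed to print 42, by transitivity
        }
    }
}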

Here we focus on the third rule, the one that gives volatile variables their ordering guarantee. As mentioned in Java concurrent programming: core theory (http://www.cnblogs.com/paddix/p/5374810.html), reordering comes in two forms: compiler reordering and processor reordering. To implement volatile's memory semantics, the JMM restricts both kinds of reordering for volatile variables. The following table lists the reordering rules the JMM specifies for volatile variables:

Can reorder?        | 2nd operation
1st operation       | Normal Load/Store | Volatile Load | Volatile Store
--------------------+-------------------+---------------+---------------
Normal Load/Store   |                   |               | No
Volatile Load       | No                | No            | No
Volatile Store      |                   | No            | No

 3. Memory barrier

To implement volatile's visibility and happens-before semantics, the JVM relies on memory barriers. A memory barrier (also called a memory fence) is a set of processor instructions that constrains the order of memory operations. The following table shows the barriers required to enforce the rules above:

Required barriers | 2nd operation
1st operation     | Normal Load | Normal Store | Volatile Load | Volatile Store
------------------+-------------+--------------+---------------+---------------
Normal Load       |             |              |               | LoadStore
Normal Store      |             |              |               | StoreStore
Volatile Load     | LoadLoad    | LoadStore    | LoadLoad      | LoadStore
Volatile Store    |             |              | StoreLoad     | StoreStore

(1) LoadLoad barrier
Execution sequence: Load1 -> LoadLoad -> Load2
Ensures that the data read by Load1 is loaded before Load2 and any subsequent load instructions.

(2) StoreStore barrier
Execution sequence: Store1 -> StoreStore -> Store2
Ensures that the data written by Store1 is visible to other processors before Store2 and any subsequent store instructions execute.

(3) LoadStore barrier
Execution sequence: Load1 -> LoadStore -> Store2
Ensures that the data read by Load1 is loaded before Store2 and any subsequent store instructions execute.

(4) StoreLoad barrier
Execution sequence: Store1 -> StoreLoad -> Load2
Ensures that the data written by Store1 is visible to other processors before Load2 and any subsequent load instructions execute.

Finally, here is an example that shows where the JVM inserts memory barriers:

package com.paddx.test.concurrent;

public class MemoryBarrier {
    int a, b;
    volatile int v, u;

    void f() {
        int i, j;

        i = a;
        j = b;
        i = v;
        // LoadLoad
        j = u;
        // LoadStore
        a = i;
        b = j;
        // StoreStore
        v = i;
        // StoreStore
        u = j;
        // StoreLoad
        i = u;
        // LoadLoad
        // LoadStore
        j = b;
        a = i;
    }
}

IV. Summary

In general, volatile is not easy to understand fully, and that is fine; a complete understanding takes time, and volatile's application scenarios will come up again in later articles. For now, the key points are these: volatile is an optimization for concurrent programming, and in some scenarios it can replace Synchronized, but it cannot replace it in general. Volatile only guarantees thread safety when both of the following conditions hold (a typical valid use is sketched after the list):

(1) Writes to the variable do not depend on its current value.

(2) The variable is not part of an invariant involving other variables.
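For example, a simple shutdown flag satisfies both conditions, whereas a counter violates the first one (a sketch with made-up names):

public class Worker {
    // Valid volatile use: the write 'shutdown = true' does not depend on the current
    // value, and the flag is not tied to an invariant involving other variables.
    private volatile boolean shutdown = false;

    public void requestShutdown() {
        shutdown = true;
    }

    public void runLoop() {
        while (!shutdown) {
            // do work
        }
    }

    // Invalid volatile use: count++ depends on the current value (read-modify-write),
    // so volatile alone cannot make it thread-safe; use AtomicInteger or synchronized.
    private volatile int count;

    public void increment() {
        count++; // still a data race despite volatile
    }
}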

  
