Java Concurrent Programming Learning Notes: Core Theoretical Foundations

Source: Internet
Author: User
Tags: cas, static, class, visibility, volatile

Concurrent programming is one of the most important skills for a Java programmer, and also one of the hardest to master. It requires a deep understanding of how the underlying machine operates, as well as clear and careful reasoning, in order to write efficient, safe, and reliable multithreaded programs. This series starts from the essentials: the thread-coordination methods (wait, notify, notifyAll), synchronized, and volatile, and explains each of the concurrency tools the JDK provides along with their underlying implementation mechanisms. On that basis, we will go on to analyze the tools in the java.util.concurrent package: how to use them, how they are implemented in the source code, and the principles behind them. This is the first article in the series; it covers the theoretical core on which the rest of the series builds.

I. Sharing

Data sharing is one of the main causes of thread-safety problems. If all data were visible only within a single thread, there would be no thread-safety issues at all, which is one of the main reasons we often do not need to think about thread safety in everyday programming. In multithreaded programming, however, data sharing is unavoidable. The most typical scenario is data in a database: to keep the data consistent, we usually have to share the same data. Even in a master-slave setup we are still accessing the same logical data; the replication exists only for access efficiency and data safety, but it is still one copy of the data. Let's use a simple example to illustrate the problem of sharing data across threads:

Code Snippet One:

package com.paddx.test.concurrent;

public class ShareData {
    public static int count = 0;
    public static void main(String[] args) {
        final ShareData data = new ShareData();
        for (int i = 0; i < 10; i++) {
            new Thread(new Runnable() {
                @Override
                public void run() {
                    try {
                        Thread.sleep(1); // pause 1 ms on entry to make a concurrency problem more likely
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    for (int j = 0; j < 100; j++) {
                        data.addCount();
                    }
                    System.out.print(count + " ");
                }
            }).start();
        }
        try {
            Thread.sleep(3000); // pause the main thread 3 seconds so the threads above can finish
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println("count=" + count);
    }

    public void addCount() { count++; }
}

The intent of the code above is to increment count 1000 times in total, using 10 threads that each perform 100 increments. Normally the output should be 1000. If you actually run the program, however, you will find that this is not always the case: a typical run prints a final value smaller than 1000 (the result differs from run to run, and occasionally the correct value does appear).

As you can see, operating on shared variables in a multithreaded environment very easily produces all kinds of unexpected results.

II. Mutual Exclusion

Mutual exclusion on a resource means that only one visitor is allowed to access it at a time; access is unique and exclusive. We usually allow multiple threads to read the same data simultaneously, but only one thread at a time may write it, which is why locks are commonly divided into shared locks and exclusive locks, also known as read locks and write locks. If access to a resource does not need to be mutually exclusive, then even shared data poses no thread-safety problem. For example, immutable shared data can only be read by all threads, so thread safety never comes into question. For write operations on shared data, however, mutual exclusion generally has to be guaranteed; the example above goes wrong precisely because it does not guarantee it. Java provides several mechanisms for ensuring mutual exclusion, and the simplest is synchronized. Let's add synchronized to the program above and run it again:

Code Snippet Two:

package com.paddx.test.concurrent;

public class ShareData {
    public static int count = 0;
    public static void main(String[] args) {
        final ShareData data = new ShareData();
        for (int i = 0; i < 10; i++) {
            new Thread(new Runnable() {
                @Override
                public void run() {
                    try {
                        Thread.sleep(1); // pause 1 ms on entry to make a concurrency problem more likely
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    for (int j = 0; j < 100; j++) {
                        data.addCount();
                    }
                    System.out.print(count + " ");
                }
            }).start();
        }
        try {
            Thread.sleep(3000); // pause the main thread 3 seconds so the threads above can finish
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println("count=" + count);
    }
    /** Add the synchronized keyword */
    public synchronized void addCount() { count++; }
}

If you run this version, you will find that no matter how many times you execute it, the final result is always 1000.
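Besides synchronized, the shared (read) and exclusive (write) lock distinction mentioned at the start of this section is also available directly in the JDK. Below is a minimal sketch using java.util.concurrent.locks.ReentrantReadWriteLock; the ReadWriteCounter class is written here purely for illustration and is not part of the original example:

import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteCounter {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private int count = 0;

    // Write lock: exclusive, so only one thread at a time may modify count
    public void addCount() {
        lock.writeLock().lock();
        try {
            count++;
        } finally {
            lock.writeLock().unlock();
        }
    }

    // Read lock: shared, so any number of threads may read count concurrently
    public int getCount() {
        lock.readLock().lock();
        try {
            return count;
        } finally {
            lock.readLock().unlock();
        }
    }
}

For a simple counter, synchronized is usually sufficient; a read/write lock mainly pays off when reads greatly outnumber writes.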

III. Atomicity

Atomicity means that an operation on data is an independent, indivisible whole: it executes as one continuous, uninterruptible step, and other threads cannot modify the data while the operation is only half done. The simplest things that are guaranteed to be atomic are single machine instructions; if an operation corresponds to exactly one instruction, its atomicity is assured. Many operations, however, cannot be completed in a single instruction. For example, on some platforms an operation on a long has to be split into separate instructions for the high and low 32 bits. Likewise, the familiar i++ on an integer actually takes three steps: (1) read the current value of i; (2) add one to it; (3) write the result back to memory. Under multithreading, these steps from different threads can interleave: for instance, two threads both read the same value of count, both add one, and both write back the same result, so one of the increments is lost.
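To make the three steps concrete, the unsynchronized addCount() from Code Snippet One behaves roughly as if it were written like the following sketch (a conceptual expansion for illustration, not a suggested implementation):

public void addCount() {
    int temp = count;  // step 1: read the current value of the shared field
    temp = temp + 1;   // step 2: add one to the value that was read
    count = temp;      // step 3: write the result back to the shared field
    // If two threads both complete step 1 before either reaches step 3,
    // they read the same value, write back the same result, and one increment is lost.
}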

This is exactly why Code Snippet One produces an incorrect result. For such compound operations, the most common way to guarantee atomicity is to add a lock; in Java this can be done with synchronized or with Lock, and Code Snippet Two does it with synchronized. Besides locking, there is also the CAS (Compare and Swap) approach: before modifying the data, compare it with the value that was previously read, perform the modification only if they are equal, and retry if they differ. This is essentially an optimistic-locking strategy. CAS is not effective in every scenario, though; for example, another thread may change a value and then change it back to the original before the comparison, the so-called ABA problem.
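As a minimal sketch of the CAS approach, the counter can also be built on java.util.concurrent.atomic.AtomicInteger, whose compareAndSet method is backed by a hardware CAS instruction; the CasCounter wrapper class below is written only for illustration:

import java.util.concurrent.atomic.AtomicInteger;

public class CasCounter {
    private final AtomicInteger count = new AtomicInteger(0);

    // Lock-free increment: read the current value, then attempt a compare-and-swap;
    // if another thread modified count in between, the CAS fails and we simply retry.
    public void addCount() {
        int current;
        do {
            current = count.get();
        } while (!count.compareAndSet(current, current + 1));
    }

    public int getCount() {
        return count.get();
    }
}

In practice, AtomicInteger.incrementAndGet() performs the same retry loop internally; the explicit loop is shown here only to make the compare-and-swap step visible. For the ABA problem mentioned above, the JDK also provides AtomicStampedReference, which pairs the value with a version stamp.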

IV. Visibility

To understand visibility, you first need to understand the JVM's memory model, which is structured much like the hardware memory hierarchy: each thread has its own working memory (comparable to a CPU cache, which exists to bridge the speed gap between the CPU and main memory and thereby improve performance), while shared variables live in main memory.

When a thread reads a shared variable, it operates on a copy of it in its working memory; writes likewise go to that copy first and are synchronized back to main memory only at some later point. The problem this creates is that if thread 1 modifies a shared variable, thread 2 may not see that modification. The following program demonstrates the visibility problem:

package com.paddx.test.concurrent;

public class VisibilityTest {
    private static boolean ready;
    private static int number;

    private static class ReaderThread extends Thread {
        public void run() {
            try {
                Thread.sleep(10); // short pause; the exact delay is not essential
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            if (!ready) {
                System.out.println(ready);
            }
            System.out.println(number);
        }
    }

    private static class WriterThread extends Thread {
        public void run() {
            try {
                Thread.sleep(10); // short pause; the exact delay is not essential
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            number = 100;
            ready = true;
        }
    }

    public static void main(String[] args) {
        new WriterThread().start();
        new ReaderThread().start();
    }
}

Intuitively, this program should output only 100, and the value of ready should never be printed at all. In reality, executing the code several times can produce different results; in my runs, two kinds of anomalous output appeared, which are discussed below.

Of course, only the first kind of result can be attributed purely to visibility: when the writer thread (WriterThread) sets ready = true, the reader thread (ReaderThread) does not see the modification, so it prints false. In the second kind of result, the reader did not see the writer's update when it evaluated if (!ready), but by the time it executed System.out.println(ready) the update had become visible, so it printed true. That second result, however, could also simply be caused by the interleaved execution of the two threads. In Java, visibility can be guaranteed with synchronized or with volatile; the details will be analyzed in later articles.
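One common fix is to declare ready as volatile. The sketch below illustrates this (the class name VolatileVisibilityTest and the busy-wait loop are choices made for this illustration, not part of the original example); once the reader sees ready == true, it is also guaranteed to see the earlier write to number:

public class VolatileVisibilityTest {
    // volatile guarantees that the write to ready is visible to other threads,
    // and that the earlier write to number is visible once ready is read as true
    private static volatile boolean ready;
    private static int number;

    public static void main(String[] args) {
        new Thread(() -> {              // writer
            number = 100;
            ready = true;
        }).start();
        new Thread(() -> {              // reader
            while (!ready) { }          // spins until the writer's update becomes visible
            System.out.println(number); // guaranteed to print 100
        }).start();
    }
}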

V. Ordering

To improve performance, compilers and processors may reorder instructions. Reordering can be divided into three categories:

(1) Compiler-optimized reordering. The compiler can rearrange the execution order of statements as long as the semantics of a single-threaded program are not changed.

(2) Instruction-level parallel reordering. Modern processors use instruction-level parallelism (Instruction-Level Parallelism, ILP) to overlap the execution of multiple instructions. If there are no data dependencies, the processor can change the order in which statements are translated into machine instructions.

(3) Memory-system reordering. Because the processor uses caches and read/write buffers, loads and stores can appear to execute out of order.

We can refer directly to the description of the reordering problem in JSR 133:

(1) Original program (A and B are both initially 0):

    Thread 1:  1: r2 = A;   2: B = 1;
    Thread 2:  3: r1 = B;   4: A = 2;

(2) One legal compiled result, with Thread 1's instructions reordered:

    Thread 1:  B = 1;   r2 = A;
    Thread 2:  r1 = B;  A = 2;

Look first at the original program in (1). From the source-code point of view, either instruction 1 or instruction 3 executes first. If instruction 1 executes first, r2 should not be able to see the value written by instruction 4; if instruction 3 executes first, r1 should not be able to see the value written by instruction 2. Nevertheless, a run may produce r2 == 2 and r1 == 1, and that is the effect of reordering: (2) above is one possible legal compiled result, in which the order of instructions 1 and 2 has been swapped, and with that ordering the outcome r2 == 2, r1 == 1 can occur. In Java, ordering, like visibility, can be guaranteed with synchronized or volatile.
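The sketch below attempts to reproduce this effect in code; the class name ReorderingDemo is illustrative, and the surprising outcome is not guaranteed to show up on every JVM or CPU, so the loop may run for a long time or never terminate:

public class ReorderingDemo {
    static int A, B, r1, r2;

    public static void main(String[] args) throws InterruptedException {
        for (long i = 1; ; i++) {
            A = 0; B = 0; r1 = 0; r2 = 0;
            Thread t1 = new Thread(() -> { r2 = A; B = 1; }); // instructions 1 and 2
            Thread t2 = new Thread(() -> { r1 = B; A = 2; }); // instructions 3 and 4
            t1.start(); t2.start();
            t1.join();  t2.join();
            if (r2 == 2 && r1 == 1) { // only possible if some reordering occurred
                System.out.println("r2 == 2 and r1 == 1 after " + i + " iterations");
                break;
            }
        }
    }
}

Creating fresh threads on every iteration introduces a lot of synchronization of its own, so a dedicated tool such as jcstress is better suited for reliably observing these effects.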

VI. Summary

This article has laid out the theoretical foundations of Java concurrent programming. Some of the topics, such as visibility and ordering, will be examined in more detail in later analyses, and subsequent articles will build on the theory presented here. If you can understand the material above, it should be a great help both for reading other articles on concurrent programming and for everyday concurrent programming work.
