Explaining deadlock and memory visibility with the synchronized keyword in Java

Tags: mutex, object, static, class, visibility, volatile

First, a detailed look at synchronized:
synchronized is a Java keyword that, when applied to a method or a block of code, ensures that at most one thread at a time executes that code.

First, when two concurrent threads access a synchronized (this) block on the same object instance, only one of them can execute it at a time; the other thread must wait until the current thread has finished the block before it can enter it.

Second, when one thread is executing a synchronized (this) block of an object, other threads can still access the non-synchronized code of that object.

Third, and most importantly, when one thread is executing a synchronized (this) block of an object, all other threads are blocked from entering any other synchronized (this) block of the same object.

The third point extends to all other synchronized code of the object: when a thread enters a synchronized (this) block, it acquires the object's lock, so other threads are temporarily blocked from every synchronized section of that object.

The same rules apply to locks on other objects as well.
In short, synchronized takes a lock on behalf of the current thread: only the thread that owns the lock can execute the instructions in the block, and other threads must wait to acquire the lock before they can do the same. A small sketch of the second and third points follows.
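To illustrate the second and third points, here is a minimal sketch (the class and method names are made up for illustration, not from the original): while one thread is inside lockedA()'s synchronized (this) block, another thread can still call notLocked() immediately, but a call to lockedB() blocks until the lock is released.

public class ObjectLockDemo {

    public void lockedA() throws InterruptedException {
        synchronized (this) {
            Thread.sleep(2000);                      // hold the object's lock for a while
        }
    }

    public void lockedB() {
        synchronized (this) {
            System.out.println("lockedB entered");   // waits until lockedA() releases the lock
        }
    }

    public void notLocked() {
        System.out.println("notLocked entered");     // not synchronized: runs right away
    }
}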
This is all very easy to use, but the author ran into another, rather exotic, situation:
1. In the same class, two methods are declared with the synchronized keyword.
2. While executing one of the methods, you need to wait for the other method (invoked as a callback on an asynchronous thread) to finish, so a CountDownLatch is used to wait.
3. The code structure is as follows:

synchronized void A() {
    countDownLatch = new CountDownLatch(1);
    // do something
    countDownLatch.await();
}

synchronized void B() {
    countDownLatch.countDown();
}

Here, method A is executed by the main thread, and method B is executed by an asynchronous thread when its callback fires.
The result of running this:
The main thread enters method A and gets stuck; no matter how long you wait, it never moves on.
It is a classic deadlock.
A waits for B to run, yet B, even though its callback has actually fired, is waiting for A to finish. Why? Because synchronized is at work.
In general, when we want to synchronize a block of code, we use a shared object as the lock, for example:

byte[] mutex = new byte[0];

void A1() {
    synchronized (mutex) {
        // do something
    }
}

void B1() {
    synchronized (mutex) {
        // do something
    }
}

If you move the contents of methods A and B into the synchronized blocks of A1 and B1, the situation becomes easy to understand:
A1 waits indirectly (through the CountDownLatch) for B1 to finish.
However, A1 starts waiting for B1 without ever releasing the mutex, so even when the asynchronous callback invokes B1, B1 blocks waiting for the mutex to be released.
The result is a deadlock!
Putting the synchronized keyword in front of a method works the same way; the Java language merely hides the declaration and use of the mutex. All synchronized methods of the same object share the same mutex, so even with an asynchronous callback a deadlock can occur. Pay attention to this: this class of error comes from using the synchronized keyword incorrectly. Either don't use it, or use it correctly. One possible way around it is sketched below.
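One possible way around it (an editor's sketch, not the original author's code) is simply not to hold the monitor while waiting, for example by waiting on the latch outside the synchronized section:

// sketch only: countDownLatch is a java.util.concurrent.CountDownLatch
private final Object mutex = new Object();
private CountDownLatch countDownLatch;

void A() throws InterruptedException {
    synchronized (mutex) {
        countDownLatch = new CountDownLatch(1);
        // do something
    }
    countDownLatch.await();        // the monitor is free here, so B() can proceed
}

void B() {
    synchronized (mutex) {
        countDownLatch.countDown();
    }
}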
So what exactly is this invisible mutex object?
It is natural to guess that it is the instance itself, since you never have to define a separate lock object. To verify the idea, you can write a small program.
The idea is simple: define a class with two methods, one declared synchronized and one using synchronized (this) in its body, then start two threads that call the two methods. If the two methods contend for the same lock (one waits for the other), it shows that the invisible mutex of a synchronized method is indeed the instance itself.

public class MultiThreadSync {

    public synchronized void m1() throws InterruptedException {
        System.out.println("m1 called");
        Thread.sleep(2000);
        System.out.println("m1 called done");
    }

    public void m2() throws InterruptedException {
        synchronized (this) {
            System.out.println("m2 called");
            Thread.sleep(2000);
            System.out.println("m2 called done");
        }
    }

    public static void main(String[] args) {
        final MultiThreadSync thisObj = new MultiThreadSync();

        Thread t1 = new Thread() {
            @Override
            public void run() {
                try {
                    thisObj.m1();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        };

        Thread t2 = new Thread() {
            @Override
            public void run() {
                try {
                    thisObj.m2();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        };

        t1.start();
        t2.start();
    }
}

The resulting output is:

m1 called
m1 called done
m2 called
m2 called done

This shows that the synchronized block in m2 has to wait for m1 to finish, which confirms the idea above.
It should also be noted that when synchronized is applied to a static method, the lock object is the Class object of the current class, because a static method belongs to the class rather than to an instance. You can write a similar program to verify this; a short sketch follows the equivalence illustration below.
So, when reading code, a synchronized instance method can mentally be replaced with a synchronized (this) {} block:

synchronized void method() {                void method() {
    // biz code                                  synchronized (this) {
}                              ---->>>               // biz code
                                                 }
                                             }
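For the static case mentioned above, the analogous equivalence looks like this (the class name StaticSync is made up for illustration):

public class StaticSync {

    // a static synchronized method locks the Class object, not an instance
    public static synchronized void s1() {
        // biz code
    }

    // roughly equivalent to:
    public static void s2() {
        synchronized (StaticSync.class) {
            // biz code
        }
    }
}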

The memory visibility of synchronized
In Java, we all know that the synchronized keyword can be used to achieve mutual exclusion between threads, but we often forget that it has another role: guaranteeing the memory visibility of variables. That is, when a reading thread and a writing thread access the same variable concurrently, synchronized ensures that after the writing thread updates the variable, the reading thread sees the latest value when it subsequently reads it.

Consider the following example:

public class NoVisibility {
    private static boolean ready = false;
    private static int number = 0;

    private static class ReaderThread extends Thread {
        @Override
        public void run() {
            while (!ready) {
                Thread.yield(); // yield the CPU so other threads can run
            }
            System.out.println(number);
        }
    }

    public static void main(String[] args) {
        new ReaderThread().start();
        number = 42;
        ready = true;
    }
}

What do you think the reading thread will output? 42? Under normal circumstances it will output 42, but because of reordering the reading thread may instead output 0, or never print anything at all.

We know that the compiler may reorder statements when compiling Java source into bytecode, and the CPU may reorder instructions when executing machine code, as long as the reordering does not break the semantics of the program within a single thread.

In other words, as long as the reordering does not change the result of single-threaded execution, there is no guarantee that operations are performed in the order they are written in the program, even if the reordering has a visible effect on other threads.
This means that the statement ready = true may execute before the statement number = 42, in which case the reading thread could print number's default value, 0.

In the Java memory model, reordering can cause exactly this kind of memory visibility problem. Under the Java memory model, each thread has its own working memory (mainly CPU caches and registers); a thread operates on variables in its own working memory, and communication between threads happens through synchronization between main memory and each thread's working memory.

For the example above, the writing thread may have updated number to 42 and ready to true, but it is quite possible that it has only flushed number to main memory so far (for example because of a CPU write buffer). The reading thread then keeps seeing ready as false, so the code above never prints any value.

If we use the synchronized keyword to synchronize the accesses, this problem goes away.

public class NoVisibility {
    private static boolean ready = false;
    private static int number = 0;
    private static Object lock = new Object();

    private static class ReaderThread extends Thread {
        @Override
        public void run() {
            synchronized (lock) {
                while (!ready) {
                    Thread.yield();
                }
                System.out.println(number);
            }
        }
    }

    public static void main(String[] args) {
        synchronized (lock) {
            new ReaderThread().start();
            number = 42;
            ready = true;
        }
    }
}


This is because the Java memory model makes the following guarantee for the semantics of synchronized:

When thread A releases a lock M, the variables it has written (for example x and y, which live in its working memory) are flushed to main memory. When thread B later acquires the same lock M, thread B's working memory is invalidated, so thread B reloads the variables it accesses from main memory into its working memory (at which point x = 1 and y = 1 are the latest values written by thread A). In this way, communication between thread A and thread B is achieved.

This is in fact one of the happens-before rules defined by JSR-133. JSR-133 defines the following set of happens-before rules for the Java memory model:

    • Single-thread (program order) rule: each operation in a thread happens-before every subsequent operation in the same thread.
    • An unlock operation on a monitor happens-before every subsequent lock operation on the same monitor.
    • A write to a volatile field happens-before every subsequent read of the same volatile field (a sketch applying this rule follows the list).
    • A call to Thread.start() happens-before every action in the started thread.
    • All operations in a thread happen-before another thread returns from a join() on that thread.
    • The end of an object's constructor happens-before the start of that object's finalizer.
    • Transitivity rule: if operation A happens-before operation B, and B happens-before C, then A happens-before C.
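As a side note, the volatile rule above also gives a lock-free fix for the earlier NoVisibility example; a minimal sketch (an editor's illustration, assuming we only need the reader to see the latest ready and number):

public class VolatileVisibility {
    // the volatile write of ready happens-before any read that sees true,
    // so the earlier write to number is also visible to the reader
    private static volatile boolean ready = false;
    private static int number = 0;

    public static void main(String[] args) {
        new Thread(() -> {
            while (!ready) {
                Thread.yield();
            }
            System.out.println(number);   // prints 42 once the loop exits
        }).start();

        number = 42;
        ready = true;
    }
}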

This set of happens-before rules defines memory visibility between operations: if operation A happens-before operation B, then the results of A, such as a write to a variable, are guaranteed to be visible when B executes.

To understand these happens-before rules more deeply, let's look at an example:

// shared state accessed by threads A and B
Object lock = new Object();
int a = 0;
int b = 0;
int c = 0;

// thread A runs the following code
synchronized (lock) {
    a = 1;                     // 1
    b = 2;                     // 2
}                              // 3 (unlock)
c = 3;                         // 4

// thread B runs the following code
synchronized (lock) {          // 5 (lock)
    System.out.println(a);     // 6
    System.out.println(b);     // 7
    System.out.println(c);     // 8
}

Assume thread A runs first, assigning values to the three variables a, b and c (note: the assignments to a and b are inside the synchronized block), and thread B runs afterwards, reading the three variables and printing them. What values of a, b and c does thread B print?

According to the single-thread rule, within thread A's execution, operation 1 happens-before operation 2, operation 2 happens-before operation 3, and operation 3 happens-before operation 4. Likewise, within thread B's execution, operation 5 happens-before operation 6, operation 6 happens-before operation 7, and operation 7 happens-before operation 8. According to the monitor unlock/lock rule, operation 3 (the unlock) happens-before operation 5 (the lock). Then, by transitivity, operations 1 and 2 happen-before operations 6, 7 and 8.

By the memory semantics of happens-before, the results of operations 1 and 2 are visible to operations 6, 7 and 8, so the a and b printed by thread B must be 1 and 2. For operation 4 (the write to c) and operation 8 (the read of c), however, the existing happens-before rules do not let us conclude that operation 4 happens-before operation 8, so thread B may still read c as 0 rather than 3. A sketch of a variant that fixes this follows.
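By the same rules, if the write to c were also placed inside thread A's synchronized block, it would be ordered before the unlock (operation 3) and would therefore happen-before thread B's read of c; a sketch of that variant (an editor's illustration):

// thread A, variant: all three writes inside the synchronized block
synchronized (lock) {
    a = 1;    // 1
    b = 2;    // 2
    c = 3;    // now ordered before the unlock
}
// thread B is unchanged and, provided thread A's block runs first,
// it is now guaranteed to print c = 3 as well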
