A basic explanation of how volatile and synchronized provide synchronization

Source: Internet
Author: User
Tags: visibility, volatile

Basic Knowledge:

Visibility:

Visibility is a subtle property, because visibility errors tend to contradict our intuition. In general, we cannot be sure that a thread performing a read will see a value written by another thread in a timely manner; sometimes it never does. To guarantee the visibility of memory writes across threads, a synchronization mechanism must be used.

Visibility refers to visibility between threads: the state modified by one thread is visible to another thread, that is, another thread can immediately see the result of the modification. For example, a variable declared volatile has this visibility. A volatile variable is not allowed to be cached privately inside a thread or reordered; reads and writes go straight to main memory, so changes are visible to other threads. One point to note, however, is that volatile only makes the modified value visible; it does not guarantee atomicity. For example, given volatile int a = 0; and the operation a++;, the variable a is visible to all threads, but a++ is still not an atomic operation, so it remains a thread-safety problem.
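To make the a++ pitfall concrete, here is a minimal sketch (the class name and thread counts are illustrative, not taken from the original text); even though the field is volatile, concurrent increments can still be lost:

// Sketch: volatile gives visibility, but a++ (read-modify-write) is still not atomic.
public class VolatileNotAtomic {
    private static volatile int a = 0;   // visible to all threads, yet increments can be lost

    public static void main(String[] args) throws InterruptedException {
        Thread[] workers = new Thread[4];
        for (int t = 0; t < workers.length; t++) {
            workers[t] = new Thread(() -> {
                for (int i = 0; i < 10_000; i++) {
                    a++;                 // really three steps: read a, add 1, write a
                }
            });
            workers[t].start();
        }
        for (Thread w : workers) {
            w.join();
        }
        // Expected 40000, but usually prints less because concurrent increments interleave.
        System.out.println("a = " + a);
    }
}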

In Java, visibility can be achieved with volatile, synchronized, and final.

Atomicity:

An atom is the smallest, indivisible unit. For example, a = 0 (where a is not of type long or double) is an indivisible operation, so we say it is an atomic operation. By contrast, a++ is really a = a + 1; it can be divided into several steps, so it is not an atomic operation. Non-atomic operations have thread-safety problems, and we need synchronization (synchronized) to make them behave atomically. If an operation is indivisible in this sense, we say it has atomicity. The Java concurrency package also provides atomic classes; consult the API documentation to learn how to use them. Examples include AtomicInteger, AtomicLong, and AtomicReference.

In Java, synchronized, and the operations performed between lock and unlock, guarantee atomicity.
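Continuing the sketch above, either an atomic class or a synchronized method makes the increment atomic (class name and thread counts are again illustrative):

import java.util.concurrent.atomic.AtomicInteger;

// Sketch: two ways to make the increment atomic - an atomic class and a synchronized method.
public class AtomicIncrement {
    private static final AtomicInteger atomicCounter = new AtomicInteger(0);
    private static int plainCounter = 0;

    private static synchronized void incrementPlain() {   // lock/unlock around the read-modify-write
        plainCounter++;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] workers = new Thread[4];
        for (int t = 0; t < workers.length; t++) {
            workers[t] = new Thread(() -> {
                for (int i = 0; i < 10_000; i++) {
                    atomicCounter.incrementAndGet();       // atomic without locking
                    incrementPlain();                      // atomic because of the class lock
                }
            });
            workers[t].start();
        }
        for (Thread w : workers) {
            w.join();
        }
        System.out.println(atomicCounter.get() + " " + plainCounter);   // both print 40000
    }
}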

Ordering:

The Java language provides the volatile and synchronized keywords to ensure ordering between threads. volatile works because it carries the semantics of "prohibiting instruction reordering"; synchronized works because of the rule that "a variable may be locked by only one thread at a time", which means that two synchronized blocks holding the same object lock can only execute serially.

The following is adapted from Java Concurrency in Practice:

The following code has problems in a multithreaded environment:

/**
 * @author zhengbinmac
 */
public class NoVisibility {
    private static boolean ready;
    private static int number;

    private static class ReaderThread extends Thread {
        @Override
        public void run() {
            while (!ready) {
                Thread.yield();
            }
            System.out.println(number);
        }
    }

    public static void main(String[] args) {
        new ReaderThread().start();
        number = 42;
        ready = true;
    }
}

NoVisibility may loop forever, because the reader thread may never see the value of ready. It may even print 0, because the reader thread may see the write to ready but not the earlier write to number; this is called reordering. As long as the reordering cannot be detected within a given thread (even if it is visible to other threads), there is no guarantee that the operations in that thread are performed in the order given by the program. When the main thread writes number first and then writes ready without synchronization, the reader thread may see those writes in exactly the opposite order.

In the absence of synchronization, the compiler, the processor, and the runtime can make surprising adjustments to the order in which operations appear to execute. In insufficiently synchronized multithreaded programs, attempts to reason about the order in which memory operations "must" happen will almost certainly lead to wrong conclusions.

This may look like a broken design, but it allows the JVM to take full advantage of the performance of modern multi-core processors. For example, in the absence of synchronization, the Java memory model allows the compiler to reorder operations and cache values in registers, and it allows the CPU to reorder operations and cache values in processor-specific caches.

Volatile principle:

The Java language provides a somewhat weaker form of synchronization, the volatile variable, to ensure that updates to a variable are propagated to other threads. When a variable is declared volatile, the compiler and runtime are put on notice that the variable is shared, so operations on it are not reordered with other memory operations. Volatile variables are not cached in registers or in places hidden from other processors, so reading a volatile variable always returns the most recently written value.

Accessing a volatile variable performs no locking and therefore cannot block the executing thread, so volatile is a lighter-weight synchronization mechanism than the synchronized keyword.
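A common illustration of this lightweight use is a volatile flag that one thread sets and another thread polls; the sketch below uses made-up class and field names:

// Sketch: a volatile flag gives visibility of the stop request without any locking.
public class StopFlag implements Runnable {
    private volatile boolean stopped = false;   // without volatile, the worker might never see the update

    @Override
    public void run() {
        long count = 0;
        while (!stopped) {          // always reads the latest value
            count++;
        }
        System.out.println("worker stopped after " + count + " iterations");
    }

    public static void main(String[] args) throws InterruptedException {
        StopFlag task = new StopFlag();
        new Thread(task).start();
        Thread.sleep(1000);
        task.stopped = true;        // this write becomes visible to the worker thread
    }
}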

Without volatile, when a thread reads and writes a shared variable, it may first copy the variable from main memory into a CPU cache. If the machine has more than one CPU, each thread may run on a different CPU, which means each thread may be working on a copy in a different CPU cache.

Declaring a variable volatile tells the JVM to guarantee that every read of that variable goes to main memory, bypassing the CPU cache. When a variable is defined as volatile, it has two properties:

1. It guarantees that the variable is visible to all threads. "Visibility" here is as described at the beginning of this article: when one thread modifies the value of the variable, volatile guarantees that the new value is immediately written back to main memory and re-read from main memory immediately before each use. Ordinary variables cannot do this; their values are passed between threads only through main memory (see: the Java memory model), so another thread may not see a new value immediately.

2. It prohibits instruction reordering optimizations. When a volatile-modified variable is assigned, an extra "lock addl $0x0,(%esp)" instruction is executed, which acts as a memory barrier (later instructions cannot be reordered to positions before the barrier); if only one CPU accesses the memory, no memory barrier is needed. (Instruction reordering means that the CPU may dispatch instructions to its circuit units out of the order specified by the program.)
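A classic illustration of why this reordering prohibition matters is double-checked locking for a lazily initialized singleton. The sketch below (the class name is illustrative) is only safe because the field is volatile, which prevents the reference from being published before the constructor has finished:

// Sketch: double-checked locking; without volatile, another thread could observe a
// non-null 'instance' that points to a not-yet-fully-constructed object.
public class Singleton {
    private static volatile Singleton instance;

    private Singleton() { }

    public static Singleton getInstance() {
        if (instance == null) {                 // first check, without locking
            synchronized (Singleton.class) {
                if (instance == null) {         // second check, with the class lock held
                    instance = new Singleton(); // volatile forbids reordering of this publication
                }
            }
        }
        return instance;
    }
}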

Volatile performance:

The read performance of a volatile variable is almost the same as that of an ordinary variable, but writes are slightly slower, because they require inserting memory-barrier instructions into the generated native code to keep the processor from executing the surrounding operations out of order.


synchronized:

synchronized can be applied to code blocks, to instance methods, or to a class object (that is, to static methods or to the Class object itself).

Java built-in locks: every Java object can act as a lock for synchronization; these locks are called built-in (intrinsic) locks. The lock is acquired automatically when a thread enters a synchronized block or method, and released when the thread exits that block or method. The only way to acquire a built-in lock is to enter the synchronized block or method that the lock protects.
A Java built-in lock is a mutex: at most one thread can hold the lock at a time. When thread A attempts to acquire a built-in lock held by thread B, thread A must wait or block until thread B releases it; if thread B never releases the lock, thread A waits forever.
Java object locks and class locks: object locks and class locks are conceptually the same as built-in locks, but in practice they differ considerably. An object lock is used for instance methods or for an object instance; a class lock is used for static methods or for the Class object of the class. A class can have many object instances but only one Class object, so object locks on different instances do not interfere with each other, while each class has only one class lock. Note, however, that the "class lock" is only a conceptual aid, not a separate kind of lock; it simply helps us understand the difference between locking an instance method and locking a static method.

One further point about built-in locks (from Chapter 2 of Java Concurrency in Practice): there is no inherent relationship between an object's built-in lock and its state. Although most classes use the built-in lock as a convenient locking mechanism, an object's fields are not necessarily protected by that lock. Acquiring the lock associated with an object does not prevent other threads from accessing the object; it only prevents them from acquiring the same lock. Giving every object a built-in lock merely removes the need to explicitly create lock objects.
Class locks and object locks are two distinct locks that control different things and do not interfere with each other. Likewise, a thread that holds an object lock can also acquire the class lock; holding both at the same time is allowed.
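A small sketch of this distinction (class and method names are made up for illustration): the instance method locks the object, the static method locks the Class object, so the two can run at the same time.

// Sketch: the object lock (this) and the class lock (LockDemo.class) are independent.
public class LockDemo {
    public synchronized void instanceMethod() {          // locks the LockDemo instance
        System.out.println(Thread.currentThread().getName() + " holds the object lock");
        try { Thread.sleep(1000); } catch (InterruptedException ignored) { }
    }

    public static synchronized void staticMethod() {     // locks LockDemo.class
        System.out.println(Thread.currentThread().getName() + " holds the class lock");
        try { Thread.sleep(1000); } catch (InterruptedException ignored) { }
    }

    public static void main(String[] args) {
        LockDemo demo = new LockDemo();
        // These two threads do not block each other, because they acquire different locks.
        new Thread(demo::instanceMethod, "object-lock-thread").start();
        new Thread(LockDemo::staticMethod, "class-lock-thread").start();
    }
}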

At this point we have a reasonable understanding of how synchronized is used. A question then arises: since a synchronized method already provides synchronization, why do we also need synchronized code blocks? The answer is related to a drawback of synchronized.
The drawback of synchronized: when a thread enters a synchronized method and acquires the object lock, other threads calling any synchronized method of the same object must wait or block. For a highly concurrent system this is serious and can easily bring the system down. If a thread runs into an infinite loop inside the synchronized method, it never releases the object lock, and the other threads wait forever. That is a fatal problem.
Of course, both synchronized methods and synchronized blocks carry this risk; any use of the synchronized keyword does. Since the risk cannot be avoided entirely, it should be minimized, and this is where a synchronized block is sometimes better than a synchronized method. For example, suppose a method of a class declares an object instance, SynObject so = new SynObject(), and calls so.testSy() inside the method; the call must be synchronized so that no two threads execute it at the same time. If you simply declare the enclosing method synchronized, then while one thread is inside it, no other thread can enter any other synchronized method of that object; if the method takes a long time, other threads block and the performance of the system suffers. If instead you use a synchronized block, synchronized (so) { so.testSy(); }, the lock being acquired is the lock of the so object, which has nothing to do with the object whose code is executing; while one thread runs this block, the other synchronized methods of the enclosing object are unaffected, because they use a completely different lock. A sketch of this pattern is shown below.
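A minimal sketch of the synchronized (so) pattern just described (SynObject and testSy are the illustrative names used above; CallerService is also made up):

// Sketch: the block locks the helper object 'so', not the CallerService instance,
// so the other synchronized methods of CallerService remain available to other threads.
public class CallerService {
    private final SynObject so = new SynObject();

    public void callTestSy() {
        synchronized (so) {          // acquires so's lock, not this object's lock
            so.testSy();
        }
    }

    public synchronized void otherWork() {
        // uses CallerService's own lock; not blocked by threads inside callTestSy()
    }
}

class SynObject {
    public void testSy() {
        // the work that must not run in two threads at once
    }
}

The next example contrasts a synchronized (this) block with a synchronized method on the same object: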

public class TestSynchronized {
    public void testOne() {
        synchronized (this) {
            int i = 5;
            while (i-- > 0) {
                System.out.println(Thread.currentThread().getName() + " : " + i);
                try { Thread.sleep(500); } catch (InterruptedException ie) { }
            }
        }
    }
    public synchronized void testTwo() {
        int i = 5;
        while (i-- > 0) {
            System.out.println(Thread.currentThread().getName() + " : " + i);
            try { Thread.sleep(500); } catch (InterruptedException ie) { }
        }
    }
    public static void main(String[] args) {
        final TestSynchronized myt2 = new TestSynchronized();
        Thread thread1 = new Thread(new Runnable() {
            public void run() { myt2.testOne(); }
        }, "testOne");
        Thread thread2 = new Thread(new Runnable() {
            public void run() { myt2.testTwo(); }
        }, "testTwo");
        thread1.start();
        thread2.start();
    }
}

There is also a special case, shown in the TestSynchronized example above, where the object lock is used both for a synchronized method and for a synchronized block; this also shows the advantage of the block form. Suppose that after the synchronized block in testOne there were a large amount of unsynchronized code, say a loop of 100,000 iterations, so that testOne takes a very long time to finish. If testOne were declared as a synchronized method, other threads could not call testTwo until the whole method returned. With a synchronized block, the object lock is released as soon as the block is exited, so while the thread is still running testOne's 100,000-iteration loop, other threads can already call testTwo.

This leaves fewer opportunities for threads to block and gives the system better performance.

The object locks of one class are not related to the object locks of another class; when a thread holds an object lock of class A, it can still acquire an object lock of class B.
