Java Multithreading in Depth: The Differences and Usage of synchronized and Lock


In multithreaded development, locking is an important means of thread control, and Java provides two locking mechanisms for it: synchronized and Lock. As a Java enthusiast, it is natural to compare the two, and along the way to learn a few things that concurrent development requires attention to.

We start with the simplest points and gradually work through the differences between the two.

1. Differences in how synchronized and Lock are used

synchronized: this keyword is applied to whatever needs synchronization. It can be added to a method or to a specific code block, where the object in parentheses is the one being locked.
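For illustration, both forms can be seen in a small sketch (the Counter class and its fields are my own, not from the article):

```java
public class Counter {

    private int count = 0;

    // Locking an entire method: the lock taken is `this`
    public synchronized void increment() {
        count++;
    }

    // Locking a specific block: the object in parentheses is locked
    public void incrementBlock() {
        synchronized (this) {
            count++;
        }
    }

    public int get() {
        return count;
    }
}
```

Both methods take the same monitor (the Counter instance), so they exclude each other across threads.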

Lock: the start and end of the locked region must be specified explicitly. Generally the ReentrantLock class is used as the lock, and multiple threads must share the same ReentrantLock instance for mutual exclusion to hold. lock() and unlock() mark the locking and unlocking points, and unlock() is usually written in a finally block so the lock is always released, preventing other threads from deadlocking on a lock that is never freed.
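A minimal sketch of this pattern (the LockCounter class is illustrative, not from the article):

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockCounter {

    // All threads must share this same ReentrantLock instance
    private final ReentrantLock lock = new ReentrantLock();
    private int count = 0;

    public void increment() {
        lock.lock();        // explicit start of the locked region
        try {
            count++;
        } finally {
            lock.unlock();  // always released, even if the body throws
        }
    }

    public int get() {
        return count;
    }
}
```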

The differences in usage are fairly simple and need no further elaboration here; if anything above is unclear, consult a Java syntax reference.

2. Performance differences between synchronized and Lock

synchronized is managed by the JVM itself, while Lock is a lock implemented in Java code. In Java 1.5, synchronized performed poorly: it was a heavyweight operation that had to call into the operating system, so acquiring the lock could consume more system time than the operations around it. By contrast, the Lock objects provided by Java delivered better performance. In Java 1.6, however, things changed: because synchronized has clear semantics, many optimizations could be applied to it, such as adaptive spinning, lock elision, lock coarsening, lightweight locks, and biased locks. As a result, on Java 1.6 the performance of synchronized is no worse than Lock. The Java team has also said that it favors synchronized, and that there is still room for optimizing it in future releases.

While we are here, let me mention the specific differences between the two mechanisms. As far as I know, synchronized uses a pessimistic locking strategy: the thread obtains an exclusive lock, meaning other threads can only block and wait for the holder to release it. When the CPU switches away from a blocked thread it incurs a thread context switch, and when many threads compete for the lock, these frequent context switches make the CPU inefficient.

Lock uses an optimistic strategy. Optimistic locking means performing each operation without taking a lock, assuming there is no conflict, and retrying on failure until it succeeds. The mechanism underlying optimistic locking is the CAS operation (Compare And Swap). If we dig into the source of ReentrantLock, we find that one of the important methods for acquiring the lock is compareAndSetState, which ultimately invokes a special instruction provided by the CPU.

Modern CPUs provide instructions that atomically update shared data while detecting interference from other threads, and compareAndSet() uses these instead of locking. An algorithm built this way is called non-blocking, meaning that the failure or suspension of one thread cannot cause the failure or suspension of another.
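The same compare-and-swap idea is exposed to application code through the atomic classes. Here is a sketch of an optimistic retry loop using AtomicInteger.compareAndSet (my own example, not ReentrantLock's actual source):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {

    public static void main(String[] args) {
        AtomicInteger value = new AtomicInteger(0);

        // Optimistic update: read, compute, then CAS; retry if another
        // thread changed the value in between. No thread ever blocks.
        for (;;) {
            int current = value.get();
            int next = current + 1;
            if (value.compareAndSet(current, next)) {
                break;  // succeeded without ever taking a lock
            }
            // CAS failed: another thread won the race; loop and retry
        }

        System.out.println("value = " + value.get());  // prints "value = 1"
    }
}
```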

My own understanding only goes this far; readers interested in the CPU-level details can consult further material, and if you have a better explanation, please leave me a message so I can learn too.

3. Functional differences between synchronized and Lock

In ordinary cases synchronized and ReentrantLock do not differ much, but in very complex synchronization scenarios, consider using ReentrantLock, especially when you run into the following requirements.

1. A thread waiting for a lock needs to be interruptible.
2. You need several separate wait/notify sets; the Condition objects inside ReentrantLock let you control which waiting threads to wake.
3. You need fair locking, where each arriving thread queues up in order.

Let's look at these cases in detail below.

First, ReentrantLock offers two locking modes: one that ignores interrupts and one that responds to them, which gives us great flexibility. For example, suppose threads A and B compete for a lock. A gets the lock and B waits, but A has so much work to do that it does not return for a very long time. B may not want to keep waiting; it would rather interrupt itself (or be interrupted by another thread), stop waiting for this lock, and go handle other things. ReentrantLock provides two behaviors here. In the first, B is interrupted, but the lock does not respond: it keeps B waiting no matter how many times the interrupt arrives, turning a deaf ear to it (this is how synchronized behaves). In the second, B is interrupted, ReentrantLock handles the interrupt, and B stops waiting for the lock and gives it up entirely. (If you are not familiar with Java's interrupt mechanism, read up on it first and then come back to this article; most people do not really understand what a Java interrupt is.)

Let's run a test. First, a Buffer class with a read operation and a write operation; to avoid reading dirty data, both must take the lock. We first lock it with synchronized, as follows:

public class Buffer {

    private Object lock;

    public Buffer() {
        lock = this;
    }

    public void write() {
        synchronized (lock) {
            long startTime = System.currentTimeMillis();
            System.out.println("Start writing data to this buff...");
            // Simulate a task that takes a very long time to process
            for (;;) {
                if (System.currentTimeMillis() - startTime > Integer.MAX_VALUE)
                    break;
            }
            System.out.println("Finally finished");
        }
    }

    public void read() {
        synchronized (lock) {
            System.out.println("Read data from this buff");
        }
    }
}

Next, we define two threads: one to write and one to read.

public class Writer extends Thread {

    private Buffer buff;

    public Writer(Buffer buff) {
        this.buff = buff;
    }

    @Override
    public void run() {
        buff.write();
    }
}

public class Reader extends Thread {

    private Buffer buff;

    public Reader(Buffer buff) {
        this.buff = buff;
    }

    @Override
    public void run() {
        buff.read();  // this is expected to block
        System.out.println("Read end");
    }
}

Now let's write a main to test it. We deliberately start the writer first and make the reader wait; the write takes practically forever, so let's see whether the read can give up:

public class Test {

    public static void main(String[] args) {
        Buffer buff = new Buffer();

        final Writer writer = new Writer(buff);
        final Reader reader = new Reader(buff);

        writer.start();
        reader.start();

        new Thread(new Runnable() {

            @Override
            public void run() {
                long start = System.currentTimeMillis();
                for (;;) {
                    // wait 5 seconds, then interrupt the read
                    if (System.currentTimeMillis() - start > 5000) {
                        System.out.println("No, try to interrupt");
                        reader.interrupt();
                        break;
                    }
                }
            }
        }).start();
    }
}

We hoped the reader thread could stop waiting for the lock, but no such luck: once the reader finds it cannot get the lock, it waits forever, because the writer needs about 2.1 billion milliseconds (roughly 25 days) to finish. Even when we interrupt it, it does not respond; it really will wait until it dies. This is where ReentrantLock's interrupt-response mechanism lets the "reader" stretch its legs and abandon the wait. Let's rewrite the Buffer class as BufferInterruptibly, a buffer whose lock waits can be interrupted.
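The BufferInterruptibly listing itself is missing from this copy of the article; the following is a reconstruction based on the surrounding description (the key change being lock.lockInterruptibly(), so a thread waiting for the lock responds to interrupt()):

```java
import java.util.concurrent.locks.ReentrantLock;

// Reconstructed from the article's description, not the original listing.
public class BufferInterruptibly {

    private ReentrantLock lock = new ReentrantLock();

    public void write() {
        lock.lock();
        try {
            long startTime = System.currentTimeMillis();
            System.out.println("Start writing data to this buff...");
            // Simulate a task that takes a very long time to process
            for (;;) {
                if (System.currentTimeMillis() - startTime > Integer.MAX_VALUE)
                    break;
            }
            System.out.println("Finally finished");
        } finally {
            lock.unlock();
        }
    }

    public void read() throws InterruptedException {
        lock.lockInterruptibly();  // waiting here can be interrupted
        try {
            System.out.println("Read data from this buff");
        } finally {
            lock.unlock();
        }
    }
}
```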

Of course, Reader and Writer have to change accordingly:

public class Reader extends Thread {

    private BufferInterruptibly buff;

    public Reader(BufferInterruptibly buff) {
        this.buff = buff;
    }

    @Override
    public void run() {
        try {
            buff.read();  // can receive an InterruptedException and exit cleanly
        } catch (InterruptedException e) {
            System.out.println("I do not read");
        }
        System.out.println("Read end");
    }
}

/**
 * Writer does not need to change (apart from the buffer type).
 */
public class Writer extends Thread {

    private BufferInterruptibly buff;

    public Writer(BufferInterruptibly buff) {
        this.buff = buff;
    }

    @Override
    public void run() {
        buff.write();
    }
}

public class Test {

    public static void main(String[] args) {
        BufferInterruptibly buff = new BufferInterruptibly();

        final Writer writer = new Writer(buff);
        final Reader reader = new Reader(buff);

        writer.start();
        reader.start();

        new Thread(new Runnable() {

            @Override
            public void run() {
                long start = System.currentTimeMillis();
                for (;;) {
                    if (System.currentTimeMillis() - start > 5000) {
                        System.out.println("Not waiting any longer, try to interrupt");
                        reader.interrupt();
                        break;
                    }
                }
            }
        }).start();
    }
}

This time the reading thread received the interrupt through lock.lockInterruptibly(), handled the InterruptedException, and exited effectively.

In the second case, ReentrantLock can be used together with Condition, which provides wait/notify-style control logic on top of the ReentrantLock.

For example, after locking with a ReentrantLock, a thread can release the lock via the lock's own condition.await() method and wait there until another thread calls condition.signal(), after which it reacquires the lock and continues execution. The await() call needs to sit inside a while loop that retests the condition; the flag shared between the threads should be declared volatile (reads and writes of a boolean are atomic). The general pattern for this kind of concurrency control looks like this:

volatile boolean isProcessReady = false;  // set by another thread
ReentrantLock lock = new ReentrantLock();
Condition processReady = lock.newCondition();

// Thread.run():
lock.lock();
try {
    while (!isProcessReady) {   // isProcessReady is controlled by another thread
        processReady.await();   // releases the lock and waits here for signal()
    }
    // ... proceed once the condition holds ...
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
} finally {
    lock.unlock();
}
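To make the pattern concrete, here is a self-contained, runnable version (class and field names are illustrative; the Thread.sleep is only there so the waiter blocks before the signal):

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class ConditionDemo {

    static final ReentrantLock lock = new ReentrantLock();
    static final Condition ready = lock.newCondition();
    static volatile boolean isReady = false;

    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(new Runnable() {

            @Override
            public void run() {
                lock.lock();
                try {
                    while (!isReady) {
                        ready.await();   // releases the lock while waiting
                    }
                    System.out.println("Condition met, proceeding");
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    lock.unlock();
                }
            }
        });
        waiter.start();

        Thread.sleep(100);               // give the waiter time to block
        lock.lock();
        try {
            isReady = true;
            ready.signal();              // wake the waiting thread
        } finally {
            lock.unlock();
        }
        waiter.join();
    }
}
```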

The code above is only a simplified sketch. Below is a real excerpt from the Hadoop source:

private class MapOutputBuffer<K extends Object, V extends Object>
    implements MapOutputCollector<K, V>, IndexedSortable {
  ...
  boolean spillInProgress;
  final ReentrantLock spillLock = new ReentrantLock();
  final Condition spillDone = spillLock.newCondition();
  final Condition spillReady = spillLock.newCondition();
  volatile boolean spillThreadRunning = false;
  final SpillThread spillThread = new SpillThread();

  public MapOutputBuffer(TaskUmbilicalProtocol umbilical, JobConf job,
                         TaskReporter reporter)
      throws IOException, ClassNotFoundException {
    ...
    spillInProgress = false;
    spillThread.setDaemon(true);
    spillThread.setName("SpillThread");
    spillLock.lock();
    try {
      spillThread.start();
      while (!spillThreadRunning) {
        spillDone.await();
      }
    } catch (InterruptedException e) {
      throw new IOException("Spill thread failed to initialize", e);
    } finally {
      spillLock.unlock();
    }
  }

  protected class SpillThread extends Thread {

    @Override
    public void run() {
      spillLock.lock();
      spillThreadRunning = true;
      try {
        while (true) {
          spillDone.signal();
          while (!spillInProgress) {
            spillReady.await();
          }
          try {
            spillLock.unlock();
            sortAndSpill();
          } catch (Throwable t) {
            sortSpillException = t;
          } finally {
            spillLock.lock();
            if (bufend < bufstart) {
              bufvoid = kvbuffer.length;
            }
            kvstart = kvend;
            bufstart = bufend;
            spillInProgress = false;
          }
        }
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      } finally {
        spillLock.unlock();
        spillThreadRunning = false;
      }
    }
  }
}

The spillDone in this code is a Condition created from spillLock with newCondition(). Calling spillDone.await() releases spillLock and puts the thread into a blocked state; when another thread later calls spillDone.signal(), the waiting thread wakes up and reacquires spillLock.

As you can see, Lock makes this kind of multi-thread interaction much easier, and synchronized alone cannot do it.

Finally, the ReentrantLock class also provides two modes of competing for the lock: fair and unfair. The meaning is just what the words suggest: with a fair lock, threads acquire the lock in the order in which they asked for it, first come, first served; with an unfair lock, a newly arriving thread may compete for the lock against threads that are already waiting and grab it ahead of them. In terms of efficiency the unfair lock wins, because a fair lock has the extra cost of checking whether the current thread is at the head of the wait queue before granting the lock.
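The choice between the two is made in the ReentrantLock constructor; the boolean argument selects the fairness policy (a small demo of my own):

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairLockDemo {

    public static void main(String[] args) {
        ReentrantLock unfair = new ReentrantLock();    // default: unfair
        ReentrantLock fair = new ReentrantLock(true);  // fair: FIFO hand-off

        System.out.println("unfair.isFair() = " + unfair.isFair());  // prints false
        System.out.println("fair.isFair()   = " + fair.isFair());    // prints true
    }
}
```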

Original address: http://blog.csdn.net/natian306/article/details/18504111
