[Java Threads] Lock mechanisms: synchronized, Lock, and Condition (reprint)

Source: Internet
Author: User
Tags: mutex, volatile

http://www.infoq.com/cn/articles/java-memory-model-5 — In-depth understanding of the Java memory model (5): locks

http://www.ibm.com/developerworks/cn/java/j-jtp10264/ — Java theory and practice: more flexible and scalable locking in JDK 5.0

http://blog.csdn.net/ghsau/article/details/7481142

1. synchronized

Declaring a block of code as synchronized has two important consequences: the code gains atomicity and visibility.

1.1 atomicity

Atomicity means that at any moment only one thread can execute the code protected by a given monitor object. This prevents multiple threads from conflicting when they update shared state.

1.2 Visibility

Visibility is more subtle: it deals with the anomalies introduced by memory caches and compiler optimizations. Synchronization must ensure that the changes made to shared data before a lock is released are visible to any thread that subsequently acquires the same lock.

Why it matters: without the visibility guarantee provided by the synchronization mechanism, a thread might see stale or inconsistent values of shared variables, which can cause many serious problems.

How it works: when a thread acquires a lock, it first invalidates its local cache, which guarantees that variables are loaded directly from main memory. Similarly, before a thread releases a lock, it flushes its cache, forcing any changes it has made out to main memory. This ensures that two threads synchronizing on the same lock see the same values for the variables modified inside the synchronized block.
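As a small sketch of these semantics (the class and method names here are illustrative, not from the original article): a stop flag whose reads and writes both go through the same monitor is guaranteed to be published from one thread to another.

```java
public class StopFlag {
    private boolean stopped; // guarded by the monitor of "this"

    public synchronized void requestStop() {
        stopped = true; // releasing the monitor flushes this write to main memory
    }

    public synchronized boolean isStopped() {
        return stopped; // acquiring the same monitor sees the latest write
    }

    public static void main(String[] args) throws InterruptedException {
        StopFlag flag = new StopFlag();
        Thread worker = new Thread(() -> {
            while (!flag.isStopped()) {
                Thread.yield(); // spin until the main thread's write becomes visible
            }
            System.out.println("worker saw the stop request");
        });
        worker.start();
        flag.requestStop();
        worker.join();
    }
}
```

Without the synchronized keyword on both methods, the worker could legally spin forever, because nothing would force its cached view of `stopped` to be refreshed.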

In general, a thread is not required to make its cached values of variables immediately visible to other threads (the values may sit in registers or processor-local caches, or be reordered by the compiler). But if the developer uses synchronization, the runtime guarantees that updates made by one thread before leaving a synchronized block are visible to another thread as soon as it enters a block protected by the same monitor (lock). A similar rule applies to volatile variables.

Note: volatile only guarantees visibility, not atomicity!
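To see the difference, here is a hypothetical demo (the names are mine, not from the article): two threads each increment a volatile int and an AtomicInteger 10,000 times. The volatile counter may come up short, because `++` is a non-atomic read-modify-write; the atomic counter is always exact.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class VolatileDemo {
    static volatile int volatileCount = 0;               // visible, but ++ is not atomic
    static final AtomicInteger atomicCount = new AtomicInteger();

    static void runDemo() {
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                volatileCount++;               // read-modify-write: updates can be lost
                atomicCount.incrementAndGet(); // atomic: never loses an update
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        try {
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        runDemo();
        System.out.println("volatile: " + volatileCount + " (may be < 20000)");
        System.out.println("atomic:   " + atomicCount.get()); // always 20000
    }
}
```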

1.3 When do you want to synchronize?

The basic rule for visibility synchronization is that you must synchronize in the following situations:

Reading a variable that may have last been written by another thread

Writing a variable that may next be read by another thread

Synchronization for consistency: when you modify several related values, you want other threads to see the whole set of changes atomically: either all of the changes or none of them.

This applies to related data items (such as a particle's position and velocity) and to metadata items (such as the data values contained in a linked list and the links of the list itself).
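Sketching the particle example (a hypothetical class, not from the article): position and velocity are updated and read under the same lock, so no thread ever observes a half-updated pair.

```java
public class Particle {
    private double position; // related values: must always change together
    private double velocity;

    public synchronized void update(double position, double velocity) {
        // both fields change under one lock, so readers never see
        // the new position paired with the old velocity
        this.position = position;
        this.velocity = velocity;
    }

    public synchronized double[] snapshot() {
        // read both under the same lock for a consistent pair
        return new double[] { position, velocity };
    }

    public static void main(String[] args) {
        Particle p = new Particle();
        p.update(1.5, -0.5);
        double[] s = p.snapshot();
        System.out.println("position=" + s[0] + " velocity=" + s[1]);
    }
}
```

With unsynchronized getters for each field, a reader could interleave between the two assignments and see an inconsistent state.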

In some cases, you do not have to synchronize to pass data from one thread to another, because the JVM has implicitly performed the synchronization for you. These cases include:

Data initialized by a static initializer (on a static field or in a static {} block)

Final fields, accessed after the object has been properly constructed

Objects created before their thread is created

Objects that a thread can already see when it starts working with them

1.4 Limitations of synchronized

synchronized is good, but it is not perfect. It has some functional limitations:

It cannot interrupt a thread that is waiting to acquire a lock.

It cannot poll for a lock: if you do not want to wait, there is no way to try for the lock and give up.

Synchronization also requires that a lock be released in the same stack frame in which it was acquired. In most cases this is fine (and it interacts well with exception handling), but some situations call for non-block-structured locking.
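For contrast, the ReentrantLock covered next does allow polling, via tryLock(). A minimal sketch (class and method names here are illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    // Poll for the lock instead of blocking forever (impossible with synchronized).
    static boolean pollOnce(ReentrantLock lock) {
        if (lock.tryLock()) {
            try {
                return true; // got the lock; do the guarded work here
            } finally {
                lock.unlock();
            }
        }
        return false; // lock busy: back off, retry later, or give up
    }

    // Demonstrates the poll failing: another thread holds the lock the whole time.
    static boolean pollWhileHeld() {
        ReentrantLock lock = new ReentrantLock();
        lock.lock(); // this thread holds the lock...
        boolean[] got = new boolean[1];
        Thread t = new Thread(() -> got[0] = pollOnce(lock));
        t.start();
        try {
            t.join(); // ...while the other thread polls and fails immediately
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            lock.unlock();
        }
        return got[0];
    }

    public static void main(String[] args) {
        System.out.println(pollOnce(new ReentrantLock())); // true: lock was free
        System.out.println(pollWhileHeld());               // false: lock was busy
    }
}
```

ReentrantLock also offers timed polling (`tryLock(timeout, unit)`) and interruptible waiting (`lockInterruptibly()`), addressing the other two limitations.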

2. ReentrantLock

The Lock framework in java.util.concurrent.locks is an abstraction of locking that allows a lock to be implemented as a Java class rather than as a language feature. This leaves room for multiple implementations of Lock with different scheduling algorithms, performance characteristics, or locking semantics.

The ReentrantLock class implements Lock with the same concurrency and memory semantics as synchronized, but adds features such as lock polling, timed lock waits, and interruptible lock waits. It also performs better under heavy contention. (In other words, when many threads are competing for a shared resource, the JVM spends less time scheduling threads and more time actually executing them.)

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

class Outputter1 {
    private Lock lock = new ReentrantLock(); // lock object

    public void output(String name) {
        lock.lock(); // acquire the lock
        try {
            for (int i = 0; i < name.length(); i++) {
                System.out.print(name.charAt(i));
            }
        } finally {
            lock.unlock(); // release the lock
        }
    }
}

Difference:

Note that a synchronized method or block releases its lock automatically when the code finishes executing, but with Lock we must release the lock manually. To guarantee that the lock is eventually released (even if an exception is thrown), do the mutually exclusive work inside try and release the lock in finally!

3. Read/write locks: ReadWriteLock

The example above does the same job as synchronized, so where is the advantage of Lock?

For example, suppose a class provides get() and set() methods for its internal shared data. With synchronized, the code looks like this:

class SyncData {
    private int data; // shared data

    public synchronized void set(int data) {
        System.out.println(Thread.currentThread().getName() + " preparing to write data");
        try {
            Thread.sleep(20);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        this.data = data;
        System.out.println(Thread.currentThread().getName() + " wrote " + this.data);
    }

    public synchronized void get() {
        System.out.println(Thread.currentThread().getName() + " preparing to read data");
        try {
            Thread.sleep(20);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println(Thread.currentThread().getName() + " read " + this.data);
    }
}

Then write a test class that reads and writes the shared data from multiple threads:

public static void main(String[] args) {
    // final Data data = new Data();           // ReadWriteLock version (below)
    final SyncData data = new SyncData();      // synchronized version
    // final RwLockData data = new RwLockData();

    // writers
    for (int i = 0; i < 3; i++) {
        Thread t = new Thread(new Runnable() {
            @Override
            public void run() {
                for (int j = 0; j < 5; j++) {
                    data.set(new Random().nextInt(30));
                }
            }
        });
        t.setName("thread-w" + i);
        t.start();
    }

    // readers
    for (int i = 0; i < 3; i++) {
        Thread t = new Thread(new Runnable() {
            @Override
            public void run() {
                for (int j = 0; j < 5; j++) {
                    data.get();
                }
            }
        });
        t.setName("thread-r" + i);
        t.start();
    }
}

Output:

thread-w0 preparing to write data
thread-w0 wrote 0
thread-w0 preparing to write data
thread-w0 wrote 1
thread-r1 preparing to read data
thread-r1 read 1
thread-r1 preparing to read data
thread-r1 read 1
thread-r1 preparing to read data
thread-r1 read 1
thread-r1 preparing to read data
thread-r1 read 1
thread-r1 preparing to read data
thread-r1 read 1
thread-r2 preparing to read data
thread-r2 read 1
thread-r2 preparing to read data
thread-r2 read 1
thread-r2 preparing to read data
thread-r2 read 1
thread-r2 preparing to read data
thread-r2 read 1
thread-r2 preparing to read data
thread-r2 read 1
thread-r0 preparing to read data   // r0 and r2 could have read at the same time; reads should not exclude each other!
thread-r0 read 1
thread-r0 preparing to read data
thread-r0 read 1
thread-r0 preparing to read data
thread-r0 read 1
thread-r0 preparing to read data
thread-r0 read 1
thread-r0 preparing to read data
thread-r0 read 1
thread-w1 preparing to write data
thread-w1 wrote 18
thread-w1 preparing to write data
thread-w1 wrote 16
thread-w1 preparing to write data
thread-w1 wrote 19
thread-w1 preparing to write data
thread-w1 wrote 21
thread-w1 preparing to write data
thread-w1 wrote 4
thread-w2 preparing to write data
thread-w2 wrote 10
thread-w2 preparing to write data
thread-w2 wrote 4
thread-w2 preparing to write data
thread-w2 wrote 1
thread-w2 preparing to write data
thread-w2 wrote 14
thread-w2 preparing to write data
thread-w2 wrote 2
thread-w0 preparing to write data
thread-w0 wrote 4
thread-w0 preparing to write data
thread-w0 wrote 20
thread-w0 preparing to write data
thread-w0 wrote 29

Everything looks fine: no thread steps on another. But wait... it is normal for read threads and write threads to exclude each other, but do two read threads need to exclude each other?

No! Read threads should not be mutually exclusive.

We can achieve this with a read-write lock, ReadWriteLock:

import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class Data {
    private int data; // shared data
    private ReadWriteLock rwl = new ReentrantReadWriteLock();

    public void set(int data) {
        rwl.writeLock().lock(); // acquire the write lock
        try {
            System.out.println(Thread.currentThread().getName() + " preparing to write data");
            try {
                Thread.sleep(20);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            this.data = data;
            System.out.println(Thread.currentThread().getName() + " wrote " + this.data);
        } finally {
            rwl.writeLock().unlock(); // release the write lock
        }
    }

    public void get() {
        rwl.readLock().lock(); // acquire the read lock
        try {
            System.out.println(Thread.currentThread().getName() + " preparing to read data");
            try {
                Thread.sleep(20);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            System.out.println(Thread.currentThread().getName() + " read " + this.data);
        } finally {
            rwl.readLock().unlock(); // release the read lock
        }
    }
}

Test results:

thread-w1 preparing to write data
thread-w1 wrote 9
thread-w1 preparing to write data
thread-w1 wrote 24
thread-w1 preparing to write data
thread-w1 wrote 12
thread-w0 preparing to write data
thread-w0 wrote 22
thread-w0 preparing to write data
thread-w0 wrote 15
thread-w0 preparing to write data
thread-w0 wrote 6
thread-w0 preparing to write data
thread-w0 wrote 13
thread-w0 preparing to write data
thread-w0 wrote 0
thread-w2 preparing to write data
thread-w2 wrote 23
thread-w2 preparing to write data
thread-w2 wrote 24
thread-w2 preparing to write data
thread-w2 wrote 24
thread-w2 preparing to write data
thread-w2 wrote 17
thread-w2 preparing to write data
thread-w2 wrote 11
thread-r2 preparing to read data
thread-r1 preparing to read data
thread-r0 preparing to read data
thread-r0 read 11
thread-r1 read 11
thread-r2 read 11
thread-w1 preparing to write data
thread-w1 wrote 18
thread-w1 preparing to write data
thread-w1 wrote 1
thread-r0 preparing to read data
thread-r2 preparing to read data
thread-r1 preparing to read data
thread-r2 read 1
thread-r2 preparing to read data
thread-r1 read 1
thread-r0 read 1
thread-r1 preparing to read data
thread-r0 preparing to read data
thread-r0 read 1
thread-r2 read 1
thread-r2 preparing to read data
thread-r1 read 1
thread-r0 preparing to read data
thread-r1 preparing to read data
thread-r0 read 1
thread-r2 read 1
thread-r1 read 1
thread-r0 preparing to read data
thread-r1 preparing to read data
thread-r2 preparing to read data
thread-r1 read 1
thread-r2 read 1
thread-r0 read 1

Read-write locks allow a higher level of concurrent access to shared data than a mutex does. Although only one thread at a time (a writer thread) can modify the shared data, in many cases any number of threads can read it concurrently (the reader threads).

In theory, the extra concurrency a read-write lock allows yields better performance than a mutex.

In practice, the concurrency gain is fully realized only on multiprocessors, and only when the access pattern suits it: data that is read far more often than it is modified. For example, a collection that is populated once and afterwards mostly searched (such as a directory) is an ideal candidate for a read-write lock.
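A sketch of such a read-mostly directory (the class name and keys are illustrative, not from the article): lookups take the read lock, so any number can proceed in parallel, while the rare updates take the exclusive write lock.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// A read-mostly "directory": many concurrent lookups, rare updates.
public class RwDirectory {
    private final Map<String, String> entries = new HashMap<>();
    private final ReadWriteLock rwl = new ReentrantReadWriteLock();

    public String lookup(String key) {
        rwl.readLock().lock(); // many reader threads may hold this at once
        try {
            return entries.get(key);
        } finally {
            rwl.readLock().unlock();
        }
    }

    public void update(String key, String value) {
        rwl.writeLock().lock(); // exclusive: blocks readers and other writers
        try {
            entries.put(key, value);
        } finally {
            rwl.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        RwDirectory dir = new RwDirectory();
        dir.update("home", "/users/alice");
        System.out.println(dir.lookup("home"));
    }
}
```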

4. Inter-thread communication: Condition

Condition can replace traditional inter-thread communication: await() replaces wait(), signal() replaces notify(), and signalAll() replaces notifyAll().

Why are these methods not simply called wait()/notify()/notifyAll()? Because those methods of Object are final and cannot be overridden!

Anything traditional thread communication can do, Condition can do as well.

Note that a Condition is bound to a Lock: it must be created with the lock's newCondition() method.

The power of Condition is that it can create different conditions for different groups of threads.

Look at an example from the JDK documentation: suppose there is a bounded buffer that supports put and take methods. If a take is attempted on an empty buffer, the thread blocks until an item becomes available; if a put is attempted on a full buffer, the thread blocks until space becomes available. We would like to keep the put threads and the take threads in separate wait sets so that, when an item or space becomes available, we can notify only a single thread of the right kind. This can be done with two Condition instances.

Note: this is essentially what java.util.concurrent.ArrayBlockingQueue does.
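A minimal sketch using the library class directly (the hand-written version follows below): put() blocks when the queue is full and take() blocks when it is empty, exactly the bounded-buffer behavior.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class QueueDemo {
    public static void main(String[] args) throws InterruptedException {
        // Bounded buffer of capacity 100, like the hand-written example
        BlockingQueue<Object> buffer = new ArrayBlockingQueue<>(100);
        buffer.put("item-1");     // blocks if the queue is full
        Object x = buffer.take(); // blocks if the queue is empty
        System.out.println(x);
    }
}
```

In production code, prefer the library class; the hand-written buffer below is valuable for understanding how two Condition objects cooperate.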

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

class BoundedBuffer {
    final Lock lock = new ReentrantLock();          // lock object
    final Condition notFull = lock.newCondition();  // writer threads wait on this
    final Condition notEmpty = lock.newCondition(); // reader threads wait on this

    final Object[] items = new Object[100]; // buffer
    int putptr;  // write index
    int takeptr; // read index
    int count;   // number of items in the buffer

    // write
    public void put(Object x) throws InterruptedException {
        lock.lock(); // acquire the lock
        try {
            // if the buffer is full, block the writer thread
            while (count == items.length) {
                notFull.await();
            }
            // store the item and advance the write index
            items[putptr] = x;
            if (++putptr == items.length) putptr = 0;
            ++count;
            // wake up one reader thread
            notEmpty.signal();
        } finally {
            lock.unlock(); // release the lock
        }
    }

    // read
    public Object take() throws InterruptedException {
        lock.lock(); // acquire the lock
        try {
            // if the buffer is empty, block the reader thread
            while (count == 0) {
                notEmpty.await();
            }
            // read the item and advance the read index
            Object x = items[takeptr];
            if (++takeptr == items.length) takeptr = 0;
            --count;
            // wake up one writer thread
            notFull.signal();
            return x;
        } finally {
            lock.unlock(); // release the lock
        }
    }
}

Advantage:

If the buffer is full, the blocked threads are all writers, so the thread to wake is definitely a reader; conversely, if it is empty, the blocked threads are all readers and the one to wake is definitely a writer.

What would happen with only one Condition? When the buffer is full, the lock would have no way to know whether it is waking a reader or a writer. If it happens to wake a reader, great; if it wakes a writer, that thread is blocked again as soon as it runs, and yet another wake-up is needed: a lot of wasted time.
