http://www.infoq.com/cn/articles/java-memory-model-5 — In-Depth Understanding of the Java Memory Model (5): Locks
http://www.ibm.com/developerworks/cn/java/j-jtp10264/ — Java Theory and Practice: More Flexible, Scalable Locking in JDK 5.0
http://blog.csdn.net/ghsau/article/details/7481142
1. synchronized
Declaring a block of code as synchronized has two important consequences: the code gains atomicity and visibility.
1.1 Atomicity
Atomicity means that at any given moment only one thread can execute the code protected by a given monitor object. This prevents multiple threads from conflicting with each other when they update shared state.
1.2 Visibility
Visibility is more subtle; it deals with the effects of memory caching and compiler optimizations. Synchronization must ensure that changes made to shared data before a lock is released are visible to another thread that subsequently acquires the same lock.
Purpose: if no synchronization mechanism provided this visibility, a thread might see a stale or inconsistent value of a shared variable, which can cause many serious problems.
Principle: when a thread acquires a lock, it first invalidates its own cache, which guarantees that variables are loaded directly from main memory. Similarly, before a thread releases a lock, it flushes its cache, forcing any changes it has made to be written back to main memory. This ensures that two threads synchronizing on the same lock see the same values for variables modified within the synchronized block.
In general, a thread's cached values of variables (whether held in registers, in processor-specific caches, or rearranged by instruction reordering or other compiler optimizations) are not guaranteed to be immediately visible to other threads. But if the developer uses synchronization, the runtime ensures that the updates a thread made before leaving a synchronized block are visible to any thread that later enters a synchronized block protected by the same monitor (lock). A similar rule applies to volatile variables.
-- volatile guarantees only visibility; it does not guarantee atomicity.
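For illustration, here is a minimal sketch (the class and method names are made up, not from the original article) of what volatile does and does not give you: a lone volatile flag is enough for visibility when one thread writes and others read, but a compound operation such as count++ is not made atomic by volatile and still needs a monitor.

class VisibilityVsAtomicity {
    private volatile boolean ready = false;   // volatile: a reader thread will see the write
    private int count = 0;                    // guarded by this object's monitor

    void publish()    { ready = true; }       // single write: volatile visibility is enough
    boolean isReady() { return ready; }

    // count++ is a read-modify-write; volatile alone would not make it atomic,
    // so the increment is protected by synchronized instead
    synchronized void increment() { count++; }
    synchronized int current()    { return count; }
}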
1.3 When to Synchronize
The basic rule for visibility is that you must synchronize whenever you:
read a variable that might last have been written by another thread
write a variable that might next be read by another thread
Synchronization for consistency: when you modify several related values, you want other threads to see that set of changes atomically, either all of the changes or none of them.
This applies to related data items, such as the position and velocity of a particle (as sketched below), and to metadata items, such as the data values contained in a linked list and the links between the nodes of the list itself.
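As a sketch of the "related values" case (the Particle class below is hypothetical, not from the original text): both fields are written and read under the same monitor, so another thread sees either the old pair or the new pair, never a mix of the two.

class Particle {
    private double position;
    private double velocity;

    // update both related values under one lock so they change atomically
    synchronized void move(double newPosition, double newVelocity) {
        this.position = newPosition;
        this.velocity = newVelocity;
    }

    // read both values under the same lock to get a consistent snapshot
    synchronized double[] snapshot() {
        return new double[] { position, velocity };
    }
}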
In some cases you do not have to use synchronization to pass data from one thread to another, because the JVM performs the synchronization implicitly. These situations include:
When data is initialized by a static initializer (an initializer on a static field or in a static{} block)
When accessing final fields
When an object is created before the thread is created
When the thread can already see the object it will be working on
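A small illustration of the first two cases (the Config class and its fields are hypothetical): values published through a static initializer, or through final fields of a properly constructed object, can be read by other threads without additional synchronization.

import java.util.HashMap;
import java.util.Map;

class Config {
    // initialized by a static initializer: any thread that later uses Config
    // sees the fully initialized DEFAULTS map without extra synchronization
    static final Map<String, String> DEFAULTS;
    static {
        DEFAULTS = new HashMap<String, String>();
        DEFAULTS.put("timeout", "30");
    }

    // a final field assigned in the constructor is visible to any thread that
    // obtains a reference to the constructed object (safe publication)
    private final String name;

    Config(String name) { this.name = name; }

    String getName()    { return name; }
}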
1.4 Limitations of synchronized
synchronized is nice, but it is not perfect. It has some functional limitations: it cannot interrupt a thread that is waiting to acquire a lock; it cannot poll for a lock and back off if the lock is not available; and it requires that a lock be released in the same stack frame in which it was acquired. In most cases this is fine (and it interacts well with exception handling), but there are situations where non-block-structured locking is more appropriate.
2. ReentrantLock
The Lock framework in java.util.concurrent.locks is an abstraction of locking that allows a lock to be implemented as an ordinary Java class rather than as a language feature. This leaves room for multiple implementations of Lock, which may have different scheduling algorithms, performance characteristics, or locking semantics.
The ReentrantLock class implements Lock. It has the same concurrency and memory semantics as synchronized, but adds features such as lock polling, timed lock waits, and interruptible lock waits. In addition, it offers better performance under heavy contention. (In other words, when many threads want to access a shared resource, the JVM spends less time scheduling threads and more time executing them.)
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

class Outputter1 {
    private Lock lock = new ReentrantLock();    // lock object

    public void output(String name) {
        lock.lock();                            // acquire the lock
        try {
            for (int i = 0; i < name.length(); i++) {
                System.out.print(name.charAt(i));
            }
        } finally {
            lock.unlock();                      // release the lock
        }
    }
}
Difference:
Note that a synchronized method or block releases its lock automatically when the code finishes executing, but Lock requires us to release the lock manually. To ensure the lock is always released (even if an exception occurs), the critical section is placed inside a try block and the unlock call is placed in finally.
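The extra capabilities listed above (lock polling, timed lock waits, interruptible lock waits) look roughly like the following sketch; the surrounding class and method names are made up for illustration, and only the java.util.concurrent.locks calls are real API.

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

class LockFeatures {
    private final Lock lock = new ReentrantLock();

    // lock polling: give up immediately if the lock is not available
    boolean tryOnce() {
        if (lock.tryLock()) {
            try {
                return true;              // got the lock, do the work here
            } finally {
                lock.unlock();
            }
        }
        return false;                     // someone else holds the lock
    }

    // timed lock wait: wait at most one second for the lock
    boolean tryTimed() throws InterruptedException {
        if (lock.tryLock(1, TimeUnit.SECONDS)) {
            try {
                return true;
            } finally {
                lock.unlock();
            }
        }
        return false;
    }

    // interruptible lock wait: another thread may interrupt us while we block
    void doInterruptibly() throws InterruptedException {
        lock.lockInterruptibly();
        try {
            // critical section
        } finally {
            lock.unlock();
        }
    }
}

These are exactly the capabilities that synchronized cannot provide, which is why they are listed as advantages of ReentrantLock.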
3. Read-write lock: ReadWriteLock
In the example above, Lock provides the same functionality as synchronized; the following example shows where Lock has an advantage.
For example, a class provides get() and set() methods for its internal shared data. Implemented with synchronized, the code looks like this:
class SyncData {
    private int data;    // shared data

    public synchronized void set(int data) {
        System.out.println(Thread.currentThread().getName() + " preparing to write data");
        try { Thread.sleep(20); } catch (InterruptedException e) { e.printStackTrace(); }
        this.data = data;
        System.out.println(Thread.currentThread().getName() + " wrote " + this.data);
    }

    public synchronized void get() {
        System.out.println(Thread.currentThread().getName() + " preparing to read data");
        try { Thread.sleep(20); } catch (InterruptedException e) { e.printStackTrace(); }
        System.out.println(Thread.currentThread().getName() + " read " + this.data);
    }
}
Then write a test class that uses multiple threads to read and write the shared data:
public static void main(String[] args) {
    // final Data data = new Data();             // switch to this for the read-write lock version below
    final SyncData data = new SyncData();
    // final RwLockData data = new RwLockData();

    // writers
    for (int i = 0; i < 3; i++) {
        Thread t = new Thread(new Runnable() {
            @Override
            public void run() {
                for (int j = 0; j < 5; j++) {
                    data.set(new Random().nextInt(30));
                }
            }
        });
        t.setName("Thread-W" + i);
        t.start();
    }

    // readers
    for (int i = 0; i < 3; i++) {
        Thread t = new Thread(new Runnable() {
            @Override
            public void run() {
                for (int j = 0; j < 5; j++) {
                    data.get();
                }
            }
        });
        t.setName("Thread-R" + i);
        t.start();
    }
}
Run Result:
Thread-W0 preparing to write data
Thread-W0 wrote 0
Thread-W0 preparing to write data
Thread-W0 wrote 1
Thread-R1 preparing to read data
Thread-R1 read 1
Thread-R1 preparing to read data
Thread-R1 read 1
...
Thread-R2 preparing to read data
Thread-R2 read 1
Thread-R2 preparing to read data
Thread-R2 read 1
...
Thread-R0 preparing to read data      // R0 and R2 should be able to read at the same time; reads ought not to be mutually exclusive
Thread-R0 read 1
Thread-R0 preparing to read data
Thread-R0 read 1
...
Thread-W1 preparing to write data
Thread-W1 wrote 4
...
Thread-W2 preparing to write data
Thread-W2 wrote 2
...
Thread-W0 preparing to write data
Thread-W0 wrote 29
Everything looks fine so far: the threads do not interfere with each other. Wait a minute... it is right that the read threads and the write threads do not interfere with each other, but why should two read threads have to exclude each other?
That's right: read threads should not be mutually exclusive.
We can achieve this with the read-write lock ReadWriteLock:
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class Data {
    private int data;                                           // shared data
    private ReadWriteLock rwl = new ReentrantReadWriteLock();

    public void set(int data) {
        rwl.writeLock().lock();                                 // acquire the write lock
        try {
            System.out.println(Thread.currentThread().getName() + " preparing to write data");
            try { Thread.sleep(20); } catch (InterruptedException e) { e.printStackTrace(); }
            this.data = data;
            System.out.println(Thread.currentThread().getName() + " wrote " + this.data);
        } finally {
            rwl.writeLock().unlock();                           // release the write lock
        }
    }

    public void get() {
        rwl.readLock().lock();                                  // acquire the read lock
        try {
            System.out.println(Thread.currentThread().getName() + " preparing to read data");
            try { Thread.sleep(20); } catch (InterruptedException e) { e.printStackTrace(); }
            System.out.println(Thread.currentThread().getName() + " read " + this.data);
        } finally {
            rwl.readLock().unlock();                            // release the read lock
        }
    }
}
Test results:
Thread-W1 preparing to write data
Thread-W1 wrote 9
Thread-W1 preparing to write data
Thread-W1 wrote 12
Thread-W0 preparing to write data
Thread-W0 wrote 6
...
Thread-W0 preparing to write data
Thread-W0 wrote 0
Thread-W2 preparing to write data
...
Thread-W1 preparing to write data
Thread-W1 wrote 1
Thread-R0 preparing to read data      // the three read threads now hold the read lock at the same time
Thread-R2 preparing to read data
Thread-R1 preparing to read data
Thread-R2 read 1
Thread-R2 preparing to read data
Thread-R1 read 1
Thread-R0 read 1
Thread-R1 preparing to read data
Thread-R0 preparing to read data
Thread-R0 read 1
Thread-R2 read 1
Thread-R2 preparing to read data
Thread-R1 read 1
Thread-R0 preparing to read data
Thread-R1 preparing to read data
Thread-R0 read 1
Thread-R2 read 1
Thread-R1 read 1
Thread-R0 preparing to read data
Thread-R1 preparing to read data
Thread-R2 preparing to read data
Thread-R1 read 1
Thread-R2 read 1
Thread-R0 read 1
A read-write lock allows a higher level of concurrent access to shared data than a mutual-exclusion lock. Although only one thread at a time (the writer thread) can modify the shared data, in many cases any number of threads can read the shared data concurrently (reader threads).
In theory, the extra concurrency a read-write lock permits yields better performance than a mutex.
In practice, this improvement is fully realized only on multiprocessors, and only when the access pattern to the shared data is suitable. For example, a collection that is populated with data once and rarely modified afterwards, but frequently searched (such as a directory), is an ideal candidate for a read-write lock.
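As a sketch of the read-mostly collection case just described (the Directory class is hypothetical): lookups take the read lock and may run in parallel, while the occasional update takes the write lock and runs alone.

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class Directory {
    private final Map<String, String> entries = new HashMap<String, String>();
    private final ReadWriteLock rwl = new ReentrantReadWriteLock();

    // frequent operation: many threads may search concurrently
    public String lookup(String key) {
        rwl.readLock().lock();
        try {
            return entries.get(key);
        } finally {
            rwl.readLock().unlock();
        }
    }

    // rare operation: excludes both readers and other writers
    public void add(String key, String value) {
        rwl.writeLock().lock();
        try {
            entries.put(key, value);
        } finally {
            rwl.writeLock().unlock();
        }
    }
}

Because ReentrantReadWriteLock admits many readers at once, the frequent lookup calls no longer queue up behind each other as they would with synchronized or a plain ReentrantLock.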
4. Communication between threads: Condition
Condition can replace the traditional inter-thread communication mechanism: await() replaces wait(), signal() replaces notify(), and signalAll() replaces notifyAll().
-- Why are the methods not simply named wait()/notify()/notifyAll()? Because those methods of Object are final and cannot be overridden.
Anything the traditional communication mechanism can do, Condition can do as well.
Note that a Condition is bound to a Lock: a Condition must be created with the newCondition() method of a Lock.
The strength of Condition is that different Conditions can be created for different groups of threads.
Consider an example from the JDK documentation: suppose we have a bounded buffer that supports put and take methods. If a take is attempted on an empty buffer, the thread blocks until an item becomes available; if a put is attempted on a full buffer, the thread blocks until space becomes available. We would like to keep waiting put threads and waiting take threads in separate wait sets, so that we can use the optimization of notifying only a single thread at a time when an item or a slot becomes available in the buffer. This can be achieved with two Condition instances.
-- This is in fact how java.util.concurrent.ArrayBlockingQueue works.
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

class BoundedBuffer {
    final Lock lock = new ReentrantLock();            // lock object
    final Condition notFull = lock.newCondition();    // condition the writer threads wait on
    final Condition notEmpty = lock.newCondition();   // condition the reader threads wait on
    final Object[] items = new Object[100];           // buffer queue
    int putptr;                                       // write index
    int takeptr;                                      // read index
    int count;                                        // number of items in the queue

    // write
    public void put(Object x) throws InterruptedException {
        lock.lock();                                  // acquire the lock
        try {
            // if the queue is full, block the <writer thread>
            while (count == items.length) {
                notFull.await();
            }
            // write into the queue and update the write index
            items[putptr] = x;
            if (++putptr == items.length) putptr = 0;
            ++count;
            // wake up a <reader thread>
            notEmpty.signal();
        } finally {
            lock.unlock();                            // release the lock
        }
    }

    // read
    public Object take() throws InterruptedException {
        lock.lock();                                  // acquire the lock
        try {
            // if the queue is empty, block the <reader thread>
            while (count == 0) {
                notEmpty.await();
            }
            // read from the queue and update the read index
            Object x = items[takeptr];
            if (++takeptr == items.length) takeptr = 0;
            --count;
            // wake up a <writer thread>
            notFull.signal();
            return x;
        } finally {
            lock.unlock();                            // release the lock
        }
    }
}
Advantages:
If the buffer is full, the thread that blocks must be a writer and the thread that needs to be woken must be a reader; conversely, if the buffer is empty, the blocked thread must be a reader and the thread to wake must be a writer.
What would happen if there were only one Condition? When the buffer is full, the lock would not know whether the thread it wakes is a reader or a writer. If it happens to wake a reader, fine; but if it wakes a writer, that writer wakes up only to block again immediately, and yet another wake-up is needed, wasting a lot of time.
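To see the two Conditions at work, a producer thread and a consumer thread could drive the BoundedBuffer above roughly like this (a minimal usage sketch, not part of the original example):

public class BoundedBufferDemo {
    public static void main(String[] args) {
        final BoundedBuffer buffer = new BoundedBuffer();

        // producer: put blocks on notFull when the buffer is full
        new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    for (int i = 0; i < 10; i++) {
                        buffer.put(i);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }).start();

        // consumer: take blocks on notEmpty when the buffer is empty
        new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    for (int i = 0; i < 10; i++) {
                        System.out.println("took " + buffer.take());
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }).start();
    }
}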