Deep Understanding of Java Concurrency: "Java Concurrency in Practice", Chapter 13 -- Explicit Locks

Source: Internet
Author: User
Tags: exception handling, finally block, mutex

When you need polling and timed lock acquisition, interruptible lock acquisition, or locks that are not confined to a block structure, the built-in lock is not enough. This chapter introduces explicit locks, which support these more advanced operations.

Explicit locks were introduced in Java 5.0.

ReentrantLock is not a replacement for built-in locking, but rather an advanced alternative for cases where the intrinsic locking mechanism is not flexible enough.

13.1 Lock and ReentrantLock

ReentrantLock implements the Lock interface, providing the same mutual exclusion and memory-visibility guarantees as synchronized. Acquiring a ReentrantLock has the same semantics as entering a synchronized block, and releasing a ReentrantLock has the same memory semantics as exiting a synchronized block. In addition, like synchronized, ReentrantLock offers reentrant locking semantics. ReentrantLock supports all of the lock-acquisition modes defined in the Lock interface, and provides more flexibility than synchronized in dealing with lock unavailability.
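A minimal sketch of the lock-in-try, unlock-in-finally idiom that the rest of this chapter assumes; the Counter class and its fields are illustrative, not a listing from the book.

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class Counter {
    // Explicit lock guarding the mutable state below.
    private final Lock lock = new ReentrantLock();
    private int count;

    public void increment() {
        lock.lock();            // blocks until available, like entering a synchronized block
        try {
            count++;            // critical section
        } finally {
            lock.unlock();      // must be released explicitly, unlike synchronized
        }
    }

    public int get() {
        lock.lock();
        try {
            return count;
        } finally {
            lock.unlock();
        }
    }
}

The finally block is what gives this pattern the same "always released" guarantee that synchronized provides automatically.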

A built-in lock must be released in the same block of code in which it was acquired. This simplifies coding and interacts well with exception handling, but it makes non-block-structured locking disciplines impossible. These are reasons to keep using synchronized, yet in some cases a more flexible locking mechanism can offer better liveness or performance.

The reason ReentrantLock cannot completely replace synchronized is that it is more dangerous: the lock is not automatically released when control leaves the guarded block of code, so the programmer must remember to release it.

13.1.1 Polling and timed lock acquisition

Timed and polled lock acquisition is provided by the tryLock method, and it offers better error-recovery options than unconditional lock acquisition.

If you cannot acquire all the locks you need, timed or polled acquisition lets you regain control: release the locks you have already acquired and then try again to acquire them all, instead of blocking forever and risking deadlock.
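A hedged sketch of the polled-tryLock pattern just described; the Account class, its fields, and the retry policy are assumptions made for illustration, not the book's exact listing.

import java.util.concurrent.locks.ReentrantLock;

public class PolledTransfer {
    // Hypothetical account with its own explicit lock and a balance.
    static class Account {
        final ReentrantLock lock = new ReentrantLock();
        long balance;
    }

    // Tries to acquire both locks with tryLock; if either is unavailable,
    // whatever was acquired is released and failure is reported so the
    // caller can back off and retry, avoiding deadlock.
    public boolean tryTransfer(Account from, Account to, long amount) {
        if (from.lock.tryLock()) {
            try {
                if (to.lock.tryLock()) {
                    try {
                        if (from.balance >= amount) {
                            from.balance -= amount;
                            to.balance += amount;
                            return true;
                        }
                        return false;   // insufficient funds (also reported as false for brevity)
                    } finally {
                        to.lock.unlock();
                    }
                }
            } finally {
                from.lock.unlock();
            }
        }
        return false;   // could not get both locks; caller retries later
    }
}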

Timed locks are also useful when implementing activities with a time budget. When a blocking method is called within such an activity, a timeout can be supplied based on the remaining time; if the operation cannot deliver a result within the allotted time, the activity can terminate early. With built-in locks, a lock-acquisition attempt cannot be cancelled once it has started, which makes it difficult to implement time-limited activities using intrinsic locking.

13.1.2 Interruptible lock acquisition
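A sketch of a timed acquisition, assuming a caller that tracks its remaining time budget in nanoseconds; the class and method names are hypothetical.

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class TimedTask {
    private final Lock lock = new ReentrantLock();

    // Performs a guarded action only if the lock can be acquired within
    // the remaining time budget; otherwise gives up and reports failure.
    public boolean runWithinBudget(Runnable action, long budgetNanos)
            throws InterruptedException {
        // Spend no more than the remaining budget waiting for the lock.
        if (!lock.tryLock(budgetNanos, TimeUnit.NANOSECONDS)) {
            return false;           // timed out; end the activity early
        }
        try {
            action.run();           // the time-limited work itself
            return true;
        } finally {
            lock.unlock();
        }
    }
}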

Interruptible lock acquisition allows locking to be used within cancellable activities. The lockInterruptibly method remains responsive to interruption while waiting for the lock, and because it is part of the Lock interface, there is no need to invent another category of interruptible blocking mechanism.

The canonical structure of interruptible lock acquisition is slightly more complicated than normal acquisition, since two try blocks are needed. (If the interruptible acquisition itself is allowed to throw InterruptedException, the standard try-finally locking idiom works.)

13.1.3 Non-block-structured locking
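A sketch of the simpler form mentioned in the parenthetical: because this hypothetical method propagates InterruptedException itself, a single try-finally around the guarded work is enough.

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class InterruptibleAccess {
    private final Lock lock = new ReentrantLock();

    // Acquires the lock interruptibly: if the thread is interrupted while
    // waiting, InterruptedException propagates and the task is cancelled.
    // unlock() runs only once the lock has actually been acquired.
    public void doCancellableWork() throws InterruptedException {
        lock.lockInterruptibly();   // responsive to interruption while blocked
        try {
            // ... work that should be abandonable via interruption ...
        } finally {
            lock.unlock();
        }
    }
}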

One example is giving each node of a linked list its own lock, so that different threads can operate on different parts of the list independently. Each node's lock guards the link pointer and the data stored in that node, so when traversing or modifying the list, a thread must hold the lock on a node until it has acquired the lock on the next node; only then may it release the lock on the previous node. This technique is known as hand-over-hand locking or lock coupling.

13.2 Performance Considerations
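A hedged sketch of hand-over-hand traversal under the scheme above; the HandOverHandList and Node classes are illustrative, not from the book.

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class HandOverHandList<E> {
    // Each node carries its own lock guarding its value and next pointer.
    private static final class Node<E> {
        final Lock lock = new ReentrantLock();
        E value;
        Node<E> next;
        Node(E value) { this.value = value; }
    }

    private final Node<E> head = new Node<>(null);   // sentinel node

    // Returns true if the list contains the given value.
    public boolean contains(E target) {
        head.lock.lock();
        Node<E> prev = head;
        try {
            Node<E> curr = prev.next;
            while (curr != null) {
                curr.lock.lock();        // take the next node's lock first...
                prev.lock.unlock();      // ...then release the previous one
                prev = curr;
                if (target.equals(curr.value)) {
                    return true;
                }
                curr = prev.next;
            }
            return false;
        } finally {
            prev.lock.unlock();          // release the last lock still held
        }
    }
}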

In Java 5.0, the performance of intrinsic locks drops dramatically when going from one thread (no contention) to multiple threads, while ReentrantLock degrades far more gracefully, making it more scalable. In Java 6, however, intrinsic lock performance no longer falls off sharply under contention, and the scalability of intrinsic and explicit locks is essentially the same.

13.3 Fairness

The ReentrantLock constructor offers two fairness options: a nonfair lock (the default) and a fair lock. With a fair lock, threads acquire the lock in the order in which they requested it; with a nonfair lock, barging is permitted. When a thread requests a nonfair lock, if the lock happens to be available at the moment of the request, that thread may skip ahead of all the waiting threads in the queue and acquire the lock. Nonfair ReentrantLocks do not go out of their way to promote barging; they simply do not prevent a thread from barging if it shows up at the right time. With a fair lock, if the lock is held by another thread or other threads are already queued waiting for it, the newly requesting thread is placed in the queue.
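The fairness choice is made at construction time; a minimal illustration (the class name is arbitrary).

import java.util.concurrent.locks.ReentrantLock;

public class FairnessChoice {
    // Nonfair (default): a thread may barge if the lock happens to be free.
    private final ReentrantLock nonfairLock = new ReentrantLock();   // same as new ReentrantLock(false)

    // Fair: waiting threads are granted the lock in FIFO order.
    private final ReentrantLock fairLock = new ReentrantLock(true);
}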

Under heavy contention, nonfair locks outperform fair locks. One reason is the significant delay between the moment a suspended thread is resumed and the moment it actually starts running.

A fair lock makes sense when locks are held for a relatively long time, or when the mean time between lock requests is relatively long. In these cases, the situation in which barging provides a throughput advantage is unlikely to arise.

13.4 Choosing between synchronized and ReentrantLock

ReentrantLock provides the same locking and memory semantics as intrinsic locks, plus additional features: timed lock waits, interruptible lock waits, fairness, and non-block-structured locking.

Compared with explicit locks, intrinsic locks still have significant advantages. They are familiar to many developers, and ReentrantLock is riskier than the built-in synchronization mechanism: if you forget to call unlock in a finally block, you have planted a serious hidden hazard.

Consider ReentrantLock only when the intrinsic lock cannot meet your requirements, namely when you need timed, polled, or interruptible lock acquisition, fair queuing, or non-block-structured locking. Otherwise, prefer synchronized.

13.5 Read-write locks

ReentrantLock implements a standard mutual-exclusion lock: at most one thread at a time can hold a ReentrantLock. But for preserving data integrity, mutual exclusion is often a stronger locking discipline than necessary, and it therefore limits concurrency unnecessarily. Many operations on data structures are reads; if the locking requirement is relaxed so that multiple reader threads can access the data structure at the same time, performance improves.

Read-write locks address this: a resource can be accessed by multiple readers, or by a single writer, but not both at the same time.

Read-write locks are a performance optimization that enables greater concurrency in certain situations. In practice, they improve performance for frequently read data structures on multiprocessor systems; in other situations they perform slightly worse than exclusive locks because they are more complex.

ReentrantReadWriteLock provides reentrant locking semantics for both the read lock and the write lock. Like ReentrantLock, it can be constructed as nonfair (the default) or fair. With a fair lock, the thread that has waited longest gets the lock first; if the lock is held by readers and a thread requests the write lock, no further readers may acquire the read lock until the writer has been serviced and has released the write lock. With a nonfair lock, the order in which threads are granted access is unspecified. Downgrading from a writer to a reader is permitted, but upgrading from a reader to a writer is not, since it could cause deadlock.

The write lock in ReentrantReadWriteLock has a unique owner and can be released only by the thread that acquired it.

Read-write locks improve concurrency when locks are typically held for a moderately long time and most operations do not modify the guarded resources.
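A sketch in the spirit of this section: a map wrapper guarded by a read-write lock, so many readers proceed concurrently while writers get exclusive access. The class and method names are illustrative, not the book's listing.

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteGuardedMap<K, V> {
    private final Map<K, V> map = new HashMap<>();
    private final ReadWriteLock rwLock = new ReentrantReadWriteLock();
    private final Lock readLock = rwLock.readLock();
    private final Lock writeLock = rwLock.writeLock();

    public V get(K key) {
        readLock.lock();            // shared: many readers may hold this at once
        try {
            return map.get(key);
        } finally {
            readLock.unlock();
        }
    }

    public V put(K key, V value) {
        writeLock.lock();           // exclusive: blocks readers and other writers
        try {
            return map.put(key, value);
        } finally {
            writeLock.unlock();
        }
    }
}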
