Java Concurrency Framework: Fairness

Source: Internet
Author: User
Tags: CAS

Fairness means that all threads requesting access to a critical resource have the same chance of success, and no thread is given priority over others. From the earlier discussion of the CLH node FIFO we know that the wait queue is a first-in first-out queue, so does that mean every thread acquires the lock fairly? Fairness is examined here from three angles:
① A node about to enqueue. This concerns the fairness of the contention that occurs while a thread joins the wait queue. After a thread fails in its attempt to acquire the lock, it joins the wait queue; when multiple threads enqueue at the same time they do so by spinning on a CAS, and nothing guarantees fairness during that spin. A thread that arrives later may win the CAS and enter the queue earlier, so enqueuing a node is not fair (see the CAS enqueue sketch after this list).
② A node already waiting in the queue. This is the state a node reaches after it has successfully enqueued in ①. Since the queue is first-in first-out, it follows immediately that all nodes in the queue are treated fairly: each waits in turn for its predecessor node to wake it up before acquiring the lock, so waiting inside the queue is fair.
③ A barging (break-in) node. This refers to the barging policy: when a new thread arrives at the boundary of the shared resource, it first attempts to acquire the lock regardless of whether there are other nodes waiting in the queue. Barging undermines fairness and is the main way unfairness in the AQS framework shows up externally; the barging behavior is analyzed below.
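The contention described in ① comes from the CAS loop that inserts a new node at the tail of the wait queue. The following is a minimal sketch modelled on that kind of loop; the field and helper names (head, tail, compareAndSetHead, compareAndSetTail) are used for illustration and this is not the exact JDK source.

// Minimal sketch of CAS-based tail insertion into the wait queue. When several
// threads run this loop concurrently, the one whose compareAndSetTail succeeds
// enqueues first, regardless of which thread reached the loop earlier.
private void enqueue(Node node) {
    for (;;) {                                 // spin until our CAS on tail succeeds
        Node t = tail;
        if (t == null) {                       // queue empty: install a dummy head first
            if (compareAndSetHead(new Node()))
                tail = head;
        } else {
            node.prev = t;
            if (compareAndSetTail(t, node)) {  // a later-arriving thread may win this CAS
                t.next = node;
                return;
            }
        }
    }
}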
The underlying lock-acquisition algorithm provided by AQS is a barging algorithm: a newly arrived thread first makes an acquisition attempt, and only if that attempt fails is the current thread added to the wait queue. As shown in Figure 2-5-9-6, the node threads in the wait queue try one by one to acquire the shared resource. Just as the head node's thread is about to make its attempt, another thread barges in; this thread is not appended to the tail of the wait queue but instead competes directly with the head node's thread for the resource. If the barging thread acquires the shared resource it executes immediately, while the head node's thread keeps waiting for its next attempt. The barging thread thus succeeds ahead of the queued threads, and a later-arriving thread executes earlier, which shows that the basic AQS acquisition algorithm is not strictly fair.

Figure 2-5-9-6 A barging thread
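The barging point is visible in the shape of the AQS exclusive-acquire template. The following is a simplified sketch of that template (the real JDK method also handles interruption, and acquireQueued/addWaiter are AQS internals shown here only to illustrate ordering); the important detail is that tryAcquire runs before the thread is ever added to the wait queue.

// Simplified sketch of the AQS exclusive-acquire template.
// tryAcquire(arg) is invoked first: a newly arrived thread always gets one
// barge attempt before it is enqueued behind the waiting threads.
public final void acquire(int arg) {
    if (!tryAcquire(arg)                                  // barge attempt, made before queueing
            && acquireQueued(addWaiter(Node.EXCLUSIVE), arg))
        selfInterrupt();
}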
The basic acquisition algorithm is simplified as follows: first try to acquire the lock; if that fails, create a node and append it to the tail of the wait queue, then loop, repeatedly checking whether it is this node's turn to execute. To improve performance the thread may be suspended during this loop and eventually woken up by its predecessor node.
if (attempt to acquire the lock failed) {
    create a node for the current thread
    use CAS to insert the node at the tail of the wait queue
    while (true) {
        if (the node's predecessor is the head node and the attempt to acquire the lock succeeds) {
            set the current node as the head node
            break out of the loop
        } else {
            use CAS to set the predecessor node's waitStatus to SIGNAL
            if (the CAS succeeded)
                suspend the current thread
        }
    }
}
Why use a barging strategy? Barging usually provides higher total throughput. The critical section guarded by a typical synchronizer is small, that is, the protected shared resource is held only briefly, while the time a thread needs to go from blocked to awake can be several times or even dozens of times longer than the time needed to pass through the shared resource. This leaves a long empty window while a queued thread is being woken up, during which the resource is under-utilized. The barging strategy is introduced precisely to fill this window: while the head node's thread is still being woken up, a barging thread can acquire the lock directly and pass through the synchronizer, making full use of the wake-up window and greatly increasing throughput.

The barging mechanism also provides a way to tune contention: in a custom synchronizer the developer can decide how many barge attempts to make, say N, repeating the acquisition attempt until it succeeds or has failed N times, and only then adding the thread to the wait queue; increasing N increases the chance that barging succeeds (see the sketch below). At the same time, barging can starve the threads waiting in the queue, because the lock may keep being taken by barging threads. When the synchronizer is held only briefly this is usually short enough to avoid starvation, but if the protected code body is long and the synchronizer is held for a long time, the risk that queued threads wait indefinitely rises sharply.
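As a rough illustration of that tuning knob, the following hypothetical synchronizer retries a non-fair tryAcquire a fixed number of times before falling back to the queued acquire. The class name TunableMutex, the constant SPIN_ATTEMPTS, and the simple 0/1 state are all assumptions made for this sketch, not part of the AQS API.

import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// Hypothetical non-reentrant mutex: barge up to SPIN_ATTEMPTS times, then queue.
class TunableMutex extends AbstractQueuedSynchronizer {
    private static final int SPIN_ATTEMPTS = 8;      // larger N raises the chance of barging in

    @Override
    protected boolean tryAcquire(int acquires) {
        return compareAndSetState(0, 1);             // non-fair: ignores the wait queue entirely
    }

    @Override
    protected boolean tryRelease(int releases) {
        setState(0);
        return true;
    }

    void lock() {
        for (int i = 0; i < SPIN_ATTEMPTS; i++) {
            if (tryAcquire(1))                       // repeated barge attempts
                return;
        }
        acquire(1);                                  // give up barging and join the wait queue
    }

    void unlock() {
        release(1);
    }
}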
In practice the strategy should be chosen according to the application's needs. In scenarios that require a high degree of fairness, the barging strategy can be disabled to achieve fairness. In a custom synchronizer this is done in the tryAcquire method that AQS reserves for subclasses to override: simply check whether the current thread is the one corresponding to the head node of the wait queue, and if not, return false immediately so that the attempt fails, as in the sketch below. Note, however, that this fairness is only fairness at the level of Java language semantics; in reality the JVM implementation directly affects the order in which threads execute.
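A minimal sketch of such a fair synchronizer, again assuming a simple non-reentrant 0/1 lock state, might look like the following; hasQueuedPredecessors() is the AQS query that ReentrantLock's fair mode also uses for this check.

import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// Hypothetical fair mutex: refuse to barge past threads already queued ahead of us.
class FairMutex extends AbstractQueuedSynchronizer {
    @Override
    protected boolean tryAcquire(int acquires) {
        if (hasQueuedPredecessors())        // another thread is queued ahead: fail the attempt
            return false;
        return compareAndSetState(0, 1);
    }

    @Override
    protected boolean tryRelease(int releases) {
        setState(0);
        return true;
    }

    void lock() {
        acquire(1);
    }

    void unlock() {
        release(1);
    }
}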
