Java concurrency framework: fairness

Source: Internet
Author: User

Fairness means that all threads applying for access to a critical resource have the same chance of success, so that no thread is given priority. From the earlier study of the CLH node FIFO queue, we know that the wait queue is first-in, first-out. Does that mean every thread acquires the lock fairly? Fairness can be examined at three points:
① Enqueuing a node. The question here is whether the competition that arises when threads join the wait queue is fair. After a thread fails to acquire the lock, it appends its node to the wait queue by spinning on a CAS. Fairness cannot be guaranteed during this spin: a later-arriving thread may enqueue first. Enqueuing is therefore not fair.
② Waiting in the queue. Once a thread has enqueued successfully, it becomes a node in the wait queue. Because the queue is first-in, first-out, all nodes in it are treated equally: each waits its turn to be woken by its predecessor node and then attempts to acquire the lock. Waiting in the queue is therefore fair.
③ Barging in. When a new thread reaches the boundary of the shared resource, it first tries to acquire the lock regardless of whether other nodes are already waiting in the queue. This is called a barging (intrusion) policy, and it is the main way the AQS framework compromises fairness. The barging behavior is analyzed below.
The basic acquisition algorithm provided by AQS is a barging algorithm: a newly arriving thread is appended to the wait queue only after an initial acquisition attempt fails. As shown in Figure 2-5-9-6, the node threads in the queue obtain the shared resource one by one in order. Suppose the head node's thread is about to attempt acquisition when another thread barges in: the newcomer is not appended to the tail of the queue, but instead competes directly with the head node's thread for the resource. If the barging thread wins, it executes immediately while the head node's thread keeps waiting for its next attempt. The later-arriving thread thus runs first, which means the basic AQS acquisition algorithm is not strictly fair.
 
Figure 2-5-9-6 A barging thread
The basic acquisition logic, simplified, is as follows: first try to acquire the lock; on failure, create a node and append it to the tail of the wait queue; then repeatedly check whether it is this node's turn to acquire. To improve performance, the thread may be parked in the meantime and eventually woken by its predecessor node.
if (the lock could not be acquired) {
    create a node
    use CAS to append the node to the tail of the queue
    while (true) {
        if (the node's predecessor is the head node and the acquisition attempt succeeds) {
            set the current node as the head node
            break out of the loop
        } else {
            use CAS to change the predecessor node's waitStatus to SIGNAL
            if (the CAS succeeded)
                park the current thread
        }
    }
}
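The barging behavior described above can be sketched with a minimal exclusive synchronizer built on AQS. This is an illustrative class (the name `BargingMutex` and the simplified, non-reentrant release are assumptions, not JDK code); the key point is that `tryAcquire` attempts the CAS immediately, regardless of any threads already parked in the queue, so a newly arriving thread can barge past the head node.

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// Minimal barging (nonfair) mutex sketch built on AQS.
public class BargingMutex {
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int unused) {
            // A newly arriving thread tries the CAS right away,
            // without checking whether the wait queue has nodes.
            return compareAndSetState(0, 1);
        }

        @Override
        protected boolean tryRelease(int unused) {
            setState(0); // simplified: assume only the owner releases
            return true;
        }

        boolean locked() { return getState() == 1; }
    }

    private final Sync sync = new Sync();

    public void lock()        { sync.acquire(1); }   // enqueues on failure
    public void unlock()      { sync.release(1); }   // wakes the successor
    public boolean isLocked() { return sync.locked(); }
}
```

If `tryAcquire` fails here, `acquire(1)` falls back to the enqueue-and-park loop shown in the pseudocode above.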
Why use a barging policy? Barging generally yields higher total throughput. Synchronizers are usually fine-grained, meaning the protected critical section is small, so the time a thread takes to go from blocked to awake can be several or even dozens of times the time actually spent inside the critical section. This leaves a large time window during wake-up in which the resource sits idle. The barging policy fills that window: while the head node's thread is still waking up, a barging thread can acquire the lock, pass through the critical section, and release it, making full use of the wake-up latency and significantly increasing throughput.
The barging mechanism also provides a contention-tuning knob: in a custom synchronizer, the developer can define how many acquisition attempts a thread makes before it gives up and joins the wait queue. Increasing the attempt count increases the chance that barging succeeds. The downside is that barging can starve the threads in the wait queue, since the lock may be taken by barging threads again and again. In practice, because a synchronizer is typically held only briefly, starvation is rare; but if the protected code body is long and the lock is held for a long time, the risk that queued threads wait indefinitely rises sharply.
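The contention-tuning idea mentioned above can be sketched as follows. The class name `SpinThenQueueMutex` and the attempt count are assumptions for illustration: `tryAcquire` retries the CAS a few times before returning false, at which point AQS's `acquire` appends the thread to the wait queue.

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// Sketch: spin on the CAS several times before giving up and queueing.
public class SpinThenQueueMutex {
    private static final int SPIN_ATTEMPTS = 4; // tuning knob (assumed value)

    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int unused) {
            // Barge harder: retry the CAS before reporting failure.
            for (int i = 0; i < SPIN_ATTEMPTS; i++) {
                if (compareAndSetState(0, 1)) return true;
            }
            return false; // acquire() now enqueues and parks the thread
        }

        @Override
        protected boolean tryRelease(int unused) {
            setState(0); // simplified: assume only the owner releases
            return true;
        }

        boolean locked() { return getState() == 1; }
    }

    private final Sync sync = new Sync();

    public void lock()        { sync.acquire(1); }
    public void unlock()      { sync.release(1); }
    public boolean isLocked() { return sync.locked(); }
}
```

Raising `SPIN_ATTEMPTS` trades fairness for throughput: barging threads win more often, and queued threads wait longer.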
In practice, the policy should match the requirements. In scenarios with strict fairness requirements, barging can be removed: in a custom synchronizer, implement the tryAcquire hook that AQS reserves so that it first checks whether the current thread corresponds to the head node of the queue, and returns false immediately if it does not. Note, however, that this fairness holds only at the level of Java's semantics; in reality the JVM implementation directly affects the order in which threads execute.
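A fair variant can implement the check described above with AQS's own `hasQueuedPredecessors()` (available since Java 7; `ReentrantLock`'s fair mode uses the same test). The class name `FairMutex` is illustrative; `tryAcquire` refuses to barge whenever another thread is queued ahead of the caller.

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// Fair mutex sketch: never barge past queued waiters.
public class FairMutex {
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int unused) {
            // hasQueuedPredecessors() returns true when another thread
            // is waiting ahead of us, so we decline and let acquire()
            // enqueue this thread in FIFO order.
            if (hasQueuedPredecessors()) {
                return false;
            }
            return compareAndSetState(0, 1);
        }

        @Override
        protected boolean tryRelease(int unused) {
            setState(0); // simplified: assume only the owner releases
            return true;
        }

        boolean locked() { return getState() == 1; }
    }

    private final Sync sync = new Sync();

    public void lock()        { sync.acquire(1); }
    public void unlock()      { sync.release(1); }
    public boolean isLocked() { return sync.locked(); }
}
```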
