18. Java Concurrency and Multithreading: Starvation and Fairness

Source: Internet
Author: User

The following is translated from http://ifeve.com/starvation-and-fairness/:

If a thread is never granted CPU time because other threads grab it all, that state is called "starvation": the thread "starves to death" because it never gets a chance to run. The remedy for starvation is called "fairness" – that all threads get a fair chance to run.

Here are the topics discussed in this article:

1. Causes of starvation in Java:

    • High-priority threads swallow all the CPU time of low-priority threads.
    • A thread is blocked indefinitely while waiting to enter a synchronized block.
    • A thread waits indefinitely on an object (on which it called wait()), because other threads are constantly awakened instead of it.

2. Implementing fairness in Java requires:

    • Using locks instead of synchronized blocks.
    • A fair lock.
    • Being aware of the performance implications.

Causes of starvation in Java

In Java, the following three common causes can lead to thread starvation:

    1. High-priority threads swallow all the CPU time of low-priority threads.
    2. A thread is blocked indefinitely while waiting to enter a synchronized block, because other threads are constantly allowed in before it.
    3. A thread waits indefinitely on an object (on which it called wait()), because other threads are constantly awakened instead of it.

High-priority threads swallow all the CPU time of low-priority threads

You can set an individual priority for each thread; the higher the priority, the more CPU time the thread is granted. Priorities are set as values between 1 and 10, and exactly how these values are interpreted depends on the platform your application runs on. For most applications you are better off leaving the priorities unchanged.
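As a minimal sketch (not from the original article), this is how priorities are assigned via `Thread.setPriority()`; the `PriorityDemo` class and its method names are illustrative only, and how strongly the scheduler honors the values is platform-dependent:

```java
// Sketch: assigning thread priorities. The constants span
// Thread.MIN_PRIORITY (1) to Thread.MAX_PRIORITY (10); the scheduler's
// exact interpretation of these values is platform-dependent.
public class PriorityDemo {
    public static Thread background() {
        Thread t = new Thread(() -> {
            // low-priority housekeeping work would go here
        });
        t.setPriority(Thread.MIN_PRIORITY);   // 1
        return t;
    }

    public static Thread urgent() {
        Thread t = new Thread(() -> {
            // latency-sensitive work would go here
        });
        t.setPriority(Thread.MAX_PRIORITY);   // 10
        return t;
    }
}
```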

The thread is permanently blocked in a state waiting to enter the synchronization block

Java's synchronized code blocks are another cause of starvation. A synchronized block makes no guarantee about the order in which waiting threads are allowed to enter. This means there is a theoretical risk that a thread attempting to enter the block stays blocked forever, because other threads keep getting access before it. This is the "starvation" problem: a thread "starves to death" because it never gets the chance to run.

A thread waits indefinitely on an object on which it called wait()

If more than one thread is waiting in wait(), calling notify() makes no guarantee about which of them is awakened; any of the waiting threads may remain waiting. There is therefore a risk that one waiting thread is never awakened, because the other waiting threads keep being chosen instead.
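This behavior can be observed directly. The sketch below (not from the original article; the class and method names are illustrative) parks several threads in wait() on one monitor and then issues a single notify(); exactly one of them proceeds, and the JVM makes no promise about which:

```java
// Sketch: a single notify() wakes exactly one of several waiters,
// chosen arbitrarily by the JVM.
public class NotifyOneDemo {
    private final Object monitor = new Object();
    private int waiting = 0;
    private int woken = 0;

    public void startWaiters(int n) {
        for (int i = 0; i < n; i++) {
            Thread t = new Thread(() -> {
                synchronized (monitor) {
                    waiting++;
                    try {
                        monitor.wait();   // releases the monitor and parks
                        woken++;          // only reached after being notified
                    } catch (InterruptedException ignored) { }
                }
            });
            t.setDaemon(true);            // never-notified threads won't keep the JVM alive
            t.start();
        }
    }

    public int notifyOnceAndCountWoken(int n) throws InterruptedException {
        while (true) {                    // wait until all n threads are parked in wait()
            synchronized (monitor) { if (waiting == n) break; }
            Thread.sleep(10);
        }
        synchronized (monitor) { monitor.notify(); }  // wakes exactly one waiter
        Thread.sleep(200);                // give the woken thread time to run
        synchronized (monitor) { return woken; }
    }
}
```

Because the increment of `waiting` and the subsequent `wait()` happen under the same monitor, observing `waiting == n` guarantees all n threads are actually parked before notify() is sent.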

Achieving fairness in Java

Although it is not possible to implement 100% fairness in Java, we can still improve fairness considerably through synchronization between threads.

First, let us study a simple piece of synchronized code:

public class Synchronizer {
    public synchronized void doSynchronized() {
        // do a lot of work which takes a long time
    }
}

If more than one thread calls doSynchronized(), the other threads remain blocked until the first thread to gain access has completed, and in this multithreaded scenario there is no guarantee about which thread gains access next.
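To see the mutual exclusion in isolation, here is a small check (not from the original article; the `ExclusionCheck` class name and its counters are illustrative) that records how many threads were ever inside the synchronized method simultaneously:

```java
// Sketch: a synchronized method admits only one thread at a time.
// Each caller records the number of threads concurrently inside.
public class ExclusionCheck {
    private int inside = 0;
    private int maxInside = 0;

    public synchronized void doSynchronized() throws InterruptedException {
        inside++;
        if (inside > maxInside) maxInside = inside;
        Thread.sleep(2);          // stand-in for "a lot of work"
        inside--;
    }

    public synchronized int maxObservedInside() { return maxInside; }

    public int runThreads(int n) throws InterruptedException {
        Thread[] threads = new Thread[n];
        for (int i = 0; i < n; i++) {
            threads[i] = new Thread(() -> {
                try {
                    for (int j = 0; j < 5; j++) doSynchronized();
                } catch (InterruptedException ignored) { }
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        return maxObservedInside();   // 1 when exclusion holds
    }
}
```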

Replacing a synchronized block with a lock

To improve the fairness of the waiting threads, we first replace the synchronized block with a lock:

public class Synchronizer {
    Lock lock = new Lock();

    public void doSynchronized() throws InterruptedException {
        this.lock.lock();
        // critical section, do a lot of work which takes a long time
        this.lock.unlock();
    }
}

Notice that doSynchronized() is no longer declared synchronized; the synchronized block has been replaced with calls to lock.lock() and lock.unlock().

Here is an implementation of the Lock class:

public class Lock {
    private boolean isLocked = false;
    private Thread lockingThread = null;

    public synchronized void lock() throws InterruptedException {
        while (isLocked) {
            wait();
        }
        isLocked = true;
        lockingThread = Thread.currentThread();
    }

    public synchronized void unlock() {
        if (this.lockingThread != Thread.currentThread()) {
            throw new IllegalMonitorStateException(
                "Calling thread has not locked this lock");
        }
        isLocked = false;
        lockingThread = null;
        notify();
    }
}

Notice this implementation of Lock: if several threads access lock() concurrently, those threads block trying to enter the lock() method. Additionally, if the lock is locked (translator's note: isLocked is true), those threads block in the wait() call inside the while(isLocked) loop. Keep in mind that a thread calling wait() releases the synchronization monitor of the Lock instance, so other threads waiting to enter lock() can do so and call wait() themselves.

Now look at doSynchronized() again, and notice the comment between lock() and unlock(): the code between those two calls takes a long time to execute. Assume further that this time is long compared with entering lock() and calling wait(). This means that most of the time spent waiting to acquire the lock and enter the critical section is spent waiting in wait(), not blocked trying to enter the lock() method.

As stated earlier, a synchronized block makes no guarantee about which of several waiting threads is granted access, and wait() makes no guarantee about which thread is awakened when notify() is called (as for why, see the article on thread communication). So this version of the Lock class is no different from the synchronized version of doSynchronized() in terms of guaranteeing fairness.

But we can change that. The current Lock class calls wait() on itself. If instead each thread calls wait() on a separate object, so that only one thread waits on each object, the Lock class can decide which object to call notify() on, and thereby effectively choose which thread to awaken.

Fair lock

Below, the Lock class above is turned into a fair lock called FairLock. You will notice that the new implementation differs slightly from the previous Lock class in both its synchronization and its use of wait()/notify().

Exactly how FairLock was derived from the earlier Lock class is a longer, step-by-step design story in which each step solves a problem of the previous step: nested monitor lockout, slipped conditions, and missed signals. That discussion is beyond the scope of this text. What matters is this: every thread that calls lock() is placed in a queue, and on unlock only the first thread in the queue is allowed to lock the FairLock instance; all other threads keep waiting until they reach the head of the queue.

import java.util.ArrayList;
import java.util.List;

public class FairLock {
    private boolean isLocked = false;
    private Thread lockingThread = null;
    private List<QueueObject> waitingThreads = new ArrayList<QueueObject>();

    public void lock() throws InterruptedException {
        QueueObject queueObject = new QueueObject();
        boolean isLockedForThisThread = true;
        synchronized (this) {
            waitingThreads.add(queueObject);
        }

        while (isLockedForThisThread) {
            synchronized (this) {
                isLockedForThisThread =
                    isLocked || waitingThreads.get(0) != queueObject;
                if (!isLockedForThisThread) {
                    isLocked = true;
                    waitingThreads.remove(queueObject);
                    lockingThread = Thread.currentThread();
                    return;
                }
            }
            try {
                queueObject.doWait();
            } catch (InterruptedException e) {
                synchronized (this) {
                    waitingThreads.remove(queueObject);
                }
                throw e;
            }
        }
    }

    public synchronized void unlock() {
        if (this.lockingThread != Thread.currentThread()) {
            throw new IllegalMonitorStateException(
                "Calling thread has not locked this lock");
        }
        isLocked = false;
        lockingThread = null;
        if (waitingThreads.size() > 0) {
            waitingThreads.get(0).doNotify();
        }
    }
}

public class QueueObject {
    private boolean isNotified = false;

    public synchronized void doWait() throws InterruptedException {
        while (!isNotified) {
            this.wait();
        }
        this.isNotified = false;
    }

    public synchronized void doNotify() {
        this.isNotified = true;
        this.notify();
    }

    public boolean equals(Object o) {
        return this == o;
    }
}

First, notice that lock() is no longer declared synchronized; instead, only the code that must be synchronized is nested inside synchronized(this) blocks.

FairLock creates a new QueueObject instance for each thread that calls lock() and places it in a queue. The thread calling unlock() takes the QueueObject at the head of the queue and calls doNotify() on it, awakening the thread waiting on that object. This way only one waiting thread is awakened at a time, not all of them. This is the core of FairLock's fairness.

Note that the lock state is still tested and set within the same synchronized block, to avoid slipped conditions.

It should also be noted that QueueObject is really a semaphore: its doWait() and doNotify() methods store the signal inside the QueueObject. This is done to avoid missed signals: without it, a thread could be preempted just before calling queueObject.doWait(), another thread could call unlock() and thereby queueObject.doNotify(), and the signal would be lost. The queueObject.doWait() call is placed outside the synchronized(this) block to avoid nested monitor lockout, so another thread can actually call unlock() whenever no thread is executing inside the synchronized(this) block in lock().
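The missed-signal protection can be demonstrated in isolation. The sketch below repeats the QueueObject class so the example stands alone: because the signal is stored in the isNotified flag, a doNotify() that happens to run before doWait() is not lost, whereas a plain notify() before wait() would be:

```java
// Sketch: QueueObject preserves an "early" signal in a flag, so a
// doWait() issued after doNotify() returns immediately instead of
// blocking forever (as a plain wait() after notify() would).
public class QueueObject {
    private boolean isNotified = false;

    public synchronized void doWait() throws InterruptedException {
        while (!isNotified) {
            this.wait();
        }
        this.isNotified = false;   // consume the stored signal
    }

    public synchronized void doNotify() {
        this.isNotified = true;
        this.notify();
    }
}
```

Usage: calling doNotify() first and then doWait() on the same instance returns immediately, because the flag has already recorded the signal.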

Finally, notice how queueObject.doWait() is called inside a try–catch block: if an InterruptedException is thrown, the thread leaves the lock() method, and its QueueObject must be removed from the queue.

Performance considerations

If you compare the Lock and FairLock classes, you will notice that lock() and unlock() in FairLock do quite a bit more work. This extra code makes FairLock a slightly slower synchronization mechanism than Lock. How much this matters depends on how long your application spends inside the critical section guarded by FairLock: the longer the critical section executes, the less significant the synchronization overhead becomes; it also depends, of course, on how often the code is called.
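For production code, it is worth knowing that the standard library already offers this trade-off: java.util.concurrent.locks.ReentrantLock accepts a fairness flag in its constructor. A minimal sketch (the `FairReentrantDemo` class name and counter are illustrative, not from the original article):

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch: the JDK's ReentrantLock in fair mode grants the lock to the
// longest-waiting thread, at some cost in throughput versus the default
// (unfair) mode.
public class FairReentrantDemo {
    private final ReentrantLock lock = new ReentrantLock(true); // true = fair
    private int counter = 0;

    public void increment() {
        lock.lock();
        try {
            counter++;        // critical section
        } finally {
            lock.unlock();    // always release, even if the body throws
        }
    }

    public int counter() { return counter; }

    public boolean isFair() { return lock.isFair(); }
}
```

The lock()/try/finally/unlock() shape is the idiomatic way to use explicit locks, mirroring the lock.lock()/lock.unlock() pair in the doSynchronized() example above.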

