Java Concurrency: Inter-thread Synchronization - Condition Queues and Synchronization Tool Classes

Source: Internet
Author: User
Tags: mutex, semaphore

Reprint: please credit the source: Jiq Technical Blog - Jiyichin

Synchronization between threads involves not only mutual exclusion (the mutexes described earlier) but also cooperation. Here we introduce the common cooperation mechanisms among Java threads.

1. Built-in condition queues

Just as every Java object can act as a built-in lock, every object can also act as a condition queue, called the built-in condition queue; Object.wait(), notify(), and notifyAll() form the API of the built-in condition queue.

It is important to note that before calling the built-in condition queue API of any object x, a thread must first acquire the built-in lock of that same object x.

1.1 API introduction

wait()

- The calling thread releases the lock it currently holds and asks the OS to suspend it
- It wakes up when the condition associated with the built-in condition queue is signalled
- After being awakened, it competes with other threads to reacquire the lock before returning

notify()

- Wakes one arbitrary thread waiting on the built-in condition queue of the object whose lock the caller currently holds
- The caller should release the lock as soon as possible so the awakened thread is able to acquire it

notifyAll()

- Wakes all threads waiting on the built-in condition queue of the object whose lock the caller currently holds
- The caller should release the lock as soon as possible so the awakened threads are able to acquire it
- Only one awakened thread at a time can acquire the lock; the others keep competing each time the lock holder exits its synchronized block, until all the awakened threads have finished executing

1.2 Usage requirements

wait(), notify(), and notifyAll() must be called from within a synchronized block or method. As general inter-task cooperation primitives they are part of the Object class, not Thread, so they can be used in any synchronized method.

In fact, these methods may only be called inside a synchronized method or synchronized block. Calling them from unsynchronized code compiles, but at runtime the call throws IllegalMonitorStateException.
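As a quick check of this rule, the sketch below (the class and method names are our own) calls notify() without holding the object's monitor; the call compiles but fails at runtime:

```java
// Hypothetical demo class: notify() outside a synchronized block on the
// same object throws IllegalMonitorStateException at runtime.
public class MonitorStateDemo {

    // Returns true when the expected exception is observed.
    public static boolean notifyWithoutLock() {
        Object lock = new Object();
        try {
            lock.notify();   // caller does not hold lock's monitor
            return false;
        } catch (IllegalMonitorStateException e) {
            return true;     // thrown because the monitor is not held
        }
    }
}
```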

In general, wait and notify are placed inside a synchronized (object) block and invoked on that object; if the block is synchronized (this), they can be invoked directly. Specifically:

(1) Calling on a specified object lockObj:

synchronized (lockObj)   // acquire lockObj's built-in lock
{
    try {
        // release lockObj's lock and block on its built-in condition queue
        lockObj.wait();
    } catch (InterruptedException e) {
        e.printStackTrace();
        return;
    }
}

synchronized (lockObj)
{
    // wake one thread waiting on lockObj's built-in condition queue
    lockObj.notify();
}

(2) Calling on this:

synchronized (this)   // acquire the current object's built-in lock
{
    try {
        // release the current object's lock and block on its condition queue
        wait();
    } catch (InterruptedException e) {
        e.printStackTrace();
        return;
    }
}

synchronized (this)
{
    // wake one thread waiting on the current object's condition queue
    notify();
}

1.3 Missed notifications

wait is usually used together with a while (condition) loop: in general, you must surround wait with a while loop that checks the condition of interest, because when several tasks wait on the same lock, the first task awakened may change the condition so that it no longer holds, and the current task must then suspend again until the condition changes in its favor.

synchronized (this) {
    while (waxOn == true)
        wait();
}

Looping on the condition also helps avoid the "missed notification" problem. Consider:

// Thread A
synchronized (proceedLock) {
    proceedLock.wait();
}

// Thread B
synchronized (proceedLock) {
    proceedLock.notifyAll();
}

Thread B's job is to wake thread A at some point. But if thread B runs too early, before thread A has started waiting, the notification is lost, and thread A then waits forever for a wakeup that never comes. This is the so-called missed-notification problem.

This problem is solved if thread A checks a flag variable before waiting:

// Thread A
synchronized (proceedLock) {
    // use a while loop rather than if, to also guard against early notification
    while (okToProceed == false) {
        proceedLock.wait();
    }
}

// Thread B
synchronized (proceedLock) {
    // set the flag to true before notifying, so that even if the
    // notification arrives early, thread A will not block in wait()
    okToProceed = true;
    proceedLock.notifyAll();
}

The flag okToProceed is initially false, so thread A blocks by default, waiting for thread B to wake it. If thread B runs before thread A even reaches wait(), the condition thread A checks has already been set to true, so thread A does not wait at all.

This avoids the missed-notification problem.
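Putting the pieces together, here is a self-contained sketch of the pattern; all names (ProceedDemo, okToProceed) are illustrative. Because the flag is set under the lock before notifyAll(), an "early" notification cannot be lost:

```java
// Guarded-wait sketch: notify-before-wait is harmless because the waiter
// checks a flag in a while loop before calling wait().
public class ProceedDemo {
    private final Object proceedLock = new Object();
    private boolean okToProceed = false;

    public void waitToProceed() throws InterruptedException {
        synchronized (proceedLock) {
            while (!okToProceed) {      // while, not if: also guards against
                proceedLock.wait();     // spurious and early wakeups
            }
        }
    }

    public void proceed() {
        synchronized (proceedLock) {
            okToProceed = true;         // set the flag before notifying
            proceedLock.notifyAll();
        }
    }

    // Notify first, then wait: without the flag this would block forever.
    public static boolean demo() {
        ProceedDemo d = new ProceedDemo();
        d.proceed();                    // notification happens "too early"
        try {
            d.waitToProceed();          // returns immediately: flag is true
            return true;
        } catch (InterruptedException e) {
            return false;
        }
    }
}
```

Running demo() on a version without the flag (a bare wait()) would hang, which is exactly the missed-notification failure described above.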


2. Explicit condition queues

As noted above, every Java object has a built-in condition queue, but there is an obvious limitation: each built-in lock can have only one associated condition queue.

Calling Lock.newCondition() on an explicit lock such as ReentrantLock yields an explicit Condition queue, which is richer than the built-in one: multiple condition queues can be created per lock, condition waits can be interruptible or uninterruptible, waits can be given a time limit, and queuing can be fair or unfair.

In an explicit Condition queue, the methods corresponding to the built-in queue's wait, notify, and notifyAll are await, signal, and signalAll.

Below is a bounded buffer implementation that creates two explicit condition queues on the same explicit lock: one Condition represents "buffer not full", the other "buffer not empty".

public class ConditionBoundBuffer<T> {
    protected final Lock lock = new ReentrantLock();
    // condition queue: buffer not full
    private final Condition notFullCond = lock.newCondition();
    // condition queue: buffer not empty
    private final Condition notEmptyCond = lock.newCondition();

    @SuppressWarnings("unchecked")
    private final T[] items = (T[]) new Object[100];
    private int tail, head, count;

    public void put(T x) throws InterruptedException {
        lock.lock();
        try {
            // while the buffer is full, release the lock and block
            // on the not-full condition queue
            while (count == items.length)
                notFullCond.await();
            items[tail] = x;
            if (++tail == items.length)
                tail = 0;
            ++count;
            // wake one thread waiting on the not-empty condition queue
            notEmptyCond.signal();
        } finally {
            lock.unlock();
        }
    }

    public T take() throws InterruptedException {
        lock.lock();
        try {
            // while the buffer is empty, release the lock and block
            // on the not-empty condition queue
            while (count == 0)
                notEmptyCond.await();
            T x = items[head];
            items[head] = null;
            if (++head == items.length)
                head = 0;
            --count;
            // wake one thread waiting on the not-full condition queue
            notFullCond.signal();
            return x;
        } finally {
            lock.unlock();
        }
    }
}


3. Synchronization tool classes

The java.util.concurrent package contains synchronization tool classes that provide useful forms of synchronization between threads.

3.1 BlockingQueue (blocking queue)

The blocking queue BlockingQueue extends Queue, adding blocking insert and retrieval operations:

public interface BlockingQueue<E> extends Queue<E> {

    // Insert an element; returns true if space was available,
    // otherwise throws IllegalStateException
    boolean add(E e);

    // Insert an element; returns true if space was available, false otherwise
    boolean offer(E e);

    // Insert an element, blocking until space becomes available
    void put(E e) throws InterruptedException;

    // Retrieve and remove the head of the queue, blocking until
    // an element becomes available
    E take() throws InterruptedException;

    // Retrieve and remove the head of the queue, waiting up to the
    // given timeout; returns null if the timeout elapses
    E poll(long timeout, TimeUnit unit) throws InterruptedException;
}

Principles and applications: BlockingQueue is a thread-safe container with blocking behavior; internally it achieves thread safety with a ReentrantLock and implements blocking and wakeup with Condition queues. With the put and take methods it is easy to implement inter-thread cooperation, such as the classic producer-consumer model.

Here are several implementation classes of the BlockingQueue interface:

(1) ArrayBlockingQueue: an array-based blocking queue implementation of fixed size; its constructor must take an int specifying the queue capacity. Elements are stored in FIFO (first-in, first-out) order. It is often used to implement a bounded buffer.

(2) LinkedBlockingQueue: a blocking queue implementation based on a linked list, not fixed in size. If its constructor is given a size parameter, the resulting BlockingQueue is bounded by it; without a size parameter, the capacity is determined by Integer.MAX_VALUE. Elements are stored in FIFO order.

(3) PriorityBlockingQueue: an array-based blocking queue whose ordering is not FIFO but is determined by the elements' natural ordering or by a Comparator supplied to the constructor.

(4) SynchronousQueue: a special BlockingQueue with no internal capacity; each put must wait for a matching take, and vice versa.
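The producer-consumer cooperation described above can be sketched with an ArrayBlockingQueue; in the helper below (class and method names are ours), a producer puts n integers into a small bounded queue while the caller takes and sums them:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Producer-consumer sketch: put() blocks while the bounded queue is full
// and take() blocks while it is empty, so no explicit locks or condition
// queues appear in user code.
public class ProducerConsumerDemo {

    public static int sumViaQueue(int n) {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(2); // tiny bound
        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= n; i++) {
                    queue.put(i);        // blocks whenever the queue is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        int sum = 0;
        try {
            for (int i = 0; i < n; i++) {
                sum += queue.take();     // blocks whenever the queue is empty
            }
            producer.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return sum;
    }
}
```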

3.2 CountDownLatch (latch)

A latch makes related threads wait at a certain point until some condition occurs, after which the waiting threads continue to execute; that is, all threads block until the latch's count is decremented to 0.

As a metaphor, a latch is a door that opens only after it has been pressed N times (N being the latch's initial count). The threads pressing the door do not care how many threads are waiting outside; once the door opens, all threads waiting outside can come in.

Step 1: Initialize the latch (set how many presses open the door):

CountDownLatch latch = new CountDownLatch(N);

Step 2: Wait on the latch (wait outside the door):

latch.await();

A waiting thread resumes execution once the latch's counter has dropped to 0 (the door is open).

Step 3: Decrement the latch counter (press the door once):

latch.countDown();

Application: one thread waits for n threads to complete a task.

For example, suppose the main thread needs all image resources to be ready before using them, so it starts n threads to download them. It initializes the latch count to n and calls await() on the latch; each thread calls countDown() after downloading its image, decrementing the latch by one. When the last download thread decrements it, the counter reaches 0, and the main thread waiting on the latch proceeds to use the downloaded image resources.

Similarly, you can have n threads wait for 1 thread: the n threads wait outside the door, and the 1 thread opens it (a latch with an initial count of 1).
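The download scenario above can be sketched as follows (class and variable names are illustrative; each "download" is simulated by a counter increment). The caller's await() returns only after every worker has called countDown():

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

// One thread waits for n workers: each worker "downloads" a resource
// and presses the door once via countDown().
public class DownloadLatchDemo {

    public static int downloadAll(int n) {
        CountDownLatch latch = new CountDownLatch(n);
        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < n; i++) {
            new Thread(() -> {
                completed.incrementAndGet(); // "download" one resource
                latch.countDown();           // press the door once
            }).start();
        }
        try {
            latch.await();                   // wait for the door to open
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        // countDown() happens-before the return from await(), so all n
        // increments are visible here
        return completed.get();
    }
}
```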


3.3 Semaphore (semaphore)

Semaphores control the number of threads that can access a particular resource concurrently.

The semaphore's count represents the number of available resources. Acquiring the semaphore decreases the count by 1; if a thread tries to acquire it when the count is 0, the thread blocks until a permit is released.

Step 1: Initialize the semaphore:

Semaphore sem = new Semaphore(N); // N is the number of resources

Step 2: Acquire a permit:

sem.acquire(); // decrements the count by 1; blocks if the count is already 0

Step 3: Release a permit:

sem.release(); // increments the count by 1, marking the resource free; a blocked waiter is woken

Application: database connection pool management.

Available and occupied database connections are managed in two collections. The function that obtains a connection takes one from the available set and moves it into the occupied set; the function that releases a connection puts the finished connection back into the available set.

When no connection is available, we do not want the obtaining function to fail immediately; we want it to block and wait. So we add an acquire call on the semaphore in the function that obtains a connection, and a release call in the function that frees a connection. (Note that a BlockingQueue may be a better way to manage a connection pool, because the semaphore's initial count is fixed and must match the size of the connection pool.)

0-1 semaphore: also called a mutex (binary) semaphore; only one thread at a time can hold the resource, giving exclusive access to the resource or function.
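A minimal sketch of the bounded-access idea, with the pool reduced to a counter (all names are ours): a Semaphore initialized to the pool size keeps the number of threads inside the use() section at or below poolSize, however many threads compete:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

// Semaphore-bounded "pool": maxObserved records the peak number of
// threads holding a permit at once; it can never exceed the pool size.
public class PooledResourceDemo {
    private final Semaphore available;
    private final AtomicInteger inUse = new AtomicInteger();
    private final AtomicInteger maxObserved = new AtomicInteger();

    public PooledResourceDemo(int poolSize) {
        available = new Semaphore(poolSize);
    }

    public void use() throws InterruptedException {
        available.acquire();                 // blocks if the pool is exhausted
        try {
            int now = inUse.incrementAndGet();
            maxObserved.accumulateAndGet(now, Math::max);
            inUse.decrementAndGet();
        } finally {
            available.release();             // return the permit
        }
    }

    // Hammer the pool with more threads than permits; the observed
    // concurrency stays bounded by the pool size.
    public static int demo(int poolSize, int threads) {
        PooledResourceDemo pool = new PooledResourceDemo(poolSize);
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                try {
                    pool.use();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            ts[i].start();
        }
        try {
            for (Thread t : ts) t.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return pool.maxObserved.get();
    }
}
```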


3.4 CyclicBarrier (barrier)

Multiple threads execute independently, and the designated task runs only after all of them have reached the barrier position.

Barriers resemble latches. The difference: a latch waits for an event (the count reaching 0), while a barrier waits for all of the other threads to reach the barrier position.

Step 1: Initialize the barrier:

CyclicBarrier barrier = new CyclicBarrier(count, runnableTask);

This specifies that count threads must reach the barrier point before the barrier is broken through and runnableTask is executed.

Step 2: Wait at the barrier point in each thread:

barrier.await();

runnableTask runs only after all of the participating threads have reached the barrier position.

Note: as the name CyclicBarrier suggests, the barrier is cyclic: after all threads break through it, the barrier resets, and the next round of waiting at it is still valid.
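The cyclic behaviour can be demonstrated directly (names are ours): the same barrier is reused for several rounds, and the barrier action runs exactly once per round, when the last party arrives:

```java
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.atomic.AtomicInteger;

// Each of `parties` threads awaits the barrier `rounds` times; the
// barrier action increments a counter once per completed round.
public class BarrierDemo {

    public static int runRounds(int parties, int rounds) {
        AtomicInteger actionRuns = new AtomicInteger();
        CyclicBarrier barrier =
                new CyclicBarrier(parties, actionRuns::incrementAndGet);
        Thread[] ts = new Thread[parties];
        for (int i = 0; i < parties; i++) {
            ts[i] = new Thread(() -> {
                try {
                    for (int r = 0; r < rounds; r++) {
                        barrier.await();   // wait until all parties arrive
                    }
                } catch (InterruptedException | BrokenBarrierException e) {
                    Thread.currentThread().interrupt();
                }
            });
            ts[i].start();
        }
        try {
            for (Thread t : ts) t.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return actionRuns.get();           // one barrier action per round
    }
}
```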

