Java concurrency: synchronization between threads-conditional queue and synchronization Tool

Source: Internet
Author: User

Synchronization between threads involves more than mutual exclusion (the mutex locks introduced earlier). Below we introduce some common collaboration mechanisms between Java threads.

I. Built-in condition queue

Just as every Java object can be used as a built-in lock, every object can also be used as a condition queue, called the built-in condition queue. Object.wait() and Object.notify()/notifyAll() make up the API of the built-in condition queue.

Note that a thread must hold the built-in lock of object X before calling any built-in condition queue API of object X.

1. API Introduction

wait()

- Automatically releases the current lock when called and asks the OS to suspend the current thread.

- The thread is awakened after the condition on the built-in condition queue is signaled.

- After being awakened, the thread competes with other threads to re-acquire the lock.

 

notify()

- Wakes up one arbitrary thread waiting on the built-in condition queue of the currently locked object.

- The notifying thread should release the current lock as soon as possible after notifying, so that the awakened thread can acquire it.

 

notifyAll()

- Wakes up all threads waiting on the built-in condition queue of the currently locked object.

- The notifying thread should release the current lock as soon as possible after notifying, so that the waiting threads can acquire it.

- Only one awakened thread obtains the lock at a time. After the thread that won the lock finishes and exits the synchronized block, the remaining awakened threads compete for the lock again, until all of them have run.

2. Use Environment

wait, notify, and notifyAll must run inside a synchronized block. As general collaboration primitives between tasks, they are part of the Object class, not of Thread, so they can be used in any synchronized method or block.

In fact, you may only call wait, notify, and notifyAll inside a synchronized method or synchronized block. Calling them from unsynchronized code compiles, but throws IllegalMonitorStateException at run time.
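A minimal sketch of this rule (the class name is illustrative): calling wait() without holding the object's monitor fails at run time with IllegalMonitorStateException.

```java
public class MonitorDemo {
    // Returns true if calling wait() without holding the monitor
    // throws IllegalMonitorStateException, as the text describes.
    public static boolean waitWithoutLockThrows() {
        Object lock = new Object();
        try {
            lock.wait(10);  // not inside synchronized (lock): illegal
            return false;
        } catch (IllegalMonitorStateException expected) {
            return true;
        } catch (InterruptedException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(waitWithoutLockThrows()); // true
    }
}
```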

In general, wait and notify are placed in a synchronized (object) block and called on that object. If synchronized (this) is used, they are called directly. The details are as follows:

(1) Calling on a specified object lockObj:

 

synchronized (lockObj) {    // acquire the lockObj object lock
    try {
        // release the lockObj object lock and block waiting on its built-in condition queue
        lockObj.wait();
    } catch (InterruptedException e) {
        e.printStackTrace();
        return;
    }
}

synchronized (lockObj) {
    // wake up a thread waiting on the built-in condition queue of lockObj
    lockObj.notify();
}

 

(2) It can also be called on this:

 

synchronized (this) {    // acquire the current object's built-in lock
    try {
        // release the current object's lock and block
        wait();
    } catch (InterruptedException e) {
        e.printStackTrace();
        return;
    }
}

synchronized (this) {
    // wake up a thread waiting on the current object's built-in condition queue
    notify();
}

 

3. Notification Omission

wait is usually paired with a while loop that checks the condition of interest. The wait must be surrounded by such a loop because, if multiple tasks wait on the same lock, the first task to wake up may change the state the condition depends on; the current task must then re-check the condition and, if it no longer holds, suspend itself again until the condition changes once more.

synchronized (this) {
    while (stillNeedToWait)   // re-check the condition of interest after every wakeup
        wait();
}

In this way, we can also avoid the "notification omission" problem.

// Thread A
synchronized (proceedLock) {
    proceedLock.wait();
}

// Thread B
synchronized (proceedLock) {
    proceedLock.notifyAll();
}

 

The role of thread B is to wake thread A at some point. However, if thread B runs too early and finishes before thread A starts waiting, thread A will wait forever for a notification that has already been sent. This is the so-called notification omission problem.

 

This problem can be solved if thread A pairs the wait with a condition variable.

// Thread A:
synchronized (proceedLock) {
    // a while loop, not an if, to guard against premature notification
    while (!okToProceed) {
        proceedLock.wait();
    }
}

// Thread B:
synchronized (proceedLock) {
    // set the flag to true before notifying; even if the notification is
    // missed, thread A will then not block in wait
    okToProceed = true;
    proceedLock.notifyAll();
}

 

The okToProceed variable is initially false; that is, thread A blocks by default and waits for thread B to wake it. If thread B finishes before thread A reaches the wait, the condition has already been set to true, so thread A does not wait at all.

This avoids notification omissions.
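Putting the two halves together, a runnable sketch of the fixed pattern (the class name is illustrative; okToProceed and proceedLock follow the snippet above):

```java
public class NotifyDemo {
    private final Object proceedLock = new Object();
    private boolean okToProceed = false;

    // Thread A's side: wait until the flag is set.
    public void waitForSignal() throws InterruptedException {
        synchronized (proceedLock) {
            while (!okToProceed) {   // guards against missed and spurious wakeups
                proceedLock.wait();
            }
        }
    }

    // Thread B's side: set the flag first so a late waiter never blocks.
    public void signal() {
        synchronized (proceedLock) {
            okToProceed = true;
            proceedLock.notifyAll();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        NotifyDemo demo = new NotifyDemo();
        demo.signal();        // "thread B" finishes before "thread A" even starts waiting
        demo.waitForSignal(); // still returns immediately thanks to the flag
        System.out.println("proceeded");
    }
}
```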

 

II. Explicit condition queues

As mentioned above, each Java object has a built-in condition queue, but built-in condition queues have an obvious limitation: each built-in lock can have only one associated condition queue.

 

You can call Lock.newCondition() on an explicit lock such as ReentrantLock to obtain an explicit condition queue. Condition provides more features than the built-in condition queue: multiple explicit condition queues can be created on each lock; condition waits can be interruptible or uninterruptible, and can carry a time limit; in addition, both fair and unfair queueing are available.

 

On an explicit Condition queue, the methods corresponding to wait, notify, and notifyAll of the built-in condition queue are await, signal, and signalAll.

 

The following example implements a bounded cache. Two explicit condition queues are created on the same explicit lock: one representing "cache not full" and one representing "cache not empty".

public class ConditionBoundBuffer<T> {
    protected final Lock lock = new ReentrantLock();
    // condition queue: cache not full
    private final Condition notFullCond = lock.newCondition();
    // condition queue: cache not empty
    private final Condition notEmptyCond = lock.newCondition();

    @SuppressWarnings("unchecked")
    private final T[] items = (T[]) new Object[100];
    private int tail, head, count;

    public void put(T x) throws InterruptedException {
        lock.lock();
        try {
            // while the cache is full, block on the not-full condition queue (releasing the lock)
            while (count == items.length)
                notFullCond.await();
            items[tail] = x;
            if (++tail == items.length)
                tail = 0;
            ++count;
            // wake up a thread waiting on the not-empty condition queue
            notEmptyCond.signal();
        } finally {
            lock.unlock();
        }
    }

    public T take() throws InterruptedException {
        lock.lock();
        try {
            // while the cache is empty, block on the not-empty condition queue (releasing the lock)
            while (count == 0)
                notEmptyCond.await();
            T x = items[head];
            items[head] = null;
            if (++head == items.length)
                head = 0;
            --count;
            // wake up a thread waiting on the not-full condition queue
            notFullCond.signal();
            return x;
        } finally {
            lock.unlock();
        }
    }
}
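Since the buffer class above has no entry point, here is a compact, runnable variant of the same two-condition pattern (capacity 2 to force blocking; the class name and sizes are illustrative):

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class CondDemo {
    private final Lock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition();
    private final Condition notFull = lock.newCondition();
    private final int[] items = new int[2];   // tiny capacity so put() really blocks
    private int head, tail, count;

    public void put(int x) throws InterruptedException {
        lock.lock();
        try {
            while (count == items.length) notFull.await();
            items[tail] = x;
            tail = (tail + 1) % items.length;
            count++;
            notEmpty.signal();
        } finally { lock.unlock(); }
    }

    public int take() throws InterruptedException {
        lock.lock();
        try {
            while (count == 0) notEmpty.await();
            int x = items[head];
            head = (head + 1) % items.length;
            count--;
            notFull.signal();
            return x;
        } finally { lock.unlock(); }
    }

    public static void main(String[] args) throws InterruptedException {
        CondDemo buf = new CondDemo();
        Thread producer = new Thread(() -> {
            try { for (int i = 0; i < 5; i++) buf.put(i); }
            catch (InterruptedException ignored) { }
        });
        producer.start();
        int sum = 0;
        for (int i = 0; i < 5; i++) sum += buf.take(); // blocks until the producer supplies items
        producer.join();
        System.out.println(sum); // 0+1+2+3+4 = 10
    }
}
```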

 

III. Synchronization tools

The java.util.concurrent package contains synchronization tools that provide practical synchronization functions between threads.

3.1 BlockingQueue (blocking Queue)

BlockingQueue extends Queue with blocking insertion and retrieval operations.

public interface BlockingQueue<E> extends Queue<E> {
    // insert the element; returns true on success, otherwise throws IllegalStateException
    boolean add(E e);
    // insert the element; returns true if there is space, otherwise returns false
    boolean offer(E e);
    // insert the element; blocks until space becomes available
    void put(E e) throws InterruptedException;
    // retrieve and remove the head of the queue; blocks until an element is available
    E take() throws InterruptedException;
    // retrieve and remove the head of the queue; waits up to the timeout, then returns null
    E poll(long timeout, TimeUnit unit) throws InterruptedException;
}
 
Principle: BlockingQueue is a thread-safe container with blocking behavior. Thread safety is implemented with ReentrantLock; blocking and wakeup are implemented with Condition.

 

Application: through the put and take methods it is easy to implement thread collaboration, such as the typical producer-consumer model.

Interruptible: like Thread.sleep(), Object.wait(), Thread.join(), and other blocking interfaces, BlockingQueue.put()/take() respond to interruption.

 

The following are several implementation classes of the BlockingQueue interface:

(1) ArrayBlockingQueue: an array-based blocking queue implementation with a fixed size; the constructor must take an int parameter specifying the queue size. Internal elements are stored in FIFO (first-in, first-out) order. Often used to implement a bounded cache.

(2) LinkedBlockingQueue: a blocking queue implemented on a linked list; its size is not fixed. If the constructor is given a size parameter, the resulting BlockingQueue is bounded by it; without one, the capacity is Integer.MAX_VALUE. Internal elements are stored in FIFO order.

(3) PriorityBlockingQueue: an array-based blocking queue whose ordering is not FIFO, but determined by the elements' natural ordering or by a comparator supplied to the constructor.

(4) SynchronousQueue: a special BlockingQueue with no internal capacity; put and take operations must alternate, with each put waiting for a matching take.
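The producer-consumer application mentioned above can be sketched with ArrayBlockingQueue (class and variable names are illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PcDemo {
    // One producer puts 1..5; the consumer takes them and sums.
    static int runPipeline() throws InterruptedException {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(3); // bounded, FIFO
        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= 5; i++) queue.put(i); // blocks when the queue is full
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        int sum = 0;
        for (int i = 0; i < 5; i++) sum += queue.take(); // blocks when the queue is empty
        producer.join();
        return sum;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runPipeline()); // 1+2+3+4+5 = 15
    }
}
```

Note that no explicit lock or condition appears in the calling code; the queue encapsulates all the blocking logic.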

 

3.2 CountDownLatch (latch)

A latch makes the relevant threads wait at a certain point until a condition occurs: the threads block until the latch's count is reduced to 0, at which point all of the waiting threads continue running.

As a metaphor, a latch is like a door that opens only after being knocked N times (N is the latch's initial count); the threads doing the knocking do not care how many threads are waiting outside the door. Only when the door opens can all the threads waiting outside enter.

 

Step 1: Initialize the latch (set the number of knocks needed to open the door)

CountDownLatch latch = new CountDownLatch(N);

 

Step 2: Make a thread wait on the latch (wait outside the door)

latch.await();

When a waiting thread detects that the latch counter has reached 0 (the door has opened), it continues with its task.

 

Step 3: Decrement the latch counter by 1 (knock once)

latch.countDown();

 

Application: one thread waits for N threads to complete all tasks

For example, the main thread needs all image resources to be ready before using them, so it starts N threads to download them, initializes a latch with initial count N, and calls await() on the latch. Each thread calls countDown() when its download finishes; when the last download thread decrements the counter to 0, the main thread, which was waiting on the latch, resumes and uses the downloaded image resources.

Similarly, one thread can open the door for N waiting threads, or one thread can wait for a single other thread to open the door.
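A minimal sketch of the download scenario above (the class and method names are illustrative; the "downloads" are simulated):

```java
import java.util.concurrent.CountDownLatch;

public class LatchDemo {
    // n worker threads "download" a resource each; the caller waits for all of them.
    static int downloadAll(int n) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(n);
        int[] results = new int[n];
        for (int i = 0; i < n; i++) {
            final int id = i;
            new Thread(() -> {
                results[id] = id + 1;  // simulate downloading resource id
                latch.countDown();     // one more download finished
            }).start();
        }
        latch.await();                 // blocks until the count reaches 0
        // countDown()/await() establish happens-before, so reading results here is safe
        int sum = 0;
        for (int r : results) sum += r;
        return sum;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(downloadAll(3)); // 1+2+3 = 6
    }
}
```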

 

3.3 Semaphore (Semaphore)

A semaphore controls the number of threads that can concurrently access a particular resource.

The semaphore count represents the number of available resources; acquiring the semaphore decrements the count by 1. If a thread tries to acquire the semaphore when the count is already 0, the thread blocks until the semaphore is released.

 

Step 1: Initialize the semaphore

Semaphore sem = new Semaphore(N); // N is the number of resources

 

Step 2: Acquire a semaphore

sem.acquire(); // decrement the semaphore count by 1; blocks if the count is already 0

 

Step 3: release a semaphore

sem.release(); // increment the semaphore count by 1, marking the resource free and waking a blocked waiting thread

 

Application: Database Connection Pool Management

Manage available and in-use database connections in two separate sets. The function that obtains a connection takes one from the available set and moves it to the in-use set; the function that releases a connection puts the used connection back into the available set.

When no database connection is available, we do not want the obtaining function to return failure immediately, but to block and wait. Therefore, add an acquire call to the function that obtains a connection, and a release call to the function that frees one. (A better way to manage a connection pool may be a BlockingQueue, since the initial semaphore count is fixed and must match the pool size.)
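A hypothetical sketch of such a semaphore-guarded pool (strings stand in for real connections; all names are illustrative):

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Semaphore;

public class PoolDemo {
    private final ConcurrentLinkedQueue<String> available = new ConcurrentLinkedQueue<>();
    private final Semaphore permits;

    public PoolDemo(int size) {
        permits = new Semaphore(size);
        for (int i = 0; i < size; i++) available.add("conn-" + i); // placeholder "connections"
    }

    public String acquireConn() throws InterruptedException {
        permits.acquire();       // blocks when all connections are in use
        return available.poll(); // holding a permit guarantees the queue is non-empty
    }

    public void releaseConn(String conn) {
        available.add(conn);
        permits.release();       // wake one blocked borrower, if any
    }

    public static void main(String[] args) throws InterruptedException {
        PoolDemo pool = new PoolDemo(2);
        String a = pool.acquireConn();
        String b = pool.acquireConn();
        System.out.println(pool.permits.availablePermits()); // 0: pool exhausted
        pool.releaseConn(a);
        pool.releaseConn(b);
    }
}
```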

 

A 0-1 semaphore, also known as a mutex semaphore, allows only one thread at a time to obtain exclusive use of the resource or exclusive access to the function.

 

3.4 CyclicBarrier (barrier)

Multiple threads run independently; when all of them reach the barrier position, an optionally specified task is run and the threads proceed.

The difference between a barrier and a latch: a latch waits for an event (the latch count reaching 0), while a barrier waits for all the other threads to reach the barrier position.

 

Step 1: Initialize the barrier

CyclicBarrier barrier = new CyclicBarrier(count, runnableTask);

This specifies that count threads must reach the barrier point before they can break through it, at which moment the runnableTask task is executed.

 

Step 2: Set a barrier point in each thread

barrier.await();

The runnableTask task runs only after all the threads using the barrier have reached the barrier position.

 

Note: as the name CyclicBarrier suggests, the barrier is cyclic: after all threads break through the barrier, if they continue to run in a loop, the next pass through the barrier still works.
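A small sketch showing both the barrier action and its cyclic reuse (class and method names are illustrative):

```java
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.atomic.AtomicInteger;

public class BarrierDemo {
    // parties threads each pass the barrier `rounds` times;
    // returns how many times the barrier action ran (once per trip).
    static int runRounds(int parties, int rounds) throws InterruptedException {
        AtomicInteger trips = new AtomicInteger();
        // the barrier action runs once per trip, after all parties arrive
        CyclicBarrier barrier = new CyclicBarrier(parties, trips::incrementAndGet);
        Runnable worker = () -> {
            try {
                for (int r = 0; r < rounds; r++) {
                    barrier.await();   // wait until all parties reach this point
                }
            } catch (Exception e) {
                Thread.currentThread().interrupt();
            }
        };
        Thread[] threads = new Thread[parties];
        for (int i = 0; i < parties; i++) {
            threads[i] = new Thread(worker);
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        return trips.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runRounds(3, 2)); // 2: the barrier was reused across rounds
    }
}
```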
