In-Depth Java Concurrency (21): Concurrent Containers Part 6 — The Blockable BlockingQueue (1)


As the class diagram in "Concurrent Containers Part 4: Concurrent Queues and Queue" showed, BlockingQueue is the blocking, thread-safe member of the Queue family: it allows an add/remove operation to block until it succeeds.

BlockingQueue adds two operations to Queue: put and take. The table below summarizes the four forms the queue operations come in:

             Throws exception   Special value   Blocks    Times out
  Insert     add(e)             offer(e)       put(e)    offer(e, time, unit)
  Remove     remove()           poll()         take()    poll(time, unit)
  Examine    element()          peek()         —         —
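As a concrete illustration of the difference between the non-blocking, blocking, and timed forms of these operations (a minimal sketch, not from the original article):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class BlockingOpsDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> q = new ArrayBlockingQueue<String>(1);

        q.put("a");                      // succeeds immediately: the queue has room
        boolean accepted = q.offer("b"); // queue full: returns false instead of blocking
        System.out.println(accepted);    // false

        // the timed offer blocks for up to the given time, then gives up
        accepted = q.offer("b", 50, TimeUnit.MILLISECONDS);
        System.out.println(accepted);    // false

        System.out.println(q.take());    // "a" -- would block if the queue were empty
    }
}
```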

This seemingly simple API is very useful for controlling concurrency around a queue. Because both enqueue and dequeue can block, implementing the producer–consumer model on top of it becomes much simpler.

Listing 1 is a producer–consumer example drawn from a real-world scenario: a server (an ICE service) accepts a client request (accept), computes the birthdays of that person's friends, and stores the result in a cache (Memcache). The example uses an ExecutorService for multithreading to improve throughput as much as possible; thread pools are described in detail in a later part of the series, so for now you can read the pool simply as new Thread(r).start(). The blocking queue used here is LinkedBlockingQueue.

Listing 1: a producer–consumer example

package xylz.study.concurrency;

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

public class BirthdayService {

    final int workerNumber;

    final Worker[] workers;

    final ExecutorService threadPool;

    static volatile boolean running = true;

    public BirthdayService(int workerNumber, int capacity) {
        if (workerNumber <= 0) throw new IllegalArgumentException();
        this.workerNumber = workerNumber;
        workers = new Worker[workerNumber];
        for (int i = 0; i < workerNumber; i++) {
            workers[i] = new Worker(capacity);
        }
        //
        boolean b = running; // read the volatile field (guards against reordering)
        threadPool = Executors.newFixedThreadPool(workerNumber);
        for (Worker w : workers) {
            threadPool.submit(w);
        }
    }

    Worker getWorker(int id) {
        return workers[id % workerNumber];
    }

    class Worker implements Runnable {

        final BlockingQueue<Integer> queue;

        public Worker(int capacity) {
            queue = new LinkedBlockingQueue<Integer>(capacity);
        }

        public void run() {
            while (true) {
                try {
                    consume(queue.take());
                } catch (InterruptedException e) {
                    return;
                }
            }
        }

        void put(int id) {
            try {
                queue.put(id);
            } catch (InterruptedException e) {
                return;
            }
        }
    }

    public void accept(int id) {
        // accept a client request
        getWorker(id).put(id);
    }

    protected void consume(int id) {
        // do the work:
        // get the list of friends and save their birthdays to the cache
    }
}

As Listing 1 shows, both put() and take() may throw InterruptedException. We will come back to why this exception is thrown.
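A small sketch (not part of the original article) of why the exception matters: a thread blocked in take() on an empty queue can only be cancelled via interruption.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;

public class InterruptTakeDemo {
    public static void main(String[] args) throws InterruptedException {
        final BlockingQueue<Integer> queue = new LinkedBlockingQueue<Integer>();
        final CountDownLatch cancelled = new CountDownLatch(1);

        Thread consumer = new Thread(new Runnable() {
            public void run() {
                try {
                    queue.take(); // blocks forever: the queue stays empty
                } catch (InterruptedException e) {
                    cancelled.countDown(); // the only way out of the blocked state
                }
            }
        });
        consumer.start();

        Thread.sleep(100);    // give the consumer time to block in take()
        consumer.interrupt(); // cancel it
        cancelled.await();    // the InterruptedException was observed
        System.out.println("consumer cancelled");
    }
}
```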

The previous section mentioned three ways to implement a concurrent queue; obviously only the second, locking, can implement a blocking queue. As discussed in the lock-mechanism articles, a Lock combined with Conditions can suspend and wake threads, and the LinkedBlockingQueue introduced next takes exactly this approach.

The principle behind LinkedBlockingQueue

Compared with the structure of ConcurrentLinkedQueue, LinkedBlockingQueue has two ReentrantLocks, two Conditions, and an AtomicInteger for counting, which obviously makes its implementation somewhat more complex. Against this structure, a few points are worth noting:

      1. On the whole, LinkedBlockingQueue and ConcurrentLinkedQueue are structurally similar: both keep head and tail references, with each node pointing to the next, so their basic operations should be similar too.
      2. LinkedBlockingQueue introduces an atomic counter, count, so size() runs in constant time instead of traversing the queue; count just has to be updated whenever the queue length changes.
      3. Because node links are modified under a lock, the next pointers do not need to be volatile. Why the node's item is still volatile despite the locks is analyzed in detail later.
      4. Two locks are introduced, one for enqueue (putLock) and one for dequeue (takeLock), along with a not-full condition and a not-empty condition. In the producer–consumer terms of the earlier lock-mechanism articles, enqueue is the producer side and dequeue the consumer side. Why two locks rather than one? A single lock would work, but it would mean only one enqueue or dequeue could run at a time, with the other waiting for the lock to be released. As the ConcurrentLinkedQueue implementation showed, head and last (called tail in ConcurrentLinkedQueue) are separate and independent: enqueue never modifies the dequeue-side data, and dequeue never modifies the enqueue-side data, so the two operations do not interfere. Put more plainly, the two locks act as two write locks: enqueue is a write operating on the tail, dequeue is a write operating on the head, and the two are unrelated. (Not entirely unrelated, as the detailed analysis below shows.)

Before looking at the enqueue and dequeue code, let's guess at how the implementation should work.

Based on the locking principles learned earlier and on ConcurrentLinkedQueue, the blocking enqueue process should look roughly like this:

      1. Acquire the enqueue lock putLock and check the queue size; if the queue is full, suspend the thread and wait for a notFull signal.
      2. Add the element to the tail of the queue and update the tail reference last.
      3. Increase the queue size by 1.
      4. Release putLock.
      5. Signal notEmpty (if there is a suspended dequeue thread), telling consumers that a new item is available.

Symmetrically, the blocking dequeue process should look roughly like this:

      1. Acquire the dequeue lock takeLock and check the queue size; if the queue is empty, suspend the thread and wait for a notEmpty signal.
      2. Remove the element at the head and update the head reference.
      3. Decrease the queue size by 1.
      4. Release takeLock.
      5. Signal notFull (if there is a suspended enqueue thread), telling producers that there is now free space.

Now let's verify this against the actual code.

The enqueue process (put/offer)

Listing 2: the blocking enqueue process

public void put(E e) throws InterruptedException {
    if (e == null) throw new NullPointerException();
    int c = -1;
    final ReentrantLock putLock = this.putLock;
    final AtomicInteger count = this.count;
    putLock.lockInterruptibly();
    try {
        try {
            while (count.get() == capacity)
                notFull.await();
        } catch (InterruptedException ie) {
            notFull.signal(); // propagate to a non-interrupted thread
            throw ie;
        }
        insert(e);
        c = count.getAndIncrement();
        if (c + 1 < capacity)
            notFull.signal();
    } finally {
        putLock.unlock();
    }
    if (c == 0)
        signalNotEmpty();
}

Listing 2 matches the enqueue process described above, but several points deserve attention:

      1. If the thread is interrupted while waiting to enqueue, it sends a notFull signal on the way out, so that the next blocked enqueue thread (if any) can be woken.
      2. After a successful enqueue, if the queue is still not full, another notFull signal is sent. Why? Can't the other blocked enqueue threads tell that the queue is not full? No. To reduce context switches, each wake-up (whether on the enqueue or the dequeue side) wakes only one thread (signal), not all of them (signalAll). Without this extra signal, other blocked enqueue threads could stay suspended even though the queue has room.
      3. If the queue was empty before this enqueue, wake one dequeue thread. "Was empty" really is certain: with putLock held, no other thread can enqueue, so if the counter was 0 before the increment, the queue now holds exactly one element, and a blocked dequeue thread (if any) can safely be woken to consume it. This also explains the temporary variable c: re-reading the shared count is more expensive than reading a local, and c (initialized to -1) carries exactly the information needed, namely the counter value just before the increment.
      4. The enqueue process is interruptible, hence InterruptedException in the signature.

The second point deserves a further note. It really belongs to the discussion of condition queues in the lock-mechanism articles, but there was no motivating scenario there, so it was not covered at the time.

Earlier articles said that notifyAll is always more reliable than notify, because notify may lose notifications. So why not use notifyAll here?

First, the lost-notification problem.

The notify lost-notification problem

Suppose thread A waits in a condition queue for one condition, and thread B waits in the same condition queue for a different condition; that is, A and B are suspended by the same condition.await() but are waiting for different predicates. Now suppose B's predicate becomes true and thread C performs a notify. The JVM picks one of the waiting threads (A or B) arbitrarily to wake, and unluckily wakes A. A's predicate still does not hold, so A re-suspends, while B keeps waiting, oblivious. In other words, the notification meant for B was consumed by an unrelated thread, the thread that actually needed it never received it, and B is still waiting for an event that has already happened.

Using notifyAll avoids this problem: it wakes all waiting threads. Thread A also receives the notification, but since it is of no use to A, A simply re-suspends, while thread B receives it and wakes up as intended.

Since notifyAll solves the lost-notification problem of a single notify, why not always replace notify with notifyAll?

Suppose N threads are waiting in the condition queue. Calling notifyAll wakes all N, which then compete for the same lock; at most one can win, and the others go straight back to the suspended state. Every wake-up therefore costs a burst of context switches (large if N is large) and a burst of lock contention, which for frequent wake-ups is disastrous for performance.

If after each wake-up only one thread can proceed anyway, why not wake only one with notify? In such cases notify performs better than notifyAll.

A single notify can replace notifyAll if the following conditions hold:

Uniform waiters: every thread waits on the same condition predicate and executes the same logic on returning from wait, and each notification on the condition variable is intended to let exactly one thread proceed.

In other words, the extra notFull.signal() in Listing 2 would be superfluous if put/take used signalAll() to wake waiters instead.
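A minimal bounded buffer written in this uniform-waiter style (a sketch using Lock/Condition; not code from the JDK): producers wait only on notFull and consumers only on notEmpty, so each condition has uniform waiters and a single signal() is safe.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class BoundedBuffer<E> {
    private final Deque<E> items = new ArrayDeque<E>();
    private final int capacity;
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();  // waited on only by producers
    private final Condition notEmpty = lock.newCondition(); // waited on only by consumers

    public BoundedBuffer(int capacity) { this.capacity = capacity; }

    public void put(E e) throws InterruptedException {
        lock.lock();
        try {
            while (items.size() == capacity)
                notFull.await();
            items.addLast(e);
            notEmpty.signal(); // uniform waiters: one signal wakes one eligible consumer
        } finally {
            lock.unlock();
        }
    }

    public E take() throws InterruptedException {
        lock.lock();
        try {
            while (items.isEmpty())
                notEmpty.await();
            E e = items.removeFirst();
            notFull.signal(); // likewise, one signal suffices for one blocked producer
            return e;
        } finally {
            lock.unlock();
        }
    }
}
```

Unlike LinkedBlockingQueue this sketch uses a single lock, so enqueue and dequeue cannot run in parallel; it only illustrates when signal() is sufficient.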

The dequeue process (poll/take)

Now the dequeue side. Listing 3 shows the dequeue process, which is symmetric to the enqueue process. Notice that dequeue uses a different lock from enqueue, so enqueue and dequeue operations can proceed in parallel.

Listing 3: the blocking dequeue process

public E take() throws InterruptedException {
    E x;
    int c = -1;
    final AtomicInteger count = this.count;
    final ReentrantLock takeLock = this.takeLock;
    takeLock.lockInterruptibly();
    try {
        try {
            while (count.get() == 0)
                notEmpty.await();
        } catch (InterruptedException ie) {
            notEmpty.signal(); // propagate to a non-interrupted thread
            throw ie;
        }

        x = extract();
        c = count.getAndDecrement();
        if (c > 1)
            notEmpty.signal();
    } finally {
        takeLock.unlock();
    }
    if (c == capacity)
        signalNotFull();
    return x;
}

Why is there an exception?

With the enqueue and dequeue processes covered, we can answer some earlier questions.

Why is InterruptedException always thrown? This is a large topic, Java's handling of thread interruption, which hopefully gets a dedicated chapter at the end of this series.

We kept running into it in the lock-mechanism articles too. Java has no way to directly interrupt a suspended thread; instead, a thread in a waiting state is asked to stop by setting its interrupt flag, and once the thread detects the flag it exits the waiting state, typically by throwing InterruptedException. So anything that suspends a thread can produce InterruptedException: Thread.sleep(), Thread.join(), Object.wait(), and so on. LockSupport.park() does not throw InterruptedException; when interrupted it simply returns, leaving the current thread's interrupted status set. Lock/Condition, on detecting the interrupted status, assume the thread should abandon its task and throw InterruptedException.
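The contrast can be sketched in a few lines (a demo written for this article, not from the original): sleep() responds to interruption by throwing and clearing the interrupt flag, while LockSupport.park() returns silently with the flag still set.

```java
import java.util.concurrent.locks.LockSupport;

public class InterruptStatusDemo {
    public static void main(String[] args) throws InterruptedException {
        // Thread.sleep(): throws InterruptedException and CLEARS the interrupt flag
        Thread sleeper = new Thread(new Runnable() {
            public void run() {
                try {
                    Thread.sleep(10000);
                } catch (InterruptedException e) {
                    System.out.println("sleep: flag after catch = "
                            + Thread.currentThread().isInterrupted()); // false
                }
            }
        });
        sleeper.start();
        sleeper.interrupt();
        sleeper.join();

        // LockSupport.park(): returns without throwing, leaving the flag SET
        Thread parker = new Thread(new Runnable() {
            public void run() {
                LockSupport.park();
                System.out.println("park: flag after return = "
                        + Thread.currentThread().isInterrupted()); // true
            }
        });
        parker.start();
        parker.interrupt();
        parker.join();
    }
}
```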

Volatile again

There is one more point that is not easy to understand: why is Node.item a volatile field?

At first I did not quite see it either: for an enqueued node the item never changes, and it is set to null only when the node leaves the queue at the head. It is also set to null in remove(o), but that runs with both putLock and takeLock held, so there can be no problem there. So where is the problem?

We know the item value is written during put/offer, under putLock, which guarantees that any read made while holding putLock sees it correctly. The problem can therefore only arise somewhere that reads Node.item without holding putLock.

The peek operation returns the head element without removing it. It clearly does not touch the tail, so it does not need putLock; it holds only takeLock. Listing 4 shows the process.

Listing 4: reading the head element (peek)

public E peek() {
    if (count.get() == 0)
        return null;
    final ReentrantLock takeLock = this.takeLock;
    takeLock.lock();
    try {
        Node<E> first = head.next;
        if (first == null)
            return null;
        else
            return first.item;
    } finally {
        takeLock.unlock();
    }
}

Listing 4 ultimately returns first.item for a non-null head node; that is, it reads the node's item while holding takeLock, not putLock. In other words, without volatile, a write to Node.item made under putLock might not be visible to a thread in peek()!

Listing 5: adding an element at the tail of the queue

private void insert(E x) {
    last = last.next = new Node<E>(x);
}

Listing 5 is the core of the offer/put enqueue. The key point is that the assignment to last and the construction of the node can be reordered. The Node constructor is simply Node(E x) { item = x; }, so the statement may execute in the following order:

      1. Construct a Node object n;
      2. Assign n to last (and to last.next);
      3. Initialize n, setting item = x.

Between steps 2 and 3, a peek() thread could reach the new node n, read its item, and get null. Clearly, that is unacceptable.

With item declared volatile, the JMM guarantees that the write item = x happens before the node becomes reachable through last; by the time another thread sees the new node n, its item has already been assigned. This rules out reading an empty element.

poll/take and peek both use takeLock, so this problem cannot arise between them.

The remove and traversal operations acquire both takeLock and putLock, so they cannot hit this problem either.

In short: only reading an element concurrently with its enqueue can observe this inconsistency, and making item volatile is exactly what prevents it.
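For reference, the node structure under discussion (matching the JDK 6 implementation this article analyzes) is essentially:

```java
public class NodeSketch {
    // as in JDK 6 LinkedBlockingQueue
    static class Node<E> {
        volatile E item; // volatile so item = x is visible before the node is published
        Node<E> next;    // plain field: only read and written under putLock/takeLock
        Node(E x) { item = x; }
    }
}
```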

Additional Features

BlockingQueue has one additional feature: removing elements from the queue in bulk. The API is:

int drainTo(Collection<? super E> c, int maxElements); — removes at most the given number of available elements from this queue and adds them to the given collection.

int drainTo(Collection<? super E> c); — removes all available elements from this queue and adds them to the given collection.
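For example (a minimal usage sketch, not from the original article):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class DrainToDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> queue = new LinkedBlockingQueue<Integer>();
        for (int i = 0; i < 5; i++) queue.put(i);

        List<Integer> batch = new ArrayList<Integer>();
        // moves at most 3 elements, in FIFO order, under one lock acquisition
        int moved = queue.drainTo(batch, 3);
        System.out.println(moved + " " + batch + " left=" + queue.size());
        // 3 [0, 1, 2] left=2
    }
}
```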

Listing 6 shows the process of removing up to a specified number of elements. A bulk operation acquires the locks only once, so it is more efficient than acquiring a lock for every element. Note, however, that it must hold both takeLock and putLock at the same time, because removing all elements also involves modifying the tail (otherwise last could still point to a node that has been removed).

Since the traversal-based operations contains()/remove()/iterator() also acquire both locks, those operations are thread-safe as well.

Listing 6: bulk removal

public int drainTo(Collection<? super E> c, int maxElements) {
    if (c == null)
        throw new NullPointerException();
    if (c == this)
        throw new IllegalArgumentException();
    fullyLock();
    try {
        int n = 0;
        Node<E> p = head.next;
        while (p != null && n < maxElements) {
            c.add(p.item);
            p.item = null;
            p = p.next;
            ++n;
        }
        if (n != 0) {
            head.next = p;
            assert head.item == null;
            if (p == null)
                last = head;
            if (count.getAndAdd(-n) == capacity)
                notFull.signalAll();
        }
        return n;
    } finally {
        fullyUnlock();
    }
}
