In layman's Java Concurrency (20): Concurrent containers Part 5: ConcurrentLinkedQueue


ConcurrentLinkedQueue is a thread-safe Queue implementation. Let's start with its documentation:

An unbounded, node-based, lock-free, thread-safe queue. This queue orders elements FIFO (first-in, first-out). The head of the queue is the element that has been on the queue the longest time; the tail of the queue is the element that has been on the queue the shortest time. New elements are inserted at the tail of the queue, and retrieval operations obtain elements from the head of the queue. A ConcurrentLinkedQueue is an appropriate choice when many threads share access to a common collection. This queue does not permit null elements.

Since ConcurrentLinkedQueue is simply a Queue implementation, there is not much to introduce from the API point of view, and it is as simple to use as the FIFO queues encountered earlier. Dequeuing operates only on the head node and enqueuing only on the tail node; operating on any other node requires traversing the whole queue.
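As a quick illustration, a minimal usage sketch of the standard API: offer inserts at the tail, while peek and poll read from the head.

```java
import java.util.concurrent.ConcurrentLinkedQueue;

public class QueueDemo {
    public static void main(String[] args) {
        ConcurrentLinkedQueue<String> queue = new ConcurrentLinkedQueue<>();
        // offer() inserts at the tail and never blocks
        queue.offer("a");
        queue.offer("b");
        // peek() reads the head without removing it
        System.out.println(queue.peek()); // a
        // poll() removes and returns the head, or null if the queue is empty
        System.out.println(queue.poll()); // a
        System.out.println(queue.poll()); // b
        System.out.println(queue.poll()); // null
    }
}
```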

The emphasis here is on explaining the principles and implementation of ConcurrentLinkedQueue.

Before continuing the discussion, let's consider how one might design a thread-safe queue, drawing on the earlier thread-safety material.

The first approach: synchronize the queue with synchronized, as Vector or Collections.synchronizedList/Collection do. This is obviously not a good concurrent queue; contention on the single lock causes a steep drop in throughput.

The second approach: use Lock. One refinement is to use ReentrantReadWriteLock instead of ReentrantLock to improve read throughput. However, ReentrantReadWriteLock is more complex, more error-prone, and not the common choice here, because it suits workloads where reads far outnumber writes. ReentrantLock itself is a very good option: combined with Condition it makes blocking behavior easy to implement, as we will see in the later analysis of BlockingQueue.
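To make the second approach concrete, here is a minimal, hypothetical sketch of a lock-based queue in which a Condition supplies a blocking take(). LockedQueue and its method names are invented for illustration; this is not how the JDK implements BlockingQueue.

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of strategy two: every operation holds one ReentrantLock,
// and a Condition lets consumers block until an element arrives.
public class LockedQueue<E> {
    private final Queue<E> items = new ArrayDeque<>();
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition();

    public void put(E e) {
        lock.lock();
        try {
            items.add(e);
            notEmpty.signal();        // wake one waiting consumer
        } finally {
            lock.unlock();
        }
    }

    public E take() throws InterruptedException {
        lock.lock();
        try {
            while (items.isEmpty())
                notEmpty.await();     // releases the lock while waiting
            return items.remove();
        } finally {
            lock.unlock();
        }
    }
}
```

The single lock makes this correct but serializes all access, which is exactly the throughput problem the non-blocking design avoids.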

The third approach: use CAS operations directly. Although Lock is itself implemented with CAS, it uses CAS indirectly and suspends threads. A good concurrent queue uses a non-blocking algorithm to achieve maximum throughput.

ConcurrentLinkedQueue adopts the third strategy, using the algorithm from reference 1.

As mentioned in the discussion of the lock mechanism, completing a queue operation with a non-blocking algorithm requires a "cyclic attempt": loop over the operation until it succeeds, retrying after each failure. This has been described more than once in previous chapters.
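The "cyclic attempt" pattern can be sketched with an AtomicReference. To keep it short, the example below uses a simple Treiber-style stack push rather than the queue's own code; CasStack and its members are invented names.

```java
import java.util.concurrent.atomic.AtomicReference;

// The non-blocking retry loop: read, compute, CAS, retry on failure.
public class CasStack {
    private static class Node {
        final int value;
        Node next;
        Node(int v) { value = v; }
    }

    private final AtomicReference<Node> top = new AtomicReference<>();

    public void push(int v) {
        Node n = new Node(v);
        for (;;) {                              // cyclic attempt
            Node current = top.get();           // 1. read the shared state
            n.next = current;                   // 2. build the new state locally
            if (top.compareAndSet(current, n))  // 3. publish atomically
                return;                         // success: done
            // failure: another thread changed top; loop and retry
        }
    }

    public Integer peek() {                     // top value, or null if empty
        Node n = top.get();
        return n == null ? null : n.value;
    }
}
```

No thread is ever suspended: a losing thread simply observes the new state and retries, which is the essence of the non-blocking design.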

Let's analyze the main methods in depth.

First, the data structure of ConcurrentLinkedQueue.

In this data structure, ConcurrentLinkedQueue itself holds only two fields, the head node and the tail node, and each Node stores the queue element item plus a reference next to the following node. The structure looks simple, but a few points deserve attention:

    1. All fields (head/tail/item/next) are volatile. Because ConcurrentLinkedQueue is non-blocking, only volatile makes a write to a variable visible to subsequent reads (guaranteed by the happens-before rule), and it also prevents instruction reordering.
    2. All fields are modified with atomic operations, provided by AtomicReferenceFieldUpdater (described in the chapter on atomic operations). It guarantees that updates to these variables are atomic when required.
    3. Because every node holds a reference only to its successor, the queue is unidirectional. Per the FIFO property, dequeuing happens at the head and enqueuing at the tail: the head holds the element that has been in the queue the longest, and the tail the most recently entered one.
    4. The queue keeps no element count, so its length is unbounded, and the time to get the length is not fixed: size() must traverse the entire queue, and the count may still be inaccurate.
    5. Initially both the head and the tail point to an empty node rather than null, which simplifies the code by removing the need to check head/tail for null on every operation. The head does not itself carry an element, while the tail does unless it equals the head. In other words, head.item should always be null, but tail.item is not necessarily null (if head != tail, then tail.item != null).

Point 5 can be seen in the initialization of ConcurrentLinkedQueue. The head node is also called a "pseudo-node" (sentinel): it is not a real element node, just a marker, much like the '\0' at the end of a C character array, which only marks the end and is not part of the string itself.

private transient volatile Node<E> head = new Node<E>(null, null);
private transient volatile Node<E> tail = head;
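The Node structure with its updater-based CAS helpers (casNext, getNext, and so on, as used in the listings below) can be reconstructed roughly as follows. This is a simplified sketch of the JDK 5/6 source, not the exact code.

```java
import java.util.concurrent.atomic.AtomicReferenceFieldUpdater;

// Simplified Node: volatile fields plus AtomicReferenceFieldUpdater
// so that item and next can be read normally but updated atomically.
class Node<E> {
    private volatile E item;
    private volatile Node<E> next;

    Node(E item, Node<E> next) {
        this.item = item;
        this.next = next;
    }

    @SuppressWarnings("rawtypes")
    private static final AtomicReferenceFieldUpdater<Node, Node> nextUpdater =
            AtomicReferenceFieldUpdater.newUpdater(Node.class, Node.class, "next");
    @SuppressWarnings("rawtypes")
    private static final AtomicReferenceFieldUpdater<Node, Object> itemUpdater =
            AtomicReferenceFieldUpdater.newUpdater(Node.class, Object.class, "item");

    E getItem() { return item; }
    void setItem(E val) { itemUpdater.set(this, val); }
    Node<E> getNext() { return next; }

    // CAS next from cmp to val; succeeds only if next is still cmp
    boolean casNext(Node<E> cmp, Node<E> val) {
        return nextUpdater.compareAndSet(this, cmp, val);
    }
}
```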

With the five points above, the associated API operations become much easier to explain.

The differences among the add/offer/remove/poll/element/peek methods were listed in a previous section, so they are not repeated here.

Listing 1: the enqueue operation

public boolean offer(E e) {
    if (e == null) throw new NullPointerException();
    Node<E> n = new Node<E>(e, null);
    for (;;) {
        Node<E> t = tail;
        Node<E> s = t.getNext();
        if (t == tail) {
            if (s == null) {
                if (t.casNext(s, n)) {
                    casTail(t, n);
                    return true;
                }
            } else {
                casTail(t, s);
            }
        }
    }
}

Listing 1 describes the enqueue process, which works as follows:

      1. Read the tail node t and its successor s. If the tail has not been modified by another thread (t == tail), go to step 2; otherwise retry from step 1.
      2. If s is not null, there is already an element behind the tail node, so the tail pointer has fallen behind: swing it forward with casTail(t, s) and retry from step 1. Otherwise go to step 3.
      3. Try to CAS the tail node's next from null to the new node. On success, try to advance the tail pointer to the new node and return true; on failure, retry from step 1.

Step 3 shows that the tail node's next reference is linked before the tail pointer itself is advanced, which is exactly why step 2 may observe a non-null successor behind the tail.

Note in particular that the tail and its successor are first copied into the local variables t and s: on the one hand this gives a stable snapshot to compare against the volatile field, and on the other it reduces the number of volatile reads and their performance impact.

Listing 2 describes the dequeue process, which is similar to enqueue and somewhat interesting.

The head node marks the start of the queue and removes the need for null-pointer checks, so head is always a non-null node whose item is null; that is, head != null and head.item == null always hold. Therefore what poll actually returns is the item of head.next: once head is successfully CASed to head.next, the new head's item is set to null and returned. As for the previous head node h, at that point h.item == null and h.next is the new head, but since nothing references h any longer, it will eventually be reclaimed by the GC. That is the entire dequeue process.

Listing 2: the dequeue operation

public E poll() {
    for (;;) {
        Node<E> h = head;
        Node<E> t = tail;
        Node<E> first = h.getNext();
        if (h == head) {
            if (h == t) {
                if (first == null)
                    return null;
                else
                    casTail(t, first);
            } else if (casHead(h, first)) {
                E item = first.getItem();
                if (item != null) {
                    first.setItem(null);
                    return item;
                }
                // else skip over deleted item, continue loop
            }
        }
    }
}

Listing 3 shows how the queue size is obtained. Since no counter tracks the size, it can only be computed by traversing the entire queue, which is obviously expensive. So in practice ConcurrentLinkedQueue usually needs to be paired with an AtomicInteger to track its size; the BlockingQueue implementations described later use this idea.
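As an illustration of that pairing, here is a hypothetical wrapper (CountedQueue is an invented name, not a JDK class) that maintains an O(1), approximately consistent size with an AtomicInteger:

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: pair the queue with an AtomicInteger so size() need not traverse.
public class CountedQueue<E> {
    private final ConcurrentLinkedQueue<E> queue = new ConcurrentLinkedQueue<>();
    private final AtomicInteger count = new AtomicInteger();

    public boolean offer(E e) {
        boolean added = queue.offer(e);   // always true for an unbounded queue
        if (added) count.incrementAndGet();
        return added;
    }

    public E poll() {
        E e = queue.poll();
        if (e != null) count.decrementAndGet();
        return e;
    }

    // O(1), but only approximate while other threads are mid-operation,
    // since the queue update and the counter update are not one atomic step
    public int size() {
        return count.get();
    }
}
```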

Listing 3: traversing the queue to compute its size

public int size() {
    int count = 0;
    for (Node<E> p = first(); p != null; p = p.getNext()) {
        if (p.getItem() != null) {
            // Collection.size() spec says to max out at Integer.MAX_VALUE
            if (++count == Integer.MAX_VALUE)
                break;
        }
    }
    return count;
}

Resources:

      1. Maged M. Michael and Michael L. Scott, "Simple, Fast, and Practical Non-Blocking and Blocking Concurrent Queue Algorithms"
      2. Multithreading Basics Summary 11: ConcurrentLinkedQueue
      3. A concurrency test for ConcurrentLinkedQueue

