ConcurrentLinkedQueue principle (Part 1)

Source: Internet
Author: User

ConcurrentLinkedQueue is a thread-safe implementation of Queue.

It is an unbounded, thread-safe queue based on linked nodes. The queue orders its elements FIFO (first-in-first-out): the head of the queue is the element that has been in the queue the longest.

The tail of the queue is the element that has been in the queue the shortest time. New elements are inserted at the tail, and retrieval operations obtain elements from the head.

ConcurrentLinkedQueue is an appropriate choice when many threads share access to a common collection. The queue does not permit null elements.

There are several ways to design a thread-safe queue.
First: synchronize access to an ordinary queue with synchronized, just like Vector or Collections.synchronizedList/synchronizedCollection.
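As an illustration of this first approach, here is a minimal sketch (the class name and the ArrayDeque delegate are my own choices, not from the JDK): every operation funnels through one intrinsic lock.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Approach 1 sketch: guard every operation with the same monitor.
// Readers and writers all contend for one lock, which caps throughput.
class SynchronizedQueue<E> {
    private final Queue<E> delegate = new ArrayDeque<E>();

    public synchronized boolean offer(E e) {
        return delegate.offer(e);
    }

    public synchronized E poll() {
        return delegate.poll();
    }

    public synchronized int size() {
        return delegate.size();
    }
}
```

This is safe but serializes all access, which is exactly the throughput problem just described.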

Obviously this does not make a good concurrent queue: every operation contends for a single lock, causing a sharp drop in throughput.
Second: use a lock. One refinement is to use ReentrantReadWriteLock instead of ReentrantLock to increase read throughput.

However, a ReentrantReadWriteLock-based implementation is clearly more complex and may cause problems. It is also not a general approach, because ReentrantReadWriteLock only pays off in scenarios where reads greatly outnumber writes.

ReentrantLock by itself is, of course, a sound implementation, and combining it with Condition makes blocking behavior easy to implement; this will be analyzed later when BlockingQueue is introduced.
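A minimal sketch of that ReentrantLock-plus-Condition combination (class and method names are illustrative, not the JDK's BlockingQueue): take() blocks until a producer supplies an element.

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Approach 2 sketch: one exclusive lock plus a Condition for blocking takes.
class LockBasedQueue<E> {
    private final Queue<E> items = new ArrayDeque<E>();
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition();

    public void put(E e) {
        lock.lock();
        try {
            items.offer(e);
            notEmpty.signal(); // wake one waiting consumer
        } finally {
            lock.unlock();
        }
    }

    public E take() {
        lock.lock();
        try {
            while (items.isEmpty())
                notEmpty.awaitUninterruptibly(); // release the lock and wait for a producer
            return items.poll();
        } finally {
            lock.unlock();
        }
    }
}
```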
Third: use CAS. Although lock implementations also use CAS internally, they do so indirectly, and acquiring a lock may suspend the thread.

A good concurrent queue uses a non-blocking algorithm to achieve the maximum throughput.

ConcurrentLinkedQueue adopts the third strategy.

It is based on the algorithm in reference 1 (http://www.cs.rochester.edu/u/scott/papers/1996_PODC_queues.pdf).
Completing queue operations with a non-blocking algorithm requires a "loop attempt": the operation is performed in a loop until it succeeds, and a failed attempt is simply retried.
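The "loop attempt" pattern is easiest to see on a simpler structure first; the following lock-free stack push (my own illustration, not ConcurrentLinkedQueue code) shows the read-compute-CAS-retry cycle.

```java
import java.util.concurrent.atomic.AtomicReference;

// The "loop attempt" pattern: read the current state, build a successor,
// and CAS it in; if another thread won the race, loop and try again.
class CasLoopStack {
    static final class Cell {
        final int value;
        final Cell next;
        Cell(int value, Cell next) { this.value = value; this.next = next; }
    }

    private final AtomicReference<Cell> top = new AtomicReference<Cell>();

    public void push(int value) {
        for (;;) {                                  // retry until the CAS succeeds
            Cell oldTop = top.get();
            Cell newTop = new Cell(value, oldTop);
            if (top.compareAndSet(oldTop, newTop))
                return;                             // success: no lock was taken
            // failure: top changed underneath us; re-read and retry
        }
    }

    public Integer peek() {
        Cell t = top.get();
        return t == null ? null : t.value;
    }
}
```

No thread is ever suspended; a losing thread simply retries, which is what makes the algorithm non-blocking.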

The following is an in-depth analysis of its characteristics.

First, we will introduce the data structure of ConcurrentLinkedQueue.

ConcurrentLinkedQueue has only two fields: the head node and the tail node. A node, besides holding the queue element item, also holds a reference to the next node.

It seems that the entire data structure is relatively simple. However, you need to note the following points:

1. All fields (head/tail/item/next) are volatile. Because ConcurrentLinkedQueue is non-blocking, only volatile can make a write to a variable visible to subsequent reads (this is guaranteed by the happens-before rule) and prevent the instructions from being reordered.

2. All modifications to these fields are atomic operations, guaranteed by AtomicReferenceFieldUpdater, which was introduced with the atomic classes. It ensures that updates to the fields are atomic where necessary.
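A minimal sketch of how AtomicReferenceFieldUpdater turns a plain volatile field into a CAS target (the Holder class is a made-up example; the queue builds its casNext/casHead/casTail helpers in the same way):

```java
import java.util.concurrent.atomic.AtomicReferenceFieldUpdater;

// The updater reflects on a named volatile field and offers CAS on it,
// avoiding the cost of wrapping the field in a separate AtomicReference object.
class Holder {
    volatile String value;

    private static final AtomicReferenceFieldUpdater<Holder, String> VALUE_UPDATER =
            AtomicReferenceFieldUpdater.newUpdater(Holder.class, String.class, "value");

    boolean casValue(String expect, String update) {
        return VALUE_UPDATER.compareAndSet(this, expect, update);
    }
}
```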

3. Because each node in the queue references only the next node, the queue is unidirectional. Per the FIFO property, dequeuing happens at the head and enqueuing at the tail. The head holds the element that has been in the queue the longest, and the tail the element enqueued most recently.

4. No counter tracks the queue length, so the queue is unbounded and the length observed at any given moment is not fixed. Obtaining it requires traversing the entire queue, and the resulting count may be inaccurate.

5. Initially, both the head and the tail point to an empty node, not to null. This simplifies the operations: there is no need to check whether head/tail is null each time. The head, however, never acts as a node that holds an element, whereas the tail holds an element whenever it is not equal to the head. That is, head.item is always null, but tail.item is not necessarily null (if head != tail, then tail.item != null). This can be seen from the initialization of ConcurrentLinkedQueue below. The head node is also called a "pseudo node": it is not a real node, just a marker, like the '\0' after a character array in C, which only marks the end and is not part of the actual character data.

private transient volatile Node<E> head = new Node<E>(null, null);

private transient volatile Node<E> tail = head;
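The node those two fields point at can be sketched as follows, modeled on the JDK 6 source (details vary across JDK versions, so treat this as an approximation): item and next are volatile, and the CAS helpers are built on AtomicReferenceFieldUpdater as described in point 2.

```java
import java.util.concurrent.atomic.AtomicReferenceFieldUpdater;

// Sketch of the queue's node: a payload plus a forward link, both volatile,
// with CAS helpers built on field updaters (cf. point 2 above).
class Node<E> {
    private volatile E item;
    private volatile Node<E> next;

    @SuppressWarnings("rawtypes")
    private static final AtomicReferenceFieldUpdater<Node, Node> NEXT_UPDATER =
            AtomicReferenceFieldUpdater.newUpdater(Node.class, Node.class, "next");
    @SuppressWarnings("rawtypes")
    private static final AtomicReferenceFieldUpdater<Node, Object> ITEM_UPDATER =
            AtomicReferenceFieldUpdater.newUpdater(Node.class, Object.class, "item");

    Node(E item, Node<E> next) {
        this.item = item;
        this.next = next;
    }

    E getItem()       { return item; }
    void setItem(E v) { item = v; }
    Node<E> getNext() { return next; }

    boolean casItem(E expect, E update) {
        return ITEM_UPDATER.compareAndSet(this, expect, update);
    }
    boolean casNext(Node<E> expect, Node<E> update) {
        return NEXT_UPDATER.compareAndSet(this, expect, update);
    }
}
```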

With the above five points, it is much easier to explain the relevant API operations.

The differences between the add/offer/remove/poll/element/peek methods were listed in the previous section, so they are not repeated here.

Listing 1 Queue operations
public boolean offer(E e) {
    if (e == null) throw new NullPointerException();
    Node<E> n = new Node<E>(e, null);
    for (;;) {
        Node<E> t = tail;
        Node<E> s = t.getNext();
        if (t == tail) {
            if (s == null) {
                if (t.casNext(s, n)) {
                    casTail(t, n);
                    return true;
                }
            } else {
                casTail(t, s);
            }
        }
    }
}

Listing 1 describes the enqueue process, which works as follows.

1. Read the tail node t and its next node s. If the tail has not been modified by another thread, that is, t == tail, perform step 2; otherwise, retry step 1.

2. If s is not null, meaning elements have already been linked after the tail node, advance the tail reference and retry step 1. Otherwise, perform step 3.

3. CAS the next reference of the tail node to the new node. If that succeeds, try to advance the tail to the new node and return true; otherwise, retry step 1.

Step 3 shows that the next reference of the tail node is updated before the tail reference itself is moved; this is why step 2 can observe a tail node whose next is not null.

Note in particular that the volatile tail is first copied into the local variables t and s: on the one hand this fixes a snapshot of a variable that may change at any moment, and on the other hand it reduces the performance cost of repeated volatile reads.

 

The dequeue process described in Listing 2 is similar to the enqueue process.

The head node marks the start of the queue and reduces null-pointer checks, so the head is always a non-null node whose item is null. That is, head != null and head.item == null always hold, and the first real element is therefore head.next. Once head is advanced to head.next, the item of the new head is set to null. As for the previous head h, h.item == null and h.next is the new head; since nothing references h any longer, it will be reclaimed by GC. That is the entire dequeue process.

Listing 2 Dequeue operation
public E poll() {
    for (;;) {
        Node<E> h = head;
        Node<E> t = tail;
        Node<E> first = h.getNext();
        if (h == head) {
            if (h == t) {
                if (first == null)
                    return null;
                else
                    casTail(t, first);
            } else if (casHead(h, first)) {
                E item = first.getItem();
                if (item != null) {
                    first.setItem(null);
                    return item;
                }
                // else skip over deleted item, continue loop
            }
        }
    }
}
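The enqueue and dequeue behavior above can be observed directly on the real class:

```java
import java.util.concurrent.ConcurrentLinkedQueue;

public class FifoDemo {
    public static void main(String[] args) {
        ConcurrentLinkedQueue<Integer> queue = new ConcurrentLinkedQueue<Integer>();
        queue.offer(1);                   // linked in at the tail
        queue.offer(2);
        System.out.println(queue.poll()); // 1: the head holds the oldest element
        System.out.println(queue.poll()); // 2
        System.out.println(queue.poll()); // null: the queue is now empty
    }
}
```

Note that poll() returns null immediately on an empty queue instead of waiting; blocking behavior is what BlockingQueue adds.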

 

In addition, as Listing 3 shows, there is no counter recording the queue size, so the size can only be obtained by traversing the entire queue from head to tail; obviously this is expensive. Therefore ConcurrentLinkedQueue usually needs to be paired with an AtomicInteger to track the queue size. The BlockingQueue implementations introduced later use this idea.

Listing 3 Traversing the queue to compute its size
public int size() {
    int count = 0;
    for (Node<E> p = first(); p != null; p = p.getNext()) {
        if (p.getItem() != null) {
            // Collection.size() spec says to max out
            if (++count == Integer.MAX_VALUE)
                break;
        }
    }
    return count;
}
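The AtomicInteger pairing suggested above might look like this sketch (CountedQueue is an illustrative name; the count is maintained by the wrapper, not by the queue itself, so it is only approximately in step with the queue under concurrent updates):

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;

// Keep a counter next to the queue so size() is O(1) instead of a traversal.
class CountedQueue<E> {
    private final ConcurrentLinkedQueue<E> queue = new ConcurrentLinkedQueue<E>();
    private final AtomicInteger count = new AtomicInteger(0);

    public boolean offer(E e) {
        boolean added = queue.offer(e); // always true: the queue is unbounded
        if (added)
            count.incrementAndGet();
        return added;
    }

    public E poll() {
        E item = queue.poll();
        if (item != null)
            count.decrementAndGet();
        return item;
    }

    public int size() {
        return count.get(); // O(1), no traversal of the nodes
    }
}
```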
Note 1: For more on the ConcurrentLinkedQueue principle, see "ConcurrentLinkedQueue principle (Part 2)".
Note 2: For more on the ConcurrentLinkedQueue API, see "ConcurrentLinkedQueue".
