Blocking queue BlockingQueue usage (repost)

Source: Internet
Author: User

In a multithreaded environment, data can easily be shared through queues. The classic "producer/consumer" model is a good example: a queue lets the two sides exchange data conveniently.

Suppose we have a number of producer threads and a number of consumer threads. If the producers pass the data they prepare to the consumers through a queue, the data-sharing problem between them is easy to solve. But what if, during some period of time, the producers and consumers process data at different speeds? Ideally, if the producers produce data faster than the consumers can consume it, then once the backlog grows to a certain size the producers should pause (their threads should block) and wait for the consumers to work through the accumulated data, and vice versa.

However, before the concurrent package was released, every programmer had to manage these details in a multithreaded environment, especially with efficiency and thread safety in mind, and that added considerable complexity to our programs. Fortunately, the powerful java.util.concurrent package arrived and brought us a robust BlockingQueue. (In the multithreaded world, "blocking" means that under certain conditions a thread is suspended, and once the condition is met the suspended thread is automatically woken up.)

Two common blocking scenarios for BlockingQueue:
1. When the queue is empty, all threads on the consumer side are automatically blocked (suspended) until data is put into the queue.
2. When the queue is full, all threads on the producer side are automatically blocked (suspended) until a slot becomes free in the queue, at which point the threads are automatically woken up.

This is why we need BlockingQueue in a multithreaded environment. As users of BlockingQueue, we no longer need to care about when to block threads or when to wake them up, because BlockingQueue handles all of that for us. A minimal producer/consumer sketch follows below.
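To make the two blocking scenarios concrete, here is a minimal producer/consumer sketch using an ArrayBlockingQueue (class and variable names are illustrative, not from the original article): put() blocks the producer when the queue is full, and take() blocks the consumer when the queue is empty.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumerDemo {
    public static void main(String[] args) {
        // Bounded queue with capacity 10: put() blocks when full, take() blocks when empty.
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(10);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) {
                    queue.put(i); // blocks automatically if the queue is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) {
                    Integer value = queue.take(); // blocks automatically if the queue is empty
                    System.out.println("consumed " + value);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}

Note that the blocking and waking happen entirely inside put() and take(); the application code never calls wait() or notify() itself.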


BlockingQueue members: a detailed introduction
1. ArrayBlockingQueue
An array-based blocking queue implementation. Internally, ArrayBlockingQueue maintains a fixed-length array to buffer the data objects in the queue; it is a commonly used blocking queue. In addition to the fixed-length array, ArrayBlockingQueue also keeps two integer variables that mark the positions of the head and the tail of the queue within the array.

ArrayBlockingQueue shares the same lock object between producers and consumers, which means the two sides cannot truly run in parallel. This is notably different from LinkedBlockingQueue; in principle, ArrayBlockingQueue could also use split locks so that producer and consumer operations run fully in parallel. Doug Lea probably did not do this because the read and write operations of ArrayBlockingQueue are already lightweight enough that introducing a separate locking mechanism would add complexity to the code without bringing any real performance benefit. Another notable difference between ArrayBlockingQueue and LinkedBlockingQueue is that the former does not create or destroy any extra object instances when inserting or removing elements, while the latter allocates an additional node object for each element. In a system that must handle large volumes of data efficiently and concurrently over a long period of time, this makes a measurable difference in GC impact. When creating an ArrayBlockingQueue we can also control whether its internal lock is fair; an unfair lock is used by default.
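As a rough illustration (a toy sketch, not the JDK source), the idea of a fixed array plus head/tail indices guarded by one shared lock can be written as follows; the single ReentrantLock is exactly why producers and consumers of an ArrayBlockingQueue cannot operate fully in parallel.

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Toy bounded queue: one fixed array, head/tail indices, one shared lock.
public class TinyArrayQueue<E> {
    private final Object[] items;
    private int head, tail, count;
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();

    public TinyArrayQueue(int capacity) {
        items = new Object[capacity];
    }

    public void put(E e) throws InterruptedException {
        lock.lock();
        try {
            while (count == items.length) notFull.await(); // block producer while full
            items[tail] = e;
            tail = (tail + 1) % items.length;
            count++;
            notEmpty.signal(); // wake one waiting consumer
        } finally {
            lock.unlock();
        }
    }

    @SuppressWarnings("unchecked")
    public E take() throws InterruptedException {
        lock.lock();
        try {
            while (count == 0) notEmpty.await(); // block consumer while empty
            E e = (E) items[head];
            items[head] = null;
            head = (head + 1) % items.length;
            count--;
            notFull.signal(); // wake one waiting producer
            return e;
        } finally {
            lock.unlock();
        }
    }
}

The real class additionally lets you choose a fair lock at construction time, e.g. new ArrayBlockingQueue<>(1024, true); by default the lock is unfair.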

2. LinkedBlockingQueue
A linked-list-based blocking queue. Like ArrayBlockingQueue, it maintains an internal data buffer (here a linked list). When a producer puts a piece of data into the queue, the queue takes the data from the producer, caches it internally, and the producer returns immediately; only when the buffer reaches the maximum capacity (which can be specified through the LinkedBlockingQueue constructor) does the producer block, until a consumer consumes a piece of data from the queue and a producer thread is woken up. The consumer side works on the same principle, in reverse. LinkedBlockingQueue handles concurrent data efficiently because it uses separate locks for the producer side and the consumer side to control data synchronization, which means producers and consumers can operate on the queue in parallel under high concurrency, improving the concurrency performance of the whole queue.

As a developer, note that if you construct a LinkedBlockingQueue without specifying its capacity, LinkedBlockingQueue defaults to a capacity that is practically unlimited (Integer.MAX_VALUE). In that case, if producers are faster than consumers, the system may run out of memory long before the queue ever fills up and blocks.
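A small sketch of the two ways to construct a LinkedBlockingQueue (the capacity values are arbitrary examples); the bounded form is the safer choice whenever producers might outrun consumers.

import java.util.concurrent.LinkedBlockingQueue;

public class LinkedQueueCapacityDemo {
    public static void main(String[] args) {
        // No-arg constructor: capacity defaults to Integer.MAX_VALUE (effectively unbounded).
        LinkedBlockingQueue<String> unbounded = new LinkedBlockingQueue<>();

        // Explicit bound: producers block once 10_000 elements are buffered.
        LinkedBlockingQueue<String> bounded = new LinkedBlockingQueue<>(10_000);

        System.out.println(unbounded.remainingCapacity()); // 2147483647
        System.out.println(bounded.remainingCapacity());   // 10000
    }
}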

ArrayBlockingQueue and LinkedBlockingQueue are the two most common and most frequently used blocking queues. In general, these two classes are sufficient for handling producer/consumer problems between multiple threads.

Differences between ArrayBlockingQueue and LinkedBlockingQueue
1. The locks in the two implementations are different
In ArrayBlockingQueue the lock is not split: production and consumption use the same lock.
In LinkedBlockingQueue the locks are split: production uses putLock and consumption uses takeLock.

2. The operations during production and consumption are different
ArrayBlockingQueue inserts or removes the element objects directly during production and consumption.
LinkedBlockingQueue wraps each element in a Node<E> before inserting or removing it, which affects performance.

3. Queue size initialization is different
ArrayBlockingQueue requires the queue size to be specified.
LinkedBlockingQueue allows the queue size to be omitted, in which case it defaults to Integer.MAX_VALUE.

Attention:
1. When using LinkedBlockingQueue with its default size, if the production speed is greater than the consumption speed, a memory overflow may occur.
2. When queueing 1,000,000 simple strings through ArrayBlockingQueue and LinkedBlockingQueue respectively, LinkedBlockingQueue takes roughly 10 times as long as ArrayBlockingQueue: about 1,500 milliseconds for LinkedBlockingQueue versus about 150 milliseconds for ArrayBlockingQueue.
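The exact figures above depend on the source's test machine; a rough micro-benchmark sketch along the same lines (one producer, one consumer, 1,000,000 strings, an arbitrary capacity of 1024) might look like this, with results varying from machine to machine.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class QueueBenchmark {
    // Push n elements through the queue with one producer and one consumer; return elapsed ms.
    static long time(BlockingQueue<String> q, int n) throws InterruptedException {
        long start = System.currentTimeMillis();
        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < n; i++) q.put("x");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < n; i++) q.take();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args) throws InterruptedException {
        int n = 1_000_000;
        System.out.println("ArrayBlockingQueue:  " + time(new ArrayBlockingQueue<>(1024), n) + " ms");
        System.out.println("LinkedBlockingQueue: " + time(new LinkedBlockingQueue<>(1024), n) + " ms");
    }
}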

http://www.cnblogs.com/linjiqin/p/5130559.html
