What is a blocking queue?
A blocking queue (BlockingQueue) is a queue that supports two additional operations: when the queue is empty, a thread fetching an element waits until the queue becomes non-empty; when the queue is full, a thread storing an element waits until space becomes available. Blocking queues are often used in producer-consumer scenarios, where the producer is the thread that adds elements to the queue and the consumer is the thread that takes elements from it. The blocking queue is the container in which the producer stores elements and from which the consumer takes them.
The blocking queue provides four ways of handling operations that cannot complete immediately:
| Method \ Handling mode | Throws exception | Returns special value | Blocks | Times out |
| --- | --- | --- | --- | --- |
| Insert | add(e) | offer(e) | put(e) | offer(e, time, unit) |
| Remove | remove() | poll() | take() | poll(time, unit) |
| Examine | element() | peek() | not available | not available |
- Throws an exception: when the blocking queue is full, inserting an element throws an IllegalStateException("Queue full"); when the queue is empty, fetching an element throws a NoSuchElementException.
- Returns a special value: the insert method reports whether it succeeded, returning true on success; the remove method takes an element from the queue and returns null if there is none.
- Blocks indefinitely: when the blocking queue is full and a producer thread tries to put an element, the queue blocks the producer thread until space becomes available or the thread responds to an interrupt and exits. When the queue is empty and a consumer thread tries to take an element, the queue blocks the consumer thread until the queue becomes non-empty.
- Times out: when the blocking queue is full, the queue blocks the producer thread only for a bounded period; once the timeout elapses, the producer thread gives up and returns.
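As a minimal sketch of these four handling modes (the class name and the capacity of 1 are my own choices, not from the article), consider a full and then an empty ArrayBlockingQueue:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class HandlingModesDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> q = new ArrayBlockingQueue<String>(1);

        // Insert, special-value style: offer returns false when the queue is full
        System.out.println(q.offer("a"));            // true: queue had room
        System.out.println(q.offer("b"));            // false: queue is full

        // Insert, exception style: add throws IllegalStateException("Queue full")
        try {
            q.add("b");
        } catch (IllegalStateException e) {
            System.out.println("add threw: " + e.getMessage());
        }

        // Insert, timeout style: waits up to 1 second, then gives up
        System.out.println(q.offer("b", 1, TimeUnit.SECONDS)); // false after ~1s

        // Remove, special-value style: poll returns the head, or null when empty
        System.out.println(q.poll());                // a
        System.out.println(q.poll());                // null: queue is empty

        // Remove, timeout style: waits up to 1 second for an element to arrive
        System.out.println(q.poll(1, TimeUnit.SECONDS)); // null after ~1s
    }
}
```

The blocking variants, put(e) and take(), would simply park the calling thread at the same points where offer and poll give up.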
In a multithreaded environment, queues make data sharing easy; the classic producer-consumer model is the standard example, where the two sides conveniently share data through a queue. Say we have some number of producer threads and some number of consumer threads. If the producers need to hand prepared data to the consumers, passing it through a queue neatly solves the sharing problem between them. But what if, over some period of time, the producers and consumers process data at mismatched speeds? Ideally, if the producers generate data faster than the consumers handle it, then once the backlog grows to a certain point the producers must pause (the producer threads block) and wait for the consumers to work through the accumulated data, and vice versa. Before the concurrent package was released, however, every programmer had to manage these details by hand, especially with efficiency and thread safety in mind, which added considerable complexity to our programs. Fortunately, the powerful java.util.concurrent package arrived, and it brought us the equally powerful BlockingQueue. (In the multithreaded realm, "blocking" means that under certain conditions a thread is suspended, and once the condition is met the suspended thread is automatically woken up.)
Detailed introduction to the BlockingQueue implementations
1. ArrayBlockingQueue
An array-based blocking queue implementation. Internally, ArrayBlockingQueue maintains a fixed-length array to buffer the data objects in the queue; this is a commonly used blocking queue. Besides the fixed-length array, ArrayBlockingQueue also keeps two integer variables that mark the positions of the queue's head and tail within the array.
In ArrayBlockingQueue, producers and consumers share the same lock object, which means the two can never truly run in parallel; this is a notable difference from LinkedBlockingQueue. In principle, ArrayBlockingQueue could adopt a split-lock design, allowing producer and consumer operations to run fully in parallel. Doug Lea did not do this, perhaps because ArrayBlockingQueue's data reads and writes are already lightweight enough that introducing a separate locking mechanism would add complexity to the code without any real performance benefit. Another notable difference between ArrayBlockingQueue and LinkedBlockingQueue is that the former does not create or destroy any extra object instances when inserting or removing elements, while the latter allocates an additional Node object for each element. In a system that must handle large volumes of data efficiently and concurrently over a long period, this makes a real difference in GC impact. When creating an ArrayBlockingQueue, we can also control whether the object's internal lock is fair; by default it is unfair.
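The fairness choice mentioned above is made at construction time; a minimal sketch (the capacity of 10 is arbitrary):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class FairnessDemo {
    public static void main(String[] args) {
        // Default: the internal lock is unfair (better throughput)
        BlockingQueue<String> unfair = new ArrayBlockingQueue<String>(10);

        // Pass true for a fair lock: blocked threads acquire it in FIFO order,
        // at some cost in throughput
        BlockingQueue<String> fair = new ArrayBlockingQueue<String>(10, true);

        System.out.println(unfair.remainingCapacity()); // 10
        System.out.println(fair.remainingCapacity());   // 10
    }
}
```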
2. LinkedBlockingQueue
A blocking queue based on a linked list. Like ArrayBlockingQueue, it maintains an internal data buffer (here, a linked list). When a producer puts an item into the queue, the queue takes the data from the producer, caches it internally, and the producer returns immediately; only when the buffer reaches its maximum capacity (which can be specified via a LinkedBlockingQueue constructor) does the producer block, until a consumer removes an item from the queue and the producer thread is woken up. The consumer side works on the same principle in reverse. LinkedBlockingQueue handles concurrent data efficiently because it uses separate locks for the producer side and the consumer side to control data synchronization, which means producers and consumers can operate on the queue in parallel under high concurrency, improving the concurrency performance of the whole queue.
As a developer, note that if you construct a LinkedBlockingQueue without specifying its capacity, it defaults to an effectively infinite capacity (Integer.MAX_VALUE). In that case, if the producer outpaces the consumer, the system's memory may be exhausted long before the queue ever fills up and blocks.
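This pitfall is easy to see from the constructors themselves (a small sketch; the bound of 1000 is an arbitrary example):

```java
import java.util.concurrent.LinkedBlockingQueue;

public class CapacityDemo {
    public static void main(String[] args) {
        // No-arg constructor: capacity defaults to Integer.MAX_VALUE, so puts
        // essentially never block and a fast producer can exhaust memory
        LinkedBlockingQueue<String> unbounded = new LinkedBlockingQueue<String>();
        System.out.println(unbounded.remainingCapacity()); // 2147483647

        // Safer: give the queue an explicit bound so a fast producer blocks
        LinkedBlockingQueue<String> bounded = new LinkedBlockingQueue<String>(1000);
        System.out.println(bounded.remainingCapacity());   // 1000
    }
}
```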
ArrayBlockingQueue and LinkedBlockingQueue are the two most common and most widely used blocking queues; in general, these two classes are sufficient for handling producer-consumer problems between multiple threads.
Example:
Producers
```java
package cn.com.example.concurrent.blockingqueue;

import java.util.Random;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Created by Jack on 2017/1/24.
 */
public class Producer implements Runnable {

    private volatile boolean isRunning = true;
    private BlockingQueue<String> queue;
    private static AtomicInteger count = new AtomicInteger();
    private static final int DEFAULT_RANGE_FOR_SLEEP = 1000;

    public Producer(BlockingQueue<String> queue) {
        this.queue = queue;
    }

    public void run() {
        String data = null;
        Random r = new Random();
        System.out.println("Start the producer thread!");
        try {
            while (isRunning) {
                System.out.println("Producing data...");
                Thread.sleep(r.nextInt(DEFAULT_RANGE_FOR_SLEEP));
                data = "data:" + count.incrementAndGet();
                System.out.println("Putting data: " + data + " into the queue...");
                if (!queue.offer(data, 2, TimeUnit.SECONDS)) {
                    System.out.println("Failed to put data: " + data);
                }
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
            Thread.currentThread().interrupt();
        } finally {
            System.out.println("Exit producer thread!");
        }
    }

    public void stop() {
        isRunning = false;
    }
}
```
Consumers
```java
package cn.com.example.concurrent.blockingqueue;

import java.util.Random;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

/**
 * Created by Jack on 2017/1/24.
 */
public class Consumer implements Runnable {

    private BlockingQueue<String> queue;
    private static final int DEFAULT_RANGE_FOR_SLEEP = 1000;

    public Consumer(BlockingQueue<String> queue) {
        this.queue = queue;
    }

    public void run() {
        System.out.println("Start consumer thread!");
        Random r = new Random();
        boolean isRunning = true;
        try {
            while (isRunning) {
                System.out.println("Fetching data from queue...");
                String data = queue.poll(2, TimeUnit.SECONDS);
                if (null != data) {
                    System.out.println("Got data: " + data);
                    System.out.println("Consuming data: " + data);
                    Thread.sleep(r.nextInt(DEFAULT_RANGE_FOR_SLEEP));
                } else {
                    // No data for more than 2s: assume all producer threads
                    // have exited, so the consumer exits automatically.
                    isRunning = false;
                }
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
            Thread.currentThread().interrupt();
        } finally {
            System.out.println("Exit the consumer thread!");
        }
    }
}
```
Test
```java
package cn.com.example.concurrent.blockingqueue;

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

/**
 * Created by Jack on 2017/1/24.
 */
public class BlockingQueueTest {

    public static void main(String[] args) throws InterruptedException {
        // Declare a buffer queue with a capacity of 10
        BlockingQueue<String> queue = new LinkedBlockingQueue<String>(10);
        Producer producer1 = new Producer(queue);
        Producer producer2 = new Producer(queue);
        Producer producer3 = new Producer(queue);
        Consumer consumer = new Consumer(queue);
        // Use an Executors thread pool
        ExecutorService service = Executors.newCachedThreadPool();
        // Start the threads
        service.execute(producer1);
        service.execute(producer2);
        service.execute(producer3);
        service.execute(consumer);
        // Run for 10s
        Thread.sleep(10 * 1000);
        producer1.stop();
        producer2.stop();
        producer3.stop();
        Thread.sleep(2000);
        // Shut down the executor
        service.shutdown();
    }
}
```
Results
```
Start consumer thread!
Fetching data from queue...
Start the producer thread!
Start the producer thread!
Producing data...
Start the producer thread!
Producing data...
Producing data...
Putting data: data:1 into the queue...
Producing data...
Got data: data:1
Consuming data: data:1
Putting data: data:2 into the queue...
Producing data...
Fetching data from queue...
Got data: data:2
Consuming data: data:2
Putting data: data:3 into the queue...
Producing data...
Putting data: data:4 into the queue...
Producing data...
Putting data: data:5 into the queue...
Producing data...
Putting data: data:6 into the queue...
Producing data...
Putting data: data:7 into the queue...
Producing data...
Fetching data from queue...
Got data: data:3
Consuming data: data:3
Putting data: data:8 into the queue...
Producing data...
Putting data: data:9 into the queue...
Producing data...
Putting data: data:10 into the queue...
Producing data...
Fetching data from queue...
Got data: data:4
Consuming data: data:4
Putting data: data:11 into the queue...
Producing data...
Putting data: data:12 into the queue...
Producing data...
Putting data: data:13 into the queue...
Producing data...
Putting data: data:14 into the queue...
Producing data...
Putting data: data:15 into the queue...
```
3. DelayQueue
An element in a DelayQueue can be fetched from the queue only when its specified delay has expired. DelayQueue is an unbounded queue, so the operations that insert data (producers) never block; only the operations that fetch data (consumers) can block.
Usage scenarios:
DelayQueue fits fewer scenarios, but the ones it does fit are quite ingenious; a common example is using a DelayQueue to manage a queue of connections that have timed out without responding.
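Elements of a DelayQueue must implement the Delayed interface. As a minimal sketch (DelayedTask and its field names are my own, not from the article), a task becomes visible to take() only after its delay elapses:

```java
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

public class DelayQueueDemo {
    // A hypothetical timed task: visible to consumers only after its delay elapses
    static class DelayedTask implements Delayed {
        final String name;
        final long expireAtNanos;

        DelayedTask(String name, long delayMillis) {
            this.name = name;
            this.expireAtNanos = System.nanoTime()
                    + TimeUnit.MILLISECONDS.toNanos(delayMillis);
        }

        // Remaining delay; the queue's head is ready once this reaches <= 0
        public long getDelay(TimeUnit unit) {
            return unit.convert(expireAtNanos - System.nanoTime(), TimeUnit.NANOSECONDS);
        }

        // Order elements so the soonest-to-expire task sits at the head
        public int compareTo(Delayed other) {
            return Long.compare(getDelay(TimeUnit.NANOSECONDS),
                                other.getDelay(TimeUnit.NANOSECONDS));
        }
    }

    public static void main(String[] args) throws InterruptedException {
        DelayQueue<DelayedTask> queue = new DelayQueue<DelayedTask>();
        queue.put(new DelayedTask("long", 500));  // put never blocks: unbounded queue
        queue.put(new DelayedTask("short", 100));

        // poll() returns null because no element's delay has expired yet
        System.out.println(queue.poll());

        // take() blocks until the shortest delay expires, then returns that element
        System.out.println(queue.take().name);  // short
        System.out.println(queue.take().name);  // long
    }
}
```

A connection-timeout manager would work the same way: wrap each connection in a Delayed element whose delay is its deadline, and a single consumer thread take()s and closes whatever expires.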
4. PriorityBlockingQueue
A priority-based blocking queue (priority is determined by the Comparator passed to the constructor). Note that PriorityBlockingQueue never blocks data producers; it only blocks consumers when there is no data to consume. You must therefore take particular care that producers do not generate data faster than consumers can consume it, or over time all available heap space will eventually be exhausted. Internally, PriorityBlockingQueue synchronizes threads with a fair lock.
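A minimal sketch of a Comparator-driven PriorityBlockingQueue (the length-based ordering and the sample strings are my own illustration):

```java
import java.util.Comparator;
import java.util.concurrent.PriorityBlockingQueue;

public class PriorityDemo {
    public static void main(String[] args) {
        // Order by string length, shortest first. The initial capacity of 11
        // only pre-sizes the internal array: the queue itself is unbounded,
        // so inserts never block.
        PriorityBlockingQueue<String> queue = new PriorityBlockingQueue<String>(
                11, Comparator.comparingInt(String::length));

        queue.put("medium");
        queue.put("a");
        queue.put("long-ish");

        // poll() drains in priority order, not insertion order
        System.out.println(queue.poll()); // a
        System.out.println(queue.poll()); // medium
        System.out.println(queue.poll()); // long-ish
    }
}
```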
5. SynchronousQueue
A SynchronousQueue can be declared in two different ways, and they behave differently. The difference between fair mode and unfair mode:
In fair mode, SynchronousQueue uses a fair lock together with a FIFO queue to manage the waiting producers and consumers, giving the system an overall fairness guarantee.
In unfair mode (the SynchronousQueue default), SynchronousQueue uses an unfair lock together with a LIFO stack to manage the waiting producers and consumers. In this mode, if producers and consumers differ in processing speed, starvation is likely: some producers' or consumers' data may never be handled.
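The mode is chosen by the constructor's boolean argument; a small sketch (class and variable names are my own) showing both the fairness flag and the hand-off behavior that makes SynchronousQueue unusual:

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.TimeUnit;

public class SynchronousQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        // true selects fair (FIFO) mode; the no-arg constructor is the unfair default
        final SynchronousQueue<String> queue = new SynchronousQueue<String>(true);

        // A SynchronousQueue holds no elements: offer fails unless a taker is waiting
        System.out.println(queue.offer("x")); // false: nobody is waiting to take

        // A hand-off succeeds only when a producer and a consumer meet
        Thread consumer = new Thread(new Runnable() {
            public void run() {
                try {
                    System.out.println("took: " + queue.take());
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        consumer.start();

        // Wait up to 1s for the consumer to arrive, then hand the element over
        System.out.println(queue.offer("y", 1, TimeUnit.SECONDS)); // true
        consumer.join();
    }
}
```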
BlockingQueue not only implements the full functionality of a basic queue, but also automatically manages waiting and waking between multiple threads in a multithreaded environment, letting programmers ignore those details and focus on higher-level functionality.