Java Queue Basics
Queue: a queue is a first-in, first-out (FIFO) data structure.
offer vs. add: Some queues are capacity-bounded, so an attempt to add a new element to a full queue is rejected. This is where offer() comes in: where add() throws an unchecked exception (IllegalStateException) on a full queue, offer() simply returns false.
poll vs. remove: Both remove() and poll() remove and return the first element (the head) of the queue. remove() behaves like its Collection-interface counterpart and throws an exception (NoSuchElementException) when the queue is empty, while poll() simply returns null. poll() is therefore better suited to situations where an empty queue is a normal condition rather than an error.
peek vs. element: Both element() and peek() query the element at the head of the queue without removing it. As with remove(), element() throws an exception (NoSuchElementException) when the queue is empty, while peek() returns null.
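The three method pairs above can be contrasted in a minimal sketch; a capacity-1 ArrayBlockingQueue is used here (an illustrative choice, not from the text) so the full-queue behavior is visible:

```java
import java.util.NoSuchElementException;
import java.util.Queue;
import java.util.concurrent.ArrayBlockingQueue;

public class QueueMethodPairs {
    public static void main(String[] args) {
        // Capacity-bounded queue: makes the add/offer difference observable.
        Queue<String> q = new ArrayBlockingQueue<>(1);

        q.add("first");                        // succeeds, queue now full
        System.out.println(q.offer("second")); // false: full queue, no exception
        try {
            q.add("second");                   // full queue: throws
        } catch (IllegalStateException e) {
            System.out.println("add() threw: " + e);
        }

        System.out.println(q.peek());    // "first" (queries head, does not remove)
        System.out.println(q.element()); // "first"
        System.out.println(q.poll());    // "first" (removes head)

        System.out.println(q.poll());    // null: empty queue, no exception
        try {
            q.remove();                  // empty queue: throws
        } catch (NoSuchElementException e) {
            System.out.println("remove() threw: " + e);
        }
    }
}
```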
Common non-blocking queues
ArrayDeque (array-based double-ended queue)
PriorityQueue (priority queue)
ConcurrentLinkedQueue (linked-list-based concurrent queue)
PriorityQueue
This class essentially maintains an ordered list. Elements added to the queue are positioned according to their natural ordering (implemented via java.util.Comparable) or according to a java.util.Comparator implementation passed to the constructor.
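Both ordering options can be sketched briefly (the element values here are illustrative):

```java
import java.util.Comparator;
import java.util.PriorityQueue;

public class PriorityQueueDemo {
    public static void main(String[] args) {
        // Natural ordering (Comparable): the smallest element is always at the head.
        PriorityQueue<Integer> natural = new PriorityQueue<>();
        natural.offer(5);
        natural.offer(1);
        natural.offer(3);
        System.out.println(natural.poll()); // 1

        // Custom ordering: a Comparator passed to the constructor decides priority.
        PriorityQueue<String> byLength =
                new PriorityQueue<>(Comparator.comparingInt(String::length));
        byLength.offer("three");
        byLength.offer("a");
        byLength.offer("go");
        System.out.println(byLength.poll()); // "a" (shortest string first)
    }
}
```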
ConcurrentLinkedQueue
A thread-safe queue based on linked nodes. Concurrent access requires no external synchronization. Because elements are added at the tail and removed from the head, ConcurrentLinkedQueue works well as a shared collection as long as you do not need to know the queue's size: computing the size is slow, since it requires traversing the entire queue.
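A small sketch of lock-free shared access (the element counts are arbitrary), including the size() caveat from the paragraph above:

```java
import java.util.concurrent.ConcurrentLinkedQueue;

public class ConcurrentLinkedQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentLinkedQueue<Integer> q = new ConcurrentLinkedQueue<>();

        // Two producers share the queue with no explicit synchronization.
        Thread p1 = new Thread(() -> { for (int i = 0; i < 1000; i++) q.offer(i); });
        Thread p2 = new Thread(() -> { for (int i = 0; i < 1000; i++) q.offer(i); });
        p1.start();
        p2.start();
        p1.join();
        p2.join();

        // size() is O(n): it traverses the whole linked structure,
        // so prefer isEmpty() when you only need an emptiness check.
        System.out.println(q.size());    // 2000
        System.out.println(q.isEmpty()); // false
    }
}
```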
Common blocking queues: BlockingQueue
ArrayBlockingQueue and LinkedBlockingQueue are the two most common and most frequently used blocking queues; in general, these two classes are sufficient for handling producer-consumer problems between multiple threads.
DelayQueue (delayed blocking queue; implements the BlockingQueue interface)
ArrayBlockingQueue (array-based concurrent blocking queue)
LinkedBlockingQueue (linked-list-based FIFO blocking queue)
LinkedBlockingDeque (linked-list-based FIFO double-ended blocking queue)
PriorityBlockingQueue (unbounded blocking queue with priority)
SynchronousQueue (concurrent synchronous blocking queue)
BlockingQueue's core methods:
Putting data:
offer(anObject): adds anObject to the BlockingQueue if possible, i.e. returns true if the queue can accommodate it, otherwise returns false. (This method never blocks the calling thread.)
offer(E o, long timeout, TimeUnit unit): lets you set a waiting time; if the element cannot be enqueued within the specified time, the call returns false.
put(anObject): adds anObject to the BlockingQueue; if the queue has no room, the calling thread blocks until space becomes available.
Getting data:
poll(): removes and returns the object at the head of the queue, or returns null immediately if the queue is empty;
poll(long timeout, TimeUnit unit): removes and returns the head of the queue; if data becomes available within the specified time it is returned immediately, otherwise the call times out and returns null;
take(): removes and returns the head of the queue; if the queue is empty, the calling thread blocks until new data is added;
drainTo(): transfers all available elements from the BlockingQueue at once (a variant also lets you limit the number of elements taken). This can improve retrieval efficiency, since the lock does not need to be acquired and released repeatedly for each element.
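The core methods above can be exercised in one short sketch; the capacity and timeout values are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class BlockingQueueMethodsDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> q = new ArrayBlockingQueue<>(2);

        q.put("a");                       // blocks only when the queue is full
        System.out.println(q.offer("b")); // true: room available
        System.out.println(q.offer("c")); // false: full, returns immediately
        // Timed offer: waits up to 100 ms for space, then gives up.
        System.out.println(q.offer("c", 100, TimeUnit.MILLISECONDS)); // false

        System.out.println(q.take());     // "a": blocks only when the queue is empty
        List<String> drained = new ArrayList<>();
        q.drainTo(drained);               // empties the queue in one locked operation
        System.out.println(drained);      // [b]
        // Timed poll on an empty queue: waits, then returns null.
        System.out.println(q.poll(100, TimeUnit.MILLISECONDS)); // null
    }
}
```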
1. ArrayBlockingQueue
An array-based blocking queue implementation. Internally, ArrayBlockingQueue maintains a fixed-length array to buffer the queue's data objects. It is a commonly used blocking queue; besides the fixed-length array, ArrayBlockingQueue also keeps two integer indices that mark the positions of the queue's head and tail within the array.
ArrayBlockingQueue shares a single lock object between producers and consumers, which means the two sides can never truly run in parallel. This is a notable difference from LinkedBlockingQueue: in principle, ArrayBlockingQueue could use split locks to allow fully parallel producer and consumer operations. Doug Lea did not do this, perhaps because ArrayBlockingQueue's data reads and writes are already lightweight enough that an independent locking mechanism would add complexity to the code without a real performance benefit. Another notable difference between ArrayBlockingQueue and LinkedBlockingQueue is that the former creates and destroys no extra object instances when inserting or removing elements, while the latter allocates an additional Node object for each insertion. In a system that must process large volumes of data efficiently and concurrently over a long period, this makes some difference to GC behavior. When creating an ArrayBlockingQueue, we can also control whether the object's internal lock is fair; a non-fair lock is used by default.
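A minimal producer-consumer sketch with ArrayBlockingQueue; the capacity of 3, the fair-lock flag, and the item counts are illustrative choices:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ArrayBlockingQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        // Capacity is fixed at construction; the second argument requests a
        // fair internal lock (the default is non-fair).
        BlockingQueue<Integer> q = new ArrayBlockingQueue<>(3, true);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    q.put(i); // blocks whenever the 3-slot array is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        for (int i = 0; i < 5; i++) {
            System.out.println(q.take()); // prints 0..4 in FIFO order
        }
        producer.join();
    }
}
```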
2. LinkedBlockingQueue
A linked-list-based blocking queue. Like ArrayBlockingQueue, it maintains an internal data buffer (here, a linked list). When a producer puts an element into the queue, the queue takes the element from the producer, caches it internally, and the producer returns immediately; only when the buffer reaches its maximum capacity (which can be specified via the LinkedBlockingQueue constructor) does the producer block, until a consumer consumes an element from the queue, at which point a producer thread is woken up. The consumer side works on the same principle. LinkedBlockingQueue can handle concurrent data efficiently because it uses separate locks for the producer and consumer sides, so producers and consumers can operate on the queue's data in parallel under high concurrency, improving the queue's overall throughput.
As a developer, note that if you construct a LinkedBlockingQueue without specifying its capacity, it defaults to an effectively infinite capacity (Integer.MAX_VALUE). In that case, if producers are faster than consumers, the system's memory may be exhausted long before the queue ever fills up and blocks.
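The capacity pitfall above is easy to demonstrate; the bound of 1024 is an arbitrary example:

```java
import java.util.concurrent.LinkedBlockingQueue;

public class LinkedBlockingQueueCapacity {
    public static void main(String[] args) {
        // No-arg constructor: capacity defaults to Integer.MAX_VALUE,
        // i.e. effectively unbounded -- put() will essentially never block.
        LinkedBlockingQueue<String> unbounded = new LinkedBlockingQueue<>();
        System.out.println(unbounded.remainingCapacity()); // 2147483647

        // An explicit capacity bounds the queue, so a slow consumer
        // back-pressures producers instead of exhausting memory.
        LinkedBlockingQueue<String> bounded = new LinkedBlockingQueue<>(1024);
        System.out.println(bounded.remainingCapacity());   // 1024
    }
}
```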
3. DelayQueue
An element in a DelayQueue can be taken from the queue only once its specified delay has elapsed. DelayQueue has no size limit, so the operations that insert data (producers) never block; only the operations that retrieve data (consumers) block.
Usage scenarios:
DelayQueue is used in relatively few scenarios, but those uses are quite ingenious. A common example is using a DelayQueue to manage a queue of connections that have timed out without responding.
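DelayQueue elements must implement the Delayed interface; a minimal sketch, where the Task class and its delay values are hypothetical:

```java
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

public class DelayQueueDemo {
    // Hypothetical "expiring task" element: illustrative, not from the text.
    static class Task implements Delayed {
        final String name;
        final long expiresAtNanos;

        Task(String name, long delayMillis) {
            this.name = name;
            this.expiresAtNanos = System.nanoTime()
                    + TimeUnit.MILLISECONDS.toNanos(delayMillis);
        }

        @Override
        public long getDelay(TimeUnit unit) {
            // Remaining delay; <= 0 means the element may be taken.
            return unit.convert(expiresAtNanos - System.nanoTime(),
                                TimeUnit.NANOSECONDS);
        }

        @Override
        public int compareTo(Delayed other) {
            // Order the queue by remaining delay, soonest first.
            return Long.compare(getDelay(TimeUnit.NANOSECONDS),
                                other.getDelay(TimeUnit.NANOSECONDS));
        }
    }

    public static void main(String[] args) throws InterruptedException {
        DelayQueue<Task> q = new DelayQueue<>();
        q.offer(new Task("slow", 200)); // insertion never blocks
        q.offer(new Task("fast", 50));

        // take() blocks until the head's delay elapses, so "fast" emerges first.
        System.out.println(q.take().name); // fast
        System.out.println(q.take().name); // slow
    }
}
```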
4. PriorityBlockingQueue
A priority-based blocking queue (priorities are determined by the Comparator passed to the constructor). Note, however, that PriorityBlockingQueue never blocks data producers; it blocks only consumers, when there is no data to consume. It is therefore especially important that producers do not generate data faster than consumers can consume it; otherwise, over a long enough run, all available heap memory will eventually be exhausted. Internally, PriorityBlockingQueue controls thread synchronization with a ReentrantLock.
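A short sketch of the asymmetry described above (the integer values are arbitrary): put() always succeeds immediately because the queue is unbounded, while consumers receive elements in priority order.

```java
import java.util.concurrent.PriorityBlockingQueue;

public class PriorityBlockingQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        // Unbounded: put()/offer() never block, no matter how much is queued.
        PriorityBlockingQueue<Integer> q = new PriorityBlockingQueue<>();
        q.put(42);
        q.put(7);
        q.put(19);

        // Consumers see priority (natural) order, not insertion order.
        System.out.println(q.take()); // 7
        System.out.println(q.take()); // 19
        System.out.println(q.take()); // 42
    }
}
```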
5. SynchronousQueue
A buffer-free hand-off queue, comparable to a direct transaction with no middleman, a bit like producers and consumers in a primitive society: the producer carries the product to the market and sells it directly to the final consumer, and the consumer must go to the market and find the producer of the goods in person; if either side fails to find a suitable counterpart, then, sorry, everyone waits at the market. Compared with a buffered BlockingQueue, this removes the middleman (the buffer). With a middleman, producers wholesale their products to the dealer without caring which consumers the dealer eventually sells them to, and since the dealer can stock a portion of the goods, the middleman model offers higher overall throughput than direct trading (goods can be sold in batches); on the other hand, introducing the dealer adds an extra hop on the product's way from producer to consumer, which can increase the latency of any single item.
SynchronousQueue can be declared in two different modes that behave differently. The difference between fair mode and non-fair mode:
Fair mode: SynchronousQueue uses a fair lock, together with a FIFO queue, to block surplus producers and consumers, giving the system an overall fairness policy;
Non-fair mode (the SynchronousQueue default): SynchronousQueue uses a non-fair lock, together with a LIFO stack, to manage surplus producers and consumers. In this mode, if there is a gap between producer and consumer processing speeds, starvation is likely: data from certain producers, or certain consumers, may never be processed.
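The hand-off behavior can be sketched as follows; the element value and thread structure are illustrative:

```java
import java.util.concurrent.SynchronousQueue;

public class SynchronousQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        // The boolean selects fairness: false is the default non-fair (LIFO)
        // mode, true requests fair (FIFO) ordering of waiting threads.
        SynchronousQueue<String> q = new SynchronousQueue<>(false);

        Thread producer = new Thread(() -> {
            try {
                // put() blocks until a consumer is ready to take the item:
                // the queue itself never stores any element.
                q.put("handoff");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        System.out.println(q.take()); // "handoff": the direct transaction
        System.out.println(q.poll()); // null: nothing is ever held in the queue
        producer.join();
    }
}
```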