Blocking queues in Java

1. What is a blocking queue?

A blocking queue (BlockingQueue) is a queue that supports two additional operations: when the queue is empty, a thread that takes an element waits until the queue becomes non-empty; when the queue is full, a thread that stores an element waits until space becomes available. Blocking queues are often used in producer-consumer scenarios, where the producer is the thread that adds elements to the queue and the consumer is the thread that takes elements from it. The blocking queue is the container through which the producer hands elements to the consumer.

A blocking queue provides four modes of handling operations that cannot be satisfied immediately:

Method \ Mode   Throws exception   Returns special value   Blocks          Times out
Insert          add(e)             offer(e)                put(e)          offer(e, time, unit)
Remove          remove()           poll()                  take()          poll(time, unit)
Examine         element()          peek()                  not available   not available
    • Throws an exception: when the queue is full, inserting an element throws IllegalStateException("Queue full"); when the queue is empty, taking an element throws NoSuchElementException.
    • Returns a special value: the insert method returns true on success; the remove method returns the element taken from the queue, or null if the queue is empty.
    • Blocks: when the queue is full, if a producer thread puts an element, the queue blocks the producer thread until space becomes available or the thread responds to an interrupt and exits. When the queue is empty, if a consumer thread tries to take an element, the queue blocks the consumer thread until an element becomes available.
    • Times out: when the queue is full, the queue blocks the producer thread only for a period of time; if the timeout elapses, the producer thread gives up and returns.
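The four modes can be observed directly with an ArrayBlockingQueue of capacity 1. The snippet below is a small illustration (only the non-blocking and timed variants are exercised, since put/take would block indefinitely here):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.TimeUnit;

public class HandlingModes {
    public static void main(String[] args) throws InterruptedException {
        ArrayBlockingQueue<String> queue = new ArrayBlockingQueue<>(1);
        queue.add("a");                               // succeeds: the queue has room

        // Throws-exception mode: add on a full queue throws IllegalStateException.
        try {
            queue.add("b");
        } catch (IllegalStateException e) {
            System.out.println("add threw: " + e.getMessage());        // Queue full
        }

        // Special-value mode: offer returns false instead of throwing.
        System.out.println("offer -> " + queue.offer("b"));            // false

        // Timed mode: offer waits up to the timeout, then gives up.
        System.out.println("timed offer -> "
                + queue.offer("b", 100, TimeUnit.MILLISECONDS));       // false

        // Special-value mode on removal: poll returns null when empty.
        System.out.println("poll -> " + queue.poll());                 // a
        System.out.println("poll -> " + queue.poll());                 // null
    }
}
```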
2. Blocking queues in Java

JDK 7 provides seven blocking queues:

    • ArrayBlockingQueue: a bounded blocking queue backed by an array.
    • LinkedBlockingQueue: a bounded blocking queue backed by a linked list.
    • PriorityBlockingQueue: an unbounded blocking queue that supports priority ordering.
    • DelayQueue: an unbounded blocking queue implemented on top of a priority queue.
    • SynchronousQueue: a blocking queue that does not store elements.
    • LinkedTransferQueue: an unbounded blocking queue backed by a linked list.
    • LinkedBlockingDeque: a doubly-ended blocking queue backed by a linked list.

ArrayBlockingQueue is a bounded blocking queue implemented with an array. It orders elements first-in-first-out (FIFO). By default, fair access is not guaranteed. A fair queue means that when blocked producer or consumer threads are all waiting and the queue becomes available, threads access the queue in the order in which they blocked: the producer thread that blocked first inserts an element first, and the consumer thread that blocked first takes an element first. In general, fairness reduces throughput. We can create a fair blocking queue with the following code:

ArrayBlockingQueue fairQueue = new ArrayBlockingQueue(1000, true);

Fairness is implemented with a reentrant lock, as the constructor shows:

public ArrayBlockingQueue(int capacity, boolean fair) {
    if (capacity <= 0)
        throw new IllegalArgumentException();
    this.items = new Object[capacity];
    lock = new ReentrantLock(fair);
    notEmpty = lock.newCondition();
    notFull = lock.newCondition();
}

LinkedBlockingQueue is a bounded blocking queue implemented with a linked list. The default (and maximum) length of this queue is Integer.MAX_VALUE. It orders elements FIFO.

PriorityBlockingQueue is an unbounded blocking queue that supports priorities. By default, elements are arranged in their natural ordering, in ascending order; you can also specify the ordering of elements with a Comparator.
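As a quick sketch of both orderings, the snippet below contrasts the default natural ordering with a Comparator-driven reverse ordering:

```java
import java.util.Comparator;
import java.util.concurrent.PriorityBlockingQueue;

public class PriorityExample {
    public static void main(String[] args) {
        // Natural ordering: poll always returns the smallest element first.
        PriorityBlockingQueue<Integer> natural = new PriorityBlockingQueue<>();
        natural.add(3); natural.add(1); natural.add(2);
        System.out.println(natural.poll() + " " + natural.poll() + " " + natural.poll()); // 1 2 3

        // Custom ordering via a Comparator: largest element first.
        PriorityBlockingQueue<Integer> reversed =
                new PriorityBlockingQueue<>(16, Comparator.reverseOrder());
        reversed.add(3); reversed.add(1); reversed.add(2);
        System.out.println(reversed.poll() + " " + reversed.poll() + " " + reversed.poll()); // 3 2 1
    }
}
```

Note that only poll/take respect priority; iterating over the queue does not visit elements in sorted order.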

DelayQueue is an unbounded blocking queue that supports delayed acquisition of elements. The queue is implemented with a PriorityQueue. Elements in the queue must implement the Delayed interface, which lets you specify, when an element is created, how long must pass before it can be taken from the queue. An element can be taken only once its delay has expired. We can apply DelayQueue in scenarios such as:

    • Cache system design: use a DelayQueue to hold the cache elements' validity periods, with one thread looping on the queue; once an element can be obtained from the DelayQueue, its cache entry has expired.
    • Timed task scheduling: use a DelayQueue to hold the tasks to be executed that day together with their execution times; once a task can be taken from the DelayQueue, execute it. TimerQueue, for example, is implemented with a DelayQueue.

The Delayed elements in the queue must implement compareTo to specify the order of elements, for example placing the element with the longest delay at the tail of the queue. The implementation code is as follows:

public int compareTo(Delayed other) {
    if (other == this) // compare zero only if same object
        return 0;
    if (other instanceof ScheduledFutureTask) {
        ScheduledFutureTask<?> x = (ScheduledFutureTask<?>) other;
        long diff = time - x.time;
        if (diff < 0)
            return -1;
        else if (diff > 0)
            return 1;
        else if (sequenceNumber < x.sequenceNumber)
            return -1;
        else
            return 1;
    }
    long d = (getDelay(TimeUnit.NANOSECONDS) -
              other.getDelay(TimeUnit.NANOSECONDS));
    return (d == 0) ? 0 : ((d < 0) ? -1 : 1);
}

How to implement the Delayed interface

We can refer to the ScheduledFutureTask class in ScheduledThreadPoolExecutor, which implements the Delayed interface. First, when the object is created, record the time at which the object becomes usable. The code is as follows:

ScheduledFutureTask(Runnable r, V result, long ns, long period) {
    super(r, result);
    this.time = ns;
    this.period = period;
    this.sequenceNumber = sequencer.getAndIncrement();
}

Then getDelay can be used to query how long remains until the current element can be taken. The code is as follows:

public long getDelay(TimeUnit unit) {
    return unit.convert(time - now(), TimeUnit.NANOSECONDS);
}

From the constructor we can see that the delay parameter ns is in nanoseconds. When designing your own delays, it is best to use nanoseconds as well, because getDelay can be asked for any unit; if the delay is stored at a precision coarser than nanoseconds, the conversions become troublesome. Note that getDelay returns a negative number once time is earlier than the current time.

How to implement the delay queue

Implementing the delay queue itself is simple: when the consumer gets an element from the queue, if the element has not yet reached its delay time, the current thread is blocked.

long delay = first.getDelay(TimeUnit.NANOSECONDS);
if (delay <= 0)
    return q.poll();
else if (leader != null)
    available.await();
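Putting the pieces above together, here is a minimal, self-contained sketch of using DelayQueue (ExpiringKey is a hypothetical element type invented for this example, not a JDK class):

```java
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

public class DelayQueueExample {
    // A hypothetical cache key that "expires" after a fixed delay.
    static class ExpiringKey implements Delayed {
        final String key;
        final long expireAtNanos;   // recorded at creation: when the element becomes available

        ExpiringKey(String key, long delayMillis) {
            this.key = key;
            this.expireAtNanos = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(delayMillis);
        }

        @Override
        public long getDelay(TimeUnit unit) {
            // Convert the remaining nanoseconds into whatever unit the caller asked for.
            return unit.convert(expireAtNanos - System.nanoTime(), TimeUnit.NANOSECONDS);
        }

        @Override
        public int compareTo(Delayed other) {
            // Longest remaining delay sorts last, as described above.
            return Long.compare(getDelay(TimeUnit.NANOSECONDS),
                                other.getDelay(TimeUnit.NANOSECONDS));
        }
    }

    public static void main(String[] args) throws InterruptedException {
        DelayQueue<ExpiringKey> queue = new DelayQueue<>();
        queue.put(new ExpiringKey("long", 200));
        queue.put(new ExpiringKey("short", 50));

        // take() blocks until an element's delay expires; "short" expires first.
        System.out.println(queue.take().key);   // short
        System.out.println(queue.take().key);   // long
    }
}
```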

SynchronousQueue is a blocking queue that stores no elements. Each put operation must wait for a take operation, or no further element can be added. A SynchronousQueue can be seen as a relay, responsible for handing a producer thread's data directly to a consumer thread. The queue itself stores nothing, which makes it well suited to hand-off scenarios, such as passing data used in one thread on to another. The throughput of SynchronousQueue is higher than that of LinkedBlockingQueue and ArrayBlockingQueue.
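The hand-off behavior can be seen in a short sketch: put blocks until a consumer arrives, and the queue's size is always zero.

```java
import java.util.concurrent.SynchronousQueue;

public class SynchronousQueueExample {
    public static void main(String[] args) throws InterruptedException {
        SynchronousQueue<String> queue = new SynchronousQueue<>();

        // The producer's put() blocks until a consumer is ready to take the element.
        Thread producer = new Thread(() -> {
            try {
                queue.put("hello");          // hands the element directly to the consumer
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        // take() rendezvouses with the waiting put(); nothing is ever stored.
        String value = queue.take();
        System.out.println(value);           // hello
        System.out.println(queue.size());    // always 0: the queue holds no elements
        producer.join();
    }
}
```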

LinkedTransferQueue is an unbounded blocking TransferQueue backed by a linked list. Compared with other blocking queues, LinkedTransferQueue adds the tryTransfer and transfer methods.

The transfer method: if a consumer is currently waiting to receive an element (via take() or a timed poll()), transfer hands the element passed in by the producer to the consumer immediately. If no consumer is waiting, transfer stores the element at the queue's tail node and does not return until the element has been consumed. The key code of the transfer method is as follows:

Node pred = tryAppend(s, haveData);
return awaitMatch(s, pred, e, (how == TIMED), nanos);

The first line tries to append the node s, which holds the current element, as the tail node. The second line spins on the CPU waiting for a consumer to consume the element. Because spinning consumes CPU, after a certain number of spins it calls Thread.yield() to pause the currently executing thread and let other threads run.

The tryTransfer method tests whether the element passed in by the producer can be handed directly to a consumer; it returns false if no consumer is waiting to receive an element. The difference from transfer is that tryTransfer returns immediately whether or not a consumer received the element, while transfer does not return until the element has been consumed.

The timed tryTransfer(E e, long timeout, TimeUnit unit) likewise tries to hand the producer's element directly to a consumer, but if no consumer takes it, it waits up to the specified time before returning: false if the element was not consumed within the timeout, true if it was.
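The three variants can be contrasted in a short sketch:

```java
import java.util.concurrent.LinkedTransferQueue;
import java.util.concurrent.TimeUnit;

public class TransferExample {
    public static void main(String[] args) throws InterruptedException {
        LinkedTransferQueue<String> queue = new LinkedTransferQueue<>();

        // No consumer is waiting, so tryTransfer returns false immediately
        // and the element is NOT enqueued.
        System.out.println(queue.tryTransfer("a"));      // false
        System.out.println(queue.size());                // 0

        // Start a consumer, then transfer: transfer() blocks until the
        // element has been handed to (and taken by) the consumer.
        Thread consumer = new Thread(() -> {
            try {
                System.out.println("consumed " + queue.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();
        queue.transfer("b");                             // returns once the consumer has it
        consumer.join();

        // Timed variant: waits up to the timeout for a consumer, then gives up.
        System.out.println(queue.tryTransfer("c", 100, TimeUnit.MILLISECONDS)); // false
    }
}
```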

LinkedBlockingDeque is a doubly-ended blocking queue backed by a linked list. A doubly-ended queue is one in which elements can be inserted and removed at both ends. Because the deque has one more entry point, it halves the contention when multiple threads enqueue at the same time. Compared with other blocking queues, LinkedBlockingDeque adds addFirst, addLast, offerFirst, offerLast, peekFirst, peekLast, and so on. Methods ending in First insert, get (peek), or remove the first element of the deque; methods ending in Last insert, get, or remove the last element. In addition, the insert method add is equivalent to addLast, and the remove method remove is equivalent to removeFirst. But take is equivalent to takeFirst; whether that is a JDK bug is unclear, and it is clearer to use the methods with explicit First and Last suffixes.

You can set the capacity when initializing a LinkedBlockingDeque to keep it from over-expanding. In addition, doubly-ended blocking queues can be used in the "work-stealing" pattern.
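The paired First/Last methods, and the add/take equivalences noted above, can be seen in a short sketch:

```java
import java.util.concurrent.LinkedBlockingDeque;

public class DequeExample {
    public static void main(String[] args) throws InterruptedException {
        // Set a capacity at construction to keep the deque from growing without bound.
        LinkedBlockingDeque<String> deque = new LinkedBlockingDeque<>(10);

        deque.addLast("task1");      // add is equivalent to addLast
        deque.addLast("task2");
        deque.addFirst("urgent");    // jumps the queue from the other end

        System.out.println(deque.peekFirst());   // urgent
        System.out.println(deque.peekLast());    // task2

        // take is equivalent to takeFirst, as noted above.
        System.out.println(deque.take());        // urgent

        // In a work-stealing setup, the owning thread would pollFirst its own
        // deque while idle workers pollLast from it, reducing contention.
        System.out.println(deque.pollLast());    // task2
        System.out.println(deque.pollFirst());   // task1
    }
}
```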

3. How blocking queues are implemented

If the queue is empty, the consumer waits until the producer adds an element; how does the consumer know that the queue now has elements? If you had to design a blocking queue, how would you let producers and consumers communicate efficiently? Let's first look at how the JDK does it.

It is implemented with a notification pattern. In this pattern, when a producer fills the queue and blocks, and a consumer then consumes an element from the queue, the consumer notifies the producer that the queue is available again. Looking at the JDK source, we find that ArrayBlockingQueue implements this with Condition. The code is as follows:

private final Condition notFull;
private final Condition notEmpty;

public ArrayBlockingQueue(int capacity, boolean fair) {
    // omit other code
    notEmpty = lock.newCondition();
    notFull = lock.newCondition();
}

public void put(E e) throws InterruptedException {
    checkNotNull(e);
    final ReentrantLock lock = this.lock;
    lock.lockInterruptibly();
    try {
        while (count == items.length)
            notFull.await();
        insert(e);
    } finally {
        lock.unlock();
    }
}

public E take() throws InterruptedException {
    final ReentrantLock lock = this.lock;
    lock.lockInterruptibly();
    try {
        while (count == 0)
            notEmpty.await();
        return extract();
    } finally {
        lock.unlock();
    }
}

private void insert(E x) {
    items[putIndex] = x;
    putIndex = inc(putIndex);
    ++count;
    notEmpty.signal();
}
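The same notification pattern can be reduced to a runnable sketch (SimpleBlockingQueue is our name for this illustration, not a JDK class): producers wait on notFull, consumers wait on notEmpty, and each side signals the other after changing the queue.

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class SimpleBlockingQueue<E> {
    private final Object[] items;
    private int putIndex, takeIndex, count;
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition();
    private final Condition notFull = lock.newCondition();

    public SimpleBlockingQueue(int capacity) {
        items = new Object[capacity];
    }

    public void put(E e) throws InterruptedException {
        lock.lockInterruptibly();
        try {
            while (count == items.length)
                notFull.await();                 // queue full: park this producer
            items[putIndex] = e;
            putIndex = (putIndex + 1) % items.length;
            ++count;
            notEmpty.signal();                   // wake one waiting consumer
        } finally {
            lock.unlock();
        }
    }

    @SuppressWarnings("unchecked")
    public E take() throws InterruptedException {
        lock.lockInterruptibly();
        try {
            while (count == 0)
                notEmpty.await();                // queue empty: park this consumer
            E e = (E) items[takeIndex];
            items[takeIndex] = null;
            takeIndex = (takeIndex + 1) % items.length;
            --count;
            notFull.signal();                    // wake one waiting producer
            return e;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        SimpleBlockingQueue<Integer> q = new SimpleBlockingQueue<>(2);
        new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) q.put(i);   // blocks whenever the queue is full
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }).start();
        for (int i = 0; i < 5; i++)
            System.out.println(q.take());               // prints 0 through 4 in order
    }
}
```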

When we insert an element into the queue and the queue is not available, the producer is blocked mainly through LockSupport.park(this):

public final void await() throws InterruptedException {
    if (Thread.interrupted())
        throw new InterruptedException();
    Node node = addConditionWaiter();
    int savedState = fullyRelease(node);
    int interruptMode = 0;
    while (!isOnSyncQueue(node)) {
        LockSupport.park(this);
        if ((interruptMode = checkInterruptWhileWaiting(node)) != 0)
            break;
    }
    if (acquireQueued(node, savedState) && interruptMode != THROW_IE)
        interruptMode = REINTERRUPT;
    if (node.nextWaiter != null) // clean up if cancelled
        unlinkCancelledWaiters();
    if (interruptMode != 0)
        reportInterruptAfterWait(interruptMode);
}

Continuing into the source, we find that park first calls setBlocker to record the thread about to be blocked, and then calls UNSAFE.park to block the current thread.

public static void park(Object blocker) {
    Thread t = Thread.currentThread();
    setBlocker(t, blocker);
    UNSAFE.park(false, 0L);
    setBlocker(t, null);
}

UNSAFE.park is a native method with the following signature:

public native void park(boolean isAbsolute, long time);

park blocks the current thread and returns only when one of the following four cases occurs:

    • The unpark corresponding to this park executes, or has already executed. ("Has already executed" means unpark ran first and park ran afterwards.)
    • The thread is interrupted.
    • The time parameter is non-zero and the specified time elapses.
    • Some abnormal event occurs; such events cannot be determined beforehand.

Let's continue to see how the JVM implements the park method. Different operating systems implement it differently; on Linux it is implemented with the system call pthread_cond_wait. The implementation lives in the JVM source file src/os/linux/vm/os_linux.cpp, in the method os::PlatformEvent::park. The code is as follows:

void os::PlatformEvent::park() {
    int v;
    for (;;) {
        v = _Event;
        if (Atomic::cmpxchg(v - 1, &_Event, v) == v) break;
    }
    guarantee(v >= 0, "invariant");
    if (v == 0) {
        // Do this the hard way by blocking ...
        int status = pthread_mutex_lock(_mutex);
        assert_status(status == 0, status, "mutex_lock");
        guarantee(_nParked == 0, "invariant");
        ++_nParked;
        while (_Event < 0) {
            status = pthread_cond_wait(_cond, _mutex);
            // for some reason, under 2.7 lwp_cond_wait() may return ETIME ...
            // Treat this the same as if the wait was interrupted
            if (status == ETIME) { status = EINTR; }
            assert_status(status == 0 || status == EINTR, status, "cond_wait");
        }
        --_nParked;
        // In theory we could move the ST of 0 into _Event past the unlock(),
        // but then we'd need a MEMBAR after the ST.
        _Event = 0;
        status = pthread_mutex_unlock(_mutex);
        assert_status(status == 0, status, "mutex_unlock");
    }
    guarantee(_Event >= 0, "invariant");
}

pthread_cond_wait is a multithreaded condition-variable function (cond is short for condition); it can literally be understood as a thread waiting for a condition to occur, where the condition is a shared variable. The function takes two parameters: the shared condition variable _cond and a mutex _mutex. The unpark method is implemented on Linux with pthread_cond_signal. On Windows, park is implemented with WaitForSingleObject.

When the queue is full and a producer inserts an element into the blocking queue, the producer thread enters the WAITING (parking) state. We can see this by dumping the blocked producer thread with jstack:

"Main" prio=5 tid=0x00007fc83c000000 nid=0x10164e000 waiting on condition [0x000000010164d000]   Java.lang.Thread.State:WAITING (parking) at        Sun.misc.Unsafe.park (Native Method)        -Parking to wait  for <0x0000000140559fe8> (a java.util.concurrent.locks.abstractqueuedsynchronizer$conditionobject) at        Java.util.concurrent.locks.LockSupport.park (locksupport.java:186) at        Java.util.concurrent.locks.abstractqueuedsynchronizer$conditionobject.await (AbstractQueuedSynchronizer.java : 2043) at        java.util.concurrent.ArrayBlockingQueue.put (arrayblockingqueue.java:324) at        Blockingqueue. Arrayblockingqueuetest.main (arrayblockingqueuetest.java:11)
