Analysis of the Implementation Principles and Application Scenarios of Blocking Queues in Java


Some commonly used queues are non-blocking queues, such as PriorityQueue and LinkedList (LinkedList is a doubly linked list that implements the Deque interface).
A big problem with non-blocking queues is that they do not block the current thread, so when you face a producer-consumer model you have to implement the synchronization strategy and the thread wake-up strategy yourself, which is cumbersome. A blocking queue, on the other hand, blocks the current thread: for example, if a thread takes an element from an empty blocking queue, the thread blocks until the queue contains an element. When an element appears in the queue, the blocked thread is woken up automatically (there is no need for us to write wake-up code). This is a great convenience.
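As a minimal sketch of this behaviour (the class name BlockingTakeDemo and the messages are just for illustration), a consumer thread calling take() on an empty ArrayBlockingQueue simply parks until another thread puts an element:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BlockingTakeDemo {
    public static void main(String[] args) throws InterruptedException {
        final BlockingQueue<String> queue = new ArrayBlockingQueue<String>(1);

        Thread consumer = new Thread(new Runnable() {
            public void run() {
                try {
                    // Blocks here until an element becomes available
                    String msg = queue.take();
                    System.out.println("received: " + msg);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        consumer.start();

        Thread.sleep(1000);   // the consumer is parked during this second
        queue.put("hello");   // putting an element wakes the consumer automatically
        consumer.join();
    }
}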
I. Several main blocking queues

Since Java 1.5, several blocking queues have been provided in the java.util.concurrent package, mainly the following:

ArrayBlockingQueue: a blocking queue backed by an array; a capacity must be specified when the ArrayBlockingQueue object is created. You can also choose between a fair and an unfair access policy. The default is unfair, meaning the thread that has been waiting the longest is not guaranteed to access the queue first.

LinkedBlockingQueue: a blocking queue based on a linked list. If no capacity is specified when the LinkedBlockingQueue object is created, the capacity defaults to Integer.MAX_VALUE.

PriorityBlockingQueue: the two queues above are first-in, first-out; PriorityBlockingQueue is not. It orders elements by their priority and dequeues them in priority order, so every element taken from the queue is the highest-priority element. Note that this is an unbounded blocking queue, i.e. its capacity has no upper limit (as can be seen from the source code, it has no "queue full" flag), whereas the first two are bounded queues.

DelayQueue: a delayed blocking queue based on PriorityQueue. An element in a DelayQueue can only be taken from the queue once its specified delay has expired. DelayQueue is also an unbounded queue, so operations that insert data (producers) are never blocked; only operations that fetch data (consumers) can block.
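For reference, here is a minimal sketch of how these four queues are typically constructed (the capacities, element types, and class name are arbitrary choices for the example):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.PriorityBlockingQueue;

public class QueueConstruction {
    public static void main(String[] args) {
        // Bounded, array-backed; the second argument requests the fair policy
        BlockingQueue<String> arrayQueue = new ArrayBlockingQueue<String>(100, true);

        // Linked-list based; with no capacity given it defaults to Integer.MAX_VALUE
        BlockingQueue<String> linkedQueue = new LinkedBlockingQueue<String>();

        // Unbounded; orders elements by priority instead of FIFO
        BlockingQueue<Integer> priorityQueue = new PriorityBlockingQueue<Integer>();

        // Unbounded; elements can only be taken after their delay has expired
        BlockingQueue<Delayed> delayQueue = new DelayQueue<Delayed>();

        System.out.println(arrayQueue.remainingCapacity()); // prints 100
    }
}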

II. Methods of blocking queues vs. non-blocking queues

1. The main methods of non-blocking queues:

    • add(E e): inserts element e at the tail of the queue; returns true if the insert succeeds, and throws an exception if it fails (i.e. the queue is full);
    • remove(): removes and returns the head element; throws an exception if the queue is empty;
    • offer(E e): inserts element e at the tail of the queue; returns true if the insert succeeds, and false if it fails (i.e. the queue is full);
    • poll(): removes and returns the head element; returns the head element on success, otherwise null;
    • peek(): returns the head element without removing it; returns the head element on success, otherwise null.

For non-blocking queues, it is generally recommended to use the offer, poll, and peek methods rather than add and remove, because with offer, poll, and peek you can tell from the return value whether the operation succeeded, while add and remove do not give you that. Note that none of the methods of a non-blocking queue are synchronized.
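A small sketch of that difference (the LinkedList-backed queue and the values are arbitrary): offer/poll/peek report the outcome through their return values, while remove throws on an empty queue.

import java.util.LinkedList;
import java.util.Queue;

public class NonBlockingQueueDemo {
    public static void main(String[] args) {
        Queue<Integer> queue = new LinkedList<Integer>();

        System.out.println(queue.offer(1));   // true: element inserted
        System.out.println(queue.peek());     // 1: head element, not removed
        System.out.println(queue.poll());     // 1: head element removed
        System.out.println(queue.poll());     // null: queue is empty, no exception
        System.out.println(queue.peek());     // null: queue is empty, no exception

        queue.remove();                       // throws NoSuchElementException on the empty queue
    }
}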

2. The main methods of blocking queues:

A blocking queue contains most of the methods of a non-blocking queue; the five methods listed above all exist in a blocking queue, but note that in a blocking queue they are synchronized. In addition, a blocking queue provides four other very useful methods:

    1. put(E e)
    2. take()
    3. offer(E e, long timeout, TimeUnit unit)
    4. poll(long timeout, TimeUnit unit)

  

    • put is used to store an element at the tail of the queue; if the queue is full, it waits;
    • take is used to take the element at the head of the queue; if the queue is empty, it waits;
    • the timed offer is used to store an element at the tail of the queue; if the queue is full, it waits up to the given time and returns false if the element still cannot be inserted when the timeout expires, or true if the insert succeeds;
    • the timed poll is used to take the element at the head of the queue; if the queue is empty, it waits up to the given time and returns null if no element has become available when the timeout expires (see the sketch below).
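Here is a small sketch of the timed variants (the capacity and timeouts are arbitrary): offer gives up and returns false after the timeout, poll gives up and returns null.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class TimedOfferPollDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<String>(1);

        queue.put("first");                                        // fills the queue

        // The queue is full, so this waits up to 500 ms and then returns false
        boolean inserted = queue.offer("second", 500, TimeUnit.MILLISECONDS);
        System.out.println("inserted: " + inserted);               // false

        System.out.println(queue.take());                          // "first"

        // The queue is now empty, so this waits up to 500 ms and then returns null
        String head = queue.poll(500, TimeUnit.MILLISECONDS);
        System.out.println("polled: " + head);                     // null
    }
}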

III. The implementation principle of blocking queues

If the queue is empty, the consumer waits; when the producer adds an element, how does the consumer learn that the queue now has elements? If you had to design a blocking queue yourself, how would you make producers and consumers communicate efficiently? One possible design is sketched below; after that, let's look at how the JDK actually implements it.
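A minimal hand-rolled sketch (illustrative only, not the JDK's code; the class name SimpleBlockingQueue is made up): one lock plus two condition variables, notFull for producers waiting on a full queue and notEmpty for consumers waiting on an empty one.

import java.util.LinkedList;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Simplified illustration only; the real ArrayBlockingQueue is shown below.
public class SimpleBlockingQueue<E> {
    private final LinkedList<E> items = new LinkedList<E>();
    private final int capacity;
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();

    public SimpleBlockingQueue(int capacity) { this.capacity = capacity; }

    public void put(E e) throws InterruptedException {
        lock.lockInterruptibly();
        try {
            while (items.size() == capacity)   // full: producer waits
                notFull.await();
            items.addLast(e);
            notEmpty.signal();                 // wake one waiting consumer
        } finally {
            lock.unlock();
        }
    }

    public E take() throws InterruptedException {
        lock.lockInterruptibly();
        try {
            while (items.isEmpty())            // empty: consumer waits
                notEmpty.await();
            E e = items.removeFirst();
            notFull.signal();                  // wake one waiting producer
            return e;
        } finally {
            lock.unlock();
        }
    }
}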

The JDK uses the notification pattern: when a producer tries to add an element to a full queue, the producer is blocked, and when a consumer consumes an element from the queue, the producer is notified that the queue now has room (and similarly in the other direction). Looking at the JDK source code, ArrayBlockingQueue implements this with Condition objects, as follows:

private final Condition notFull;
private final Condition notEmpty;

public ArrayBlockingQueue(int capacity, boolean fair) {
    // omit other code
    notEmpty = lock.newCondition();
    notFull = lock.newCondition();
}

public void put(E e) throws InterruptedException {
    checkNotNull(e);
    final ReentrantLock lock = this.lock;
    lock.lockInterruptibly();
    try {
        while (count == items.length)
            notFull.await();
        insert(e);
    } finally {
        lock.unlock();
    }
}

public E take() throws InterruptedException {
    final ReentrantLock lock = this.lock;
    lock.lockInterruptibly();
    try {
        while (count == 0)
            notEmpty.await();
        return extract();
    } finally {
        lock.unlock();
    }
}

private void insert(E x) {
    items[putIndex] = x;
    putIndex = inc(putIndex);
    ++count;
    notEmpty.signal();
}

When we insert an element into a queue that is full, the producer is blocked mainly through LockSupport.park(this), which is reached via Condition.await():

public final void await() throws InterruptedException {
    if (Thread.interrupted())
        throw new InterruptedException();
    Node node = addConditionWaiter();
    int savedState = fullyRelease(node);
    int interruptMode = 0;
    while (!isOnSyncQueue(node)) {
        LockSupport.park(this);
        if ((interruptMode = checkInterruptWhileWaiting(node)) != 0)
            break;
    }
    if (acquireQueued(node, savedState) && interruptMode != THROW_IE)
        interruptMode = REINTERRUPT;
    if (node.nextWaiter != null) // clean up if cancelled
        unlinkCancelledWaiters();
    if (interruptMode != 0)
        reportInterruptAfterWait(interruptMode);
}

Going further into the source code, we find that park first calls setBlocker to record which object the thread is about to block on, and then calls UNSAFE.park to block the current thread.

public static void park(Object blocker) {
    Thread t = Thread.currentThread();
    setBlocker(t, blocker);
    UNSAFE.park(false, 0L);
    setBlocker(t, null);
}

UNSAFE.park is a native method with the following declaration:

public native void park(boolean isAbsolute, long time);

park blocks the current thread and returns only when one of the following four situations occurs:

    • The corresponding unpark is executed, or has already been executed (that is, unpark was called first and park afterwards);
    • The thread is interrupted;
    • If the time argument is not 0, the specified wait time elapses;
    • An exception occurs; these abnormal returns cannot be determined in advance.
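A small sketch of how LockSupport.park and unpark behave (the class and thread names are arbitrary): the worker parks until the main thread unparks it, and a timed park returns on its own after the timeout.

import java.util.concurrent.locks.LockSupport;

public class ParkDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(new Runnable() {
            public void run() {
                System.out.println("worker: parking");
                LockSupport.park();                    // blocks until unparked or interrupted
                System.out.println("worker: unparked");

                LockSupport.parkNanos(100000000L);     // returns after ~100 ms even without unpark
                System.out.println("worker: timed park elapsed");
            }
        });
        worker.start();

        Thread.sleep(500);
        LockSupport.unpark(worker);                    // grants the permit, waking the worker
        worker.join();
    }
}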
Let's continue and look at how the JVM implements the park method. It is implemented differently on different operating systems; on Linux it uses the system call pthread_cond_wait. The implementation lives in the JVM source file src/os/linux/vm/os_linux.cpp, in the os::PlatformEvent::park method, shown below:

void os::PlatformEvent::park() {
    int v;
    for (;;) {
        v = _Event;
        if (Atomic::cmpxchg(v - 1, &_Event, v) == v) break;
    }
    guarantee(v >= 0, "invariant");
    if (v == 0) {
        // Do this the hard way by blocking ...
        int status = pthread_mutex_lock(_mutex);
        assert_status(status == 0, status, "mutex_lock");
        guarantee(_nParked == 0, "invariant");
        ++_nParked;
        while (_Event < 0) {
            status = pthread_cond_wait(_cond, _mutex);
            // For some reason, under 2.7 lwp_cond_wait() may return ETIME ...
            // Treat this the same as if the wait was interrupted
            if (status == ETIME) { status = EINTR; }
            assert_status(status == 0 || status == EINTR, status, "cond_wait");
        }
        --_nParked;
        // In theory we could move the ST of 0 into _Event past the unlock(),
        // but then we'd need a membar after the ST.
        _Event = 0;
        status = pthread_mutex_unlock(_mutex);
        assert_status(status == 0, status, "mutex_unlock");
    }
    guarantee(_Event >= 0, "invariant");
}

pthread_cond_wait is a condition-variable function for multithreaded programs ("cond" is short for "condition"); literally, it lets a thread wait for a condition to occur, the condition being a shared variable. The function takes two parameters: a condition variable _cond and a mutex _mutex. The unpark method is implemented on Linux with pthread_cond_signal. On Windows, park is implemented with WaitForSingleObject.

When the queue is full and a producer tries to insert an element into the blocking queue, the producer thread enters the WAITING (parking) state. We can see this by using jstack to dump the blocked producer thread:

"Main" prio=5 tid=0x00007fc83c000000 nid=0x10164e000 waiting on condition [0x000000010164d000]
  Java.lang.Thread.State:WAITING (parking) at
    Sun.misc.Unsafe.park (Native method)
    -Parking to wait for < 0x0000000140559fe8> (a java.util.concurrent.locks.abstractqueuedsynchronizer$conditionobject) at
    Java.util.concurrent.locks.LockSupport.park (locksupport.java:186) at
    Java.util.concurrent.locks.abstractqueuedsynchronizer$conditionobject.await (AbstractQueuedSynchronizer.java : 2043) at
    java.util.concurrent.ArrayBlockingQueue.put (arrayblockingqueue.java:324) at
    Blockingqueue. Arrayblockingqueuetest.main (arrayblockingqueuetest.java:11)
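A dump like the one above can be produced with a trivial test along these lines (the original test class is not shown in the article; this sketch merely matches the class name in the stack trace):

import java.util.concurrent.ArrayBlockingQueue;

public class ArrayBlockingQueueTest {
    public static void main(String[] args) throws InterruptedException {
        ArrayBlockingQueue<String> queue = new ArrayBlockingQueue<String>(1);
        queue.put("a");   // fills the queue
        queue.put("b");   // the queue is full: the main thread parks here forever
    }
}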

IV. Examples and usage scenarios

First, the producer-consumer pattern implemented with a non-blocking queue plus Object.wait() and Object.notify():

import java.util.PriorityQueue;

public class Test {
    private int queueSize = 10;
    private PriorityQueue<Integer> queue = new PriorityQueue<Integer>(queueSize);

    public static void main(String[] args) {
        Test test = new Test();
        Producer producer = test.new Producer();
        Consumer consumer = test.new Consumer();
        producer.start();
        consumer.start();
    }

    class Consumer extends Thread {
        @Override public void run() { consume(); }

        private void consume() {
            while (true) {
                synchronized (queue) {
                    while (queue.size() == 0) {
                        try {
                            System.out.println("queue empty, wait for data");
                            queue.wait();
                        } catch (InterruptedException e) {
                            e.printStackTrace();
                            queue.notify();
                        }
                    }
                    queue.poll();        // remove the head element each time
                    queue.notify();
                    System.out.println("Take an element from the queue, " + queue.size() + " elements left");
                }
            }
        }
    }

    class Producer extends Thread {
        @Override public void run() { produce(); }

        private void produce() {
            while (true) {
                synchronized (queue) {
                    while (queue.size() == queueSize) {
                        try {
                            System.out.println("queue full, waiting for free space");
                            queue.wait();
                        } catch (InterruptedException e) {
                            e.printStackTrace();
                            queue.notify();
                        }
                    }
                    queue.offer(1);      // insert one element each time
                    queue.notify();
                    System.out.println("Insert an element into the queue, remaining space: " + (queueSize - queue.size()));
                }
            }
        }
    }
}

This is the classic producer-consumer pattern implemented with a non-blocking queue together with Object.wait() and Object.notify(); wait() and notify() are used mainly for communication between the threads.

The specifics of inter-thread communication with wait and notify are a topic of their own and are not covered further here.

The following is the same producer-consumer pattern implemented with a blocking queue:

import java.util.concurrent.ArrayBlockingQueue;

public class Test {
    private int queueSize = 10;
    private ArrayBlockingQueue<Integer> queue = new ArrayBlockingQueue<Integer>(queueSize);

    public static void main(String[] args) {
        Test test = new Test();
        Producer producer = test.new Producer();
        Consumer consumer = test.new Consumer();
        producer.start();
        consumer.start();
    }

    class Consumer extends Thread {
        @Override public void run() { consume(); }

        private void consume() {
            while (true) {
                try {
                    queue.take();   // blocks while the queue is empty
                    System.out.println("Take an element from the queue, " + queue.size() + " elements left");
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }
    }

    class Producer extends Thread {
        @Override public void run() { produce(); }

        private void produce() {
            while (true) {
                try {
                    queue.put(1);   // blocks while the queue is full
                    System.out.println("Insert an element into the queue, remaining space: " + (queueSize - queue.size()));
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }
    }
}

It is not hard to see that the code using a blocking queue is much simpler: there is no need to handle synchronization and inter-thread communication separately.

In concurrent programming, blocking queues are generally recommended, since this keeps the implementation simple and avoids unexpected errors in the program as far as possible.

The most classic scenario for a blocking queue is reading and parsing data from socket clients: the thread that reads the data keeps putting it into the queue, and the parsing thread keeps taking data from the queue to parse. There are many other similar scenarios; as long as the problem fits the producer-consumer model, a blocking queue can be used.
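Here is a rough sketch of that reader/parser pattern (the class name, the port, and the line-based protocol are assumptions made for the example):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class SocketPipeline {
    private static final BlockingQueue<String> MESSAGES = new ArrayBlockingQueue<String>(1024);

    public static void main(String[] args) throws Exception {
        // Parser thread: blocks on take() whenever no data is waiting
        Thread parser = new Thread(new Runnable() {
            public void run() {
                try {
                    while (true) {
                        String line = MESSAGES.take();
                        System.out.println("parsed: " + line.trim());
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        parser.start();

        // Reader (here the main thread): reads lines and hands them to the queue
        ServerSocket server = new ServerSocket(9000);
        Socket client = server.accept();
        BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
        String line;
        while ((line = in.readLine()) != null) {
            MESSAGES.put(line);   // blocks if the parser falls 1024 lines behind
        }
        client.close();
        server.close();
    }
}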
