LinkedBlockingQueue Source Analysis: Multithreading and Concurrency

Objective

In the previous article, ArrayBlockingQueue Source Analysis, we reviewed the BlockingQueue interface in the JDK and walked through ArrayBlockingQueue's core methods. LinkedBlockingQueue, another member of the JDK's BlockingQueue family, is used as the underlying blocking queue of the fixed-size thread pool (Executors.newFixedThreadPool()). This analysis has two main goals:
(1) Learn by analogy with ArrayBlockingQueue to deepen the understanding of the underlying data structures
(2) Understand the underlying implementation, so as to better understand how each kind of blocking queue affects thread pool performance, knowing not only the behavior but the reason for it

The article analyzes the LinkedBlockingQueue source, compares its implementation with ArrayBlockingQueue, and explains why LinkedBlockingQueue was chosen as the blocking queue for the fixed-size thread pool. If you find anything missing or inaccurate in the analysis, please point it out (thanks in advance).

1. LinkedBlockingQueue In-Depth Analysis

LinkedBlockingQueue, as its name suggests, is a blocking queue backed by a linked list. Let's first look at its core fields:

    /**
     * All elements are stored via the static inner class Node,
     * which has essentially the same structure as LinkedList's node.
     */
    static class Node<E> {
        // item holds the element itself
        E item;
        // next holds the successor of the current node
        Node<E> next;
        Node(E x) { item = x; }
    }

    /**
     * The maximum capacity the blocking queue can hold. The capacity can be
     * specified manually at creation time; if the user does not specify one,
     * it defaults to Integer.MAX_VALUE.
     */
    private final int capacity;

    /**
     * The number of elements currently in the queue.
     * PS: if you have read the ArrayBlockingQueue source, you will have
     * noticed that it tracks the element count with a plain int. That works
     * because ArrayBlockingQueue uses the same lock object for both enqueue
     * and dequeue, so the count is only ever modified by a thread holding
     * that lock and there is no thread-safety issue. LinkedBlockingQueue is
     * different: it uses two separate lock objects for enqueue and dequeue,
     * so both paths modify the count concurrently (this will become clear in
     * the source below), and an atomic class is therefore used to make the
     * concurrent modification of this shared variable thread-safe.
     */
    private final AtomicInteger count = new AtomicInteger(0);

    /**
     * Head of the linked list.
     * LinkedBlockingQueue maintains an invariant here:
     * the head's element is always null, i.e. head.item == null.
     */
    private transient Node<E> head;

    /**
     * Tail of the linked list, with the invariant last.next == null.
     */
    private transient Node<E> last;

    /**
     * The lock a thread must acquire to take an element out of the queue,
     * i.e. when executing take, poll, etc.
     */
    private final ReentrantLock takeLock = new ReentrantLock();

    /**
     * When the queue is empty, this Condition parks the threads
     * that are trying to take an element from the queue.
     */
    private final Condition notEmpty = takeLock.newCondition();

    /**
     * The lock a thread must acquire to put an element into the queue,
     * i.e. when executing add, put, offer, etc.
     */
    private final ReentrantLock putLock = new ReentrantLock();

    /**
     * When the queue already holds capacity elements, this Condition
     * parks the threads that are trying to put an element into the queue.
     */
    private final Condition notFull = putLock.newCondition();

From the fields above we can see that LinkedBlockingQueue does not use the same lock for enqueueing and dequeueing, which means the two operations do not exclude each other. On a multi-core CPU, consumption and production can genuinely happen at the same time and be processed in parallel.
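The two-lock design can be seen from user code as well. Below is a minimal sketch (the class name `TwoLockDemo` is made up for illustration, not part of the JDK source above): a producer thread enqueues under `putLock` while a consumer thread dequeues under `takeLock`, and both make progress concurrently on the same queue.

```java
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical demo class (not part of the JDK source above): because put/offer
// and take use two different locks, a producer and a consumer can make
// progress on the same queue at the same time.
public class TwoLockDemo {

    // Runs one producer and one consumer concurrently against the same queue
    // and returns the sum of everything the consumer took (or -1 on interrupt).
    static int produceAndConsume(int n) {
        LinkedBlockingQueue<Integer> queue = new LinkedBlockingQueue<>();
        int[] sum = new int[1];
        Thread producer = new Thread(() -> {
            for (int i = 1; i <= n; i++) {
                queue.offer(i); // enqueue under putLock
            }
        });
        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < n; i++) {
                    sum[0] += queue.take(); // dequeue under takeLock
                }
            } catch (InterruptedException ignored) { }
        });
        producer.start();
        consumer.start();
        try {
            producer.join();
            consumer.join();
        } catch (InterruptedException e) {
            return -1;
        }
        return sum[0];
    }

    public static void main(String[] args) {
        System.out.println(produceAndConsume(100)); // 1 + 2 + ... + 100 = 5050
    }
}
```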

Next, let's look at LinkedBlockingQueue's constructors:

    /**
     * If the user does not explicitly specify a capacity,
     * the maximum int value is used by default.
     */
    public LinkedBlockingQueue() {
        this(Integer.MAX_VALUE);
    }

    /**
     * We can see that when the queue holds no elements, the head of the queue
     * equals the tail: both point to the same node, whose element is null.
     */
    public LinkedBlockingQueue(int capacity) {
        if (capacity <= 0) throw new IllegalArgumentException();
        this.capacity = capacity;
        last = head = new Node<E>(null);
    }

    /**
     * A LinkedBlockingQueue can also be initialized directly with the
     * elements of an existing collection; the maximum capacity of the queue
     * is still the maximum int value.
     */
    public LinkedBlockingQueue(Collection<? extends E> c) {
        this(Integer.MAX_VALUE);
        final ReentrantLock putLock = this.putLock;
        // acquire the lock; never contended, but necessary for visibility
        putLock.lock();
        try {
            // iterate over the collection, enqueue each element,
            // and then update the element count of the queue
            int n = 0;
            for (E e : c) {
                if (e == null) throw new NullPointerException();
                if (n == capacity) throw new IllegalStateException("Queue full");
                // refer to the enqueue analysis below
                enqueue(new Node<E>(e));
                ++n;
            }
            count.set(n);
        } finally {
            // release the lock
            putLock.unlock();
        }
    }

    /**
     * Honestly, this one-liner is not very readable.
     * It is equivalent to:
     *     last.next = node;
     *     last = node;
     * The pattern is simple: the new node becomes the old tail's next,
     * and the newly enqueued node then becomes the tail.
     */
    private void enqueue(Node<E> node) {
        // assert putLock.isHeldByCurrentThread();
        // assert last.next == null;
        last = last.next = node;
    }

Having analyzed LinkedBlockingQueue's core fields, let's look at some of its core operations, starting with how an element enters the queue:

    public void put(E e) throws InterruptedException {
        if (e == null) throw new NullPointerException();
        // Note: convention in all put/take/etc. is to preset local variables;
        // the next few lines do exactly that, copying putLock and count
        // into locals.
        int c = -1;
        Node<E> node = new Node(e);
        final ReentrantLock putLock = this.putLock;
        final AtomicInteger count = this.count;
        /*
         * Acquire the lock interruptibly: a thread blocked waiting for the
         * lock can be interrupted instead of waiting forever. This is one way
         * to avoid a situation where, after a deadlock is discovered, the
         * only remedy is restarting the application because the stuck threads
         * cannot be broken out of their wait.
         */
        putLock.lockInterruptibly();
        try {
            /*
             * While the queue is at maximum capacity, the thread waits until
             * a slot frees up. A while loop (rather than an if) guards
             * against spurious wakeups: if the thread wakes and the queue
             * size still equals capacity, it should go back to waiting.
             */
            while (count.get() == capacity) {
                notFull.await();
            }
            // link the element at the tail of the queue;
            // enqueue is analyzed above
            enqueue(node);
            // first fetch the old element count, then increment it by 1
            c = count.getAndIncrement();
            /*
             * Note: c + 1 is the total number of elements after the new one
             * was enqueued. If that total is still below the maximum
             * capacity, wake another thread waiting to enqueue so it can
             * insert its element. If the queue became full after this insert,
             * there is no point waking the other waiting producers, because
             * they would just go back to waiting after being woken.
             */
            if (c + 1 < capacity)
                notFull.signal();
        } finally {
            // release the lock
            putLock.unlock();
        }
        /*
         * c == 0 means the queue was empty before this insert, so dequeueing
         * threads may be waiting. Now that a new element has been added and
         * the queue is no longer empty, wake a thread that is waiting to get
         * an element.
         */
        if (c == 0)
            signalNotEmpty();
    }

    /**
     * Wakes a thread that is waiting to get an element,
     * telling it that the queue now contains something.
     */
    private void signalNotEmpty() {
        final ReentrantLock takeLock = this.takeLock;
        takeLock.lock();
        try {
            // wake a taking thread via notEmpty
            notEmpty.signal();
        } finally {
            takeLock.unlock();
        }
    }
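The blocking behavior of put can be observed directly. Below is a minimal sketch (the class name `PutDemo` is made up for illustration): a second put() on a full capacity-1 queue parks its thread in notFull.await() until take() frees a slot and the waiting producer is signalled.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Hypothetical demo class: fills a capacity-1 queue, then issues a second
// put() from another thread. That thread stays parked in notFull.await()
// until take() frees a slot and the producer is woken.
public class PutDemo {

    static String blockedThenUnblocked() {
        LinkedBlockingQueue<String> queue = new LinkedBlockingQueue<>(1);
        try {
            queue.put("first"); // queue is now full
            Thread producer = new Thread(() -> {
                try {
                    queue.put("second"); // blocks: count == capacity
                } catch (InterruptedException ignored) { }
            });
            producer.start();
            TimeUnit.MILLISECONDS.sleep(200);
            // the producer should still be parked inside put()
            boolean wasBlocked = producer.isAlive();
            String head = queue.take(); // frees a slot and wakes the producer
            producer.join();
            return wasBlocked + ":" + head + ":" + queue.take();
        } catch (InterruptedException e) {
            return "interrupted";
        }
    }

    public static void main(String[] args) {
        System.out.println(blockedThenUnblocked()); // true:first:second
    }
}
```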

After the put method, let's look at how offer handles enqueueing:

    /**
     * Besides the put method defined in the BlockingQueue interface (which
     * blocks when the queue is full, until the queue has room and the thread
     * can continue), BlockingQueue also defines an offer method that returns
     * a boolean: true if the element was enqueued, false if enqueueing
     * failed. Its implementation is essentially the same as put, with slight
     * differences.
     */
    public boolean offer(E e) {
        if (e == null) throw new NullPointerException();
        final AtomicInteger count = this.count;
        /*
         * When the queue is full, do not wait; return directly instead.
         * This makes the method non-blocking.
         */
        if (count.get() == capacity)
            return false;
        int c = -1;
        Node<E> node = new Node(e);
        final ReentrantLock putLock = this.putLock;
        putLock.lock();
        try {
            /*
             * The count must be checked again after acquiring the lock: when
             * the queue holds capacity - 1 elements, two threads may pass the
             * check above and race for the lock at the same time, but only
             * one succeeds. Once that thread has enqueued its element and
             * released the lock, the queue has reached capacity by the time
             * the second thread acquires it, so it must not enqueue.
             * The rest is the same as put and is not detailed again here.
             */
            if (count.get() < capacity) {
                enqueue(node);
                c = count.getAndIncrement();
                if (c + 1 < capacity)
                    notFull.signal();
            }
        } finally {
            putLock.unlock();
        }
        if (c == 0)
            signalNotEmpty();
        return c >= 0;
    }
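The non-blocking contract is easy to verify. A minimal sketch (the class name `OfferDemo` is made up for illustration): on a capacity-2 queue, the first two offers succeed and the third returns false immediately instead of waiting.

```java
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical demo class: offer() returns true while the queue has room
// and false, without blocking, once the queue is full.
public class OfferDemo {

    static boolean[] offerThree() {
        LinkedBlockingQueue<Integer> queue = new LinkedBlockingQueue<>(2);
        boolean a = queue.offer(1); // true
        boolean b = queue.offer(2); // true, queue is now full
        boolean c = queue.offer(3); // false, returns immediately
        return new boolean[] { a, b, c };
    }

    public static void main(String[] args) {
        for (boolean r : offerThree()) {
            System.out.println(r);
        }
    }
}
```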

BlockingQueue also defines a timed-wait insert: if space becomes available within a given amount of time, the element is enqueued and true is returned; if there is still no space after the specified time has elapsed, false is returned. The timed-wait operation is analyzed below:

    /**
     * timeout and unit together specify how long to wait,
     * with unit giving the time unit of the duration.
     */
    public boolean offer(E e, long timeout, TimeUnit unit)
        throws InterruptedException {
        if (e == null) throw new NullPointerException();
        // convert the specified duration to nanoseconds for processing
        long nanos = unit.toNanos(timeout);
        int c = -1;
        final ReentrantLock putLock = this.putLock;
        final AtomicInteger count = this.count;
        putLock.lockInterruptibly();
        try {
            while (count.get() == capacity) {
                // if no waiting time remains, return false directly
                if (nanos <= 0)
                    return false;
                /*
                 * The waiting is done through the Condition: the current
                 * thread releases the lock and waits until it is signalled by
                 * another thread, interrupted, or the deadline passes. The
                 * return value is an estimate of the waiting time remaining.
                 * Note: this loop is the canonical usage pattern for the
                 * Condition.awaitNanos() method; see the Condition API
                 * documentation. The rest is the same as put and is not
                 * detailed again here.
                 */
                nanos = notFull.awaitNanos(nanos);
            }
            enqueue(new Node<E>(e));
            c = count.getAndIncrement();
            if (c + 1 < capacity)
                notFull.signal();
        } finally {
            putLock.unlock();
        }
        if (c == 0)
            signalNotEmpty();
        return true;
    }
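The timed variant can be sketched from the caller's side as well (the class name `TimedOfferDemo` is made up for illustration): on a full queue, the call waits roughly the requested duration in awaitNanos() and then gives up and returns false.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Hypothetical demo class: on a full queue, the timed offer waits up to the
// given duration in notFull.awaitNanos() and then returns false.
public class TimedOfferDemo {

    static boolean timesOut() {
        LinkedBlockingQueue<Integer> queue = new LinkedBlockingQueue<>(1);
        try {
            queue.put(1); // queue is now full
            long start = System.nanoTime();
            boolean ok = queue.offer(2, 50, TimeUnit.MILLISECONDS);
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            // the call waited roughly the requested 50 ms before giving up
            return !ok && elapsedMs >= 40;
        } catch (InterruptedException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(timesOut()); // true
    }
}
```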

From the analysis above we should now have a clear picture of LinkedBlockingQueue's enqueue operations: they are all performed while holding the putLock. When the queue has reached its maximum capacity, the calling thread either blocks or returns false; if the queue still has room, a Node object is created and linked at the tail as LinkedBlockingQueue's new last element.

Having analyzed enqueueing, let's look at how LinkedBlockingQueue dequeues. Because the BlockingQueue methods are symmetric, only the take method is analyzed here; the remaining dequeue methods are implemented along the same lines:

    public E take() throws InterruptedException {
        E x;
        int c = -1;
        final AtomicInteger count = this.count;
        final ReentrantLock takeLock = this.takeLock;
        // acquire the lock via takeLock, with support for thread interruption
        takeLock.lockInterruptibly();
        try {
            // when the queue is empty, make the current thread wait
            while (count.get() == 0) {
                notEmpty.await();
            }
            // remove the head element from the queue
            x = dequeue();
            /*
             * Atomically decrement the element count by 1; as you can see,
             * count is modified concurrently by the threads that insert
             * elements and the threads that take them.
             */
            c = count.getAndDecrement();
            /*
             * When the queue size was still greater than 1 before this
             * element left, wake another thread that is waiting to take an
             * element, so it can also perform its removal.
             */
            if (c > 1)
                notEmpty.signal();
        } finally {
            // release the lock
            takeLock.unlock();
        }
        /*
         * c == capacity means the queue was full before this element was
         * taken; now that a slot has been freed, wake one of the threads
         * waiting to perform an insert, telling it the insert can proceed.
         */
        if (c == capacity)
            signalNotFull();
        return x;
    }

    /**
     * Removes the head element from the queue.
     * Its ultimate goal is to let the old head node be reclaimed by the GC,
     * making its next node the new head and setting the new head's item to
     * null, which preserves LinkedBlockingQueue's head invariant:
     * the head's element is always null.
     */
    private E dequeue() {
        // assert takeLock.isHeldByCurrentThread();
        // assert head.item == null;
        Node<E> h = head;
        Node<E> first = h.next;
        h.next = h; // help GC
        head = first;
        E x = first.item;
        first.item = null;
        return x;
    }
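Since dequeue() always unlinks from the head, take() is strictly FIFO. A minimal sketch (the class name `TakeDemo` is made up for illustration):

```java
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical demo class: dequeue() always unlinks from the head of the
// list, so take() returns elements in strict FIFO order.
public class TakeDemo {

    static String takeInOrder() {
        LinkedBlockingQueue<String> queue = new LinkedBlockingQueue<>();
        try {
            queue.put("a");
            queue.put("b");
            queue.put("c");
            return queue.take() + queue.take() + queue.take();
        } catch (InterruptedException e) {
            return "interrupted";
        }
    }

    public static void main(String[] args) {
        System.out.println(takeInOrder()); // abc
    }
}
```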

(Figure: overview of LinkedBlockingQueue's dequeue process; image omitted)

That concludes the source analysis of LinkedBlockingQueue. Next, let's compare LinkedBlockingQueue with ArrayBlockingQueue.

2. Comparison of LinkedBlockingQueue and ArrayBlockingQueue

ArrayBlockingQueue is a "bounded" blocking queue: it is backed by an array whose size is specified at creation time, and that fixed-size array is allocated in memory immediately, so its storage is always limited. LinkedBlockingQueue lets the user specify a maximum capacity, but if none is given the capacity defaults to Integer.MAX_VALUE, so it can be regarded as an "unbounded" blocking queue: its nodes are created dynamically, and a node can be reclaimed by the GC once it leaves the queue, which gives it flexible scalability. The boundedness of ArrayBlockingQueue, however, makes its performance easier to predict, while LinkedBlockingQueue, having no practical size limit, may cause a memory overflow when a very large number of tasks keeps accumulating in the queue.
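The contrast in bounds can be sketched as follows (the class name `BoundedDemo` is made up for illustration): the array-backed queue rejects inserts once its fixed capacity is reached, while a default LinkedBlockingQueue keeps accepting elements against an Integer.MAX_VALUE bound.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical demo class contrasting the two queues' bounds: the array-backed
// queue rejects inserts once its fixed capacity is reached, while the default
// LinkedBlockingQueue keeps accepting elements up to Integer.MAX_VALUE.
public class BoundedDemo {

    static boolean[] compareBounds() {
        ArrayBlockingQueue<Integer> bounded = new ArrayBlockingQueue<>(2);
        LinkedBlockingQueue<Integer> unbounded = new LinkedBlockingQueue<>();
        bounded.offer(1);
        bounded.offer(2);
        boolean boundedFull = !bounded.offer(3); // fixed capacity of 2 reached
        boolean linkedOk = true;
        for (int i = 0; i < 10_000; i++) {
            linkedOk &= unbounded.offer(i); // nodes are allocated on demand
        }
        // remainingCapacity reflects the Integer.MAX_VALUE default bound
        boolean hugeBound =
                unbounded.remainingCapacity() == Integer.MAX_VALUE - 10_000;
        return new boolean[] { boundedFull, linkedOk, hugeBound };
    }

    public static void main(String[] args) {
        for (boolean b : compareBounds()) {
            System.out.println(b);
        }
    }
}
```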

Second, ArrayBlockingQueue uses the same lock for enqueue and dequeue operations, so even on a multi-core CPU, reads and writes cannot proceed in parallel. LinkedBlockingQueue's take and insert operations use two different locks, so the two operations do not interfere with each other and can be performed in parallel. As a result, LinkedBlockingQueue's throughput is higher than ArrayBlockingQueue's.

3. Why LinkedBlockingQueue Was Chosen

    /**
     * The code below is how Executors creates a fixed-size thread pool,
     * which uses a LinkedBlockingQueue as its task queue.
     */
    public static ExecutorService newFixedThreadPool(int nThreads) {
        return new ThreadPoolExecutor(nThreads, nThreads,
                                      0L, TimeUnit.MILLISECONDS,
                                      new LinkedBlockingQueue<Runnable>());
    }

The reason the JDK chose LinkedBlockingQueue as the blocking queue here is its unboundedness. A fixed-size thread pool cannot grow its number of threads, so when work is heavy all of its threads are busy. With a bounded blocking queue, the queue would very likely fill up quickly, submissions would fail, and a RejectedExecutionException would be thrown. An unbounded queue, thanks to its good storage scalability, buffers such busy periods well: even with a very large number of tasks it can keep expanding dynamically, and once the tasks have been processed, the nodes in the queue are reclaimed by the GC. This is very flexible.
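The consequence of the unbounded queue can be sketched like this (the class name `FixedPoolDemo` is made up for illustration): with only a couple of worker threads, the surplus tasks simply wait in the pool's LinkedBlockingQueue instead of being rejected, so every submitted task eventually runs.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical demo class: with only nThreads workers, the surplus tasks sit
// in the pool's unbounded LinkedBlockingQueue instead of being rejected, so
// every submitted task eventually runs.
public class FixedPoolDemo {

    static int runTasks(int nThreads, int nTasks) {
        ExecutorService pool = Executors.newFixedThreadPool(nThreads);
        AtomicInteger done = new AtomicInteger();
        CountDownLatch latch = new CountDownLatch(nTasks);
        for (int i = 0; i < nTasks; i++) {
            // no RejectedExecutionException here, however many tasks we submit
            pool.execute(() -> {
                done.incrementAndGet();
                latch.countDown();
            });
        }
        try {
            latch.await(10, TimeUnit.SECONDS);
        } catch (InterruptedException ignored) { }
        pool.shutdown();
        return done.get();
    }

    public static void main(String[] args) {
        System.out.println(runTasks(2, 100)); // 100
    }
}
```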

That concludes the analysis of LinkedBlockingQueue. If you find anything written here that is wrong, please point it out (many thanks).

Author: code Agriculture One
Link: http://www.jianshu.com/p/cc2281b1a6bc
Source: Jianshu
Copyright belongs to the author. Commercial reprint please contact the author to obtain authorization, non-commercial reprint please indicate the source.
