Thread-safe collections in java.util.concurrent

Source: Internet
Author: User
Tags: Comparable

1. ArrayBlockingQueue

ArrayBlockingQueue is a thread-safe, bounded blocking queue backed by an array. It orders elements FIFO (first-in, first-out). It is a classic fixed-size "bounded buffer", into which producers insert elements and from which consumers extract them. Once created, its capacity cannot be increased. Attempting to put an element into a full queue blocks, and attempting to take an element from an empty queue blocks in the same way. The class supports an optional fairness policy for ordering waiting producer and consumer threads. By default this ordering is not guaranteed; a queue constructed with fairness set to true, however, grants threads access in FIFO order. Fairness generally reduces throughput, but it also reduces variability and avoids starvation.

    public ArrayBlockingQueue(int capacity, boolean fair) {
        if (capacity <= 0)
            throw new IllegalArgumentException();
        this.items = new Object[capacity];
        lock = new ReentrantLock(fair);
        notEmpty = lock.newCondition();
        notFull  = lock.newCondition();
    }

From the constructor, we can see that ArrayBlockingQueue is implemented on top of ReentrantLock and Condition.
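A minimal usage sketch of the bounded and fair behavior described above (the class name and values here are illustrative, not from the original article):

```java
import java.util.concurrent.ArrayBlockingQueue;

public class ArrayBlockingQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        // Bounded queue with capacity 2 and fairness enabled.
        ArrayBlockingQueue<String> queue = new ArrayBlockingQueue<>(2, true);

        queue.put("a");                    // succeeds immediately
        queue.put("b");                    // queue is now full
        boolean added = queue.offer("c");  // offer() fails instead of blocking
        System.out.println(added);         // false

        System.out.println(queue.take());  // "a" -- FIFO order
        System.out.println(queue.take());  // "b"
    }
}
```

Note that put() on a full queue would block; offer() is used here so the sketch terminates.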

 

2. LinkedBlockingDeque

LinkedBlockingDeque is a blocking deque implemented with a doubly linked list. It is worth noting that it also implements the Deque interface.

    /** Maximum number of items in the deque */
    private final int capacity;

    /**
     * Creates a {@code LinkedBlockingDeque} with a capacity of
     * {@link Integer#MAX_VALUE}.
     */
    public LinkedBlockingDeque() {
        this(Integer.MAX_VALUE);
    }

    public LinkedBlockingDeque(int capacity) {
        if (capacity <= 0) throw new IllegalArgumentException();
        this.capacity = capacity;
    }
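Since it is a deque, elements can be inserted and removed at both ends. A small sketch (values illustrative):

```java
import java.util.concurrent.LinkedBlockingDeque;

public class LinkedBlockingDequeDemo {
    public static void main(String[] args) throws InterruptedException {
        LinkedBlockingDeque<Integer> deque = new LinkedBlockingDeque<>(3);
        deque.putFirst(1);   // insert at the head
        deque.putLast(2);    // insert at the tail
        deque.putFirst(0);   // deque is now [0, 1, 2]
        System.out.println(deque.takeFirst()); // 0
        System.out.println(deque.takeLast());  // 2
    }
}
```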

 

3. LinkedBlockingQueue

LinkedBlockingQueue is an optionally-bounded blocking queue based on linked nodes, and it is also thread-safe. Elements are ordered FIFO (first-in, first-out): the head of the queue is the element that has been on the queue the longest, and the tail is the element that has been on the queue the shortest time.

    /**
     * Creates a {@code LinkedBlockingQueue} with a capacity of
     * {@link Integer#MAX_VALUE}.
     */
    public LinkedBlockingQueue() {
        this(Integer.MAX_VALUE);
    }

    public LinkedBlockingQueue(int capacity) {
        if (capacity <= 0) throw new IllegalArgumentException();
        this.capacity = capacity;
        last = head = new Node<E>(null);
    }

The optional capacity constructor argument serves as a way to prevent excessive queue expansion. If no capacity is specified, it is equal to Integer.MAX_VALUE. Linked nodes are dynamically created upon each insertion, unless this would bring the queue above capacity.

In addition, it does not accept null values:

    public void put(E e) throws InterruptedException {
        if (e == null) throw new NullPointerException();
        // Note: convention in all put/take/etc is to preset local var
        // holding count negative to indicate failure unless set.
        int c = -1;
        Node<E> node = new Node<E>(e);
        final ReentrantLock putLock = this.putLock;
        final AtomicInteger count = this.count;
        putLock.lockInterruptibly();
        try {
            while (count.get() == capacity) {
                notFull.await();
            }
            enqueue(node);
            c = count.getAndIncrement();
            if (c + 1 < capacity)
                notFull.signal();
        } finally {
            putLock.unlock();
        }
        if (c == 0)
            signalNotEmpty();
    }
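The blocking put/take pair makes this queue a natural hand-off channel. A minimal producer-consumer sketch (thread names and values are my own, not from the original):

```java
import java.util.concurrent.LinkedBlockingQueue;

public class ProducerConsumerDemo {
    public static void main(String[] args) throws InterruptedException {
        LinkedBlockingQueue<Integer> queue = new LinkedBlockingQueue<>(10);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) queue.put(i); // blocks if full
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        int sum = 0;
        for (int i = 0; i < 5; i++) sum += queue.take(); // blocks if empty
        producer.join();
        System.out.println(sum); // 10
    }
}
```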

 

4. PriorityBlockingQueue

PriorityBlockingQueue is an unbounded thread-safe blocking queue. It uses the same ordering rules as PriorityQueue and additionally provides blocking retrieval operations.

    public PriorityBlockingQueue(int initialCapacity) {
        this(initialCapacity, null);
    }

    public PriorityBlockingQueue(int initialCapacity,
                                 Comparator<? super E> comparator) {
        if (initialCapacity < 1)
            throw new IllegalArgumentException();
        this.lock = new ReentrantLock();
        this.notEmpty = lock.newCondition();
        this.comparator = comparator;
        this.queue = new Object[initialCapacity];
    }

The constructor shows a Comparator parameter. This is the key to determining element priority: when an element is compared with another, a negative return value from compare means the element has higher priority in the queue. You may also omit the Comparator when creating the PriorityBlockingQueue, in which case the stored elements themselves are required to implement Comparable.

    public boolean offer(E e) {
        if (e == null)
            throw new NullPointerException();
        final ReentrantLock lock = this.lock;
        lock.lock();
        int n, cap;
        Object[] array;
        while ((n = size) >= (cap = (array = queue).length))
            tryGrow(array, cap);
        try {
            Comparator<? super E> cmp = comparator;
            if (cmp == null)
                siftUpComparable(n, e, array);
            else
                siftUpUsingComparator(n, e, array, cmp);
            size = n + 1;
            notEmpty.signal();
        } finally {
            lock.unlock();
        }
        return true;
    }

    private static <T> void siftUpComparable(int k, T x, Object[] array) {
        Comparable<? super T> key = (Comparable<? super T>) x;
        while (k > 0) {
            int parent = (k - 1) >>> 1;
            Object e = array[parent];
            if (key.compareTo((T) e) >= 0)
                break;
            array[k] = e;
            k = parent;
        }
        array[k] = key;
    }

Every offered element goes through a siftUpComparable (or siftUpUsingComparator) operation, i.e. it is sifted into place. If no comparator was supplied at construction, natural ordering is used; otherwise the comparator's rule is used to sift the element up the binary heap, so the head of the queue is the least element according to that ordering.
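A brief sketch of the comparator-driven ordering (the reverse-order comparator and values are illustrative):

```java
import java.util.Comparator;
import java.util.concurrent.PriorityBlockingQueue;

public class PriorityBlockingQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        // With reverseOrder(), compare() returns a negative number for the
        // larger element, so the largest element sits at the head.
        PriorityBlockingQueue<Integer> pq =
            new PriorityBlockingQueue<>(11, Comparator.reverseOrder());
        pq.offer(3);
        pq.offer(1);
        pq.offer(2);
        System.out.println(pq.take()); // 3
        System.out.println(pq.take()); // 2
        System.out.println(pq.take()); // 1
    }
}
```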

 

5. ConcurrentHashMap, ConcurrentLinkedQueue, ConcurrentLinkedDeque

ConcurrentHashMap is a thread-safe HashMap implementation that supports high concurrency and high throughput. Its implementation principle (up to Java 7) is lock striping: data is managed in segments, and each Segment has an independent lock.

    /**
     * The segments, each of which is a specialized hash table
     */
    final Segment<K,V>[] segments;

The following code shows a node in the hash chain:

    static final class HashEntry<K,V> {
        final K key;
        final int hash;
        volatile V value;
        final HashEntry<K,V> next;
    }

We can see that key, hash, and next are final, which means a node can only be inserted at the head of its chain; lookups traverse the linked list from the head to find the element with the matching key. To delete a node, all nodes preceding it in the chain must be copied, with the last copied node pointing to the node after the one being deleted. Note that value is declared volatile, so reads are guaranteed memory visibility without locking. Cross-segment operations (such as contains and size), however, may still acquire the locks of all segments, so they should be avoided where possible.
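Whatever the internal layout, the practical payoff is atomic per-key updates without external locking. A usage sketch (it uses the Java 8 merge API; names and counts are illustrative):

```java
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentHashMapDemo {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                // merge() is atomic, so concurrent increments are not lost
                counts.merge("hits", 1, Integer::sum);
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counts.get("hits")); // 2000
    }
}
```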

ConcurrentLinkedQueue and ConcurrentLinkedDeque are implemented with a singly linked list and a doubly linked list respectively. Unlike the blocking queues above, they are non-blocking: they rely on lock-free CAS operations rather than locks.

7. ConcurrentSkipListMap

ConcurrentSkipListMap provides a sorted map for thread-safe concurrent access. Internally it is an implementation of the SkipList (skip list) structure; in theory it can perform search, insert, and delete operations in O(log n) time. In single-threaded code, TreeMap should be preferred. For programs with relatively low concurrency, wrapping a TreeMap with Collections.synchronizedSortedMap can also give good efficiency. For highly concurrent programs, ConcurrentSkipListMap provides higher concurrency. Like TreeMap, ConcurrentSkipListMap keeps its keys sorted (see http://hi.baidu.com/yao1111yao/item/0f3008163c4b82c938cb306d).

 

In a performance test with 4 threads and 16 thousand entries, ConcurrentHashMap was about 4 times faster than ConcurrentSkipListMap.
However, ConcurrentSkipListMap has several advantages that ConcurrentHashMap cannot match:
1. The keys of ConcurrentSkipListMap are ordered.
2. ConcurrentSkipListMap supports higher concurrency. Its access time is O(log n) and is almost independent of the number of threads; in other words, for a given amount of data, the more concurrent threads there are, the more ConcurrentSkipListMap shows its advantage (reference: http://wenku.baidu.com/link? Url = response).

8. ConcurrentSkipListSet

ConcurrentSkipListSet is a thread-safe ordered set suitable for high-concurrency scenarios. ConcurrentSkipListSet and TreeSet are both ordered sets, but they differ in two ways. First, their thread-safety differs: TreeSet is not thread-safe, while ConcurrentSkipListSet is. Second, ConcurrentSkipListSet is implemented on top of ConcurrentSkipListMap, while TreeSet is implemented on top of TreeMap.
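A short sketch of the ordering guarantee (values illustrative):

```java
import java.util.concurrent.ConcurrentSkipListSet;

public class ConcurrentSkipListSetDemo {
    public static void main(String[] args) {
        ConcurrentSkipListSet<Integer> set = new ConcurrentSkipListSet<>();
        set.add(30);
        set.add(10);
        set.add(20);
        // Elements are kept sorted regardless of insertion order.
        System.out.println(set.first()); // 10
        System.out.println(set.last());  // 30
        System.out.println(set);         // [10, 20, 30]
    }
}
```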

 

9. CopyOnWriteArrayList and CopyOnWriteArraySet

A traditional List will throw java.util.ConcurrentModificationException when read and written by multiple threads at the same time. CopyOnWriteArrayList solves this problem with the CopyOnWrite (copy-on-write) technique, which usually incurs a large overhead. However, when traversal operations vastly outnumber mutations, this approach can be more efficient than the alternatives.

Copy-on-write in the add method:

    /**
     * Appends the specified element to the end of this list.
     *
     * @param e element to be appended to this list
     * @return {@code true} (as specified by {@link Collection#add})
     */
    public boolean add(E e) {
        final ReentrantLock lock = this.lock;
        lock.lock();
        try {
            Object[] elements = getArray();
            int len = elements.length;
            Object[] newElements = Arrays.copyOf(elements, len + 1);
            newElements[len] = e;
            setArray(newElements);
            return true;
        } finally {
            lock.unlock();
        }
    }

We can see that a lock is taken during the write; without it, concurrent writers would each create their own copy of the array and updates could be lost. The write first creates a copy of the array with Arrays.copyOf, then writes the new element into that copy, and finally publishes the copy back to the CopyOnWriteArrayList via setArray.

    /** The array, accessed only via getArray/setArray. */
    private transient volatile Object[] array;

As for reads, the backing array is declared volatile, which solves the memory-visibility problem without locking. CopyOnWriteArraySet is much simpler: it just wraps a CopyOnWriteArrayList and, on add/addAll, checks whether the element already exists; if it does, it is not added to the set.

Finally, we recommend using the add operation as little as possible, because every insertion copies the whole array and the old copy eventually has to be garbage-collected; prefer batch insertion (addAll). These containers are not recommended when insertions are frequent.
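The snapshot semantics can be seen directly: an iterator reads the array that existed when it was created, so mutating the list during iteration is safe but invisible to the running loop (names and values here are illustrative):

```java
import java.util.concurrent.CopyOnWriteArrayList;

public class CopyOnWriteDemo {
    public static void main(String[] args) {
        CopyOnWriteArrayList<String> list = new CopyOnWriteArrayList<>();
        list.add("a");
        list.add("b");

        // The loop iterates over a snapshot of ["a", "b"]; adds made here
        // do not throw ConcurrentModificationException and are not visited.
        for (String s : list) {
            list.add(s.toUpperCase());
        }
        System.out.println(list); // [a, b, A, B]
    }
}
```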

 

10. DelayQueue

DelayQueue is an unbounded blocking queue from which elements can be taken only when their delay has expired. The head of the queue is the Delayed element whose delay expired furthest in the past. Based on this property, DelayQueue can be used to implement caches and real-time scheduling systems.

DelayQueue is a BlockingQueue whose type parameter must implement Delayed. Delayed extends the Comparable interface, and the basis of comparison is the delay value; an implementation's getDelay should be computed from a fixed (final) deadline. Internally, DelayQueue is implemented with a PriorityQueue, so we can say DelayQueue = BlockingQueue + PriorityQueue + Delayed.

    public class DelayQueue<E extends Delayed> extends AbstractQueue<E>
        implements BlockingQueue<E> {

        private final transient ReentrantLock lock = new ReentrantLock();
        private final PriorityQueue<E> q = new PriorityQueue<E>();

Looking at the implementation of the take method, we can see that whether an element can be taken is determined by its remaining delay:

    public E take() throws InterruptedException {
        final ReentrantLock lock = this.lock;
        lock.lockInterruptibly();
        try {
            for (;;) {
                E first = q.peek();
                if (first == null)
                    available.await();
                else {
                    long delay = first.getDelay(NANOSECONDS);
                    if (delay <= 0)
                        return q.poll();
                    first = null; // don't retain ref while waiting
                    if (leader != null)
                        available.await();
                    else {
                        Thread thisThread = Thread.currentThread();
                        leader = thisThread;
                        try {
                            available.awaitNanos(delay);
                        } finally {
                            if (leader == thisThread)
                                leader = null;
                        }
                    }
                }
            }
        } finally {
            if (leader == null && q.peek() != null)
                available.signal();
            lock.unlock();
        }
    }
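To use the queue, one implements Delayed. The following sketch defines a hypothetical Task class (name and fields are my own) with a fixed trigger time, as suggested above:

```java
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

public class DelayQueueDemo {
    // A task whose delay expires at a fixed (final) trigger time.
    static class Task implements Delayed {
        final String name;
        final long triggerAt; // absolute time in milliseconds

        Task(String name, long delayMs) {
            this.name = name;
            this.triggerAt = System.currentTimeMillis() + delayMs;
        }

        @Override
        public long getDelay(TimeUnit unit) {
            return unit.convert(triggerAt - System.currentTimeMillis(),
                                TimeUnit.MILLISECONDS);
        }

        @Override
        public int compareTo(Delayed other) {
            return Long.compare(getDelay(TimeUnit.MILLISECONDS),
                                other.getDelay(TimeUnit.MILLISECONDS));
        }
    }

    public static void main(String[] args) throws InterruptedException {
        DelayQueue<Task> queue = new DelayQueue<>();
        queue.add(new Task("later", 200));
        queue.add(new Task("soon", 50));
        // take() blocks until the shortest remaining delay expires.
        System.out.println(queue.take().name); // "soon"
        System.out.println(queue.take().name); // "later"
    }
}
```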

 

11. LinkedTransferQueue

LinkedTransferQueue = ConcurrentLinkedQueue + SynchronousQueue (in "fair" mode) + LinkedBlockingQueue. It implements the important TransferQueue interface, which adds the following methods:

1. transfer(E e)
If a consumer thread is already waiting to receive an element, e is handed to it immediately. Otherwise, e is inserted at the tail of the queue and the call blocks until a consumer thread takes the element.
2. tryTransfer(E e)
If a consumer thread is waiting to receive (via take() or timed poll()), this method transfers element e to it immediately.
If not, false is returned and the element is not enqueued. This is a non-blocking operation.
3. tryTransfer(E e, long timeout, TimeUnit unit)
If a consumer thread is waiting to receive, e is transferred to it immediately; otherwise e is inserted at the tail of the queue and the call waits for a consumer thread to take it.
If e is not taken by a consumer within the specified time, false is returned and the element is removed.
4. hasWaitingConsumer()
Determines whether any consumer thread is waiting.
5. getWaitingConsumerCount()
Returns the number of consumer threads waiting to receive elements.
