In the smallest possible space, this post skims over the features, implementations, and performance of all the collection and concurrent collection classes, for everyone who is supposedly "proficient in Java" but not actually that confident about it.
It is updated from time to time; see the original blog for the latest version.

List
ArrayList is implemented as an array. It is space-efficient, but an array has a capacity limit: when the limit is exceeded, capacity grows by 50% and the contents are copied to the new array with System.arraycopy(), so it is better to estimate the size in advance. By default, an array of size 10 is created when the first element is inserted.
Accessing elements by index, get(i)/set(i, e), is fast; that is the fundamental advantage of an array.
Appending an element at the end of the array, add(e), is also fast. But inserting or deleting by index, add(i, e), remove(i), remove(e), uses System.arraycopy() to shift all affected elements, and performance drops; that is the fundamental disadvantage.
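A minimal sketch of the trade-off above (the class name and values are my own, for illustration only):

```java
import java.util.ArrayList;
import java.util.List;

public class ArrayListDemo {
    // Appending at the tail is amortized O(1); inserting or deleting at an
    // index shifts every later element via System.arraycopy under the hood.
    public static List<Integer> build() {
        List<Integer> list = new ArrayList<>(4); // pre-sized to avoid growth copies
        list.add(1);          // fast: append at the end
        list.add(2);
        list.add(0, 0);       // slow path: shifts 1 and 2 one slot to the right
        list.remove(1);       // slow path: removes index 1, shifts later elements left
        return list;
    }

    public static void main(String[] args) {
        System.out.println(build()); // [0, 2]
    }
}
```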
LinkedList is implemented as a doubly linked list. A linked list has no capacity limit, but the doubly linked nodes themselves take more space, and every operation needs extra pointer manipulation.
Access by index, get(i)/set(i, e), tragically has to traverse the list to move a pointer into position (starting from the tail if i is greater than half the size).
Inserting or deleting an element only requires updating the pointers of the neighboring nodes, but you still have to traverse part of the list to move a pointer to the position given by the index. Only operations on the two ends of the list, add(), addFirst(), removeLast(), or remove() on an iterator(), can skip the pointer movement.
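The traversal-free operations can be sketched like this (names are illustrative, not from the original):

```java
import java.util.Iterator;
import java.util.LinkedList;

public class LinkedListDemo {
    // Only the end operations and iterator-based removal avoid walking
    // the list to reposition a pointer first.
    public static LinkedList<String> build() {
        LinkedList<String> list = new LinkedList<>();
        list.add("b");          // append at the tail: O(1)
        list.addFirst("a");     // prepend at the head: O(1)
        list.add("x");
        list.removeLast();      // drop the tail ("x"): O(1)
        Iterator<String> it = list.iterator();
        while (it.hasNext()) {
            if (it.next().equals("a")) {
                it.remove();    // removal at the iterator's position, no extra traversal
            }
        }
        return list;
    }

    public static void main(String[] args) {
        System.out.println(build()); // [b]
    }
}
```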
CopyOnWriteArrayList is a concurrency-optimized ArrayList. It uses a copy-on-write strategy: modifications are applied to a copied snapshot, and the internal array pointer is then switched to the new array.
Because changes to the snapshot are invisible to readers, only writes need a lock and reads need none; add the cost of copying, and it is typically suited to read-mostly scenarios. If updates are frequent, or the array is large, Collections.synchronizedList(list), which guards all operations with the same lock, is the better way to get thread safety.
It adds an addIfAbsent(e) method, which traverses the whole array to check whether the element already exists, so its performance is nothing to write home about.
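A quick sketch of addIfAbsent(e) in use (the class name is mine):

```java
import java.util.concurrent.CopyOnWriteArrayList;

public class CowListDemo {
    // Every mutation copies the backing array; readers keep iterating
    // over the old snapshot without taking any lock.
    public static int demo() {
        CopyOnWriteArrayList<String> list = new CopyOnWriteArrayList<>();
        list.add("a");
        list.addIfAbsent("a");   // linear scan finds "a", nothing added
        list.addIfAbsent("b");   // not present: copy the array and append
        return list.size();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // 2
    }
}
```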
Whatever the implementation, the methods that search by value, contains(e), indexOf(e), remove(e), must traverse and compare all elements, so don't expect too much of their performance either.
There is no SortedList ordered by element value, and there is no lock-free algorithm among the thread-safe lists; when you need one, use the equivalent classes in Set and Queue, at the cost of losing some List-specific methods.

Map
HashMap is implemented as an array of Entry objects acting as hash buckets; the key's hash value modulo the size of the bucket array gives the array index.
When inserting an element, if two keys fall into the same bucket (for example, hash values 1 and 17 modulo 16 both give the first hash bucket), the Entry objects form a singly linked list via a next field: the newly inserted entry's next points to the entry currently in the bucket.
When looking up a key whose hash value is 17, navigate to the first hash bucket, then walk through all the entries in the bucket and compare keys one by one.
When the number of entries reaches 75% of the number of buckets (many articles say when 75% of the buckets are used, but that is not what the code does), the bucket array is doubled and all existing entries are redistributed, so here too a size estimate up front helps.
Bit operations (hash & (arrayLength - 1)) are faster than modulo, so the array size is always a power of two; an initial value like 17 is rounded up to 32. The default initial size, allocated when the first element is put, is 16.
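The index arithmetic can be checked in isolation; the mask trick only equals the modulo when the length is a power of two, which is exactly why HashMap rounds capacities up:

```java
public class BucketIndexDemo {
    // For a power-of-two length n, (hash & (n - 1)) == hash % n,
    // computed with a single AND instead of a division.
    public static int index(int hash, int length) {
        return hash & (length - 1);
    }

    public static void main(String[] args) {
        System.out.println(index(1, 16));   // 1
        System.out.println(index(17, 16));  // 1, collides with hash value 1
    }
}
```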
iterator() walks the hash-bucket array directly, so the iteration order looks random.
JDK 8 adds a threshold, 8 by default: when a bucket holds more entries than the threshold, it stores them in a red-black tree instead of a singly linked list, to speed up key lookup.
LinkedHashMap extends HashMap with a doubly linked list, and is reputedly the most memory-hungry data structure. It supports iterator() traversal in the insertion order of entries (not update order, although if the accessOrder property is set to true, every read and write access counts too).
The implementation adds before/after pointers to each Entry; an insertion links the new entry in just before the header entry, i.e. at the tail of the list. If ordering by access, an access also stitches the entry's before/after neighbors together to remove itself from the list, then re-links itself at the tail.
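The accessOrder behavior is what makes the classic LRU-cache idiom work; a minimal sketch (class and method names are mine):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruDemo {
    // accessOrder = true (third constructor argument) makes every get()/put()
    // move the touched entry to the tail of the internal doubly linked list,
    // so the head is always the least recently used entry.
    public static LinkedHashMap<String, Integer> lru(int capacity) {
        return new LinkedHashMap<String, Integer>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Integer> eldest) {
                return size() > capacity; // evict the head entry when over capacity
            }
        };
    }

    public static String exercise() {
        Map<String, Integer> cache = lru(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");     // "a" becomes most recently used
        cache.put("c", 3);  // evicts "b", the least recently used
        return cache.keySet().toString();
    }

    public static void main(String[] args) {
        System.out.println(exercise()); // [a, c]
    }
}
```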
TreeMap is implemented as a red-black tree; for lack of space, the tree itself is left to introductory tutorials. iterator() traverses in key order: the ascending order of keys that implement Comparable, or whatever order a supplied Comparator dictates. As you would expect, the cost of inserting and deleting elements in a tree is necessarily higher than in a HashMap.
It supports the SortedMap interface, e.g. firstKey(), lastKey() for the smallest and largest keys, or subMap(fromKey, toKey) and tailMap(fromKey) to cut out a slice of the map.
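The sorted views in action (a small sketch, names my own):

```java
import java.util.TreeMap;

public class TreeMapDemo {
    // Keys are kept sorted by the red-black tree, so first/last lookups
    // and range views are cheap regardless of insertion order.
    public static String demo() {
        TreeMap<Integer, String> map = new TreeMap<>();
        map.put(3, "c");
        map.put(1, "a");
        map.put(2, "b");
        return map.firstKey() + "," + map.lastKey() + ","
             + map.tailMap(2).keySet();   // keys >= 2, in ascending order
    }

    public static void main(String[] args) {
        System.out.println(demo()); // 1,3,[2, 3]
    }
}
```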
ConcurrentHashMap is the concurrency-optimized HashMap, with 16 write locks by default (more can be configured), which effectively spreads out the probability of blocking, and with no read lock at all.
The data structure is an array of Segments; inside each Segment is a hash-bucket array, and each Segment has its own lock. A key is first mapped to its Segment, then to a hash bucket inside it.
It supports the ConcurrentMap interface, e.g. putIfAbsent(key, value), its converse remove(key, value), and replace(key, oldValue, newValue), which implements CAS.
There is no read lock because put/remove are atomic actions (for example, a put is a single assignment of an array element/Entry pointer), so a read can never observe the intermediate state of an update.
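The ConcurrentMap methods above let callers do check-then-act updates without any external lock; a sketch (class name is mine):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class CasMapDemo {
    // putIfAbsent and replace are single atomic actions: no window exists
    // between the check and the update for another thread to slip into.
    public static String demo() {
        ConcurrentMap<String, Integer> map = new ConcurrentHashMap<>();
        map.putIfAbsent("hits", 1);                    // absent: inserted
        map.putIfAbsent("hits", 99);                   // present: ignored
        boolean swapped = map.replace("hits", 1, 2);   // CAS 1 -> 2, succeeds
        boolean missed  = map.replace("hits", 1, 3);   // value is now 2, fails
        return map.get("hits") + "," + swapped + "," + missed;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // 2,true,false
    }
}
```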
JDK 6 added ConcurrentSkipListMap, a concurrency-optimized SortedMap implemented as a skip list. The skip list is a simplified substitute for the red-black tree and a popular ordered-collection algorithm, again left to introductory tutorials for lack of space. The concurrent package chose it because it admits a CAS-based lock-free algorithm, while the red-black tree has no good lock-free algorithm.
Quite unusually, its size() cannot be called casually: it has to traverse everything to count.
About null: HashMap and LinkedHashMap allow it anywhere; TreeMap keys cannot be null when no Comparator is set; in ConcurrentHashMap, values cannot be null in JDK 7 (what is that about?), and neither keys nor values can be null in JDK 8; in ConcurrentSkipListMap, keys and values cannot be null in any JDK version.

Set
Sets are almost all implemented internally with a Map, because the keySet of a Map is a Set and the values are a dummy, all entries sharing the same object. Each Set's characteristics are therefore inherited from its internal Map implementation.
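Because a Set is just the key view of a Map, a thread-safe Set can be built directly on ConcurrentHashMap, which is the Guava approach noted below (a sketch, names my own):

```java
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentSetDemo {
    // The Set view stores each element as a key of the backing
    // ConcurrentHashMap, with a shared dummy value, so it inherits
    // the map's thread-safety for free.
    public static Set<String> newConcurrentSet() {
        return Collections.newSetFromMap(new ConcurrentHashMap<String, Boolean>());
    }

    public static void main(String[] args) {
        Set<String> set = newConcurrentSet();
        set.add("a");
        set.add("a");                   // duplicate, ignored
        System.out.println(set.size()); // 1
    }
}
```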
Addendum: you would expect a ConcurrentHashSet, a simple wrapper around an internal ConcurrentHashMap, but the JDK does not provide one. Jetty rolled its own; Guava simply implements it with java.util.Collections.newSetFromMap(new ConcurrentHashMap()).

Queue
A queue is a list that enters at one end and exits at the other, so it can likewise be implemented with an array or a linked list.
– Normal Queue –
Yes, LinkedList, implemented as a doubly linked list, is both a List and a Queue. It is the only queue that allows null elements.
ArrayDeque is a double-ended queue implemented as a circular array. Its size is a power of two, 16 by default.
A plain array can only add elements quickly at the tail; to support FIFO, i.e. also quickly removing elements from the head, a circular array is needed. It keeps two indices, head and tail. When an element is popped, the head index advances. When an added element would run past the end of the array, and the head index is greater than 0 (meaning elements have been popped from the head, leaving free slots), the element wraps around to array[0] and the tail index is set so the next element goes into array[1]. If the tail catches up with the head, every slot in the array is in use, and the array is doubled.
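The wraparound is internal, but the FIFO behavior it enables is easy to show (a sketch, names my own):

```java
import java.util.ArrayDeque;

public class ArrayDequeDemo {
    // FIFO use of the circular array: offer() advances the tail index,
    // poll() advances the head index; neither ever shifts elements.
    public static String demo() {
        ArrayDeque<Integer> deque = new ArrayDeque<>(4);
        for (int i = 1; i <= 3; i++) {
            deque.offer(i);        // enqueue at the tail
        }
        deque.poll();              // dequeue 1 from the head
        deque.offer(4);            // the tail may wrap past the array end internally
        return deque.toString();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // [2, 3, 4]
    }
}
```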
PriorityQueue is implemented with a binary heap (see an introductory tutorial). It is no longer FIFO: elements leave the queue according to the Comparable they implement, or the result of a supplied Comparator; the smaller the value, the higher the priority and the sooner it is dequeued. Note, however, that iterator() does not return elements in sorted order.
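The dequeue order versus the raw heap order (a sketch, names my own):

```java
import java.util.PriorityQueue;

public class PriorityQueueDemo {
    // poll() always returns the smallest element of the binary heap;
    // iterator()/toString() expose the raw heap layout instead.
    public static String drain() {
        PriorityQueue<Integer> queue = new PriorityQueue<>();
        queue.offer(3);
        queue.offer(1);
        queue.offer(2);
        StringBuilder out = new StringBuilder();
        while (!queue.isEmpty()) {
            out.append(queue.poll());   // 1, then 2, then 3
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(drain()); // 123
    }
}
```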
– Thread-Safe queues –
ConcurrentLinkedQueue is an unbounded concurrency-optimized queue, based on a linked list, implementing a lock-free algorithm that relies on CAS.
Its structure is a singly linked list with head/tail pointers. Because enqueueing must both set the next pointer of the old tail element and move tail to point at the newly enqueued element, and the two CAS actions cannot be made atomic together, a special algorithm is needed; for lack of space, see an introductory tutorial.
PriorityBlockingQueue is an unbounded concurrency-optimized PriorityQueue, also based on a binary heap, using a single shared lock. Although it implements the BlockingQueue interface, it does not really have the blocking character of a bounded queue: when space runs short, it simply grows automatically.
DelayQueue contains a PriorityQueue internally and is likewise unbounded. Its elements must implement the Delayed interface, returning on every call how long remains until they trigger; a value less than or equal to 0 means triggered.
poll() looks at the head element with peek() and checks whether its trigger time has arrived. ScheduledThreadPoolExecutor uses a similar structure.
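A minimal Delayed element might look like this (class and field names are mine, and the 50 ms delay is arbitrary):

```java
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

public class DelayQueueDemo {
    // getDelay() reports the time left until the trigger instant;
    // compareTo() orders the internal PriorityQueue by that time.
    static class Task implements Delayed {
        final String name;
        final long triggerAtMillis;
        Task(String name, long delayMillis) {
            this.name = name;
            this.triggerAtMillis = System.currentTimeMillis() + delayMillis;
        }
        @Override public long getDelay(TimeUnit unit) {
            return unit.convert(triggerAtMillis - System.currentTimeMillis(),
                                TimeUnit.MILLISECONDS);
        }
        @Override public int compareTo(Delayed other) {
            return Long.compare(getDelay(TimeUnit.MILLISECONDS),
                                other.getDelay(TimeUnit.MILLISECONDS));
        }
    }

    public static String demo() {
        try {
            DelayQueue<Task> queue = new DelayQueue<>();
            queue.put(new Task("later", 50));
            queue.put(new Task("now", 0));
            Task first = queue.take();   // "now" is already triggered
            Task second = queue.take();  // blocks ~50 ms until "later" triggers
            return first.name + "," + second.name;
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(demo()); // now,later
    }
}
```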
– Thread-Safe blocking queue –
A BlockingQueue's length is bounded, which ensures that producer and consumer cannot drift too far apart and exhaust memory. The length cannot be changed once set. When enqueueing into a full queue, or dequeueing from an empty one, the various methods behave as the following table shows:
|         | May throw an exception | Returns a special value | May block waiting | Can set a waiting time  |
|---------|------------------------|-------------------------|-------------------|-------------------------|
| Enqueue | add(e)                 | offer(e)                | put(e)            | offer(e, timeout, unit) |
| Dequeue | remove()               | poll()                  | take()            | poll(timeout, unit)     |
| Examine | element()              | peek()                  | n/a               | n/a                     |
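The enqueue column differences are easy to observe on a full queue (a sketch, names my own; the 10 ms timeout is arbitrary):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class BlockingQueueDemo {
    // On a full queue: add() throws, offer() returns false immediately,
    // put() would block forever, timed offer() waits and then gives up.
    public static String demo() {
        try {
            BlockingQueue<String> queue = new ArrayBlockingQueue<>(1); // fixed length 1
            boolean accepted = queue.offer("a");                   // true, queue is now full
            boolean rejected = queue.offer("b");                   // false, immediately
            boolean timedOut = queue.offer("c", 10, TimeUnit.MILLISECONDS); // false after waiting
            return accepted + "," + rejected + "," + timedOut + "," + queue.poll();
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(demo()); // true,false,false,a
    }
}
```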
ArrayBlockingQueue is a fixed-length concurrency-optimized BlockingQueue, based on a circular array. It has a single shared lock, with two Conditions, notFull and notEmpty, managing the blocked state when the queue is full or empty.
LinkedBlockingQueue is a concurrency-optimized BlockingQueue whose length can be chosen; it is based on a linked list, so the length can even be set to Integer.MAX_VALUE. Exploiting the nature of the linked list, it separates takeLock and putLock into two locks, while still using notEmpty and notFull Conditions to manage the blocked state when the queue is full or empty.
JDK 7 has LinkedTransferQueue: its transfer(e) method guarantees that the element the producer puts in has been taken away by a consumer before the call returns. It does this better than SynchronousQueue, and is worth studying when you have time.