ConcurrentHashMap Principle Analysis


Collections are among the most commonly used data structures in programming, and concurrent programming almost always leans on some advanced collection: two threads may need to share a work queue (a Queue used as a critical resource), or a copy of an external file may be cached in a HashMap. This article focuses on ConcurrentHashMap, one of the three families of concurrent collections introduced in JDK 1.5 (the Concurrent* classes, the CopyOnWrite* classes, and the concurrent Queues), so that we understand it in a detailed, principled way and can benefit from it in real project development.

From an analysis of Hashtable we know that its synchronized methods lock the whole hash table: every operation locks the entire table and gives one thread exclusive access. ConcurrentHashMap instead allows multiple modification operations to proceed concurrently; the key is its lock-striping (lock separation) technique. It uses multiple locks to control modifications to different parts of the hash table. Internally, ConcurrentHashMap uses segments (Segment) to represent these different parts; each segment is in effect a small hash table with its own lock. As long as multiple modification operations occur on different segments, they can run concurrently.
Some methods need to span segments, such as size() and containsValue(); they may need to lock the entire table rather than a single segment, which requires acquiring the locks of all segments in order and releasing them all once the operation completes. "In order" matters here: otherwise deadlock becomes very likely. Inside ConcurrentHashMap the segments array is declared final and its member fields are effectively immutable, but declaring the array final does not make the array elements final; that guarantee has to come from the implementation. Because the order in which the segment locks are acquired is fixed, no deadlock can occur.
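To see why a fixed acquisition order prevents deadlock, here is a minimal sketch (not the JDK code) of a cross-segment operation that takes every segment lock in ascending index order and releases them afterwards; the class, field, and method names are illustrative only.

import java.util.concurrent.locks.ReentrantLock;

class StripedTable {
    private final ReentrantLock[] segmentLocks;
    private final int[] segmentCounts;   // stand-in for each segment's data

    StripedTable(int segments) {
        segmentLocks = new ReentrantLock[segments];
        segmentCounts = new int[segments];
        for (int i = 0; i < segments; i++)
            segmentLocks[i] = new ReentrantLock();
    }

    // Hypothetical cross-segment operation: acquire every segment lock in
    // ascending index order, do the work, then release them all.
    int sizeAcrossAllSegments() {
        for (ReentrantLock lock : segmentLocks)
            lock.lock();                 // always index 0, 1, 2, ...: fixed order
        try {
            int total = 0;
            for (int c : segmentCounts)
                total += c;
            return total;
        } finally {
            for (ReentrantLock lock : segmentLocks)
                lock.unlock();
        }
    }
}

Because every thread that needs more than one segment lock acquires them in the same index order, two such threads can never each hold a lock the other is waiting for.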

1. Structural Analysis

The main difference between ConcurrentHashMap and Hashtable is the granularity of locking and how the lock is applied. ConcurrentHashMap can be loosely understood as breaking one big hash table into several smaller ones, each with its own lock, forming a lock-striping scheme, as the structure diagram shows:

Hashtable's implementation, by contrast, locks the entire hash table for every operation:

2. Application Scenarios

When a large table of data needs to be shared by many threads, consider whether it can be split into multiple partitions so that a single big lock is avoided, and use a hash (or similar) function to route each operation to the right partition.
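As a concrete illustration of the caching scenario, the following hypothetical example shares one ConcurrentHashMap across two threads as a file-content cache without any external locking; the file name and the loadFile helper are made up for the example.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class SharedCacheDemo {
    // One map shared by all threads; reads never block each other.
    private static final ConcurrentMap<String, String> CACHE =
            new ConcurrentHashMap<String, String>();

    public static void main(String[] args) throws InterruptedException {
        Runnable worker = new Runnable() {
            public void run() {
                String content = CACHE.get("config.xml");
                if (content == null) {
                    // May race with another thread, but the map itself stays consistent.
                    CACHE.put("config.xml", loadFile("config.xml"));
                    content = CACHE.get("config.xml");
                }
                System.out.println(Thread.currentThread().getName() + ": " + content);
            }
        };
        Thread t1 = new Thread(worker);
        Thread t2 = new Thread(worker);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
    }

    // Stand-in for reading an external file into memory.
    private static String loadFile(String name) {
        return "<contents of " + name + ">";
    }
}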

This idea is not limited to threading. When designing database tables and transactions (a transaction is, in a sense, also a synchronization mechanism), a table can be viewed as a shared structure that needs synchronized access: if operations on the table take too long, consider splitting the data, for example horizontal partitioning by some field, which is also why very large tables should be avoided.

3. Source Code Interpretation

ConcurrentHashMap has three main entity classes: ConcurrentHashMap (the whole hash table), Segment (a bucket of the table), and HashEntry (a node). Their relationship corresponds to the structure diagram above.

/**
 * The segments, each of which is a specialized hash table
 */

Immutable (final) and volatile fields
ConcurrentHashMap allows multiple read operations to proceed concurrently, and reads do not require locking. With a traditional implementation such as HashMap, if elements could be added to or removed from the middle of a hash chain, an unlocked read could observe inconsistent data. ConcurrentHashMap's technique is to make HashEntry almost immutable. HashEntry represents a node in a hash chain, and its structure is as follows:

static final class HashEntry<K,V> {
    final K key;
    final int hash;
    volatile V value;
    final HashEntry<K,V> next;
}

Every field except value is final. This means you cannot add or remove a node in the middle or at the tail of a hash chain, because that would require modifying a next reference; all modifications must start from the head. A put can simply add a new node at the head of the chain. A remove, however, may need to delete a node from the middle, which requires copying all the nodes in front of the deleted one, with the last copied node pointing to the node after the deleted one. This is described in detail when the delete operation is explained. To ensure that reads see the most recent value, value is declared volatile, which avoids locking.
Other
To speed up locating a segment, and locating a hash slot within a segment, the number of hash slots in each segment is 2^n, so both lookups can be done with bit operations. With the default concurrency level of 16, i.e. 16 segments, the high 4 bits of the hash value determine which segment a key goes to. But we should not forget the lesson from Introduction to Algorithms: a table size of 2^n can lead to an uneven distribution across slots, which is why the hash value is re-hashed once more before it is used. (This paragraph may seem a bit superfluous.)
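A small sketch of the point above, under the assumption of a power-of-two table length: the slot index can then be computed with a bit-AND instead of a modulo, and a supplemental hash spreads the high bits into the low bits first so that keys whose low bits collide do not all land in the same slot. The spread function below is illustrative, not the JDK's exact re-hash.

public class SlotIndexDemo {
    // With length = 2^n, (hash & (length - 1)) picks a slot in [0, length - 1]
    // and is cheaper than a modulo.
    static int indexFor(int hash, int length) {
        return hash & (length - 1);
    }

    // Illustrative supplemental hash (not the real JDK implementation):
    // mixes the high bits into the low bits before indexing.
    static int spread(int h) {
        h ^= (h >>> 16);
        return h;
    }

    public static void main(String[] args) {
        int tableLength = 16;         // 2^4 hash slots
        int rawHash = 0x12340000;     // low bits are all zero
        System.out.println(indexFor(rawHash, tableLength));          // 0: all such keys collide
        System.out.println(indexFor(spread(rawHash), tableLength));  // spread first, then index
    }
}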

Here's how to locate a segment:

final Segment<K,V> segmentFor(int hash) {
    return segments[(hash >>> segmentShift) & segmentMask];
}
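To make the bit arithmetic concrete, here is a small worked example assuming the default concurrency level of 16 segments, where segmentShift is 28 and segmentMask is 15, so the top 4 bits of the (re-hashed) value select the segment; the hash value used is arbitrary.

public class SegmentForDemo {
    public static void main(String[] args) {
        int segments = 16;              // default concurrency level
        int segmentShift = 32 - 4;      // 28: only the high 4 bits survive the shift
        int segmentMask = segments - 1; // 0x0F

        int hash = 0xA7C391F2;          // some already re-hashed value
        int segmentIndex = (hash >>> segmentShift) & segmentMask;

        // 0xA is the top nibble of the hash, so segment 10 is chosen.
        System.out.println("segment index = " + segmentIndex); // prints 10
    }
}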

Data structure
We will not discuss the basic data structure of a hash table at length here. One very important aspect of any hash table is how it resolves hash collisions; ConcurrentHashMap and HashMap use the same approach: nodes with the same hash are linked together in a hash chain. Unlike HashMap, ConcurrentHashMap uses multiple sub-hash-tables, namely segments (Segment). The data members of ConcurrentHashMap are as follows:

public class ConcurrentHashMap<K, V> extends AbstractMap<K, V>
        implements ConcurrentMap<K, V>, Serializable {

    /**
     * Mask value for indexing into segments. The upper bits of a
     * key's hash code are used to choose the segment.
     */
    final int segmentMask;

    /**
     * Shift value for indexing within segments.
     */
    final int segmentShift;

    /**
     * The segments, each of which is a specialized hash table.
     */
    final Segment<K,V>[] segments;

All of these members are final. segmentMask and segmentShift are used mainly for locating a segment; see the segmentFor method above.
Each segment is equivalent to a sub-hash table, and its data members are as follows:

static final class Segment<K,V> extends ReentrantLock implements Serializable {
    private static final long serialVersionUID = 2249069246763182397L;

    /**
     * The number of elements in this segment's region.
     */
    transient volatile int count;

    /**
     * Number of updates that alter the size of the table. This is
     * used during bulk-read methods to make sure they see a
     * consistent snapshot: if modCounts change during a traversal
     * of segments computing size or checking containsValue, then
     * we might have an inconsistent view of state, so (usually)
     * must retry.
     */
    transient int modCount;

    /**
     * The table is rehashed when its size exceeds this threshold.
     * (The value of this field is always <tt>(int)(capacity *
     * loadFactor)</tt>.)
     */
    transient int threshold;

    /**
     * The per-segment table.
     */
    transient volatile HashEntry<K,V>[] table;

    /**
     * The load factor for the hash table. Even though this value
     * is the same for all segments, it is replicated to avoid needing
     * links to the outer object.
     * @serial
     */
    final float loadFactor;
}

count records the number of entries in this segment. It is volatile and is used to coordinate modifications with reads so that reads can see almost the latest modifications. The coordination works like this: every modification that makes a structural change, such as adding or removing a node (changing a node's value is not a structural change), writes count as its last step, and every read operation begins by reading count. This takes advantage of the strengthened volatile semantics in Java 5: a write to a volatile variable happens-before subsequent reads of the same variable. modCount counts the number of structural changes to the segment, mainly so that a traversal over multiple segments can detect whether a segment changed during the traversal; it is described in detail with the cross-segment operations. threshold is the size above which the table needs to be rehashed. table is the array of hash buckets; each array element is a hash chain represented by HashEntry. table is also volatile, so the latest table can be read without synchronization. loadFactor is the load factor.
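The coordination between the write-volatile and read-volatile of count can be shown in isolation. The following is a minimal sketch (not the JDK code): a writer, assumed to hold a lock as in Segment, changes a plain structure and writes the volatile counter last, while an unlocked reader reads the counter first and is therefore guaranteed to see everything written before that write.

class VolatileCoordinationSketch {
    private final Object[] data = new Object[8]; // plain (non-volatile) structure
    private volatile int count = 0;              // volatile counter, always written last

    // Writer: assumed to hold an external lock (as in Segment); modifies the
    // structure first and writes the volatile count as the very last step.
    void add(Object element) {
        data[count] = element;   // structural change (bounds checking omitted)
        count = count + 1;       // write-volatile: publishes the change above
    }

    // Reader: no lock; reading the volatile count first establishes a
    // happens-before edge with the writer's earlier plain writes.
    Object last() {
        int c = count;           // read-volatile
        return c == 0 ? null : data[c - 1];
    }
}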

First, take a look at the delete operation, remove(key).

public V remove(Object key) {
    int hash = hash(key.hashCode());
    return segmentFor(hash).remove(key, hash, null);
}

The whole operation simply locates the segment and then delegates to that segment's remove. Concurrent delete operations can run in parallel as long as they fall into different segments. The following is Segment's remove implementation:

V remove(Object key, int hash, Object value) {
    lock();
    try {
        int c = count - 1;
        HashEntry<K,V>[] tab = table;
        int index = hash & (tab.length - 1);
        HashEntry<K,V> first = tab[index];
        HashEntry<K,V> e = first;
        while (e != null && (e.hash != hash || !key.equals(e.key)))
            e = e.next;

        V oldValue = null;
        if (e != null) {
            V v = e.value;
            if (value == null || value.equals(v)) {
                oldValue = v;
                // All entries following removed node can stay
                // in list, but all preceding ones need to be
                // cloned.
                ++modCount;
                HashEntry<K,V> newFirst = e.next;
                for (HashEntry<K,V> p = first; p != e; p = p.next)   // (*)
                    newFirst = new HashEntry<K,V>(p.key, p.hash,
                                                  newFirst, p.value);
                tab[index] = newFirst;
                count = c; // write-volatile
            }
        }
        return oldValue;
    } finally {
        unlock();
    }
}

The whole operation is performed while holding the segment lock. The code before the blank line mainly locates the node e to be deleted. If the node does not exist, null is returned directly; otherwise the nodes in front of e must be copied, with the last copied node pointing to the node after e. The nodes after e do not need to be copied; they can be reused.

What does the for loop in the middle do (the line marked with *)? From the code, it clones all the entries in front of e and splices them back onto the head of the chain. Is that really necessary: re-cloning the preceding elements every time one is deleted? Yes, and this is dictated by the immutability of the entry. Looking closely at the HashEntry definition, every field except value is final, which means that once the next field is set it can never be changed, so the only option is to clone all of the nodes in front of the deleted one. As for why HashEntry is made (almost) immutable: immutable objects can be read without synchronization, which saves the cost of locking on reads.
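The cloning trick can be demonstrated on its own. Below is a standalone sketch with an illustrative Node class standing in for HashEntry: because next is final, removing a node means reusing the tail after it and re-creating the nodes in front of it, which end up in reverse order.

// Illustrative immutable list node (next is final, like HashEntry.next).
class Node {
    final int value;
    final Node next;
    Node(int value, Node next) { this.value = value; this.next = next; }
}

class ImmutableChainRemoveDemo {
    // Remove the first node with the given value: keep the tail after it
    // and clone every node in front of it (the clones end up reversed).
    static Node remove(Node first, int value) {
        Node e = first;
        while (e != null && e.value != value)
            e = e.next;
        if (e == null)
            return first;                 // not found, chain unchanged
        Node newFirst = e.next;           // reuse everything after e
        for (Node p = first; p != e; p = p.next)
            newFirst = new Node(p.value, newFirst); // clone predecessors
        return newFirst;
    }

    public static void main(String[] args) {
        Node chain = new Node(1, new Node(2, new Node(3, new Node(4, null))));
        Node afterRemove = remove(chain, 3);
        for (Node p = afterRemove; p != null; p = p.next)
            System.out.print(p.value + " ");   // prints: 2 1 4
    }
}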

Here is an illustration.

Before deleting an element:

After deleting element 3:

The second figure is actually slightly wrong: after copying, the node with value 2 should be in front and the node with value 1 behind, i.e. exactly the reverse of the original order. Fortunately this does not affect our discussion.

The remove implementation is not complex, but a few points deserve attention. First, when the node to be deleted exists, the very last step is to write the decremented count. This must be the last step, otherwise reads might not see the structural modifications made to the segment before it. Second, at the start of remove, table is assigned to a local variable tab. This is because table is a volatile variable, and reading or writing a volatile variable is relatively costly: the compiler is not allowed to optimize volatile reads and writes, whereas accesses to a non-volatile local variable carry no such constraint and can be optimized normally.
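The local-variable trick can be shown in isolation. A minimal sketch, assuming a volatile table field similar to Segment's: read the volatile field once into a local, then work with the local so that the remaining accesses are ordinary, optimizable reads.

class LocalCopyOfVolatileSketch {
    volatile int[] table = new int[16];

    int sumAllSlots() {
        int[] tab = table;      // one volatile read, then plain local accesses
        int sum = 0;
        for (int i = 0; i < tab.length; i++)
            sum += tab[i];      // tab.length and tab[i] are ordinary reads
        return sum;
    }
}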

Next, look at the put operation. In the same way, ConcurrentHashMap's put is delegated to the segment's put method. Here is Segment's put method:

V put(K key, int hash, V value, boolean onlyIfAbsent) {
    lock();
    try {
        int c = count;
        if (c++ > threshold) // ensure capacity
            rehash();
        HashEntry<K,V>[] tab = table;
        int index = hash & (tab.length - 1);
        HashEntry<K,V> first = tab[index];
        HashEntry<K,V> e = first;
        while (e != null && (e.hash != hash || !key.equals(e.key)))
            e = e.next;

        V oldValue;
        if (e != null) {
            oldValue = e.value;
            if (!onlyIfAbsent)
                e.value = value;
        }
        else {
            oldValue = null;
            ++modCount;
            tab[index] = new HashEntry<K,V>(key, hash, first, value);
            count = c; // write-volatile
        }
        return oldValue;
    } finally {
        unlock();
    }
}

This method also runs while holding the lock on the entire segment, which is of course required for concurrency safety, since modifications within a segment must not run concurrently. It first checks whether the table has exceeded the threshold and rehashes if necessary to ensure capacity. It then looks for an existing node with the same key; if one exists, its value is simply replaced. Otherwise a new node is created and added at the head of the hash chain, in which case modCount must be updated, and count must again be written as the last step. The put method calls rehash; the rehash method is also quite clever, mainly exploiting the fact that the table size is 2^n, and is not covered here. The slightly harder line to understand is int index = hash & (tab.length - 1): each Segment really is a hash table in the traditional sense, just like Hashtable (the structure diagrams show the difference between the two). This line finds the slot of the entry in the table, and first is the head node of that chain. If e != null, a node with the same key was found, and its value is replaced (when onlyIfAbsent == false); otherwise a new entry is created whose successor is first, and tab[index] is made to point to it. What does that mean? Simply that the new entry is inserted at the head of the chain. The rest is easy to understand.
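The onlyIfAbsent flag corresponds to the difference between the public put and putIfAbsent methods (putIfAbsent passes true). A short usage sketch of the public API:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class PutVariantsDemo {
    public static void main(String[] args) {
        ConcurrentMap<String, Integer> map = new ConcurrentHashMap<String, Integer>();

        map.put("a", 1);                              // replaces any existing value
        map.put("a", 2);                              // map is now {a=2}

        Integer previous = map.putIfAbsent("a", 99);  // only inserts if no mapping exists
        System.out.println(previous);                 // 2: existing value kept
        System.out.println(map.get("a"));             // 2
    }
}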

The other modification operations are putAll and replace. putAll simply calls put repeatedly, so there is nothing to say about it. replace does not even make a structural change, so its implementation is much simpler than put and remove: once you understand put and remove, understanding replace is trivial, and it is not covered here either.
The get operation
Now look at the get operation. In the same way, ConcurrentHashMap's get is delegated directly to Segment's get method:

V get(Object key, int hash) {
    if (count != 0) { // read-volatile: is this segment's element count 0?
        HashEntry<K,V> e = getFirst(hash); // get the head node
        while (e != null) {
            if (e.hash == hash && key.equals(e.key)) {
                V v = e.value;
                if (v != null)
                    return v;
                return readValueUnderLock(e); // recheck
            }
            e = e.next;
        }
    }
    return null;
}

The get operation does not require a lock. The first step is to read the count field, which is volatile; since every modification writes count as its last step, this guarantees that get sees almost the latest structural updates. For non-structural updates, i.e. changes to a node's value, visibility is guaranteed because HashEntry's value field is volatile. The next step is to traverse the hash chain according to the hash and key to find the node; if it is not found, null is returned directly. The chain can be traversed without locking because the chain pointer next is final. The head pointer, however, is not final: it is returned by getFirst(hash), which reads it out of the table array, so getFirst(hash) may return an out-of-date head node. For example, while get is executing, after getFirst(hash) has run, another thread may perform a delete and update the head node, in which case the head node used by get is no longer the latest. This is allowed: through the coordination via the count variable, get reads almost the latest data, though possibly not the very latest. To always obtain the latest data, full synchronization would be required.

Finally, if the desired node is found, its value is returned directly when it is not null; otherwise it is read again while holding the lock. This may seem puzzling: in theory a node's value can never be null, because put checks for this and throws NullPointerException. The only possible source of a null is the default value of a HashEntry field: since HashEntry's value is not final, an unsynchronized read may observe null. Look closely at the put statement tab[index] = new HashEntry<K,V>(key, hash, first, value): the assignment of value inside the HashEntry constructor and the assignment to tab[index] may be reordered, which can make the node's value appear null to a reader. When v is null, another thread may be in the middle of modifying the node, and since the preceding get was not locked (per the Bernstein conditions, a read after a write or a write after a read can yield inconsistent data), the entry e is re-read under the lock to guarantee that the correct value is obtained.

V readValueUnderLock(HashEntry<K,V> e) {
    lock();
    try {
        return e.value;
    } finally {
        unlock();
    }
}

Another operation is containsKey; its implementation is much simpler because it does not need to read the value:

boolean containsKey(Object key, int hash) {
    if (count != 0) { // read-volatile
        HashEntry<K,V> e = getFirst(hash);
        while (e != null) {
            if (e.hash == hash && key.equals(e.key))
                return true;
            e = e.next;
        }
    }
    return false;
}

