HashMap under Multithreading
The problems:
1. After multi-threaded put operations, a subsequent get operation can enter an infinite loop.
2. After multi-threaded puts of non-null elements, a get operation can return null.
3. Multi-threaded put operations can cause elements to be lost.
This article focuses mainly on the HashMap infinite loop.
Why does the infinite loop occur?
As we all know, HashMap uses linked lists to resolve hash conflicts (for a detailed walkthrough, refer to the earlier article on the Java collection --- HashMap source code analysis). Because each bucket is a linked list, it is possible for the chain to become circular, and once a closed loop exists, any thread that performs a get on the HashMap can spin forever. What is interesting is how this closed loop forms. In a single-threaded program, one thread operating on the HashMap data structure cannot produce a loop. It can only happen under concurrent access: during a put, if size exceeds initialCapacity * loadFactor, the HashMap performs a rehash, which dramatically restructures the table. If two threads trigger this rehash at the same time, a closed loop can result.
Below, we analyze step by step, from the source code, how this loop is produced. First, let's look at the put operation:
Storing data: put

public V put(K key, V value) {
    ...
    // compute the hash value
    int hash = hash(key.hashCode());
    int i = indexFor(hash, table.length);
    // if the key already exists, replace the old value (list traversal)
    for (Entry<K,V> e = table[i]; e != null; e = e.next) {
        Object k;
        if (e.hash == hash && ((k = e.key) == key || key.equals(k))) {
            V oldValue = e.value;
            e.value = value;
            e.recordAccess(this);
            return oldValue;
        }
    }
    modCount++;
    // the key does not exist; a new node must be added
    addEntry(hash, key, value, i);
    return null;
}
When we put an element into the HashMap, we first compute, from the key's hash, which slot of the array (i.e. which subscript) the element belongs in, and then place it there. If other elements already occupy that slot, the elements sharing the slot are stored as a linked list, with the newly added element at the head of the chain and the earlier ones behind it.
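To make this head-insertion behavior concrete, here is a minimal, self-contained sketch of one bucket; the Bucket and Node names are illustrative, not JDK types:

```java
// Minimal sketch of a single hash bucket using head insertion,
// mimicking how JDK 1.7 HashMap chains colliding entries.
// Class and method names here are illustrative, not JDK names.
class Bucket {
    static class Node {
        final String key;
        Node next;
        Node(String key, Node next) { this.key = key; this.next = next; }
    }

    Node head;

    // a newly inserted element becomes the new head of the chain
    void insert(String key) { head = new Node(key, head); }

    // walk the chain from head to tail
    String order() {
        StringBuilder sb = new StringBuilder();
        for (Node n = head; n != null; n = n.next) sb.append(n.key);
        return sb.toString();
    }
}
```

Inserting "A", "B", "C" in that order yields the chain C -> B -> A: the most recently added element sits at the head.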
Checking whether capacity is exceeded: addEntry
As you can see, if the size already exceeds the threshold, a resize is needed: a larger hash table is created, and the data is then moved from the old hash table to the new one:
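The addEntry listing itself did not survive formatting. As a stand-in, here is a self-contained toy map (field and method names modeled on the JDK 1.7 source, with keys reduced to bare int hashes) showing the head insertion plus the size >= threshold check that triggers resize:

```java
// Toy map modeled on JDK 1.7 HashMap bookkeeping: keys are reduced to
// bare int hashes, and names mirror the real source for readability.
class TinyMap {
    static class Entry {
        final int hash;
        Entry next;
        Entry(int hash, Entry next) { this.hash = hash; this.next = next; }
    }

    Entry[] table = new Entry[4];
    int size = 0;
    int threshold = (int) (4 * 0.75f);   // capacity * loadFactor = 3

    static int indexFor(int hash, int length) { return hash & (length - 1); }

    void put(int hash) {                 // toy put: no duplicate-key check
        addEntry(hash, indexFor(hash, table.length));
    }

    void addEntry(int hash, int bucketIndex) {
        // the new entry is linked in at the head of the bucket's chain
        table[bucketIndex] = new Entry(hash, table[bucketIndex]);
        if (size++ >= threshold)
            resize(2 * table.length);
    }

    void resize(int newCapacity) {
        Entry[] newTable = new Entry[newCapacity];
        for (Entry head : table) {       // transfer: rehash every entry
            Entry e = head;
            while (e != null) {
                Entry next = e.next;
                int i = indexFor(e.hash, newCapacity);
                e.next = newTable[i];
                newTable[i] = e;
                e = next;
            }
        }
        table = newTable;
        threshold = (int) (newCapacity * 0.75f);
    }
}
```

With capacity 4 and load factor 0.75, the threshold is 3, so inserting the fourth element doubles the table to 8 and raises the threshold to 6.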
Adjusting the hash table size: resize
void resize(int newCapacity) {
    Entry[] oldTable = table;
    int oldCapacity = oldTable.length;
    ......
    // create a new hash table
    Entry[] newTable = new Entry[newCapacity];
    // migrate the data from the old hash table to the new one
    transfer(newTable);
    table = newTable;
    threshold = (int) (newCapacity * loadFactor);
}
When the table[] array is small, hash collisions are frequent, so the size (capacity) of the hash table matters a great deal. Generally, whenever data is about to be inserted, the hash table checks whether the element count exceeds the configured threshold; if it does, the table must be enlarged. This process is called resize.
When multiple threads add new elements to the HashMap at the same time, concurrent resizes have a certain probability of producing an infinite loop, because each resize must map the old data into the new hash table. That work is done in the HashMap#transfer() method, shown below:
void transfer(Entry[] newTable) {
    Entry[] src = table;
    int newCapacity = newTable.length;
    // the code below takes each element from the old table
    // and re-inserts it into the new table
    for (int j = 0; j < src.length; j++) {
        Entry<K,V> e = src[j];
        if (e != null) {
            src[j] = null;
            do {
                Entry<K,V> next = e.next;
                int i = indexFor(e.hash, newCapacity);
                e.next = newTable[i];
                newTable[i] = e;
                e = next;
            } while (e != null);
        }
    }
}
The do-while loop above is the culprit: under concurrent resizes it can link the bucket's entries into a ring, after which threads using the HashMap block and CPU usage suddenly spikes.
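The interleaving can be replayed deterministically in a single thread by pausing one simulated "thread" mid-transfer, exactly at the point where it has read e and next but not yet relinked them. The classes below are simplified stand-ins for HashMap's internals (not the real JDK types):

```java
// Deterministic single-threaded replay of the two-thread transfer() race
// that turns bucket A -> B into the ring A <-> B. Node and the method
// names are simplified stand-ins for HashMap's internals.
class HashMapLoopDemo {
    static class Node {
        final int hash;
        final String key;
        Node next;
        Node(int hash, String key, Node next) { this.hash = hash; this.key = key; this.next = next; }
    }

    static int indexFor(int hash, int length) { return hash & (length - 1); }

    static boolean simulate() {
        int newCapacity = 4;
        // old bucket: A -> B; both entries land in the same new bucket
        Node b = new Node(1, "B", null);
        Node a = new Node(1, "A", b);

        // "Thread 1" starts transfer: it reads e and next, then is suspended
        Node e1 = a;
        Node next1 = e1.next;                  // next1 == B

        // "Thread 2" runs its transfer to completion on its own new table;
        // head insertion reverses the chain: newTable2[i] = B -> A
        Node[] newTable2 = new Node[newCapacity];
        for (Node e = a; e != null; ) {
            Node next = e.next;
            int i = indexFor(e.hash, newCapacity);
            e.next = newTable2[i];
            newTable2[i] = e;
            e = next;
        }
        // the shared links are now: B.next == A, A.next == null

        // "Thread 1" resumes with its stale e1/next1 and finishes its loop
        Node[] newTable1 = new Node[newCapacity];
        Node e = e1, next = next1;
        do {
            int i = indexFor(e.hash, newCapacity);
            e.next = newTable1[i];
            newTable1[i] = e;
            e = next;
            if (e != null) next = e.next;
        } while (e != null);
        // result: A.next == B and B.next == A -- a closed ring

        // a get() on this bucket would now spin forever; detect the ring
        // with Floyd's tortoise-and-hare check instead of looping
        Node slow = newTable1[indexFor(1, newCapacity)];
        Node fast = slow;
        while (fast != null && fast.next != null) {
            slow = slow.next;
            fast = fast.next.next;
            if (slow == fast) return true;     // cycle found
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println("cycle formed: " + simulate());
    }
}
```

Running it prints cycle formed: true, matching the failure mode described above: the bucket's chain has become circular, so any subsequent get that hashes into it never terminates.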