(1) The elements stored in a HashMap are key-value pairs.
(2) Underneath is the hash table structure covered in the data structures course.
(3) To put elements into a HashMap, the key's type must implement the hashCode method (the default implementation is derived from the object's address; I don't remember the details exactly), and it must also override the object's equals method.
Here is a diagram of the hash structure:
The hashCode function determines where the current key should be placed among the hash buckets (the hash table can be thought of as an array, and the simplest way to pick a position is, for example, to take the hash value modulo the array length), while the equals function is used to detect duplicates among the elements already hanging off that bucket.
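To make point (3) concrete, here is a minimal sketch of a user-defined key type (the Point class and its fields are made up for illustration). hashCode decides which bucket a key falls into, and equals decides whether a key already in that bucket counts as the same key:

import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Hypothetical key type: equality is based on the coordinates, not on the object address.
final class Point {
    private final int x;
    private final int y;

    Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    @Override
    public int hashCode() {
        // decides which bucket the key falls into
        return Objects.hash(x, y);
    }

    @Override
    public boolean equals(Object o) {
        // decides whether two keys in the same bucket are "the same" key
        if (this == o) return true;
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;
    }
}

public class KeyDemo {
    public static void main(String[] args) {
        Map<Point, String> map = new HashMap<>();
        map.put(new Point(1, 2), "first");
        map.put(new Point(1, 2), "second");           // equal key: replaces the previous value
        System.out.println(map.size());               // 1
        System.out.println(map.get(new Point(1, 2))); // second
    }
}

Without overriding both methods, two logically equal keys would be treated as different keys (or land in different buckets), and the replacement above would not happen.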
Well, let's take a look at how these two classes from the Java library are used:
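A quick sketch of that shared usage (the keys and values here are arbitrary):

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PutDemo {
    public static void main(String[] args) {
        Map<String, Integer> plain = new HashMap<>();
        Map<String, Integer> concurrent = new ConcurrentHashMap<>();

        plain.put("a", 1);        // the same Map.put(key, value) call works on both implementations
        concurrent.put("a", 1);

        System.out.println(plain.get("a"));      // 1
        System.out.println(concurrent.get("a")); // 1
    }
}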
Since both implement the Map interface, elements are inserted with the put(key, value) method. Let's first analyze the relatively simple HashMap:
public V put(K key, V value) {
    if (key == null)
        return putForNullKey(value);
    int hash = hash(key);                   // get the hash value of the current key
    int i = indexFor(hash, table.length);   // the position inside the hash bucket array
    for (Entry<K,V> e = table[i]; e != null; e = e.next) {  // traverse the list hanging off the current bucket
        Object k;
        if (e.hash == hash && ((k = e.key) == key || key.equals(k))) {  // an equal key already exists: replace the value
            V oldValue = e.value;
            e.value = value;
            e.recordAccess(this);
            return oldValue;                // return the previous value
        }
    }
    modCount++;
    addEntry(hash, key, value, i);          // insert a new entry
    return null;
}
This function itself is quite simple. It first gets the hash value of the current key through the hash function (note that HashMap does some extra processing on the value returned by hashCode; I won't go into the details). It then calls indexFor to determine which bucket the key belongs to, and traverses the list hanging off that bucket. This is where equals comes in handy: if an existing key is equal to the one being inserted, the original value is simply replaced with the new one...
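For reference, the position calculation is just a bitwise AND with table.length - 1, which works because the table length is always kept a power of two; a small sketch of that idea (the helper mirrors JDK 7's indexFor):

public class IndexForDemo {
    // Because the table length is a power of two, (h & (length - 1))
    // is equivalent to h % length but cheaper to compute.
    static int indexFor(int h, int length) {
        return h & (length - 1);
    }

    public static void main(String[] args) {
        System.out.println(indexFor(35, 16)); // 3, i.e. 35 % 16
    }
}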
Of course, the most common case is that no entry in the bucket's list has the same key as the current one. Then addEntry is called to insert the key-value pair into the structure. Let's look at the definition of that method:
void addEntry(int hash, K key, V value, int bucketIndex) {
    if ((size >= threshold) && (null != table[bucketIndex])) {
        resize(2 * table.length);           // grow the bucket array, effectively rebuilding the table
        hash = (null != key) ? hash(key) : 0;
        bucketIndex = indexFor(hash, table.length);
    }
    createEntry(hash, key, value, bucketIndex);  // create a new entry and link it into the list behind the current bucket
}
This method is actually very simple. It first checks the current size; if the table is getting too full, the bucket array is enlarged so that the stored elements are spread out more, which keeps insertion and lookup efficient.
Then a new entry holding the key and value to be inserted is created and linked into the list of the bucket it belongs to.
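The linking step is a plain head insertion into the bucket's singly linked list. A minimal, hypothetical sketch of that idea (simplified names, not the real JDK classes):

// Hypothetical simplification of the "insert at the head of the bucket's list" step.
class Node<K, V> {
    final int hash;
    final K key;
    V value;
    Node<K, V> next;

    Node(int hash, K key, V value, Node<K, V> next) {
        this.hash = hash;
        this.key = key;
        this.value = value;
        this.next = next;
    }
}

class BucketTable<K, V> {
    @SuppressWarnings("unchecked")
    private Node<K, V>[] table = (Node<K, V>[]) new Node[16];
    private int size;

    void createEntry(int hash, K key, V value, int bucketIndex) {
        Node<K, V> first = table[bucketIndex];
        // the new node points at the old head and then becomes the new head of the list
        table[bucketIndex] = new Node<>(hash, key, value, first);
        size++;
    }
}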
Well, at this point the whole process of putting an element into a HashMap is clear. Nowhere in it did we see any locking, which shows that HashMap does not support concurrency: it is not thread-safe, and using it in a concurrent environment can produce inconsistencies...
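A small sketch that usually makes this visible: two threads writing to one HashMap without synchronization tend to lose updates (and on JDK 7 a concurrent resize can even corrupt the bucket list). The exact outcome depends on timing, so treat this as a demonstration rather than a reliable test:

import java.util.HashMap;
import java.util.Map;

// Two threads put into the same HashMap without synchronization.
// The final size is frequently less than 20000 because updates get lost;
// the behavior is undefined and varies from run to run.
public class UnsafeHashMapDemo {
    public static void main(String[] args) throws InterruptedException {
        Map<Integer, Integer> map = new HashMap<>();

        Thread t1 = new Thread(() -> {
            for (int i = 0; i < 10_000; i++) map.put(i, i);
        });
        Thread t2 = new Thread(() -> {
            for (int i = 10_000; i < 20_000; i++) map.put(i, i);
        });

        t1.start();
        t2.start();
        t1.join();
        t2.join();

        System.out.println("size = " + map.size()); // often < 20000
    }
}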
As a result, the Java concurrent class library provides ConcurrentHashMap for use in concurrent environments.
So let's take a look at ConcurrentHashMap's put operation:
public V put(K key, V value) {
    Segment<K,V> s;
    if (value == null)
        throw new NullPointerException();
    int hash = hash(key);                               // get the hash value
    int j = (hash >>> segmentShift) & segmentMask;      // locate the segment for this hash
    if ((s = (Segment<K,V>)UNSAFE.getObject             // nonvolatile; recheck
         (segments, (j << SSHIFT) + SBASE)) == null)    // in ensureSegment
        s = ensureSegment(j);                           // the segment does not exist yet, so create it
    return s.put(key, hash, value, false);              // delegate to the segment, which applies the per-segment lock
}
The beginning is not much different from HashMap: it just gets the hash value of the current key. But the work that follows is different, and this is where the concept of segments comes in:
ConcurrentHashMap splits the whole hash table into segments, that is, the big array is divided into several smaller pieces, and each small piece carries its own lock. When an element is inserted, we first work out which segment it belongs to and then insert into that segment, which requires acquiring that segment's lock....
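The segment is picked from the high bits of the hash. A sketch of the index arithmetic, assuming the default JDK 7 configuration of 16 segments (segmentShift = 28, segmentMask = 15; the hash value below is arbitrary):

public class SegmentIndexDemo {
    public static void main(String[] args) {
        // Assumed values for the default of 16 segments: 32 - 4 high bits are shifted away.
        int segmentShift = 28;
        int segmentMask = 15;

        int hash = 0xDEADBEEF;                          // some well-spread hash value
        int j = (hash >>> segmentShift) & segmentMask;  // the high bits pick the segment
        System.out.println("segment index = " + j);     // 13 (0xD) for this hash
    }
}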
So let's take a look at the segment's put method:
final V put(K key, int hash, V value, boolean onlyIfAbsent) {
    // the lock here is reentrant: the same thread can acquire it multiple times,
    // but a different thread cannot while it is held
    HashEntry<K,V> node = tryLock() ? null :    // if the segment lock is acquired, node stays null and is allocated later
        scanAndLockForPut(key, hash, value);    // otherwise wait for the lock and, since there is time anyway, pre-allocate the entry
    V oldValue;
    try {
        // the lock is now held, so the entry can be placed at the proper position
        HashEntry<K,V>[] tab = table;
        int index = (tab.length - 1) & hash;
        HashEntry<K,V> first = entryAt(tab, index);   // find the bucket holding the entries and take its first entry
        for (HashEntry<K,V> e = first;;) {            // start from the current first element
            if (e != null) {
                K k;
                if ((k = e.key) == key ||
                    (e.hash == hash && key.equals(k))) {  // the key is equal: replace the value directly
                    oldValue = e.value;
                    if (!onlyIfAbsent) {
                        e.value = value;
                        ++modCount;
                    }
                    break;
                }
                e = e.next;
            }
            else {
                if (node != null)
                    node.setNext(first);
                else
                    node = new HashEntry<K,V>(hash, key, value, first);
                int c = count + 1;
                if (c > threshold && tab.length < MAXIMUM_CAPACITY)
                    rehash(node);             // too many elements: rebuild the hash structure with more buckets so elements are more spread out
                else
                    setEntryAt(tab, index, node);
                ++modCount;
                count = c;
                oldValue = null;
                break;
            }
        }
    } finally {
        unlock();                             // release the acquired lock in finally so it is always released
    }
    return oldValue;
}
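The Segment itself extends ReentrantLock, so the code above is really a try-the-lock-first, then-block pattern. A minimal sketch of that pattern in isolation (the update method and Runnable parameter are made up for illustration; the real code also spins and pre-builds the entry while waiting):

import java.util.concurrent.locks.ReentrantLock;

// Sketch of the "try the lock first, fall back to blocking" pattern the segment relies on.
public class TryLockSketch {
    private final ReentrantLock lock = new ReentrantLock();

    void update(Runnable work) {
        if (!lock.tryLock()) {   // fast path failed: another thread holds the segment lock
            lock.lock();         // block until it becomes available
        }
        try {
            work.run();          // mutate the protected structure while holding the lock
        } finally {
            lock.unlock();       // always release in finally
        }
    }
}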
In fact, the differences between ConcurrentHashMap and HashMap are already obvious here:
(1) ConcurrentHashMap segments the whole bucket array, while HashMap does not.
(2) Each segment in ConcurrentHashMap is protected by its own lock, so the lock granularity is finer and concurrent performance is better, while HashMap has no locking mechanism at all and is not thread-safe...
Finally, a diagram to illustrate ConcurrentHashMap:
In short, in concurrent situations, either use the containers provided by the concurrent class library, or manage the synchronization of the data yourself...
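For example (a sketch; both approaches keep the map consistent, but ConcurrentHashMap scales better thanks to the finer-grained locking described above, while a synchronized wrapper serializes every call on a single lock):

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentChoices {
    public static void main(String[] args) {
        // Option 1: use the container from the concurrent class library.
        Map<String, Integer> counts = new ConcurrentHashMap<>();
        counts.put("hits", 1);

        // Option 2: manage the synchronization yourself, e.g. by wrapping a HashMap
        // behind a single lock (coarse-grained: every call goes through one monitor).
        Map<String, Integer> wrapped = Collections.synchronizedMap(new HashMap<>());
        wrapped.put("hits", 1);
    }
}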