Java LinkedHashMap and LinkedHashSet Source Code Analysis

In the previous article I went through the source code of HashMap and HashSet in detail; the source makes clear how deep the relationship between those two classes is. One would guess that LinkedHashMap and LinkedHashSet are just as closely connected, and after the analysis below you will see that their relationship is exactly the same as the one between HashMap and HashSet.

Without further ado, let's look at the LinkedHashMap source code first.

LinkedHashMap Source Code:

/**
 * @param <K> the type of keys maintained by this map
 * @param <V> the type of mapped values
 *
 * @author  Josh Bloch
 * @see     Object#hashCode(), Collection, Map, HashMap, TreeMap, Hashtable
 * @since   1.4
 *
 * Inherits from HashMap.  Its main job is to record the order in which
 * elements were inserted, so that traversal can visit them in that order.
 * Insertion is therefore slightly slower than HashMap's, but iterating
 * over all the elements is faster.
 */
public class LinkedHashMap<K,V> extends HashMap<K,V> implements Map<K,V> {

    private static final long serialVersionUID = 3801124242820219131L;

    /** The head of the doubly linked list. */
    private transient Entry<K,V> header;

    /**
     * The iteration ordering method for this linked hash map: true for
     * access-order, false for insertion-order.
     *
     * @serial
     */
    private final boolean accessOrder;

    /** Constructs an empty insertion-ordered map with the specified initial capacity and load factor. */
    public LinkedHashMap(int initialCapacity, float loadFactor) {
        super(initialCapacity, loadFactor);
        accessOrder = false;
    }

    /** Constructs an empty insertion-ordered map with the specified initial capacity and a default load factor (0.75). */
    public LinkedHashMap(int initialCapacity) {
        super(initialCapacity);
        accessOrder = false;
    }

    /** Constructs an empty insertion-ordered map with the default initial capacity (16) and load factor (0.75). */
    public LinkedHashMap() {
        super();
        accessOrder = false;
    }

    /** Constructs an insertion-ordered map with the same mappings as the specified map. */
    public LinkedHashMap(Map<? extends K, ? extends V> m) {
        super(m);
        accessOrder = false;
    }

    /** Constructs an empty map with the specified initial capacity, load factor and ordering mode
        (true for access-order, false for insertion-order). */
    public LinkedHashMap(int initialCapacity,
                         float loadFactor,
                         boolean accessOrder) {
        super(initialCapacity, loadFactor);
        this.accessOrder = accessOrder;
    }

    /**
     * Called by superclass constructors and pseudoconstructors (clone,
     * readObject) before any entries are inserted into the map.  Initializes
     * the chain: the head node's before and after references both point at
     * the head node itself.
     */
    @Override
    void init() {
        header = new Entry<>(-1, null, null, null);
        header.before = header.after = header;
    }

    /**
     * Transfers all entries to the new table array.  Called by superclass
     * resize; overridden for performance, as it is faster to iterate using
     * our linked list.
     *
     * The transfer here is much simpler than HashMap's: thanks to the header
     * of the doubly linked list, every entry in the old array can be reached
     * by walking the list, and each one is simply copied into the new array.
     * HashMap's transfer needs two nested loops, the inner one keeping track
     * of its position within each bucket's list.
     */
    @Override
    void transfer(HashMap.Entry[] newTable, boolean rehash) {
        int newCapacity = newTable.length;
        for (Entry<K,V> e = header.after; e != header; e = e.after) {
            if (rehash)
                e.hash = (e.key == null) ? 0 : hash(e.key);
            int index = indexFor(e.hash, newCapacity);
            e.next = newTable[index];
            newTable[index] = e;
        }
    }

    /** Returns true if this map maps one or more keys to the specified value.
        Simple enough to understand at a glance. */
    public boolean containsValue(Object value) {
        // Overridden to take advantage of faster iterator
        if (value == null) {
            for (Entry e = header.after; e != header; e = e.after)
                if (e.value == null)
                    return true;
        } else {
            for (Entry e = header.after; e != header; e = e.after)
                if (value.equals(e.value))
                    return true;
        }
        return false;
    }

    /**
     * Returns the value to which the specified key is mapped, or {@code null}
     * if this map contains no mapping for the key.  A return value of
     * {@code null} does not necessarily mean the key is absent; the key may
     * be explicitly mapped to null, and containsKey distinguishes the two.
     *
     * getEntry(key) is inherited from the parent class: it hashes the key,
     * computes the array index, and walks that bucket's list to find the entry.
     */
    public V get(Object key) {
        Entry<K,V> e = (Entry<K,V>)getEntry(key);
        if (e == null)
            return null;
        e.recordAccess(this);
        return e.value;
    }

    /**
     * Removes all of the mappings from this map; the map will be empty after
     * this call returns.  The header's forward and backward references point
     * back at the header itself again.
     */
    public void clear() {
        super.clear();
        header.before = header.after = header;
    }

    /** LinkedHashMap entry.  Extends the parent class's inner Entry class. */
    private static class Entry<K,V> extends HashMap.Entry<K,V> {
        // These fields comprise the doubly linked list used for iteration.
        // before and after record the order in which entries were added and
        // are used for ordered traversal; the inherited next field instead
        // links entries within a bucket to resolve hash collisions.
        Entry<K,V> before, after;

        Entry(int hash, K key, V value, HashMap.Entry<K,V> next) {
            super(hash, key, value, next);
        }

        /**
         * Removes this entry from the linked list: the previous entry's after
         * reference and the following entry's before reference skip over it.
         */
        private void remove() {
            before.after = after;
            after.before = before;
        }

        /**
         * Inserts this entry before the specified existing entry in the list.
         * The four assignments splice the entry that calls this method in
         * just before existingEntry.
         */
        private void addBefore(Entry<K,V> existingEntry) {
            after  = existingEntry;
            before = existingEntry.before;
            before.after = this;
            after.before = this;
        }

        /**
         * Invoked by the superclass whenever the value of a pre-existing
         * entry is read by Map.get or modified by Map.put.  If the enclosing
         * map is access-ordered, it moves the entry to the end of the list;
         * otherwise it does nothing.
         */
        void recordAccess(HashMap<K,V> m) {
            LinkedHashMap<K,V> lm = (LinkedHashMap<K,V>)m;
            if (lm.accessOrder) {
                lm.modCount++;
                remove();
                addBefore(lm.header);
            }
        }

        void recordRemoval(HashMap<K,V> m) {
            remove();
        }
    }

    /** Iterator.  The iterators below traverse the map in insertion order. */
    private abstract class LinkedHashIterator<T> implements Iterator<T> {
        Entry<K,V> nextEntry    = header.after;
        Entry<K,V> lastReturned = null;

        /**
         * The modCount value that the iterator believes the backing map
         * should have.  If this expectation is violated, the iterator has
         * detected concurrent modification.
         */
        int expectedModCount = modCount;

        public boolean hasNext() {
            return nextEntry != header;
        }

        /**
         * Deletes the last traversed element.  lastReturned holds the entry
         * most recently returned by nextEntry().
         */
        public void remove() {
            if (lastReturned == null)
                throw new IllegalStateException();
            if (modCount != expectedModCount)
                throw new ConcurrentModificationException();

            LinkedHashMap.this.remove(lastReturned.key);
            lastReturned = null;
            expectedModCount = modCount;
        }

        /**
         * 1. The key to the iterator is the nextEntry() method.
         * 2. While iterating, the collection must not be modified; only the
         *    last traversed entry may be deleted, via remove() above.
         *    Otherwise modCount and expectedModCount differ and a
         *    ConcurrentModificationException is thrown.
         * 3. hasNext() is normally used to check for a next element, so
         *    NoSuchElementException is not usually thrown.
         * 4. nextEntry is the next element, e the element currently being
         *    traversed, and lastReturned the last traversed element, kept so
         *    that remove() can delete it.
         * 5. LinkedHashMap can be traversed in insertion order because each
         *    entry stores its neighbours in before and after.
         */
        Entry<K,V> nextEntry() {
            if (modCount != expectedModCount)
                throw new ConcurrentModificationException();
            if (nextEntry == header)
                throw new NoSuchElementException();

            Entry<K,V> e = lastReturned = nextEntry;
            nextEntry = e.after;
            return e;
        }
    }

    private class KeyIterator extends LinkedHashIterator<K> {
        public K next() { return nextEntry().getKey(); }
    }

    private class ValueIterator extends LinkedHashIterator<V> {
        public V next() { return nextEntry().value; }
    }

    private class EntryIterator extends LinkedHashIterator<Map.Entry<K,V>> {
        public Map.Entry<K,V> next() { return nextEntry(); }
    }

    // These overrides alter the behavior of the superclass view iterator()
    // methods: the parent class calls these three factory methods, so the
    // iterators defined above are used instead.
    Iterator<K> newKeyIterator()   { return new KeyIterator();   }
    Iterator<V> newValueIterator() { return new ValueIterator(); }
    Iterator<Map.Entry<K,V>> newEntryIterator() { return new EntryIterator(); }

    /**
     * This override alters behavior of the superclass put method.  It causes
     * newly allocated entries to be inserted at the end of the linked list
     * and removes the eldest entry if appropriate.
     *
     * It calls the parent class's addEntry method, which performs the resize
     * check and then calls the createEntry method below.  In other words the
     * parent class decides about resizing, while this class adds the data
     * with its own createEntry method.
     */
    void addEntry(int hash, K key, V value, int bucketIndex) {
        super.addEntry(hash, key, value, bucketIndex);

        // Remove eldest entry if instructed
        Entry<K,V> eldest = header.after;
        if (removeEldestEntry(eldest)) {
            // removeEntryForKey deletes an entry by key.  By default this
            // branch never runs because removeEldestEntry returns false; a
            // subclass that wants eviction must override removeEldestEntry.
            // The parent class's delete fixes up the before and after
            // references by calling recordRemoval above.
            removeEntryForKey(eldest.key);
        }
    }

    /**
     * This override differs from addEntry in that it doesn't resize the
     * table or remove the eldest entry.
     *
     * Adds a new element: take the entry currently stored at bucketIndex,
     * create a new Entry that links to it (forming the bucket's list), and
     * store the new entry back into the table.  These steps match HashMap's
     * createEntry(); the extra call e.addBefore(header) then appends the new
     * entry to the end of the doubly linked list.
     */
    void createEntry(int hash, K key, V value, int bucketIndex) {
        HashMap.Entry<K,V> old = table[bucketIndex];
        Entry<K,V> e = new Entry<>(hash, key, value, old);
        table[bucketIndex] = e;
        e.addBefore(header);
        size++;
    }

    protected boolean removeEldestEntry(Map.Entry<K,V> eldest) {
        return false;
    }
}

The English Javadoc comments have been left in the source above so that you can compare them against the notes.

As for LinkedHashMap itself, there is nothing particularly special about it: if you understand HashMap, understanding LinkedHashMap is no trouble at all.

1. First, be clear about what LinkedHashMap adds: on top of HashMap it lets you traverse the elements in their insertion order. To implement this, the Entry class defines both a forward and a backward reference. In principle the insertion order could be recorded with a single backward reference, so why use a doubly linked list?
My understanding is that a doubly linked list keeps the list easy to maintain when entries are inserted and deleted. With only a single reference per entry, deleting an entry would require walking the list from the header to find the entry in front of the one being removed, which would hurt performance. With a doubly linked list, both insertion and deletion stay cheap.
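To make point 1 concrete, here is a small sketch (my own example, not from the original article) comparing the iteration order of HashMap and LinkedHashMap:

import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class InsertionOrderDemo {
    public static void main(String[] args) {
        // Keys inserted in the order: banana, apple, cherry
        Map<String, Integer> hashMap = new HashMap<>();
        Map<String, Integer> linkedMap = new LinkedHashMap<>();
        for (String key : new String[] {"banana", "apple", "cherry"}) {
            hashMap.put(key, key.length());
            linkedMap.put(key, key.length());
        }

        // HashMap iterates in bucket order, which depends on the hash values
        System.out.println("HashMap:       " + hashMap.keySet());
        // LinkedHashMap walks the doubly linked list starting at header.after,
        // so it always prints: [banana, apple, cherry]
        System.out.println("LinkedHashMap: " + linkedMap.keySet());
    }
}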
2. The doubly linked list is anchored by the header reference. header is an Entry field maintained inside LinkedHashMap; it is a dummy head node whose after reference points at the first inserted element, and traversal starts from there.
3. The original post shows a diagram here that illustrates this structure very well (image not reproduced).
4. When a new element is put into the map, the addEntry method is called, and addEntry in turn invokes the removeEldestEntry method. This is the hook for implementing an LRU-style expiration mechanism. By default removeEldestEntry simply returns false, meaning entries never expire:

protected boolean removeEldestEntry(Map.Entry<K,V> eldest) {
    return false;
}

Returning false means nothing ever expires, but a subclass can override this method to discard old data automatically. The method itself should not modify the map in any way; it only tells addEntry, through its return value, whether the eldest entry should be removed. This makes LinkedHashMap convenient for building an LRU cache that limits memory use by evicting old entries.
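As a sketch of that idea (my own illustrative example, not part of the JDK source), a fixed-size LRU cache can be built by combining the access-order constructor with an overridden removeEldestEntry:

import java.util.LinkedHashMap;
import java.util.Map;

// A minimal LRU cache sketch: accessOrder = true moves every entry that is
// read or updated to the end of the doubly linked list, and removeEldestEntry
// evicts header.after (the least recently used entry) once the cache is full.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;   // illustrative capacity limit

    public LruCache(int maxEntries) {
        super(16, 0.75f, true);     // third argument: access-order iteration
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries; // evict the eldest entry when over capacity
    }

    public static void main(String[] args) {
        LruCache<Integer, String> cache = new LruCache<>(2);
        cache.put(1, "one");
        cache.put(2, "two");
        cache.get(1);               // touching key 1 makes key 2 the eldest
        cache.put(3, "three");      // evicts key 2
        System.out.println(cache.keySet()); // prints [1, 3]
    }
}

Because accessOrder is true, every get() moves the touched entry to the end of the list, so header.after is always the least recently used entry and is exactly the one handed to removeEldestEntry.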

LinkedHashSet Source Code:

/**
 * @author  Josh Bloch
 * @see     Object#hashCode(), Collection, Set, HashSet, TreeSet, Hashtable
 * @since   1.4
 *
 * This class provides the same guarantee on the set side: traversing all
 * elements visits them in insertion order.  The class is very simple - just
 * four constructors, all of which call the same parent class constructor:
 *
 *     HashSet(int initialCapacity, float loadFactor, boolean dummy) {
 *         map = new LinkedHashMap<>(initialCapacity, loadFactor);
 *     }
 *
 * The third parameter, dummy, is ignored; it only distinguishes this
 * constructor from the public ones.  As that constructor shows,
 * LinkedHashSet does its work entirely through a LinkedHashMap.
 */
public class LinkedHashSet<E>
    extends HashSet<E>
    implements Set<E>, Cloneable, java.io.Serializable {

    private static final long serialVersionUID = -2851667679971038690L;

    /** Constructs a new, empty linked hash set with the specified initial capacity and load factor. */
    public LinkedHashSet(int initialCapacity, float loadFactor) {
        super(initialCapacity, loadFactor, true);
    }

    /** Constructs a new, empty linked hash set with the specified initial capacity and the default load factor (0.75). */
    public LinkedHashSet(int initialCapacity) {
        super(initialCapacity, .75f, true);
    }

    /** Constructs a new, empty linked hash set with the default initial capacity (16) and load factor (0.75). */
    public LinkedHashSet() {
        super(16, .75f, true);
    }

    /** Constructs a new linked hash set with the same elements as the specified collection. */
    public LinkedHashSet(Collection<? extends E> c) {
        super(Math.max(2*c.size(), 11), .75f, true);
        addAll(c);
    }
}

The comments in the code should make it quite clear: this class is very simple as long as you understand LinkedHashMap. It defines only four constructors and no other methods; everything else is inherited from the parent class.
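A small usage sketch (again my own example, not from the original article) showing the same insertion-order guarantee on the set side:

import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.Set;

public class LinkedHashSetDemo {
    public static void main(String[] args) {
        Set<String> hashSet = new HashSet<>();
        Set<String> linkedSet = new LinkedHashSet<>();
        for (String s : new String[] {"delta", "alpha", "charlie"}) {
            hashSet.add(s);
            linkedSet.add(s);
        }

        // HashSet's order depends on the hash values of the elements
        System.out.println("HashSet:       " + hashSet);
        // LinkedHashSet iterates over its backing LinkedHashMap's key list,
        // so it always prints: [delta, alpha, charlie]
        System.out.println("LinkedHashSet: " + linkedSet);
    }
}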

If you have any questions or objections, please feel free to point them out. My knowledge is limited, so all advice is welcome. "Shake hands~" ^_^
