LRU Cache Implementation with LinkedHashMap

Source: Internet
Author: User

LRU is the abbreviation of Least Recently Used.

The idea behind an LRU cache is a fixed capacity: a fixed size is allocated to the cache up front. Each read of an entry refreshes that entry's access time. Once the cache is full, the least recently used entry must be evicted before a new entry can be added.

By default, LinkedHashMap maintains its entries in the order they were added. It can instead be ordered by access: when the third constructor parameter, accessOrder, is true, the LinkedHashMap orders its entries by access; when it is false (the default), entries stay in insertion order.
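As a quick illustration of the accessOrder flag (a minimal sketch; the class name AccessOrderDemo is just for this example):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class AccessOrderDemo {
    public static void main(String[] args) {
        // Third constructor argument is accessOrder:
        // true = access order, false (the default) = insertion order.
        Map<String, Integer> byAccess = new LinkedHashMap<>(16, 0.75f, true);
        byAccess.put("a", 1);
        byAccess.put("b", 2);
        byAccess.put("c", 3);
        byAccess.get("a"); // reading "a" moves it to the end of the iteration order
        System.out.println(byAccess.keySet()); // prints [b, c, a]
    }
}
```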

When accessOrder is set to true, the most recently accessed entry moves to the tail of the iteration order, so the eldest (least recently used) entry sits at the head. That satisfies the requirement above: the most recently read data stays fresh while the stalest data is first in line for removal. LinkedHashMap also provides a removeEldestEntry method that decides, after each insertion, whether the eldest entry should be deleted; by default it returns false, meaning no entry is ever deleted. Implementing an LRU cache with LinkedHashMap is therefore a simple extension of it, and there are two ways to extend. One is inheritance, which is simpler to implement; because the result still implements the Map interface, you can wrap it with Collections.synchronizedMap() for thread-safe operation in a multithreaded environment. The other is delegation, which is more elegant, but because the wrapper class does not implement the Map interface, thread synchronization has to be handled by the class itself.

Delegation:

import java.util.ArrayList;
import java.util.Collection;
import java.util.LinkedHashMap;
import java.util.Map;

/** LRU (Least Recently Used) cache, simulated in Java on top of LinkedHashMap. */
public class LRUCache<K, V> {
    private static final float hashTableLoadFactor = 0.75f;
    private LinkedHashMap<K, V> map;
    private int cacheSize;

    /** Creates a new LRU cache; true as the third LinkedHashMap argument enables access order. */
    public LRUCache(int cacheSize) {
        this.cacheSize = cacheSize;
        int hashTableCapacity = (int) Math.ceil(cacheSize / hashTableLoadFactor) + 1;
        map = new LinkedHashMap<K, V>(hashTableCapacity, hashTableLoadFactor, true) {
            private static final long serialVersionUID = 1;
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > LRUCache.this.cacheSize;
            }
        };
    }

    /**
     * Retrieves an entry from the cache; the retrieved entry becomes the MRU
     * (most recently used) entry.
     * @param key the key whose associated value is returned.
     * @return the value associated with this key, or null if no value with
     *         this key exists in the cache.
     */
    public synchronized V get(K key) {
        return map.get(key);
    }

    /**
     * Adds an entry to this cache; the new entry becomes the MRU entry. If an
     * entry with the specified key already exists, it is replaced. If the cache
     * is full, the LRU (least recently used) entry is removed from the cache.
     * @param key the key with which the specified value is associated.
     * @param value a value to be associated with the specified key.
     */
    public synchronized void put(K key, V value) {
        map.put(key, value);
    }

    /** Clears the cache. */
    public synchronized void clear() {
        map.clear();
    }

    /** @return the number of entries currently in the cache. */
    public synchronized int usedEntries() {
        return map.size();
    }

    /** @return a <code>Collection</code> with a copy of the cache content. */
    public synchronized Collection<Map.Entry<K, V>> getAll() {
        return new ArrayList<Map.Entry<K, V>>(map.entrySet());
    }

    public static void main(String[] args) {
        LRUCache<String, String> c = new LRUCache<>(3);
        c.put("1", "one");           // 1
        c.put("2", "two");           // 2 1
        c.put("3", "three");         // 3 2 1
        c.put("4", "four");          // 4 3 2
        if (c.get("2") == null) throw new Error();   // 2 4 3
        c.put("5", "five");          // 5 2 4
        c.put("4", "second four");   // 4 5 2
        if (c.usedEntries() != 3) throw new Error();
        if (!c.get("4").equals("second four")) throw new Error();
        if (!c.get("5").equals("five")) throw new Error();
        if (!c.get("2").equals("two")) throw new Error();
        for (Map.Entry<String, String> entry : c.getAll()) {
            System.out.println(entry.getKey() + " : " + entry.getValue());
        }
    }
}
Inheritance:

import java.util.LinkedHashMap;
import java.util.Map;

public class LRUCache<K, V> extends LinkedHashMap<K, V> {
    private int cacheSize;

    public LRUCache(int cacheSize) {
        super(16, 0.75f, true); // 16: default initial capacity; true: access order
        this.cacheSize = cacheSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Called after each insertion; evicts the eldest entry once the map
        // holds more than cacheSize entries.
        return size() > cacheSize;
    }
}
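Since the inheritance approach yields a class that implements the Map interface, it can be wrapped with Collections.synchronizedMap() as mentioned above. A minimal sketch (using an anonymous LinkedHashMap subclass with a hard-coded capacity of 2 for brevity):

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

public class SynchronizedLruDemo {
    public static void main(String[] args) {
        // Access-ordered LinkedHashMap with removeEldestEntry, wrapped for thread safety.
        Map<String, String> cache = Collections.synchronizedMap(
                new LinkedHashMap<String, String>(16, 0.75f, true) {
                    @Override
                    protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                        return size() > 2; // keep at most 2 entries
                    }
                });
        cache.put("1", "one");
        cache.put("2", "two");
        cache.put("3", "three");            // evicts "1", the least recently used entry
        System.out.println(cache.keySet()); // prints [2, 3]
    }
}
```

Note that iterating over the views of a synchronized map (keySet, entrySet) in a multithreaded program still requires manual synchronization on the map object.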
