Introduction:
We all keep a phone book with our friends' numbers. If we contact a friend often, we remember that friend's number and never need to look it up; but if we have not contacted someone for a long time, we have to consult the phone book again, and searching through it is time-consuming. Our brain's capacity is limited: we remember only what we are most familiar with, and what goes long unused is forgotten.
Computers use the same idea. A cache stores previously read data instead of throwing it away, so that a repeated read can be served directly from the cache without searching again, which greatly improves the system's responsiveness. But when a great deal of data has been read, we cannot keep all of it in the cache; memory is limited, after all. So we generally keep only the most recently read data in the cache (just as we keep our recently contacted friends' names and numbers in our heads). Below we look at such a caching mechanism.
LRU Cache:
The LRU cache is built on exactly this idea. LRU is the abbreviation for "least recently used": when the cache is full, an LRU cache evicts the least recently used entry to make room for newly read data. Since the most recently used data is also the most likely to be read again, an LRU cache can noticeably improve system performance.
Implementation:
To implement an LRU cache, we can start from the class LinkedHashMap. Using this class has two advantages. First, it can already maintain its entries in access order: the most recently accessed entry is kept at one end and the least recently accessed at the other (it can also maintain insertion order instead). Second, LinkedHashMap provides a hook method, removeEldestEntry, that decides whether the eldest (least recently used) entry should be removed; by default it never removes anything (in that case LinkedHashMap behaves just like an ordinary linked map), so we override this method so that once the number of cached entries exceeds the specified limit, the least recently used entry is evicted. The LinkedHashMap API documentation explains all of this very clearly, and I recommend reading it first.
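Before the full implementation below, here is a minimal, self-contained sketch (the class name LruDemo is mine, not from any library) showing just the two LinkedHashMap features described above: the accessOrder = true constructor flag and the removeEldestEntry override.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruDemo {

    // A LinkedHashMap constructed with accessOrder = true keeps entries in
    // access order; overriding removeEldestEntry evicts the least recently
    // used entry as soon as the map grows beyond the given capacity.
    static <K, V> Map<K, V> newLruMap(final int capacity) {
        return new LinkedHashMap<K, V>(capacity, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > capacity;
            }
        };
    }

    public static void main(String[] args) {
        Map<String, String> cache = newLruMap(2);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.get("a");          // "a" becomes the most recently used entry
        cache.put("c", "3");     // capacity exceeded: "b" (the LRU entry) is evicted
        System.out.println(cache.keySet()); // prints [a, c]
    }
}
```

Note that this sketch, unlike the full class below, is not thread-safe and exposes the map directly.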
To implement the LRU cache on top of LinkedHashMap, we can choose either inheritance or delegation; I prefer delegation. A delegation-based implementation has already been written, and written very well, so I will not reinvent it. The code is as follows:
import java.util.LinkedHashMap;
import java.util.Collection;
import java.util.Map;
import java.util.ArrayList;

/**
 * An LRU cache, based on <code>LinkedHashMap</code>.
 *
 * <p>This cache has a fixed maximum number of elements (<code>cacheSize</code>).
 * If the cache is full and another entry is added, the LRU (least recently used)
 * entry is dropped.
 *
 * <p>This class is thread-safe. All methods of this class are synchronized.
 *
 * <p>Author: Christian d'Heureuse, Inventec Informatik AG, Zurich, Switzerland<br>
 * Multi-licensed: EPL/LGPL/GPL/AL/BSD.
 */
public class LRUCache<K, V> {

    private static final float hashTableLoadFactor = 0.75f;

    private LinkedHashMap<K, V> map;
    private int cacheSize;

    /**
     * Creates a new LRU cache.
     * @param cacheSize the maximum number of entries that will be kept in this cache.
     */
    public LRUCache(int cacheSize) {
        this.cacheSize = cacheSize;
        int hashTableCapacity = (int) Math.ceil(cacheSize / hashTableLoadFactor) + 1;
        map = new LinkedHashMap<K, V>(hashTableCapacity, hashTableLoadFactor, true) {
            // (an anonymous inner class)
            private static final long serialVersionUID = 1;
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > LRUCache.this.cacheSize;
            }
        };
    }

    /**
     * Retrieves an entry from the cache.<br>
     * The retrieved entry becomes the MRU (most recently used) entry.
     * @param key the key whose associated value is to be returned.
     * @return the value associated to this key, or null if no value with this
     *         key exists in the cache.
     */
    public synchronized V get(K key) {
        return map.get(key);
    }

    /**
     * Adds an entry to this cache.
     * The new entry becomes the MRU (most recently used) entry.
     * If an entry with the specified key already exists in the cache, it is
     * replaced by the new entry.
     * If the cache is full, the LRU (least recently used) entry is removed
     * from the cache.
     * @param key   the key with which the specified value is to be associated.
     * @param value a value to be associated with the specified key.
     */
    public synchronized void put(K key, V value) {
        map.put(key, value);
    }

    /**
     * Clears the cache.
     */
    public synchronized void clear() {
        map.clear();
    }

    /**
     * Returns the number of used entries in the cache.
     * @return the number of entries currently in the cache.
     */
    public synchronized int usedEntries() {
        return map.size();
    }

    /**
     * Returns a <code>Collection</code> that contains a copy of all cache entries.
     * @return a <code>Collection</code> with a copy of the cache content.
     */
    public synchronized Collection<Map.Entry<K, V>> getAll() {
        return new ArrayList<Map.Entry<K, V>>(map.entrySet());
    }

} // end class LRUCache

//------------------------------------------------------------------------

// Test routine for the LRUCache class.
public static void main(String[] args) {
    LRUCache<String, String> c = new LRUCache<String, String>(3);
    c.put("1", "one");                            // 1
    c.put("2", "two");                            // 2 1
    c.put("3", "three");                          // 3 2 1
    c.put("4", "four");                           // 4 3 2
    if (c.get("2") == null) throw new Error();    // 2 4 3
    c.put("5", "five");                           // 5 2 4
    c.put("4", "second four");                    // 4 5 2
    // Verify cache content.
    if (c.usedEntries() != 3) throw new Error();
    if (!c.get("4").equals("second four")) throw new Error();
    if (!c.get("5").equals("five")) throw new Error();
    if (!c.get("2").equals("two")) throw new Error();
    // List cache content.
    for (Map.Entry<String, String> e : c.getAll())
        System.out.println(e.getKey() + " : " + e.getValue());
}
Code derived from: http://www.source-code.biz/snippets/java/6.htm
In the blog post http://gogole.iteye.com/blog/692103, the author uses a doubly linked list + hashtable approach. When an interview question asks how to implement an LRU cache, the interviewer generally expects the doubly linked list + hashtable solution. So I excerpt part of the original post below:
Doubly linked list + hashtable implementation principle:
All cache entries are chained together in a doubly linked list. When an entry is hit, the list pointers are adjusted to move it to the head of the list, and a newly added entry is likewise inserted directly at the head. After a number of cache operations, the most recently hit entries gather at the head of the list while entries that have not been hit drift toward the tail, so the tail of the list holds the least recently used entry. When an entry must be replaced, we simply evict the one at the tail of the list.
import java.util.Hashtable;

public class LRUCache {

    private int cacheSize;
    private Hashtable<Object, Entry> nodes; // cache container
    private int currentSize;
    private Entry first; // list head
    private Entry last;  // list tail

    public LRUCache(int i) {
        currentSize = 0;
        cacheSize = i;
        nodes = new Hashtable<Object, Entry>(i); // cache container
    }

    /**
     * Gets the object in the cache and moves it to the head of the list.
     */
    public Entry get(Object key) {
        Entry node = nodes.get(key);
        if (node != null) {
            moveToHead(node);
            return node;
        } else {
            return null;
        }
    }

    /**
     * Adds an entry to the hashtable and puts it at the head of the list.
     */
    public void put(Object key, Object value) {
        // Check whether the key already exists in the hashtable;
        // if so, only update its entry.
        Entry node = nodes.get(key);
        if (node == null) {
            // The cache container has reached its size limit.
            if (currentSize >= cacheSize) {
                nodes.remove(last.key);
                removeLast();
            } else {
                currentSize++;
            }
            node = new Entry();
            node.key = key; // record the key so eviction can find it later
        }
        node.value = value;
        // Place the most recently used node at the head of the list.
        moveToHead(node);
        nodes.put(key, node);
    }

    /**
     * Removes an entry. Note: the delete operation executes only when the cache is full.
     */
    public void remove(Object key) {
        Entry node = nodes.get(key);
        // Remove the node from the linked list.
        if (node != null) {
            if (node.prev != null) {
                node.prev.next = node.next;
            }
            if (node.next != null) {
                node.next.prev = node.prev;
            }
            if (last == node)
                last = node.prev;
            if (first == node)
                first = node.next;
        }
        // Remove the node from the hashtable.
        nodes.remove(key);
    }

    /**
     * Removes the tail node of the list, i.e. the least recently used entry.
     */
    private void removeLast() {
        // If the tail is not empty, delete the old tail
        // (evict the least recently used cache object).
        if (last != null) {
            if (last.prev != null)
                last.prev.next = null;
            else
                first = null;
            last = last.prev;
        }
    }

    /**
     * Moves a node to the head of the list, marking it as most recently used.
     */
    private void moveToHead(Entry node) {
        if (node == first)
            return;
        if (node.prev != null)
            node.prev.next = node.next;
        if (node.next != null)
            node.next.prev = node.prev;
        if (last == node)
            last = node.prev;
        if (first != null) {
            node.next = first;
            first.prev = node;
        }
        first = node;
        node.prev = null;
        if (last == null)
            last = first;
    }

    /**
     * Empties the cache.
     */
    public void clear() {
        first = null;
        last = null;
        nodes.clear(); // also empty the hashtable
        currentSize = 0;
    }
}

class Entry {
    Entry prev;   // previous node
    Entry next;   // next node
    Object value; // value
    Object key;   // key
}
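For comparison, here is a compact, self-contained generic sketch of the same doubly linked list + hash map technique, using HashMap in place of Hashtable; all names here (SimpleLruCache, Node, index) are mine, not from the excerpt above. The map gives O(1) lookup, while the list maintains recency order with the head as the most recently used entry.

```java
import java.util.HashMap;
import java.util.Map;

public class SimpleLruCache<K, V> {

    private static final class Node<K, V> {
        K key;
        V value;
        Node<K, V> prev, next;
    }

    private final int capacity;
    private final Map<K, Node<K, V>> index = new HashMap<>();
    private Node<K, V> head, tail; // head = most recently used, tail = least

    public SimpleLruCache(int capacity) {
        this.capacity = capacity;
    }

    public V get(K key) {
        Node<K, V> n = index.get(key);
        if (n == null) return null;
        moveToHead(n); // a hit makes this entry the most recently used
        return n.value;
    }

    public void put(K key, V value) {
        Node<K, V> n = index.get(key);
        if (n == null) {
            if (index.size() >= capacity) {
                index.remove(tail.key); // evict the least recently used entry
                removeNode(tail);
            }
            n = new Node<>();
            n.key = key;
            index.put(key, n);
            addToHead(n);
        } else {
            moveToHead(n);
        }
        n.value = value;
    }

    public int size() {
        return index.size();
    }

    private void moveToHead(Node<K, V> n) {
        if (n == head) return;
        removeNode(n);
        addToHead(n);
    }

    private void addToHead(Node<K, V> n) {
        n.prev = null;
        n.next = head;
        if (head != null) head.prev = n;
        head = n;
        if (tail == null) tail = n;
    }

    private void removeNode(Node<K, V> n) {
        if (n.prev != null) n.prev.next = n.next; else head = n.next;
        if (n.next != null) n.next.prev = n.prev; else tail = n.prev;
    }

    public static void main(String[] args) {
        SimpleLruCache<String, String> c = new SimpleLruCache<>(2);
        c.put("a", "1");
        c.put("b", "2");
        c.get("a");      // "a" is now the most recently used entry
        c.put("c", "3"); // evicts "b", the least recently used entry
        System.out.println(c.get("b")); // prints null
        System.out.println(c.get("a")); // prints 1
    }
}
```

Unlike the excerpted version, this sketch stores the value directly rather than returning the internal Entry, and like it, it is not thread-safe; wrap calls in synchronization if needed.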
Please credit the source when reprinting: http://blog.csdn.net/beiyeqingteng