Introduction:
We usually keep a phone book to record our friends' phone numbers. For friends who contact us often, we simply remember their numbers and never need to look them up. But if we haven't been in touch with a friend for a long time, we have to reach for the phone book to contact them again, and searching through it takes time. What our brain can hold is limited: we remember only the numbers we use most, and the ones left untouched for a long time are naturally forgotten.
In fact, computers use the same idea. We use a cache to keep data that has already been read, instead of dropping it; when the same data is requested again, it can be fetched straight from the cache rather than searched for from scratch, which greatly improves the system's responsiveness. However, when we read a large amount of data, we cannot keep everything in the cache, since memory is limited. We usually keep the most recently read data in the cache (just as the brain keeps the names and numbers of recently contacted friends). This article studies exactly such a caching mechanism.
LRU cache:
An LRU cache applies this idea. LRU is the abbreviation of "least recently used". That is, when the cache is full, the LRU cache evicts the least recently used entry to make room for newly read data, on the assumption that recently used data is the most likely to be read again. For example, if a capacity-3 cache holds A, B, and C, and A has gone untouched the longest, inserting D evicts A. With an LRU cache we can therefore improve system performance.
Implementation:
To implement an LRU cache, we first need a class named LinkedHashMap. There are two benefits to using this class. First, it can maintain its entries in access order: the least recently accessed entry sits at the head of the iteration order and the most recently accessed at the tail (it can also maintain insertion order instead). Second, LinkedHashMap itself provides a hook, removeEldestEntry, for deciding whether to remove the eldest (least recently used) entry. By default this method never removes anything, so a plain LinkedHashMap just grows like an ordinary map. Therefore, we need to override this method so that, when the number of entries in the cache exceeds the specified limit, the least recently used entry is removed. The LinkedHashMap API documentation explains this clearly; we recommend reading it first.
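To make those two benefits concrete, here is a minimal sketch of the inheritance approach (the class name SimpleLRUCache and the capacity values are my own illustration, not from the delegation-based code below):

import java.util.LinkedHashMap;
import java.util.Map;

// A minimal sketch: extend LinkedHashMap, enable access order,
// and override removeEldestEntry to cap the size.
public class SimpleLRUCache<K, V> extends LinkedHashMap<K, V> {

   private final int cacheSize;

   public SimpleLRUCache(int cacheSize) {
      // accessOrder = true: iteration order runs from least recently
      // accessed to most recently accessed.
      super(16, 0.75f, true);
      this.cacheSize = cacheSize;
   }

   @Override
   protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
      // Evict the eldest (least recently used) entry once the limit is exceeded.
      return size() > cacheSize;
   }
}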
To implement an LRU cache based on LinkedHashMap, we can choose either inheritance or delegation. I prefer delegation. Someone has already written a delegation-based implementation, and it is very elegant, so I won't presume to improve on it. The code is as follows:
import java.util.LinkedHashMap;
import java.util.Collection;
import java.util.Map;
import java.util.ArrayList;

/**
* An LRU cache, based on <code>LinkedHashMap</code>.
*
* <p>
* This cache has a fixed maximum number of elements (<code>cacheSize</code>).
* If the cache is full and another entry is added, the LRU (least recently used) entry is dropped.
*
* <p>
* This class is thread-safe. All methods of this class are synchronized.
*
* <p>
* Author: Christian d'Heureuse, Inventec Informatik AG, Zurich, Switzerland<br>
* Multi-licensed: EPL / LGPL / GPL / AL / BSD.
*/
public class LRUCache<K,V> {

   private static final float hashTableLoadFactor = 0.75f;

   private LinkedHashMap<K,V> map;
   private int cacheSize;

   /**
   * Creates a new LRU cache.
   * @param cacheSize the maximum number of entries that will be kept in this cache.
   */
   public LRUCache (int cacheSize) {
      this.cacheSize = cacheSize;
      int hashTableCapacity = (int)Math.ceil(cacheSize / hashTableLoadFactor) + 1;
      map = new LinkedHashMap<K,V>(hashTableCapacity, hashTableLoadFactor, true) {
         // (an anonymous inner class)
         private static final long serialVersionUID = 1;
         @Override
         protected boolean removeEldestEntry (Map.Entry<K,V> eldest) {
            return size() > LRUCache.this.cacheSize;
         }
      };
   }

   /**
   * Retrieves an entry from the cache.<br>
   * The retrieved entry becomes the MRU (most recently used) entry.
   * @param key the key whose associated value is to be returned.
   * @return the value associated to this key, or null if no value with this key exists in the cache.
   */
   public synchronized V get (K key) {
      return map.get(key);
   }

   /**
   * Adds an entry to this cache.
   * The new entry becomes the MRU (most recently used) entry.
   * If an entry with the specified key already exists in the cache, it is replaced by the new entry.
   * If the cache is full, the LRU (least recently used) entry is removed from the cache.
   * @param key the key with which the specified value is to be associated.
   * @param value a value to be associated with the specified key.
   */
   public synchronized void put (K key, V value) {
      map.put (key, value);
   }

   /**
   * Clears the cache.
   */
   public synchronized void clear() {
      map.clear();
   }

   /**
   * Returns the number of used entries in the cache.
   * @return the number of entries currently in the cache.
   */
   public synchronized int usedEntries() {
      return map.size();
   }

   /**
   * Returns a <code>Collection</code> that contains a copy of all cache entries.
   * @return a <code>Collection</code> with a copy of the cache content.
   */
   public synchronized Collection<Map.Entry<K,V>> getAll() {
      return new ArrayList<Map.Entry<K,V>>(map.entrySet());
   }

} // end class LRUCache

// Test routine for the LRUCache class.
public static void main (String[] args) {
   LRUCache<String,String> c = new LRUCache<String, String>(3);
   c.put ("1", "one");                           // 1
   c.put ("2", "two");                           // 2 1
   c.put ("3", "three");                         // 3 2 1
   c.put ("4", "four");                          // 4 3 2
   if (c.get("2") == null) throw new Error();    // 2 4 3
   c.put ("5", "five");                          // 5 2 4
   c.put ("4", "second four");                   // 4 5 2
   // Verify cache content.
   if (c.usedEntries() != 3) throw new Error();
   if (!c.get("4").equals("second four")) throw new Error();
   if (!c.get("5").equals("five")) throw new Error();
   if (!c.get("2").equals("two")) throw new Error();
   // List cache content.
   for (Map.Entry<String, String> e : c.getAll())
      System.out.println (e.getKey() + " : " + e.getValue());
}
Code from: http://www.source-code.biz/snippets/java/6.htm
In the blog http://gogole.iteye.com/blog/692103, the author implements an LRU cache with a doubly linked list plus a hash table. If an interview question asks you to implement LRU, the interviewer generally expects the doubly-linked-list + hash-table approach. I have therefore excerpted part of the original content below:
Implementation principle of doubly linked list + hash table:
All cache positions are connected by a doubly linked list. When a position is hit, its links are adjusted to move it to the head of the list; newly added entries are likewise placed at the head. In this way, after many cache operations, recently hit entries accumulate toward the head of the list, while entries that go unhit drift toward the tail, so the tail of the list is the least recently used entry. When something must be evicted, the tail is the least recently hit position, and we only need to remove the tail node. Because the hash table gives O(1) lookup of a node and the doubly linked list allows O(1) unlinking and relinking, both get and put run in constant time.
import java.util.Hashtable;

public class LRUCache {

   private int cacheSize;
   private Hashtable<Object, Entry> nodes; // cache container
   private int currentSize;
   private Entry first; // head of the linked list
   private Entry last;  // tail of the linked list

   public LRUCache(int size) {
      currentSize = 0;
      cacheSize = size;
      nodes = new Hashtable<Object, Entry>(size); // cache container
   }

   /**
    * Gets the cached entry and moves it to the head of the list.
    */
   public Entry get(Object key) {
      Entry node = nodes.get(key);
      if (node != null) {
         moveToHead(node);
         return node;
      } else {
         return null;
      }
   }

   /**
    * Adds an entry to the hashtable and puts it at the head of the list.
    */
   public void put(Object key, Object value) {
      // Check whether the entry already exists; if so, only update the value.
      Entry node = nodes.get(key);
      if (node == null) {
         // If the cache container has reached its size limit, evict the LRU entry.
         if (currentSize >= cacheSize) {
            nodes.remove(last.key);
            removeLast();
         } else {
            currentSize++;
         }
         node = new Entry();
         node.key = key; // the original code omitted this line; without it, eviction breaks
      }
      node.value = value;
      // Place the node at the head of the list to mark it as most recently used.
      moveToHead(node);
      nodes.put(key, node);
   }

   /**
    * Deletes the entry for the given key.
    */
   public void remove(Object key) {
      Entry node = nodes.get(key);
      // Unlink the node from the list.
      if (node != null) {
         if (node.prev != null) node.prev.next = node.next;
         if (node.next != null) node.next.prev = node.prev;
         if (last == node) last = node.prev;
         if (first == node) first = node.next;
      }
      // Remove the node from the hashtable.
      nodes.remove(key);
   }

   /**
    * Deletes the tail node of the list, i.e. the least recently used entry.
    */
   private void removeLast() {
      if (last != null) {
         if (last.prev != null)
            last.prev.next = null;
         else
            first = null;
         last = last.prev;
      }
   }

   /**
    * Moves a node to the head of the list, marking it as most recently used.
    */
   private void moveToHead(Entry node) {
      if (node == first) return;
      if (node.prev != null) node.prev.next = node.next;
      if (node.next != null) node.next.prev = node.prev;
      if (last == node) last = node.prev;
      if (first != null) {
         node.next = first;
         first.prev = node;
      }
      first = node;
      node.prev = null;
      if (last == null) last = first;
   }

   /** Clears the cache. */
   public void clear() {
      first = null;
      last = null;
      currentSize = 0;
      nodes.clear(); // the original code left stale entries in the hashtable
   }
}

class Entry {
   Entry prev;   // previous node
   Entry next;   // next node
   Object value; // value of the cached object
   Object key;   // key of the cached object
}
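As a quick sanity check, a usage sketch along the lines of the earlier test routine (my own example, not from the excerpted blog) could look like this:

// Hypothetical usage example (not part of the excerpted code).
public class LRUCacheTest {
   public static void main(String[] args) {
      LRUCache c = new LRUCache(3);
      c.put("1", "one");   // list: 1
      c.put("2", "two");   // list: 2 1
      c.put("3", "three"); // list: 3 2 1
      c.put("4", "four");  // "1" is evicted; list: 4 3 2
      System.out.println(c.get("1"));       // null, since "1" was evicted
      System.out.println(c.get("2").value); // "two"; "2" moves to the head
   }
}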
If you reprint this, please credit the source: http://blog.csdn.net/beiyeqingteng