LRU Cache Introduction and implementation (Java)


We all keep a phone book with the numbers of all our friends. For the friends we contact often, we remember their numbers and never need the book; but to call a friend we have not contacted for a long time, we have to look the number up again, and searching the phone book takes time. What our brains can hold is limited: we remember only what we use most, and what goes untouched for a long time is naturally forgotten.

Computers use the same idea. We keep previously read data in a cache instead of throwing it away, so that on the next read we can fetch it directly from the cache rather than searching again, which greatly improves the system's responsiveness. But when a great many different items are read, we cannot keep everything in the cache, since memory is limited; we generally keep only the most recently read data (just as the brain keeps the names and numbers of recently contacted friends). This is the caching mechanism we will look at.

LRU Cache:

The LRU cache exploits exactly this idea. LRU is the abbreviation for "least recently used": when the cache is full, the LRU cache evicts the least recently used entry to make room for the most recently read data. Since recently read data is also the most likely to be read again, using an LRU cache improves system performance.

Implementation:

To implement an LRU cache, we can start with the class LinkedHashMap. Using this class has two advantages. First, it can already maintain its entries in access order: the most recently read entry is kept at one end and the least recently read at the other (it can also maintain insertion order instead). Second, LinkedHashMap provides a hook for deciding whether the eldest entry should be removed; by default this method never removes anything (so a plain LinkedHashMap just behaves like an ordinary linked map), and we need to override it so that when the number of entries stored in the cache exceeds the specified capacity, the least recently used entry is evicted. The LinkedHashMap API documentation explains all of this very clearly, and I recommend reading it first.
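The two advantages described above can be seen in a few lines. The following is a minimal, self-contained sketch (illustrative only, not the cache class from the article): a LinkedHashMap constructed with accessOrder = true and an overridden removeEldestEntry, with a hypothetical capacity of 3.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class Main {
    public static void main(String[] args) {
        final int cacheSize = 3; // illustrative capacity
        // accessOrder = true: iteration order runs from least recently
        // accessed to most recently accessed.
        Map<String, String> cache =
                new LinkedHashMap<String, String>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                // Evict the eldest (least recently used) entry once the
                // map grows beyond cacheSize.
                return size() > cacheSize;
            }
        };
        cache.put("a", "1");
        cache.put("b", "2");
        cache.put("c", "3");
        cache.get("a");        // touch "a": it becomes most recently used
        cache.put("d", "4");   // evicts "b", the least recently used entry
        System.out.println(cache.keySet()); // [c, a, d]
    }
}
```

Note that reading "a" saved it from eviction; without the get, "a" would have been the eldest entry when "d" was inserted.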

To build the LRU cache on top of LinkedHashMap we can use either inheritance or delegation; I prefer delegation. An implementation based on delegation has already been written, and written very beautifully, so I will not reinvent it. The code is as follows:

package lru;

import java.util.ArrayList;
import java.util.Collection;
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * An LRU cache, based on <code>LinkedHashMap</code>.
 *
 * <p>
 * This cache has a fixed maximum number of elements (<code>cacheSize</code>).
 * If the cache is full and another entry is added, the LRU (least recently
 * used) entry is dropped.
 *
 * <p>
 * This class is thread-safe. All methods of this class are synchronized.
 *
 * <p>
 * Author: Christian d'Heureuse, Inventec Informatik AG, Zurich, Switzerland<br>
 * Multi-licensed: EPL / LGPL / GPL / AL / BSD.
 */
public class LRUCache<K, V> {

    private static final float hashTableLoadFactor = 0.75f;

    private LinkedHashMap<K, V> map;
    private int cacheSize;

    /**
     * Creates a new LRU cache.
     *
     * @param cacheSize
     *            the maximum number of entries that will be kept in this cache.
     */
    public LRUCache(int cacheSize) {
        this.cacheSize = cacheSize;
        int hashTableCapacity = (int) Math.ceil(cacheSize / hashTableLoadFactor) + 1;
        map = new LinkedHashMap<K, V>(hashTableCapacity, hashTableLoadFactor, true) {
            // (an anonymous inner class)
            private static final long serialVersionUID = 1;

            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > LRUCache.this.cacheSize;
            }
        };
    }

    /**
     * Retrieves an entry from the cache.<br>
     * The retrieved entry becomes the MRU (most recently used) entry.
     *
     * @param key
     *            the key whose associated value is to be returned.
     * @return the value associated to this key, or null if no value with this
     *         key exists in the cache.
     */
    public synchronized V get(K key) {
        return map.get(key);
    }

    /**
     * Adds an entry to this cache. The new entry becomes the MRU (most recently
     * used) entry. If an entry with the specified key already exists in the
     * cache, it is replaced by the new entry. If the cache is full, the LRU
     * (least recently used) entry is removed from the cache.
     *
     * @param key
     *            the key with which the specified value is to be associated.
     * @param value
     *            a value to be associated with the specified key.
     */
    public synchronized void put(K key, V value) {
        map.put(key, value);
    }

    /**
     * Clears the cache.
     */
    public synchronized void clear() {
        map.clear();
    }

    /**
     * Returns the number of used entries in the cache.
     *
     * @return the number of entries currently in the cache.
     */
    public synchronized int usedEntries() {
        return map.size();
    }

    /**
     * Returns a <code>Collection</code> that contains a copy of all cache
     * entries.
     *
     * @return a <code>Collection</code> with a copy of the cache content.
     */
    public synchronized Collection<Map.Entry<K, V>> getAll() {
        return new ArrayList<Map.Entry<K, V>>(map.entrySet());
    }

    // Test routine for the LRUCache class.
    public static void main(String[] args) {
        LRUCache<String, String> c = new LRUCache<String, String>(3);
        c.put("1", "one");                         // 1
        c.put("2", "two");                         // 2 1
        c.put("3", "three");                       // 3 2 1
        c.put("4", "four");                        // 4 3 2
        if (c.get("2") == null) throw new Error(); // 2 4 3
        c.put("5", "five");                        // 5 2 4
        c.put("4", "second four");                 // 4 5 2
        // Verify cache content.
        if (c.usedEntries() != 3) throw new Error();
        if (!c.get("4").equals("second four")) throw new Error();
        if (!c.get("5").equals("five")) throw new Error();
        if (!c.get("2").equals("two")) throw new Error();
        // List cache content.
        for (Map.Entry<String, String> e : c.getAll())
            System.out.println(e.getKey() + " : " + e.getValue());
    }
}

Code derived from: http://www.source-code.biz/snippets/java/6.htm


In the blog post http://gogole.iteye.com/blog/692103, the author implements an LRU cache with a doubly linked list plus a hashtable. If an interview question asks how to implement LRU, the interviewer generally expects the doubly-linked-list + hashtable approach. So I excerpt part of the original post as follows:


Doubly linked list + Hashtable implementation principle:

All cached entries are connected by a doubly linked list. When an entry is hit, it is moved to the head of the list by adjusting the pointers around it, and a newly added entry is likewise inserted directly at the head. In this way, after a number of cache operations the recently hit entries accumulate near the head of the list, while entries that have not been hit drift toward the tail, so the tail of the list is the least recently used entry. When content must be replaced, the tail of the list is the least-hit position, and we simply evict the tail.
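The pointer adjustment described above can be sketched in isolation before reading the full class. This is an illustrative fragment, not the blog's code: Node is a simplified stand-in for a cache entry, and moveToHead performs the "unlink and relink at the head" step that runs on every hit.

```java
public class Main {
    // A simplified doubly linked list node (illustrative only).
    static class Node {
        String key;
        Node prev, next;
        Node(String key) { this.key = key; }
    }

    static Node head, tail;

    // Unlink `node` from its current position and reinsert it at the head,
    // as happens on every cache hit.
    static void moveToHead(Node node) {
        if (node == head) return;
        // Unlink the node from its neighbors.
        if (node.prev != null) node.prev.next = node.next;
        if (node.next != null) node.next.prev = node.prev;
        if (node == tail) tail = node.prev;
        // Relink it at the head.
        node.prev = null;
        node.next = head;
        if (head != null) head.prev = node;
        head = node;
        if (tail == null) tail = head;
    }

    public static void main(String[] args) {
        // Build the list a -> b -> c by hand.
        Node a = new Node("a"), b = new Node("b"), c = new Node("c");
        head = a; a.next = b; b.prev = a; b.next = c; c.prev = b; tail = c;
        moveToHead(c); // hit on "c": it becomes the head, "b" becomes the tail
        StringBuilder sb = new StringBuilder();
        for (Node n = head; n != null; n = n.next) sb.append(n.key);
        System.out.println(sb);       // cab
        System.out.println(tail.key); // b
    }
}
```

After the hit, the tail ("b") is the entry that would be evicted next, which is exactly the invariant the full implementation maintains.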

package lru;

import java.util.Hashtable;

public class LRUCached {

    private int cacheSize;
    private Hashtable<Object, Entry> nodes; // cache container
    private int currentSize;
    private Entry first; // head of the list
    private Entry last;  // tail of the list

    public LRUCached(int i) {
        currentSize = 0;
        cacheSize = i;
        nodes = new Hashtable<Object, Entry>(i); // cache container
    }

    /**
     * Gets an object from the cache and moves it to the head of the list.
     */
    public Entry get(Object key) {
        Entry node = nodes.get(key);
        if (node != null) {
            moveToHead(node);
            return node;
        } else {
            return null;
        }
    }

    /**
     * Adds an entry to the hashtable and puts it at the head of the list.
     */
    public void put(Object key, Object value) {
        // First check whether the hashtable already holds this entry;
        // if it does, only its value is updated.
        Entry node = nodes.get(key);
        if (node == null) {
            // The cache container has reached its size: evict the tail.
            if (currentSize >= cacheSize) {
                nodes.remove(last.key);
                removeLast();
            } else {
                currentSize++;
            }
            node = new Entry();
            node.key = key;
        }
        node.value = value;
        // Move the most recently used node to the head of the list.
        moveToHead(node);
        nodes.put(key, node);
    }

    /**
     * Removes an entry. Note: eviction of the tail is only performed
     * when the cache is full.
     */
    public void remove(Object key) {
        Entry node = nodes.get(key);
        // Unlink the node from the list.
        if (node != null) {
            if (node.prev != null) {
                node.prev.next = node.next;
            }
            if (node.next != null) {
                node.next.prev = node.prev;
            }
            if (last == node)
                last = node.prev;
            if (first == node)
                first = node.next;
        }
        // Remove the node from the hashtable.
        nodes.remove(key);
    }

    /**
     * Deletes the tail node of the linked list, i.e. the least recently
     * used entry.
     */
    private void removeLast() {
        // If the tail of the list is not empty, detach it
        // (deleting the least recently used cache object).
        if (last != null) {
            if (last.prev != null)
                last.prev.next = null;
            else
                first = null;
            last = last.prev;
        }
    }

    /**
     * Moves a node to the head of the list, marking it as most recently used.
     */
    private void moveToHead(Entry node) {
        if (node == first)
            return;
        if (node.prev != null)
            node.prev.next = node.next;
        if (node.next != null)
            node.next.prev = node.prev;
        if (last == node)
            last = node.prev;
        if (first != null) {
            node.next = first;
            first.prev = node;
        }
        first = node;
        node.prev = null;
        if (last == null)
            last = first;
    }

    /**
     * Empties the cache.
     */
    public void clear() {
        first = null;
        last = null;
        currentSize = 0;
        nodes.clear();
    }
}

class Entry {
    Entry prev;   // previous node
    Entry next;   // next node
    Object value; // value
    Object key;   // key
}

