"Algorithm"--LRU algorithm


LRU principle

The core idea of the LRU (Least Recently Used) algorithm is to evict data based on its historical access record, the premise being that "if data has been accessed recently, the chance that it will be accessed again in the future is higher".
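As a first taste of this idea, here is a minimal sketch (the class name LruDemo, the keys, and the capacity are illustrative) using the JDK's LinkedHashMap in access order, whose iteration order runs from least to most recently used:

import java.util.LinkedHashMap;
import java.util.Map;

public class LruDemo {
    public static void main(String[] args) {
        // accessOrder = true: iteration runs from least- to most-recently used
        Map<Integer, String> m = new LinkedHashMap<Integer, String>(16, 0.75f, true);
        m.put(1, "a");
        m.put(2, "b");
        m.put(3, "c");
        m.get(1);                       // touching 1 makes it the most recently used
        System.out.println(m.keySet()); // [2, 3, 1] -- 2 is now the LRU victim
    }
}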

Implementation 1

The most common implementation uses a linked list to hold the cached data. The detailed algorithm works as follows:

1. Insert new data at the head of the list;
2. Whenever there is a cache hit (that is, cached data is accessed), move that data to the head of the list;
3. When the list is full, discard the data at the tail of the list.
Analysis
Hit rate
When there is hot data, LRU is very efficient, but occasional, periodic batch operations can cause the hit rate to drop sharply, and the cache pollution becomes quite serious.
"Complexity"
Simple to implement.
Cost
A hit needs to traverse the linked list to find the block that was hit, and then the data must be moved to the head, so each hit costs O(n) time.
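To make that cost concrete, here is a minimal sketch of rules 1-3 implemented with nothing but a linked list (the class name ListOnlyLru is hypothetical); note that get must scan the list before it can move the hit to the head:

import java.util.Iterator;
import java.util.LinkedList;

// Hypothetical sketch of Implementation 1 with only a linked list:
// a hit costs O(n) because the list must be scanned to find the entry.
public class ListOnlyLru<K, V> {

    private static class Node<K, V> {
        final K key;
        V value;
        Node(K key, V value) { this.key = key; this.value = value; }
    }

    private final LinkedList<Node<K, V>> list = new LinkedList<Node<K, V>>();
    private final int capacity;

    public ListOnlyLru(int capacity) { this.capacity = capacity; }

    public V get(K key) {
        Iterator<Node<K, V>> it = list.iterator();
        while (it.hasNext()) {              // O(n) scan to find the hit
            Node<K, V> n = it.next();
            if (n.key.equals(key)) {
                it.remove();
                list.addFirst(n);           // rule 2: move the hit to the head
                return n.value;
            }
        }
        return null;
    }

    public void put(K key, V value) {
        V old = get(key);                   // reuse the scan; a hit moves the node to the head
        if (old != null) {
            list.getFirst().value = value;
            return;
        }
        if (list.size() >= capacity) {
            list.removeLast();              // rule 3: discard the tail when full
        }
        list.addFirst(new Node<K, V>(key, value)); // rule 1: insert new data at the head
    }
}

Because both get and put scan the list, every operation is O(n) in the number of cached entries; the implementations that follow trade a little bookkeeping for O(1) hits.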

Java's LinkedHashMap already maintains exactly this access ordering internally, so a simple cache can be built by extending it; only the removeEldestEntry method must be overridden (see the JDK documentation):

import java.util.ArrayList;
import java.util.Collection;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

/**
 * A simple cache built on LinkedHashMap; the removeEldestEntry method
 * must be implemented, see the JDK documentation.
 *
 * @author Dennis
 *
 * @param <K>
 * @param <V>
 */
public class LRULinkedHashMap<K, V> extends LinkedHashMap<K, V> {

    private final int maxCapacity;
    private static final float DEFAULT_LOAD_FACTOR = 0.75f;
    private final Lock lock = new ReentrantLock();

    public LRULinkedHashMap(int maxCapacity) {
        // accessOrder = true orders entries from least- to most-recently
        // accessed, which is exactly the LRU ordering we need.
        super(maxCapacity, DEFAULT_LOAD_FACTOR, true);
        this.maxCapacity = maxCapacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the eldest (least recently accessed) entry once the map
        // grows past its capacity.
        return size() > maxCapacity;
    }

    @Override
    public boolean containsKey(Object key) {
        lock.lock();
        try {
            return super.containsKey(key);
        } finally {
            lock.unlock();
        }
    }

    @Override
    public V get(Object key) {
        lock.lock();
        try {
            return super.get(key);
        } finally {
            lock.unlock();
        }
    }

    @Override
    public V put(K key, V value) {
        lock.lock();
        try {
            return super.put(key, value);
        } finally {
            lock.unlock();
        }
    }

    @Override
    public int size() {
        lock.lock();
        try {
            return super.size();
        } finally {
            lock.unlock();
        }
    }

    @Override
    public void clear() {
        lock.lock();
        try {
            super.clear();
        } finally {
            lock.unlock();
        }
    }

    public Collection<Map.Entry<K, V>> getAll() {
        lock.lock();
        try {
            return new ArrayList<Map.Entry<K, V>>(super.entrySet());
        } finally {
            lock.unlock();
        }
    }
}
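A quick usage sketch (the keys, values, and the capacity of 3 are illustrative): after key 1 is touched, key 2 becomes the eldest entry and is the one evicted when a fourth entry arrives.

public class LRULinkedHashMapDemo {
    public static void main(String[] args) {
        LRULinkedHashMap<Integer, String> cache = new LRULinkedHashMap<Integer, String>(3);
        cache.put(1, "a");
        cache.put(2, "b");
        cache.put(3, "c");
        cache.get(1);                       // touch 1 so that 2 becomes the eldest
        cache.put(4, "d");                  // capacity exceeded: 2 is evicted
        System.out.println(cache.getAll()); // [3=c, 1=a, 4=d]
    }
}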
Implementation 2

LRUCache: doubly linked list + HashMap implementation

The traditional LRU algorithm attaches a counter to each cache object: every cache hit increments that object's counter, and when the cache runs out of space and old content must be evicted to make room for new content, all the counters are inspected and the least recently used content is replaced.

Its drawback is obvious: if the number of cached entries is small the problem is not serious, but once the cache grows to 100,000 or even 1,000,000 entries, every eviction has to traverse all of the counters, which consumes a huge amount of time and resources and is very slow.
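For illustration only, here is a minimal sketch of that scheme (the class and member names are hypothetical; a common variant stamps each entry with a logical clock instead of incrementing a per-entry count, which is what this sketch does); either way, eviction must scan every entry:

import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the counter/timestamp scheme: every access stamps
// the entry, and eviction scans all entries for the smallest stamp,
// an O(n) pass over the whole cache.
public class CounterLru<K, V> {

    private static class Stamped<V> {
        V value;
        long stamp;
    }

    private final Map<K, Stamped<V>> map = new HashMap<K, Stamped<V>>();
    private final int capacity;
    private long clock = 0;   // logical clock, advanced on every access

    public CounterLru(int capacity) {
        this.capacity = capacity;
    }

    public V get(K k) {
        Stamped<V> e = map.get(k);
        if (e == null) {
            return null;
        }
        e.stamp = ++clock;    // hit: update the entry's counter
        return e.value;
    }

    public void put(K k, V v) {
        if (!map.containsKey(k) && map.size() >= capacity) {
            // Eviction: traverse every counter to find the oldest entry.
            K victim = null;
            long oldest = Long.MAX_VALUE;
            for (Map.Entry<K, Stamped<V>> me : map.entrySet()) {
                if (me.getValue().stamp < oldest) {
                    oldest = me.getValue().stamp;
                    victim = me.getKey();
                }
            }
            map.remove(victim);
        }
        Stamped<V> e = map.get(k);
        if (e == null) {
            e = new Stamped<V>();
            map.put(k, e);
        }
        e.value = v;
        e.stamp = ++clock;
    }
}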
The linked list + HashMap approach works as follows: all cache positions are connected by a doubly linked list. When a position is hit, the list pointers are adjusted to move that position to the head of the list, and newly added cache entries are likewise placed directly at the head.
In this way, after repeated cache operations, recently hit entries accumulate toward the head of the list while untouched entries drift toward the tail, so the tail holds the least recently used cache entry.
When content must be replaced, the last position in the list is the least recently hit one, and we only need to evict that tail entry.
So much for the theory; the following code implements a cache with the LRU policy.
It is not thread-safe; to make it thread-safe, lock the corresponding methods (one way to do this is sketched after the code).

import java.util.HashMap;

public class LRUCache<K, V> {

    private int cacheCapacity;
    private HashMap<K, CacheNode> caches;
    private CacheNode first;  // head of the doubly linked list (most recently used)
    private CacheNode last;   // tail of the doubly linked list (least recently used)

    public LRUCache(int size) {
        this.cacheCapacity = size;
        caches = new HashMap<K, CacheNode>(size);
    }

    public void put(K k, V v) {
        CacheNode node = caches.get(k);
        if (node == null) {
            // Cache is full: evict the least recently used entry at the tail.
            if (caches.size() >= cacheCapacity) {
                caches.remove(last.key);
                removeLast();
            }
            node = new CacheNode();
            node.key = k;
        }
        node.value = v;
        moveToFirst(node);
        caches.put(k, node);
    }

    public Object get(K k) {
        CacheNode node = caches.get(k);
        if (node == null) {
            return null;
        }
        // A hit moves the node to the head of the list.
        moveToFirst(node);
        return node.value;
    }

    public Object remove(K k) {
        CacheNode node = caches.get(k);
        if (node != null) {
            // Unlink the node from the doubly linked list.
            if (node.pre != null) {
                node.pre.next = node.next;
            }
            if (node.next != null) {
                node.next.pre = node.pre;
            }
            if (node == first) {
                first = node.next;
            }
            if (node == last) {
                last = node.pre;
            }
        }
        return caches.remove(k);
    }

    public void clear() {
        first = null;
        last = null;
        caches.clear();
    }

    private void moveToFirst(CacheNode node) {
        if (first == node) {
            return;
        }
        // Unlink the node from its current position.
        if (node.next != null) {
            node.next.pre = node.pre;
        }
        if (node.pre != null) {
            node.pre.next = node.next;
        }
        if (node == last) {
            last = last.pre;
        }
        if (first == null || last == null) {
            // The list was empty: the node becomes both head and tail.
            first = last = node;
            return;
        }
        // Splice the node in at the head.
        node.next = first;
        first.pre = node;
        first = node;
        first.pre = null;
    }

    private void removeLast() {
        if (last != null) {
            last = last.pre;
            if (last == null) {
                first = null;
            } else {
                last.next = null;
            }
        }
    }

    @Override
    public String toString() {
        StringBuilder sb = new StringBuilder();
        CacheNode node = first;
        while (node != null) {
            sb.append(String.format("%s:%s ", node.key, node.value));
            node = node.next;
        }
        return sb.toString();
    }

    class CacheNode {
        CacheNode pre;
        CacheNode next;
        Object key;
        Object value;

        public CacheNode() {
        }
    }

    public static void main(String[] args) {
        LRUCache<Integer, String> lru = new LRUCache<Integer, String>(3);

        lru.put(1, "a");    // 1:a
        System.out.println(lru.toString());
        lru.put(2, "b");    // 2:b 1:a
        System.out.println(lru.toString());
        lru.put(3, "c");    // 3:c 2:b 1:a
        System.out.println(lru.toString());
        lru.put(4, "d");    // 4:d 3:c 2:b
        System.out.println(lru.toString());
        lru.put(1, "aa");   // 1:aa 4:d 3:c
        System.out.println(lru.toString());
        lru.put(2, "bb");   // 2:bb 1:aa 4:d
        System.out.println(lru.toString());
        lru.put(5, "e");    // 5:e 2:bb 1:aa
        System.out.println(lru.toString());
        lru.get(1);         // 1:aa 5:e 2:bb
        System.out.println(lru.toString());
        lru.remove(11);     // 1:aa 5:e 2:bb
        System.out.println(lru.toString());
        lru.remove(1);      // 5:e 2:bb
        System.out.println(lru.toString());
        lru.put(1, "aaa");  // 1:aaa 5:e 2:bb
        System.out.println(lru.toString());
    }
}
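As noted above, this class is not thread-safe. One way to add safety, mirroring the ReentrantLock pattern from Implementation 1 (the wrapper name SynchronizedLRUCache is hypothetical):

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical wrapper that serializes access to the LRUCache above.
public class SynchronizedLRUCache<K, V> {

    private final LRUCache<K, V> cache;
    private final Lock lock = new ReentrantLock();

    public SynchronizedLRUCache(int size) {
        this.cache = new LRUCache<K, V>(size);
    }

    public void put(K k, V v) {
        lock.lock();
        try {
            cache.put(k, v);
        } finally {
            lock.unlock();
        }
    }

    public Object get(K k) {
        lock.lock();
        try {
            return cache.get(k);
        } finally {
            lock.unlock();
        }
    }

    public Object remove(K k) {
        lock.lock();
        try {
            return cache.remove(k);
        } finally {
            lock.unlock();
        }
    }
}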

"Algorithm"--LRU algorithm

Contact Us

The content source of this page is from Internet, which doesn't represent Alibaba Cloud's opinion; products and services mentioned on that page don't have any relationship with Alibaba Cloud. If the content of the page makes you feel confusing, please write us an email, we will handle the problem within 5 days after receiving your email.

If you find any instances of plagiarism from the community, please send an email to: info-contact@alibabacloud.com and provide relevant evidence. A staff member will contact you within 5 working days.

A Free Trial That Lets You Build Big!

Start building with 50+ products and up to 12 months usage for Elastic Compute Service

  • Sales Support

    1 on 1 presale consultation

  • After-Sales Support

    24/7 Technical Support 6 Free Tickets per Quarter Faster Response

  • Alibaba Cloud offers highly flexible support services tailored to meet your exact needs.