A brief analysis of the LRU (K-V) caching algorithm


LRU (Least Recently Used) is a common idea in caching technology. As the name suggests, it targets the least recently used entries, which can be measured along two dimensions: time (how recently an entry was used) and frequency (how often it is used). If you need to prioritize the K-V entities in a cache, both dimensions have to be considered; in LRU, the most frequently used entries, or simply the most recently accessed ones, sit at the front. That is the general idea of LRU.

In operating systems, LRU is used as a page replacement algorithm for memory management: the block of data that is in memory but has gone unused for the longest time is the LRU block, and the operating system evicts it from memory to make room for other data.


Wikipedia's description of LRU:

In computing, cache algorithms (also frequently called cache replacement algorithms or cache replacement policies) are optimizing instructions, or algorithms, that a computer program or a hardware-maintained structure can follow in order to manage a cache of information stored on the computer. When the cache is full, the algorithm must choose which items to discard to make room for the new ones.

Least Recently Used (LRU)

Discards the least recently used items first. This algorithm requires keeping track of what was used when, which is expensive if one wants to make sure the algorithm always discards the least recently used item. General implementations of this technique require keeping "age bits" for cache-lines and tracking the "Least Recently Used" cache-line based on the age bits. In such an implementation, every time a cache-line is used, the age of all other cache-lines changes. LRU is actually a family of caching algorithms with members including 2Q by Theodore Johnson and Dennis Shasha [3], and LRU/K by Pat O'Neil, Betty O'Neil and Gerhard Weikum. [4]

Analysis and implementation of LRUCache

1. First, you can implement a FIFO version, but this determines priority by insertion order only; it does not take the access order into account and therefore does not fully implement LRUCache.

With LinkedHashMap the Java implementation is very simple.

private int capacity;
private java.util.LinkedHashMap<Integer, Integer> cache = new java.util.LinkedHashMap<Integer, Integer>() {
    @Override
    protected boolean removeEldestEntry(java.util.Map.Entry<Integer, Integer> eldest) {
        return size() > capacity;
    }
};


The removeEldestEntry() method is overridden here to delete the lowest-priority element once the size exceeds the configured capacity; in this FIFO version the lowest-priority element is simply the first one inserted.


2. Implementing LRUCache is also very simple if you understand LinkedHashMap well enough. LinkedHashMap provides a constructor that takes the capacity, the load factor, and the ordering mode. To implement LRUCache, set the ordering parameter to true, which means access order rather than the default insertion (FIFO) order, and leave the load factor at the default 0.75. Also override the removeEldestEntry() method to maintain the capacity. There are two ways to implement the LinkedHashMap version of LRUCache: one by inheritance, one by composition.

Inheritance:

package lrucache.one;

import java.util.LinkedHashMap;
import java.util.Map;

/**
 * LRU cache, LinkedHashMap implementation (inheritance).
 * @author wxisme
 * @time 2015-10-18 10:27:37 AM
 */
public class LRUCache extends LinkedHashMap<Integer, Integer> {

    private int initialCapacity;

    public LRUCache(int initialCapacity) {
        super(initialCapacity, 0.75f, true);
        this.initialCapacity = initialCapacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
        return size() > initialCapacity;
    }

    @Override
    public String toString() {
        StringBuilder cacheStr = new StringBuilder();
        cacheStr.append("{");
        for (Map.Entry<Integer, Integer> entry : this.entrySet()) {
            cacheStr.append("[" + entry.getKey() + "," + entry.getValue() + "]");
        }
        cacheStr.append("}");
        return cacheStr.toString();
    }
}


Composition:

package lrucache.three;

import java.util.LinkedHashMap;
import java.util.Map;

/**
 * LRU cache, LinkedHashMap implementation (composition).
 * @author wxisme
 * @time 2015-10-18 11:07:01 AM
 */
public class LRUCache {

    private final int initialCapacity;

    private Map<Integer, Integer> cache;

    public LRUCache(final int initialCapacity) {
        this.initialCapacity = initialCapacity;
        cache = new LinkedHashMap<Integer, Integer>(initialCapacity, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
                return size() > initialCapacity;
            }
        };
    }

    public void put(int key, int value) {
        cache.put(key, value);
    }

    public int get(int key) {
        return cache.get(key);
    }

    public void remove(int key) {
        cache.remove(key);
    }

    @Override
    public String toString() {
        StringBuilder cacheStr = new StringBuilder();
        cacheStr.append("{");
        for (Map.Entry<Integer, Integer> entry : cache.entrySet()) {
            cacheStr.append("[" + entry.getKey() + "," + entry.getValue() + "]");
        }
        cacheStr.append("}");
        return cacheStr.toString();
    }
}



Test code:


public static void main(String[] args) {
    LRUCache cache = new LRUCache(5);
    cache.put(5, 5);
    cache.put(4, 4);
    cache.put(3, 3);
    cache.put(2, 2);
    cache.put(1, 1);
    System.out.println(cache.toString());
    cache.put(0, 0);
    System.out.println(cache.toString());
}



Run Result:

{[5,5][4,4][3,3][2,2][1,1]}
{[4,4][3,3][2,2][1,1][0,0]}

As you can see, this already implements the basic functionality of LRUCache.

3. How do you implement the LRU algorithm without the LinkedHashMap provided by the Java API? First, determine the required operations: the LRU algorithm has to insert, delete, look up, and maintain a certain order, so there are many candidate structures: arrays, linked lists, stacks, queues, and maps, used alone or in combination. Consider stacks and queues first: although they impose an explicit FIFO or FILO order, LRU needs to operate on both ends, removing tail elements as well as moving elements to the head, so their efficiency is not ideal. Also keep in mind that for arrays and maps the read-only operations are O(1) while the order-maintaining operations are O(n), and for linked structures it is the opposite. So using just one of these spends too much time on either the lookups or the updates. The natural choice is therefore a linked list + map combination. A singly linked list still costs O(n) when operating on the tail of the list, so all things considered a doubly linked list + map should be the best structure.

In this implementation, a doubly linked list maintains the priority order, which is the access order, while a map stores the K-V values and keeps lookups O(1). Access order works like this: the most recently accessed entry (an insert also counts as an access) is moved to the head of the list, and when the capacity limit is reached the element at the tail of the list is deleted.

package lrucache.two;

import java.util.HashMap;
import java.util.Map;

/**
 * LRUCache, doubly linked list + HashMap implementation.
 * @author wxisme
 * @time 2015-10-18 12:34:36 PM
 */
public class LRUCache<K, V> {

    private final int initialCapacity; // capacity

    private Node head; // head node of the linked list
    private Node tail; // tail node of the linked list

    private Map<K, Node> map;

    public LRUCache(int initialCapacity) {
        this.initialCapacity = initialCapacity;
        map = new HashMap<K, Node>();
    }

    // Doubly linked list node.
    private class Node {
        Node pre;
        Node next;
        K key;
        V value;
    }

    // Add a (key, value) pair to the cache.
    public void put(K key, V value) {
        Node node = map.get(key);
        if (node == null) {                            // not yet in the cache
            if (map.size() >= this.initialCapacity) {  // the cache is full
                map.remove(tail.key);                  // drop the oldest (key, value) from the map
                removeTailNode();
            }
            node = new Node();
            node.key = key;
        }
        node.value = value;
        moveToHead(node);                              // an insert also counts as an access
        map.put(key, node);
    }

    // Get a value from the cache.
    public V get(K key) {
        Node node = map.get(key);
        if (node == null) {
            return null;
        }
        moveToHead(node);                              // recently accessed: move it to the head
        return node.value;
    }

    // Remove a (key, value) pair from the cache.
    public void remove(K key) {
        Node node = map.get(key);
        map.remove(key);                               // remove from the HashMap
        if (node != null) {                            // unlink from the doubly linked list
            if (node.pre != null) {
                node.pre.next = node.next;
            }
            if (node.next != null) {
                node.next.pre = node.pre;
            }
            if (node == head) {
                head = head.next;
            }
            if (node == tail) {
                tail = tail.pre;
            }
            node.pre = null;                           // drop the node's references
            node.next = null;
        }
    }

    // Move a node to the head of the linked list.
    private void moveToHead(Node node) {
        if (node == head) return;
        // Cut the node out of the list.
        if (node.pre != null) {
            node.pre.next = node.next;
        }
        if (node.next != null) {
            node.next.pre = node.pre;
        }
        if (node == tail) {
            tail = tail.pre;
        }
        if (tail == null || head == null) {
            tail = head = node;
            return;
        }
        // Link the node in at the head.
        node.next = head;
        head.pre = node;
        head = node;
        node.pre = null;
    }

    // Delete the tail node of the linked list.
    private void removeTailNode() {
        if (tail != null) {
            tail = tail.pre;
            if (tail != null) {   // guard for the single-element case
                tail.next = null;
            }
        }
    }

    @Override
    public String toString() {
        StringBuilder cacheStr = new StringBuilder();
        cacheStr.append("{");
        // The list maintains the access order, so traverse it from the head.
        Node node = head;
        while (node != null) {
            cacheStr.append("[" + node.key + "," + node.value + "]");
            node = node.next;
        }
        cacheStr.append("}");
        return cacheStr.toString();
    }
}


Test data:

public static void main(String[] args) {
    LRUCache<Integer, Integer> cache = new LRUCache<Integer, Integer>(5);
    cache.put(5, 5);
    cache.put(4, 4);
    cache.put(3, 3);
    cache.put(2, 2);
    cache.put(1, 1);
    System.out.println(cache.toString());
    cache.put(0, 0);
    System.out.println(cache.toString());
}




Run Result:

{[1,1][2,2][3,3][4,4][5,5]}
{[0,0][1,1][2,2][3,3][4,4]}

This also implements the basic LRUCache operations.

But wait! Why do the same test data produce a different result from the LinkedHashMap implementation above?

Careful observation shows that, although both are LRU implementations, the doubly linked list + HashMap version really does print in access order with the most recently used entry first. Is LinkedHashMap still printing in insertion order?

A closer look at the source:


private static final long serialVersionUID = 3801124242820219131L;

/**
 * The head of the doubly linked list.
 */
private transient Entry<K,V> header;

/**
 * The iteration ordering method for this linked hash map: <tt>true</tt>
 * for access-order, <tt>false</tt> for insertion-order.
 *
 * @serial
 */
private final boolean accessOrder;

/**
 * LinkedHashMap entry.
 */
private static class Entry<K,V> extends HashMap.Entry<K,V> {
    // These fields comprise the doubly linked list used for iteration.
    Entry<K,V> before, after;

    Entry(int hash, K key, V value, HashMap.Entry<K,V> next) {
        super(hash, key, value, next);
    }
    // ...
}


From the code fragment above you can see that LinkedHashMap also uses a doubly linked list, combined with the hashing of the map. LinkedHashMap extends HashMap and implements Map.


/**
 * Constructs an empty <tt>LinkedHashMap</tt> instance with the
 * specified initial capacity, load factor and ordering mode.
 *
 * @param  initialCapacity the initial capacity
 * @param  loadFactor      the load factor
 * @param  accessOrder     the ordering mode - <tt>true</tt> for
 *         access-order, <tt>false</tt> for insertion-order
 * @throws IllegalArgumentException if the initial capacity is negative
 *         or the load factor is nonpositive
 */
public LinkedHashMap(int initialCapacity,
                     float loadFactor,
                     boolean accessOrder) {
    super(initialCapacity, loadFactor);
    this.accessOrder = accessOrder;
}

The code above is the constructor that we used.

public V get(Object key) {
    Entry<K,V> e = (Entry<K,V>)getEntry(key);
    if (e == null)
        return null;
    e.recordAccess(this);
    return e.value;
}

void recordAccess(HashMap<K,V> m) {
    LinkedHashMap<K,V> lm = (LinkedHashMap<K,V>)m;
    if (lm.accessOrder) {
        lm.modCount++;
        remove();
        addBefore(lm.header);
    }
}

void recordRemoval(HashMap<K,V> m) {
    remove();
}



This is the key code to implement the access order.


/**
 * Inserts this entry before the specified existing entry in the list.
 */
private void addBefore(Entry<K,V> existingEntry) {
    after  = existingEntry;
    before = existingEntry.before;
    before.after = this;
    after.before = this;
}

void addEntry(int hash, K key, V value, int bucketIndex) {
    createEntry(hash, key, value, bucketIndex);

    // Remove eldest entry if instructed, else grow capacity if appropriate
    Entry<K,V> eldest = header.after;
    if (removeEldestEntry(eldest)) {
        removeEntryForKey(eldest.key);
    } else {
        if (size >= threshold)
            resize(2 * table.length);
    }
}

/**
 * This override differs from addEntry in that it doesn't resize the
 * table or remove the eldest entry.
 */
void createEntry(int hash, K key, V value, int bucketIndex) {
    HashMap.Entry<K,V> old = table[bucketIndex];
    Entry<K,V> e = new Entry<K,V>(hash, key, value, old);
    table[bucketIndex] = e;
    e.addBefore(header);
    size++;
}


From these two fragments you can see the reason for the difference above: the two implementations keep the access order in opposite directions. The doubly linked list + HashMap version keeps the most recently accessed entry at the head of the list and prints from the head, while LinkedHashMap links the most recently accessed entry just before the header sentinel, so its iteration runs from the eldest entry to the most recent one.

Extension:

public HashMap(int initialCapacity, float loadFactor) {
    if (initialCapacity < 0)
        throw new IllegalArgumentException("Illegal initial capacity: " +
                                           initialCapacity);
    if (initialCapacity > MAXIMUM_CAPACITY)
        initialCapacity = MAXIMUM_CAPACITY;
    if (loadFactor <= 0 || Float.isNaN(loadFactor))
        throw new IllegalArgumentException("Illegal load factor: " +
                                           loadFactor);

    // Find a power of 2 >= initialCapacity
    int capacity = 1;
    while (capacity < initialCapacity)
        capacity <<= 1;

    this.loadFactor = loadFactor;
    threshold = (int)(capacity * loadFactor);
    table = new Entry[capacity];
    init();
}


The code above is HashMap's initialization code. As you can see, the actual table capacity starts at 1 and is doubled until it is no smaller than the requested initial capacity, so the table size is always a power of two; this is a way to conserve storage. Given a load factor, the threshold for the next resize is the product of the capacity and the load factor; for example, a requested capacity of 5 with the default load factor of 0.75 gives a table of size 8 and a threshold of 6.
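As a quick illustration (a standalone snippet of my own, not JDK code), the loop below reproduces that rounding and the threshold calculation:

public class CapacityDemo {
    public static void main(String[] args) {
        int initialCapacity = 5;     // requested capacity (example value)
        float loadFactor = 0.75f;    // default load factor

        // Find a power of 2 >= initialCapacity, exactly as HashMap does.
        int capacity = 1;
        while (capacity < initialCapacity)
            capacity <<= 1;

        int threshold = (int) (capacity * loadFactor);
        System.out.println("table size = " + capacity + ", threshold = " + threshold);
        // prints: table size = 8, threshold = 6
    }
}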

All of the implementations above are single-threaded and are not suitable for concurrent use. For concurrent access they can be wrapped with the Collections utility class or replaced with utilities from the java.util.concurrent package.
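For example, since the inherited LRUCache above is itself a Map, one simple option (a sketch only; it assumes the lrucache.one class from earlier is on the classpath) is to wrap it with Collections.synchronizedMap:

import java.util.Collections;
import java.util.Map;

import lrucache.one.LRUCache;   // the inherited LinkedHashMap-based version above

public class SynchronizedCacheDemo {
    public static void main(String[] args) {
        // Every call on the wrapper is synchronized on a single mutex.
        Map<Integer, Integer> cache =
                Collections.synchronizedMap(new LRUCache(5));

        cache.put(1, 1);
        cache.get(1);

        // Iteration still has to be synchronized manually.
        synchronized (cache) {
            for (Map.Entry<Integer, Integer> entry : cache.entrySet()) {
                System.out.println(entry.getKey() + "=" + entry.getValue());
            }
        }
    }
}

Note that with accessOrder set to true even get() mutates the internal linked list, so reads as well as writes must go through the synchronized wrapper.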

The LinkedHashMap implementation in the JDK is still quite efficient. For an application of it, see this LeetCode write-up: http://www.cnblogs.com/wxisme/p/4888648.html



Cache eviction algorithms: the LRU family


1. LRU


1.1. Principle

LRU (Least Recently Used) evicts data according to its access history; the core idea is that "if data has been accessed recently, the probability that it will be accessed again in the future is also higher".


1.2. Implementation

The most common implementation keeps the cached data in a linked list. The detailed algorithm works as follows (a minimal sketch follows the list):


1. New data is inserted at the head of the linked list;

2. Whenever there is a cache hit (i.e. cached data is accessed), that data is moved to the head of the list;

3. When the list is full, the data at the tail of the list is discarded.
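A minimal sketch of this list-only variant (illustrative code of my own; the class and method names are not from any library):

import java.util.LinkedList;

// Naive list-only LRU: the head is the most recently used entry, the tail
// the least recently used. Lookups scan the list, which is the O(n) cost
// discussed in the analysis below.
public class ListOnlyLRU<T> {
    private final int capacity;
    private final LinkedList<T> list = new LinkedList<T>();

    public ListOnlyLRU(int capacity) {
        this.capacity = capacity;
    }

    public void access(T item) {
        if (list.remove(item)) {        // rule 2: cache hit, move to the head
            list.addFirst(item);
            return;
        }
        if (list.size() >= capacity) {  // rule 3: full, discard the tail
            list.removeLast();
        }
        list.addFirst(item);            // rule 1: new data goes to the head
    }

    @Override
    public String toString() {
        return list.toString();         // most recently used first
    }
}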


1.3. Analysis

Hit rate

When there is hot data, LRU is very efficient, but occasional or periodic batch operations can cause the LRU hit rate to drop sharply and pollute the cache badly.

Complexity

Simple to implement.

Cost

On a hit, the list must be traversed to find the hit block, and the data then has to be moved to the head.


2. LRU-K


2.1. Principle

The K in LRU-K stands for the number of recent uses, so LRU can be regarded as LRU-1. The main purpose of LRU-K is to solve LRU's "cache pollution" problem; its core idea is to extend the criterion from "used once recently" to "used K times recently".


2.2. Implementation

Compared with LRU, LRU-K needs to maintain one more queue that records the access history of all cached data. Data is moved into the cache only when its number of accesses reaches K. When data needs to be evicted, LRU-K evicts the entry whose K-th most recent access is furthest in the past. The detailed implementation is as follows:


1. When data is accessed for the first time, it is added to the access history list;

2. If the data does not reach K accesses while in the history list, it is eventually evicted from the history according to some rule (FIFO or LRU);

3. When the number of accesses in the history queue reaches K, the data's index is removed from the history queue, the data itself is moved into the cache queue and cached, and the cache queue is re-sorted by time;

4. When data in the cache queue is accessed again, it is re-sorted;

5. When data needs to be evicted, the entry at the tail of the cache queue is evicted, i.e. the entry whose K-th most recent access is the longest ago.

LRU-K keeps the advantages of LRU while avoiding its shortcomings. In practice, LRU-2 is usually the best choice after weighing the various factors; LRU-3 or a larger K gives a higher hit rate but adapts poorly, because it takes a large number of accesses before stale history records are cleared away.
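A much-simplified sketch of the idea (my own illustrative code, not a reference implementation: the history queue here is evicted by LRU, and the cache queue is ordered by most recent access rather than by the time of the K-th most recent access, which a full LRU-K would track):

import java.util.LinkedHashMap;
import java.util.Map;

// Simplified LRU-K: a key is only admitted to the value cache after it has
// been seen K times; until then it lives in an access-history queue.
public class SimpleLRUK<K, V> {
    private final int k;                    // required number of accesses
    private final Map<K, Integer> history;  // key -> access count (not yet cached)
    private final Map<K, V> cache;          // admitted entries, LRU-ordered

    public SimpleLRUK(int k, final int capacity, final int historyCapacity) {
        this.k = k;
        // The history queue is evicted by LRU once it exceeds historyCapacity (step 2).
        this.history = new LinkedHashMap<K, Integer>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, Integer> eldest) {
                return size() > historyCapacity;
            }
        };
        // The cache queue is evicted by LRU once it exceeds capacity (step 5).
        this.cache = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > capacity;
            }
        };
    }

    public V get(K key) {
        return cache.get(key);              // a hit also refreshes its position (step 4)
    }

    public void access(K key, V value) {
        if (cache.containsKey(key)) {       // already admitted: refresh its position
            cache.put(key, value);
            return;
        }
        int count = history.containsKey(key) ? history.get(key) + 1 : 1;
        if (count >= k) {                   // step 3: reached K accesses, admit to the cache
            history.remove(key);
            cache.put(key, value);
        } else {                            // steps 1 and 2: keep counting in the history
            history.put(key, count);
        }
    }
}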


2.3. Analysis

Hit rate

LRU-K reduces the "cache pollution" problem and has a higher hit rate than LRU.

Complexity

The LRU-K queue is a priority queue, so the algorithm complexity and cost are relatively high.

Cost

Since LRU-K also has to keep track of objects that have been accessed but are not yet cached, its memory consumption is higher than LRU's, and it becomes considerable when the data volume is large.

LRU-K also needs to sort by time (either lazily at eviction time or immediately on each access), so its CPU consumption is higher than LRU's.

3. Two queues (2Q)


3.1. Principle

The Two Queues (2Q) algorithm is similar to LRU-2, except that 2Q replaces the access history queue of LRU-2 (note that this queue does not hold cached data) with a FIFO cache queue. In other words, the 2Q algorithm has two cache queues: a FIFO queue and an LRU queue.


3.2. Implementation

When data is accessed for the first time, the 2Q algorithm caches it in the FIFO queue; when it is accessed a second time, it is moved from the FIFO queue into the LRU queue. Each of the two queues evicts data in its own way. The detailed implementation is as follows:


1. Newly accessed data is inserted into the FIFO queue;

2. If the data is never accessed again while in the FIFO queue, it is eventually evicted according to the FIFO rule;

3. If the data is accessed again while in the FIFO queue, it is moved to the head of the LRU queue;

4. If the data is accessed again while in the LRU queue, it is moved to the head of the LRU queue;

5. The LRU queue evicts data from its tail.



Note: in the description above the FIFO queue is shorter than the LRU queue, but that is not a requirement of the algorithm; in practice the ratio between the two is not fixed.
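A compact sketch of this scheme (illustrative only; the class name, the queue capacities, and the choice to keep values rather than just keys in the FIFO queue are my own assumptions):

import java.util.LinkedHashMap;
import java.util.Map;

// Simplified 2Q: the first access puts an entry into a FIFO queue; a second
// access while it is still there promotes it into the LRU queue.
public class SimpleTwoQueues<K, V> {
    private final Map<K, V> fifo;   // insertion-ordered, evicted FIFO-style
    private final Map<K, V> lru;    // access-ordered, evicted LRU-style

    public SimpleTwoQueues(final int fifoCapacity, final int lruCapacity) {
        // accessOrder = false: iteration (and eviction) follows insertion order.
        this.fifo = new LinkedHashMap<K, V>(16, 0.75f, false) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > fifoCapacity;           // step 2: FIFO eviction
            }
        };
        // accessOrder = true: eviction removes the least recently used entry.
        this.lru = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > lruCapacity;            // step 5: LRU eviction
            }
        };
    }

    public void put(K key, V value) {
        if (lru.containsKey(key)) {
            lru.put(key, value);            // step 4: hit in the LRU queue, refresh position
        } else if (fifo.containsKey(key)) {
            fifo.remove(key);               // step 3: second access, promote to the LRU queue
            lru.put(key, value);
        } else {
            fifo.put(key, value);           // step 1: first access goes to the FIFO queue
        }
    }

    public V get(K key) {
        if (lru.containsKey(key)) {
            return lru.get(key);            // refreshes its LRU position
        }
        V value = fifo.get(key);
        if (value != null) {
            fifo.remove(key);               // step 3: promote on the second access
            lru.put(key, value);
        }
        return value;
    }
}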


3.3. Analysis

Hit rate

The 2Q algorithm has a higher hit rate than LRU.

Complexity

Two queues are required, but both queues themselves are relatively simple.

Cost

The sum of the costs of FIFO and LRU.

The 2Q algorithm is similar to LRU-2 and its memory consumption is close, but 2Q saves one operation of reading the data from the original storage or recomputing it for the data that was last cached.

4. Multi Queue (MQ)


4.1. Principle

The MQ algorithm divides data into multiple queues based on access frequency, and different queues have different access priorities. Its core idea is to cache preferentially the data that has been accessed many times.


4.2. Implementation

The MQ algorithm divides the cache into multiple LRU queues, each corresponding to a different access priority. The priority is calculated from the number of accesses, for example:

The detailed algorithm structure is shown in the diagram below: Q0, Q1, ..., Qk represent queues of increasing priority, and Q-history is a queue that no longer holds cached data but records the index and reference count of data evicted from the cache:

[Figure: MQ structure, priority queues Q0 ... Qk plus the Q-history queue]

As shown above, the algorithm works as follows (a rough sketch follows the numbered steps):

1. Newly inserted data is placed in Q0;

2. Each queue manages its data according to LRU;

3. When the number of accesses to a piece of data reaches a certain threshold, its priority needs to be raised: it is removed from its current queue and added to the head of the next higher-priority queue;

4. To prevent high-priority data from never being evicted, data that is not accessed within a specified time has its priority lowered: it is removed from its current queue and added to the head of the next lower-priority queue;

5. When data needs to be evicted, eviction starts from the lowest-priority queue, following LRU; when a queue evicts data, the data is removed from the cache and its index is added to the head of Q-history;

6. If the data is accessed again while its index is in Q-history, its priority is recalculated and it is moved to the head of the target queue;

7. Q-history itself evicts data indexes according to LRU.
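A rough sketch of this structure (my own simplification: the priority is simply the raw access count, the idle-time demotion of step 4 is omitted, and only the total cache size is bounded rather than each queue individually):

import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

// Simplified MQ: several access-ordered queues plus a Q-history of evicted
// keys and their access counts.
public class SimpleMultiQueue<K, V> {
    private final LinkedHashMap<K, V>[] queues;  // Q0 .. Qk-1, each LRU-ordered
    private final Map<K, Integer> counts;        // access count of every cached key
    private final Map<K, Integer> qHistory;      // evicted key -> access count
    private final int capacity;                  // total number of cached entries

    @SuppressWarnings("unchecked")
    public SimpleMultiQueue(int queueCount, int capacity, final int historyCapacity) {
        this.capacity = capacity;
        this.counts = new HashMap<K, Integer>();
        this.queues = new LinkedHashMap[queueCount];
        for (int i = 0; i < queueCount; i++) {
            queues[i] = new LinkedHashMap<K, V>(16, 0.75f, true);  // step 2: LRU per queue
        }
        // Step 7: Q-history itself is evicted by LRU.
        this.qHistory = new LinkedHashMap<K, Integer>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, Integer> eldest) {
                return size() > historyCapacity;
            }
        };
    }

    // Priority = number of accesses so far, capped at the highest queue.
    private int queueIndex(int accessCount) {
        return Math.min(accessCount - 1, queues.length - 1);
    }

    public void access(K key, V value) {
        Integer count = counts.get(key);
        if (count == null) {
            // Step 6: if the key is still in Q-history, resume its old access count.
            Integer old = qHistory.remove(key);
            count = (old == null) ? 0 : old;
        } else {
            queues[queueIndex(count)].remove(key);   // step 3: leave the current queue
        }
        count = count + 1;
        counts.put(key, count);
        queues[queueIndex(count)].put(key, value);   // steps 1 and 3: (re)insert at the new priority
        if (counts.size() > capacity) {
            evict();                                 // step 5: over capacity
        }
    }

    public V get(K key) {
        Integer count = counts.get(key);
        if (count == null) return null;
        V value = queues[queueIndex(count)].get(key);
        access(key, value);                          // every hit counts as another access
        return value;
    }

    // Step 5: evict the LRU entry of the lowest non-empty queue into Q-history.
    private void evict() {
        for (LinkedHashMap<K, V> q : queues) {
            if (!q.isEmpty()) {
                K victim = q.keySet().iterator().next();  // eldest entry of this queue
                q.remove(victim);
                qHistory.put(victim, counts.remove(victim));
                return;
            }
        }
    }
}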


4.3. Analysis

Hit rate

MQ reduces the "cache pollution" problem and has a higher hit rate than LRU.

Complexity

MQ needs to maintain multiple queues as well as the access time of each piece of data, so its complexity is higher than LRU's.

Cost

MQ needs to record the access time of every piece of data and to scan all queues periodically, so its cost is higher than LRU's.

Note: although MQ appears to have many queues, the total length of all queues is bounded by the cache capacity, so the combined length of the queues equals that of a single LRU queue, and queue-scanning performance is similar.

 
5. Comparison of the LRU-family algorithms

Because hit rates differ greatly under different access patterns, the comparison here is only a theoretical, qualitative analysis; no quantitative analysis is attempted.

Comparison:

Hit rate:   LRU-2 > MQ(2) > 2Q > LRU

Complexity: LRU-2 > MQ(2) > 2Q > LRU

Cost:       LRU-2 > MQ(2) > 2Q > LRU

In practice the choice should be driven by the business requirements and by how the data is accessed; a higher hit rate is not always better. For example, although LRU appears to have a lower hit rate and suffers from "cache pollution", it is used more widely in practice because it is simple and cheap.

 
The simplest LRU implementation in Java uses the JDK's LinkedHashMap and overrides its removeEldestEntry(Map.Entry) method.
If you look at the LinkedHashMap source code, the LRU algorithm is implemented through a doubly linked list: when an entry is hit, the list pointers are adjusted to move it to the head, new entries are placed directly at the head as well, so the most recently hit entries collect near the head of the list; when a replacement is needed, the last position of the list is the least recently used one and is the one evicted.
 

import java.util.ArrayList;
import java.util.Collection;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

/**
 * A simple cache built on LinkedHashMap; the removeEldestEntry method
 * must be implemented, see the JDK documentation.
 *
 * @author dennis
 *
 * @param <K>
 * @param <V>
 */
public class LRULinkedHashMap<K, V> extends LinkedHashMap<K, V> {
    private final int maxCapacity;

    private static final float DEFAULT_LOAD_FACTOR = 0.75f;

    private final Lock lock = new ReentrantLock();

    public LRULinkedHashMap(int maxCapacity) {
        super(maxCapacity, DEFAULT_LOAD_FACTOR, true);
        this.maxCapacity = maxCapacity;
    }

    @Override
    protected boolean removeEldestEntry(java.util.Map.Entry<K, V> eldest) {
        return size() > maxCapacity;
    }

    @Override
    public boolean containsKey(Object key) {
        try {
            lock.lock();
            return super.containsKey(key);
        } finally {
            lock.unlock();
        }
    }

    @Override
    public V get(Object key) {
        try {
            lock.lock();
            return super.get(key);
        } finally {
            lock.unlock();
        }
    }

    @Override
    public V put(K key, V value) {
        try {
            lock.lock();
            return super.put(key, value);
        } finally {
            lock.unlock();
        }
    }

    public int size() {
        try {
            lock.lock();
            return super.size();
        } finally {
            lock.unlock();
        }
    }

    public void clear() {
        try {
            lock.lock();
            super.clear();
        } finally {
            lock.unlock();
        }
    }

    public Collection<Map.Entry<K, V>> getAll() {
        try {
            lock.lock();
            return new ArrayList<Map.Entry<K, V>>(super.entrySet());
        } finally {
            lock.unlock();
        }
    }
}

    

LRU based on a doubly linked list:

The traditional LRU algorithm sets a counter for each cached object. Every cache hit increments the counter, and when the cache is full and old content must be evicted to make room for new content, all counters have to be examined and the least-used entry replaced.

Its drawback is obvious: if the number of cached entries is small this is not much of a problem, but if the cache is large, say 100,000 or 1,000,000 entries, every eviction requires traversing all of the counters, and the performance and resource cost become enormous; it is simply too slow.

The principle of the list-based approach: all cache positions are connected by a doubly linked list. When a position is hit, adjusting the list pointers moves it to the head of the list, and newly added cache entries are likewise added at the head.

After many cache operations, the recently hit entries will therefore have migrated towards the head of the list, while the entries that were not hit drift towards the tail; the tail of the list holds the least recently used cache entries.

When content needs to be replaced, the last position of the list is the least-hit position, so we only need to evict the entry at the tail of the list.

With that much theory covered, the following code implements a cache with an LRU policy.

We use an object to represent each cache node and build a doubly linked list:

public class LRUCache {
    /**
     * Linked list node
     * @author Administrator
     */
    class CacheNode {

    }

    private int cacheSize;       // cache size
    private Hashtable nodes;     // cache container
    private int currentSize;     // current number of cached objects
    private CacheNode first;     // head of the doubly linked list
    private CacheNode last;      // tail of the doubly linked list
}

                  

A complete implementation is given below. This class was also used by Tomcat (org.apache.tomcat.util.collections.LRUCache), but it has been deprecated since Tomcat 6.x and replaced by another cache class.

import java.util.Hashtable;

public class LRUCache {
    /**
     * Linked list node
     * @author Administrator
     */
    class CacheNode {
        CacheNode prev;   // previous node
        CacheNode next;   // next node
        Object value;     // value
        Object key;       // key

        CacheNode() {
        }
    }

    public LRUCache(int i) {
        currentSize = 0;
        cacheSize = i;
        nodes = new Hashtable(i);   // cache container
    }

    /**
     * Get a cached object
     */
    public Object get(Object key) {
        CacheNode node = (CacheNode) nodes.get(key);
        if (node != null) {
            moveToHead(node);
            return node.value;
        } else {
            return null;
        }
    }

    /**
     * Add a cache entry
     */
    public void put(Object key, Object value) {
        CacheNode node = (CacheNode) nodes.get(key);

        if (node == null) {
            // Has the cache container exceeded its size?
            if (currentSize >= cacheSize) {
                if (last != null)    // evict the least recently used entry
                    nodes.remove(last.key);
                removeLast();
            } else {
                currentSize++;
            }
            node = new CacheNode();
        }
        node.value = value;
        node.key = key;
        // Put the most recently used node at the head of the list.
        moveToHead(node);
        nodes.put(key, node);
    }

    /**
     * Remove a cache entry
     */
    public Object remove(Object key) {
        CacheNode node = (CacheNode) nodes.get(key);
        if (node != null) {
            if (node.prev != null) {
                node.prev.next = node.next;
            }
            if (node.next != null) {
                node.next.prev = node.prev;
            }
            if (last == node)
                last = node.prev;
            if (first == node)
                first = node.next;
        }
        return node;
    }

    public void clear() {
        first = null;
        last = null;
    }

    /**
     * Delete the tail node of the list,
     * i.e. evict the least recently used cache object.
     */
    private void removeLast() {
        // If the tail is not empty, detach it from the list.
        if (last != null) {
            if (last.prev != null)
                last.prev.next = null;
            else
                first = null;
            last = last.prev;
        }
    }

    /**
     * Move a node to the head of the list,
     * marking it as the most recently used entry.
     */
    private void moveToHead(CacheNode node) {
        if (node == first)
            return;
        if (node.prev != null)
            node.prev.next = node.next;
        if (node.next != null)
            node.next.prev = node.prev;
        if (last == node)
            last = node.prev;
        if (first != null) {
            node.next = first;
            first.prev = node;
        }
        first = node;
        node.prev = null;
        if (last == null)
            last = first;
    }

    private int cacheSize;       // cache size
    private Hashtable nodes;     // cache container
    private int currentSize;     // current number of cached objects
    private CacheNode first;     // head of the list
    private CacheNode last;      // tail of the list
}

