Understanding LRU and a Java Implementation

Source: Internet
Author: User
Brief introduction

LRU stands for "Least Recently Used". Like many terms coined in English, its literal translation is not particularly intuitive, and some of its words have no exact counterpart in other languages at all, such as the "pivot" in quicksort or the "analog" in analog signals. I think the best way to understand a concept is to look at why it was born and how it evolved, step by step, into its present form. When someone solves a problem and names the solution according to its semantics, saying that name directly to someone who does not know its origin makes it hard to understand. So, to make LRU easy to grasp, let's look at the problem it was designed to solve.

Mapped to real life, the idea behind LRU is easy to understand. Suppose Xiao Ming has a wardrobe that can only hold a fixed number of clothes, and he keeps buying new ones; soon the wardrobe is full. So Xiao Ming comes up with a rule: sort the clothes by when they were last worn, and throw away the one that has gone unworn the longest. That is exactly the LRU strategy.

Mapping this to computing: in the example above, the wardrobe is memory, and Xiao Ming's clothes are cached data. Memory is limited, so when the cached data grows to the point that memory cannot hold incoming new cache entries, the least recently used entries must be discarded. The abstract summary of LRU is therefore:

    • The capacity of the cache is limited
    • When the capacity is insufficient to store new data, the least recently used cache entry must be discarded
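Before building our own, it is worth noting that these two rules can already be satisfied with Java's standard library: LinkedHashMap supports access-order iteration and an eviction hook. The following is a minimal sketch (the class name LinkedHashMapLRU is illustrative, not from the article):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LinkedHashMapLRU<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LinkedHashMapLRU(int capacity) {
        // accessOrder = true: iteration runs from least- to most-recently accessed
        super(16, 0.75f, true);
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // evict the least recently used entry once capacity is exceeded
        return size() > capacity;
    }

    public static void main(String[] args) {
        LinkedHashMapLRU<String, Integer> cache = new LinkedHashMapLRU<>(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");    // touch "a", so "b" becomes the least recently used
        cache.put("c", 3); // capacity exceeded: "b" is evicted
        System.out.println(cache.containsKey("b")); // false
        System.out.println(cache.keySet());         // [a, c]
    }
}
```

Implementing the structure by hand, as the rest of this article does, shows what LinkedHashMap is doing internally.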

Having understood the principle of LRU, converting it to code is not very difficult. We typically use a string key to locate cached data (other key types work the same way), which naturally suggests a HashMap. So let's start with a simple definition of the LRUCache class.

public class LRUCache {
    private HashMap<String, Object> map;
    private int capacity;

    public LRUCache(int capacity) {
        this.capacity = capacity; = new HashMap<String, Object>();
    }

    public Object get(String key) {
        return map.get(key);
    }

    public void set(String key, Object value) {
        map.put(key, value);
    }
}

So far we have only defined an LRUCache that stores and retrieves data; it has a capacity field but no ability to discard the least recently used entry when the cache is full. The problem now is how to find the entry that has gone unused the longest.

The easiest approach that comes to mind is to attach a timestamp to each cached entry and update it on every access. That would let us identify the longest-unused entry, but it introduces two new problems:

    • Although timestamps let us find the longest-unused entry, doing so requires iterating over all cached entries on every eviction, unless we also maintain a sorted list ordered by timestamp.
    • Attaching a timestamp is intrusive: the cached data itself has no natural place for one, so we would have to introduce a new wrapper object that carries the timestamp.

Yet all we need is to find the longest-unused entry; we do not need precise access times. The timestamp approach fails to exploit this, which makes it logically not the best solution.
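To make the drawback concrete, here is a minimal sketch of the timestamp approach (the class and field names are illustrative; a logical counter stands in for a wall clock so the behavior is deterministic). Note the O(n) scan over all entries at eviction time:

```java
import java.util.HashMap;
import java.util.Map;

public class TimestampLRUCache {
    // wrapper object pairing a cached value with its last-access "time"
    private static class Entry {
        Object value;
        long lastAccess;
        Entry(Object value, long t) { this.value = value; this.lastAccess = t; }
    }

    private final HashMap<String, Entry> map = new HashMap<>();
    private final int capacity;
    private long clock = 0; // logical clock, incremented on every operation

    public TimestampLRUCache(int capacity) { this.capacity = capacity; }

    public Object get(String key) {
        Entry e = map.get(key);
        if (e == null) return null;
        e.lastAccess = ++clock; // refresh the timestamp on every access
        return e.value;
    }

    public void set(String key, Object value) {
        Entry e = map.get(key);
        if (e != null) {
            e.value = value;
            e.lastAccess = ++clock;
            return;
        }
        if (map.size() >= capacity) {
            // the drawback: an O(n) scan over all entries to find the oldest
            String oldest = null;
            long oldestTime = Long.MAX_VALUE;
            for (Map.Entry<String, Entry> me : map.entrySet()) {
                if (me.getValue().lastAccess < oldestTime) {
                    oldestTime = me.getValue().lastAccess;
                    oldest = me.getKey();
                }
            }
            map.remove(oldest);
        }
        map.put(key, new Entry(value, ++clock));
    }
}
```

The eviction scan is what the doubly linked list below eliminates: with the list, the oldest entry is always sitting at the tail.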

However, there is always a better way: we can maintain a linked list. Every time an entry is accessed it is moved to the head of the list, and new entries are likewise inserted at the head. The tail of the list is then always the least recently used entry: whenever capacity runs out, we delete the tail and make its predecessor the new tail. Removing a node from the middle of the list in O(1) requires a doubly linked list, so we define LRUNode as follows:

class LRUNode {
    String key;
    Object value;
    LRUNode prev;
    LRUNode next;

    public LRUNode(String key, Object value) {
        this.key = key;
        this.value = value;
    }
}

The complete simple implementation of LRUCache is as follows:

public class LRUCache {
    private HashMap<String, LRUNode> map;
    private int capacity;
    private LRUNode head;
    private LRUNode tail;

    public LRUCache(int capacity) {
        this.capacity = capacity; = new HashMap<String, LRUNode>();
    }

    public void set(String key, Object value) {
        LRUNode node = map.get(key);
        if (node != null) {
            node.value = value;
            remove(node, false);
        } else {
            node = new LRUNode(key, value);
            if (map.size() >= capacity) {
                // capacity is insufficient: remove the least recently used element
                remove(tail, true);
            }
            map.put(key, node);
        }
        // move the element we just touched to the head
        setHead(node);
    }

    public Object get(String key) {
        LRUNode node = map.get(key);
        if (node != null) {
            // move the element we just touched to the head
            remove(node, false);
            setHead(node);
            return node.value;
        }
        return null;
    }

    private void setHead(LRUNode node) {
        if (head != null) {
   = head;
            head.prev = node;
        }
        head = node;
        if (tail == null) {
            tail = node;
        }
    }

    // unlink the node from the list; note the cases where node is the head or the tail
    private void remove(LRUNode node, boolean evict) {
        if (node.prev != null) {
   =;
        } else {
            head =;
        }
        if ( != null) {
   = node.prev;
        } else {
            tail = node.prev;
        }
 = null;
        node.prev = null;
        if (evict) {
            map.remove(node.key);
        }
    }
}
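For reference, the same HashMap-plus-doubly-linked-list design can be written generically, so the cache is not tied to String keys and Object values. This is a sketch of the same technique, not part of the original article, and the name GenericLRUCache is illustrative:

```java
import java.util.HashMap;

public class GenericLRUCache<K, V> {
    // list node: most recently used at head, least recently used at tail
    private static class Node<K, V> {
        K key;
        V value;
        Node<K, V> prev, next;
        Node(K key, V value) { this.key = key; this.value = value; }
    }

    private final HashMap<K, Node<K, V>> map = new HashMap<>();
    private final int capacity;
    private Node<K, V> head, tail;

    public GenericLRUCache(int capacity) { this.capacity = capacity; }

    public V get(K key) {
        Node<K, V> node = map.get(key);
        if (node == null) return null;
        unlink(node);
        setHead(node); // accessed: move to head
        return node.value;
    }

    public void set(K key, V value) {
        Node<K, V> node = map.get(key);
        if (node != null) {
            node.value = value;
            unlink(node);
        } else {
            if (map.size() >= capacity) {
                map.remove(tail.key); // evict the least recently used entry
                unlink(tail);
            }
            node = new Node<>(key, value);
            map.put(key, node);
        }
        setHead(node);
    }

    private void setHead(Node<K, V> node) { = head;
        node.prev = null;
        if (head != null) head.prev = node;
        head = node;
        if (tail == null) tail = node;
    }

    private void unlink(Node<K, V> node) {
        if (node.prev != null) =; else head =;
        if ( != null) = node.prev; else tail = node.prev;
        node.prev = = null;
    }
}
```

Both get and set run in O(1): the HashMap locates the node, and the doubly linked list moves or removes it without any scanning.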