LRU Algorithm: LRU Cache

Source: Internet
Author: User

This is the classic LRU (Least Recently Used) algorithm. It evicts data based on the history of accesses, and its core idea is: if a piece of data has been accessed recently, the chance that it will be accessed again in the future is higher. Its typical application is as a cache replacement policy. Here, "used" covers both reads (get) and updates (set).

LRU algorithm

LRU stands for Least Recently Used. As a page replacement algorithm for memory management, it evicts the data (memory blocks) that have not been used recently in order to free space for loading additional data. The same idea applies to caching in front of a database when handling large volumes of requests: data that is seldom requested is read directly from the database, while frequently requested data is served from the cache.

The Least Recently Used (LRU) page replacement algorithm bases its decisions on how pages have been used since they were loaded into memory. Because the future usage of each page cannot be predicted, the recent past is used as an approximation of the near future, so LRU chooses the page that has gone unused for the longest time as the one to evict. The algorithm gives each page an access field that records the time t that has elapsed since the page was last accessed; when a page must be evicted, the page with the largest t value among the resident pages, i.e. the least recently used page, is selected.
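As a small, self-contained illustration of this selection rule (the page numbers and elapsed times below are made-up example values, not from the text):

```java
import java.util.Map;

public class LruVictim {
    // LRU evicts the page whose "time since last access" field t is largest.
    static int pickVictim(Map<Integer, Integer> elapsedSinceAccess) {
        int victim = -1;
        int maxT = Integer.MIN_VALUE;
        for (Map.Entry<Integer, Integer> e : elapsedSinceAccess.entrySet()) {
            if (e.getValue() > maxT) {
                maxT = e.getValue();
                victim = e.getKey();
            }
        }
        return victim;
    }

    public static void main(String[] args) {
        // page number -> time units since last access (hypothetical values)
        Map<Integer, Integer> t = Map.of(4, 9, 7, 2, 0, 5);
        System.out.println("evict page " + pickVictim(t)); // page 4 has the largest t
    }
}
```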

The implementation of LRU

1) A stack can be used to hold the page numbers of the pages currently in use. Whenever a process accesses a page, its page number is removed from the stack and pushed onto the top. The top of the stack is therefore always the number of the most recently accessed page, and the bottom of the stack is the number of the least recently used page. (The code is not shown here because this approach is too slow and times out on LeetCode.)

2) It can also be implemented with a doubly linked list plus a HashMap.

A doubly linked list is used to store the data nodes, ordered by how recently each node was used. If a node is accessed, we have reason to believe it is more likely than the other nodes to be accessed again in the near future, so we move it to the head of the list. Likewise, when we insert a new node, we may well use it again soon, so it is also inserted at the head. By continually adjusting the list in this way, the nodes near the tail are naturally the ones that have gone unused the longest. So when the cache is full, the node to replace is the last real node in the doubly linked list (not the tail sentinel node itself, which stores no actual content).

The following sketch shows the doubly linked list; note that the sentinel nodes at the two ends store no actual content:

    head <--> 1 <--> 2 <--> 3 <--> tail

If the cache is full, node 3 is the one to replace.

What is the hash table for? Without it, accessing a given node would require a sequential search through the list, which takes O(n) time. With a hash table, we can find the node we want in O(1) time, or learn in O(1) that it is not present.
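The operations just described can be sketched as follows (the field and method names here are my own, not from the original article): detach unlinks a node, attach inserts it right after the head sentinel, and the eviction candidate is always the node just before the tail sentinel.

```java
import java.util.HashMap;

public class LinkedLruSketch {
    static class Node {
        int key;
        Node prev, next;
    }

    private final Node head = new Node(); // sentinel, stores no actual content
    private final Node tail = new Node(); // sentinel, stores no actual content
    private final HashMap<Integer, Node> map = new HashMap<>(); // key -> node, O(1) lookup

    public LinkedLruSketch() {
        head.next = tail;
        tail.prev = head;
    }

    // unlink a node from wherever it currently sits in the list
    private void detach(Node n) {
        n.prev.next = n.next;
        n.next.prev = n.prev;
    }

    // insert a node right after the head sentinel (most recently used position)
    private void attach(Node n) {
        n.prev = head;
        n.next = head.next;
        head.next.prev = n;
        head.next = n;
    }

    // record an access: move the key's node to the head, creating it if needed
    public void touch(int key) {
        Node n = map.get(key);
        if (n != null) {
            detach(n);
        } else {
            n = new Node();
            n.key = key;
            map.put(key, n);
        }
        attach(n);
    }

    // the least recently used key, i.e. the last real node before the tail sentinel
    public int leastRecentlyUsed() {
        return tail.prev.key;
    }
}
```

Calling touch(1), touch(2), touch(3) and then touch(1) again leaves 2 as the least recently used key, since 1 was moved back to the head.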

Java implementation

LinkedHashMap happens to be a Java collection class backed by a doubly linked list. One of its major features is that, in access-order mode, a hit re-links the entry to the most recently used end of the list, and newly added entries are placed there as well; recently hit entries therefore migrate toward that end, and when a replacement is needed, the entry at the opposite end is the least recently used one. For the details of how LinkedHashMap works, see the article "LinkedHashMap principle of implementation".
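For reference, a minimal LRU cache built on LinkedHashMap might look like the sketch below (the accessOrder constructor flag and removeEldestEntry override are standard LinkedHashMap usage; the class name and test values are my own):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruLinkedHashMap<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruLinkedHashMap(int capacity) {
        // accessOrder = true: every hit re-links the entry to the
        // most recently used end of the internal doubly linked list
        super(capacity, 0.75f, true);
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // once we exceed capacity, drop the least recently used entry
        return size() > capacity;
    }

    public static void main(String[] args) {
        LruLinkedHashMap<Integer, String> cache = new LruLinkedHashMap<>(2);
        cache.put(1, "one");
        cache.put(2, "two");
        cache.get(1);          // hit: 1 becomes the most recently used entry
        cache.put(3, "three"); // full: evicts 2, the least recently used
        System.out.println(cache.keySet()); // prints [1, 3]
    }
}
```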

Assume that the sequence of pages accessed by an existing process is:

4,7,0,7,1,0,1,2,1,2,6

As the process accesses these pages, the page numbers in the stack change accordingly. After the whole sequence, page 4 is the page that has gone unvisited the longest, so it should be evicted when a page fault occurs.
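Replaying the stack scheme on that access sequence (a sketch using java.util.ArrayDeque as the stack; the helper name is mine) confirms that page 4 ends up at the bottom:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class LruStackTrace {
    // replay an access sequence against the stack described above
    static Deque<Integer> simulate(int[] accesses) {
        Deque<Integer> stack = new ArrayDeque<>();
        for (int page : accesses) {
            stack.remove(page); // pull the page out if it is already in the stack
            stack.push(page);   // and push it onto the top (most recently used)
        }
        return stack;
    }

    public static void main(String[] args) {
        Deque<Integer> stack = simulate(new int[]{4, 7, 0, 7, 1, 0, 1, 2, 1, 2, 6});
        System.out.println("top -> bottom: " + stack);     // [6, 2, 1, 0, 7, 4]
        System.out.println("evict: " + stack.peekLast());  // 4
    }
}
```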

The problem requires implementing the following three methods:

    class LRUCache {
    public:
        LRUCache(int capacity) {
        }
        int get(int key) {
        }
        void set(int key, int value) {
        }
    };

C++ implementation

    // A simple LRU cache written in C++
    // Hash map + doubly linked list
    #include <iostream>
    #include <vector>
    #include <ext/hash_map>

    using namespace std;
    using namespace __gnu_cxx;

    template <class K, class T>
    struct Node {
        K key;
        T data;
        Node *prev, *next;
    };

    template <class K, class T>
    class LRUCache {
    public:
        LRUCache(size_t size) {
            entries_ = new Node<K, T>[size];
            for (size_t i = 0; i < size; ++i)  // store the addresses of the free nodes
                free_entries_.push_back(entries_ + i);
            head_ = new Node<K, T>;
            tail_ = new Node<K, T>;
            head_->prev = NULL;
            head_->next = tail_;
            tail_->prev = head_;
            tail_->next = NULL;
        }

        ~LRUCache() {
            delete head_;
            delete tail_;
            delete[] entries_;
        }

        void Put(K key, T data) {
            Node<K, T> *node = hashmap_[key];
            if (node) {  // the node already exists: refresh it
                Detach(node);
                node->data = data;
                Attach(node);
            } else {
                if (free_entries_.empty()) {  // no free nodes left: the cache is full
                    node = tail_->prev;       // evict the least recently used node
                    Detach(node);
                    hashmap_.erase(node->key);
                } else {
                    node = free_entries_.back();
                    free_entries_.pop_back();
                }
                node->key = key;
                node->data = data;
                hashmap_[key] = node;
                Attach(node);
            }
        }

        T Get(K key) {
            Node<K, T> *node = hashmap_[key];
            if (node) {
                Detach(node);
                Attach(node);
                return node->data;
            } else {
                // not in the cache: return the default value of T,
                // consistent with hash_map's behavior
                return T();
            }
        }

    private:
        // unlink a node from the list
        void Detach(Node<K, T> *node) {
            node->prev->next = node->next;
            node->next->prev = node->prev;
        }

        // insert a node right after the head sentinel
        void Attach(Node<K, T> *node) {
            node->prev = head_;
            node->next = head_->next;
            head_->next = node;
            node->next->prev = node;
        }

    private:
        hash_map<K, Node<K, T> *> hashmap_;
        vector<Node<K, T> *> free_entries_;  // addresses of the free nodes
        Node<K, T> *head_, *tail_;           // sentinel nodes
        Node<K, T> *entries_;                // the nodes of the doubly linked list
    };

    int main() {
        hash_map<int, int> map;
        map[9] = 999;
        cout << map[9] << endl;
        cout << map[10] << endl;

        LRUCache<int, string> lru_cache(100);
        lru_cache.Put(1, "one");
        cout << lru_cache.Get(1) << endl;
        if (lru_cache.Get(2) == "")
            lru_cache.Put(2, "two");
        cout << lru_cache.Get(2);
        return 0;
    }

References:
http://hawstein.com/posts/lru-cache-impl.html
http://www.cnblogs.com/LZYY/p/3447785.html
