How to implement an LRU Cache in C++


What is an LRU Cache?

LRU stands for least recently used; it is a cache replacement algorithm that evicts the entry that has gone the longest without being used. What is a cache? In the narrow sense, a cache is the fast RAM that sits between the CPU and main memory; unlike the DRAM commonly used for main memory, it is built from more expensive but faster SRAM. In the broad sense, a cache is any structure used to bridge the speed gap between two components that transfer data at very different rates. Besides the cache between the CPU and main memory, there is a cache between RAM and the hard disk, and even between the hard disk and the network there is a cache of sorts, such as the Temporary Internet Files folder or other network content caches.

Cache capacity is limited, so when the cache is full and new content needs to be added, some of the existing content must be selected and discarded to make room. The LRU replacement principle is to evict the least recently used content. In fact, a more vivid reading of LRU is "unused for the longest time", because each replacement removes the entry that has gone the longest without being accessed.

Data structure

The typical LRU implementation is a hash map plus a doubly linked list. The doubly linked list stores the data nodes, ordered by how recently each node was used. If a node is accessed, we have reason to believe it is more likely than the other nodes to be accessed again in the near future, so we move it to the head of the list. Likewise, when a new node is inserted into the list, it will probably be used soon, so it is also placed at the head. By constantly adjusting the list in this way, the nodes near the end are naturally the ones that have gone the longest without being used. So, when the cache is full, what needs to be replaced is the last data node in the list, that is, the node just before the tail sentinel (the head and tail sentinels store no actual content).

The following shows such a doubly linked list; note that the head and tail sentinel nodes store no actual content:

head <--> 1 <--> 2 <--> 3 <--> tail

If the cache is full, we have to replace node 3.

What is the hash table for? Without it, accessing a node would require a sequential search of the list, which takes O(n) time. With a hash table mapping keys to node addresses, we can find the node we want in O(1) time, or learn in O(1) that it is not in the cache.
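
To make the difference concrete, here is a minimal sketch of the two lookup strategies. It is only an illustration: it uses C++11's std::unordered_map (the full code below uses the non-standard hash_map instead), and FindByWalking/FindByHash are hypothetical helper names.

#include <unordered_map>

// Node of the doubly linked list, with head and tail sentinels as described above.
template <class K, class T>
struct Node {
    K key;
    T data;
    Node *prev, *next;
};

// Without a hash table, finding a key means walking the list: O(n).
template <class K, class T>
Node<K,T>* FindByWalking(Node<K,T>* head, const K& key) {
    for (Node<K,T>* p = head->next; p->next != nullptr; p = p->next)  // stop at the tail sentinel
        if (p->key == key) return p;
    return nullptr;
}

// With a hash table mapping key -> node address, the same lookup is O(1) on average.
template <class K, class T>
Node<K,T>* FindByHash(const std::unordered_map<K, Node<K,T>*>& index, const K& key) {
    auto it = index.find(key);
    return it == index.end() ? nullptr : it->second;
}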

Cache interface

The cache has two main interfaces:

T Get(K key);
void Put(K key, T data);

The Get function is called when we access data of type T through a key value. If the key value is already in the cache, the data is returned, and the node that stores the data is moved to the doubly linked list header. If the data we are querying is not in the cache, we can insert the data into the doubly linked list via the put interface. If the cache is not full at this point, then we insert the new node into the list header and use the hash table to save the node's key value and the node address pair. If the cache is full, we'll replace the last node in the list (note not the tail node) with the new content, then move to the head and update the hash table.
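
As a quick illustration of these semantics, a cache with room for two entries behaves as sketched below. This is a hypothetical usage example of the LRUCache class defined in the code section that follows:

LRUCache<int, string> cache(2);   // room for two entries
cache.Put(1, "one");
cache.Put(2, "two");
cache.Get(1);                     // hit: "one" is returned and its node moves to the head
cache.Put(3, "three");            // cache is full: key 2, the least recently used, is evicted
cache.Get(2);                     // miss: returns T(), i.e. an empty string
cache.Get(3);                     // hit: "three"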

C++ code

Note that hash_map is not part of the C++ standard. I am using g++ 4.6.1 on Linux, where hash_map lives under /usr/include/c++/4.6/ext and requires the __gnu_cxx namespace. On Linux you can change into the C++ include directory (cd /usr/include/c++/<version>) and then run grep -ir "hash_map" . to see which file defines it; the last few lines of the generic header show the namespace it lives in. Explore other platforms on your own. xD

Of course, if you are already fashionably using C++11, you won't have these little problems.

// A simple LRU cache written in C++
// Hash map + doubly linked list
#include <iostream>
#include <string>
#include <vector>
#include <ext/hash_map>
using namespace std;
using namespace __gnu_cxx;

template <class K, class T>
struct Node {
    K key;
    T data;
    Node *prev, *next;
};

template <class K, class T>
class LRUCache {
public:
    LRUCache(size_t size) {
        entries_ = new Node<K,T>[size];
        for (size_t i = 0; i < size; ++i)  // store the addresses of the available nodes
            free_entries_.push_back(entries_ + i);
        head_ = new Node<K,T>;
        tail_ = new Node<K,T>;
        head_->prev = NULL;
        head_->next = tail_;
        tail_->prev = head_;
        tail_->next = NULL;
    }
    ~LRUCache() {
        delete head_;
        delete tail_;
        delete[] entries_;
    }
    void Put(K key, T data) {
        Node<K,T> *node = hashmap_[key];
        if (node) {  // the node already exists: update it and move it to the head
            Detach(node);
            node->data = data;
            Attach(node);
        }
        else {
            if (free_entries_.empty()) {  // no free nodes left, i.e. the cache is full
                node = tail_->prev;       // reuse the least recently used node
                Detach(node);
                hashmap_.erase(node->key);
            }
            else {
                node = free_entries_.back();
                free_entries_.pop_back();
            }
            node->key = key;
            node->data = data;
            hashmap_[key] = node;
            Attach(node);
        }
    }
    T Get(K key) {
        Node<K,T> *node = hashmap_[key];
        if (node) {
            Detach(node);
            Attach(node);
            return node->data;
        }
        else {  // not in the cache: return T's default value, consistent with hash_map behaviour
            return T();
        }
    }
private:
    // unlink the node from the list
    void Detach(Node<K,T>* node) {
        node->prev->next = node->next;
        node->next->prev = node->prev;
    }
    // insert the node at the head of the list
    void Attach(Node<K,T>* node) {
        node->prev = head_;
        node->next = head_->next;
        head_->next = node;
        node->next->prev = node;
    }
private:
    hash_map<K, Node<K,T>* > hashmap_;
    vector<Node<K,T>* > free_entries_;  // addresses of the available nodes
    Node<K,T> *head_, *tail_;
    Node<K,T> *entries_;                // the data nodes of the doubly linked list
};

int main() {
    hash_map<int, int> map;
    map[9] = 999;
    cout << map[9] << endl;
    cout << map[10] << endl;
    LRUCache<int, string> lru_cache(100);  // cache capacity
    lru_cache.Put(1, "one");
    cout << lru_cache.Get(1) << endl;
    if (lru_cache.Get(2) == "")
        lru_cache.Put(2, "two");
    cout << lru_cache.Get(2);
    return 0;
}
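
As noted above, with C++11 the non-standard hash_map is no longer necessary. The following is a minimal sketch, assuming you can use std::unordered_map and std::list, of the same hash-map-plus-doubly-linked-list design; the class name LRUCache11 and its members are purely illustrative. std::list::splice moves an existing node to the front in O(1) without invalidating the iterators stored in the map, so it plays the same role as Detach/Attach above.

#include <iostream>
#include <list>
#include <string>
#include <unordered_map>

template <class K, class T>
class LRUCache11 {
public:
    explicit LRUCache11(size_t capacity) : capacity_(capacity) {}

    // Returns a default-constructed T when the key is absent,
    // matching the behaviour of the hash_map version above.
    T Get(const K& key) {
        auto it = map_.find(key);
        if (it == map_.end()) return T();
        // Move the accessed node to the front of the list (most recently used).
        items_.splice(items_.begin(), items_, it->second);
        return it->second->second;
    }

    void Put(const K& key, const T& data) {
        auto it = map_.find(key);
        if (it != map_.end()) {            // key already cached: update and move to front
            it->second->second = data;
            items_.splice(items_.begin(), items_, it->second);
            return;
        }
        if (items_.size() == capacity_) {  // cache full: evict the least recently used (back)
            map_.erase(items_.back().first);
            items_.pop_back();
        }
        items_.emplace_front(key, data);
        map_[key] = items_.begin();
    }

private:
    size_t capacity_;
    std::list<std::pair<K, T>> items_;  // most recently used at the front, least at the back
    std::unordered_map<K, typename std::list<std::pair<K, T>>::iterator> map_;
};

int main() {
    LRUCache11<int, std::string> cache(2);
    cache.Put(1, "one");
    cache.Put(2, "two");
    std::cout << cache.Get(1) << std::endl;  // "one"; key 1 becomes most recently used
    cache.Put(3, "three");                   // evicts key 2
    std::cout << cache.Get(2) << std::endl;  // prints an empty line: key 2 was evicted
    return 0;
}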

Reference links

http://www.cs.uml.edu/~jlu1/doc/codes/lruCache.html
