LeetCode: LRU Cache

Source: Internet
Author: User
Tags: int, size

Links: http://oj.leetcode.com/problems/lru-cache/

Reference: http://www.acmerblog.com/leetcode-lru-cache-lru-5745.html


Design and implement a data structure for a Least Recently Used (LRU) cache. It should support the following operations: get and set.

get(key) - Get the value of the key (the value will always be positive) if the key exists in the cache, otherwise return -1.
set(key, value) - Set or insert the value if the key is not already present. When the cache reaches its capacity, it should invalidate the least recently used item before inserting the new item.
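For illustration, here is a short trace of the required behaviour (a capacity of 2 is an assumption made for this example, and the LRUCache interface is the one implemented later in this post):

LRUCache cache(2);     // capacity of 2, chosen for the example
cache.set(1, 10);      // cache holds {1}
cache.set(2, 20);      // cache holds {2, 1}; key 2 is most recently used
cache.get(1);          // returns 10; key 1 becomes most recently used
cache.set(3, 30);      // capacity reached: evicts key 2, the least recently used
cache.get(2);          // returns -1; key 2 is gone
cache.set(1, 11);      // updates key 1's value and refreshes its recency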

LRU is a classic replacement algorithm used in virtual memory management and cache policies: old data is evicted based on the history of recent accesses.

The strategy rests on the assumption that "if data has been accessed recently, the chances of it being accessed again soon are higher". It is typically applied as a cache replacement policy, where a "use" covers both reads (get) and updates (set).

LRU algorithm

LRU (Least Recently Used) decides which page to evict after pages have been loaded into memory. Because the future usage of each page cannot be predicted, the recent past is used as an approximation of the near future, so the LRU replacement algorithm evicts the page that has gone unused for the longest time. Conceptually, each page carries an access field recording the time t elapsed since it was last accessed; when a page must be evicted, the one with the largest t (the least recently used page) is chosen. Databases such as Oracle apply the same idea to their buffer cache: memory blocks at the least recently used end are freed to make room for new data (useful for large data sets that are rarely re-read), while requests for data already in the cache are served directly from memory instead of going back to the database.
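As a minimal sketch of the "access field" idea described above (not the solution used later in this post), one could keep a logical clock and evict the entry with the oldest stamp; the names NaiveLRU, Entry, and tick_ are illustrative assumptions:

#include <cstdint>
#include <unordered_map>

// Naive sketch: every use stamps the entry with a logical clock value;
// eviction scans for the oldest stamp (i.e. the largest elapsed time t).
// Eviction is O(n), which is why the post switches to a list + hash map below.
class NaiveLRU {
public:
    explicit NaiveLRU(std::size_t capacity) : capacity_(capacity) {}

    int get(int key) {
        auto it = entries_.find(key);
        if (it == entries_.end()) return -1;
        it->second.last_used = ++tick_;          // refresh the access field
        return it->second.value;
    }

    void set(int key, int value) {
        auto it = entries_.find(key);
        if (it != entries_.end()) {
            it->second = {value, ++tick_};       // update value and recency
            return;
        }
        if (entries_.size() == capacity_) evictOldest();
        entries_[key] = Entry{value, ++tick_};
    }

private:
    struct Entry { int value; std::uint64_t last_used; };

    void evictOldest() {
        if (entries_.empty()) return;
        auto victim = entries_.begin();
        for (auto it = entries_.begin(); it != entries_.end(); ++it)
            if (it->second.last_used < victim->second.last_used) victim = it;
        entries_.erase(victim);                  // drop the least recently used entry
    }

    std::size_t capacity_ = 0;
    std::uint64_t tick_ = 0;
    std::unordered_map<int, Entry> entries_;
};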

For this problem we only need to implement the two methods get and set; in essence, we must store key-value pairs up to a fixed capacity, similar to what Redis does.

Analysis: To keep the cache fast, lookup, insertion, and deletion all need to be efficient. We therefore combine a doubly linked list (std::list) and a hash table (std::unordered_map) as the cache's data structures, because:

    • A doubly linked list supports O(1) insertion and deletion once the node is known (with a singly linked list, deleting a node also requires finding its predecessor).
    • The hash table stores each node's address (iterator), so a node can be located in O(1) time on average.

Specific implementation details (see the sketch after this list):

    • The closer a node is to the head of the list, the more recently it was accessed; the tail node is the least recently accessed one.
    • When a node is queried or accessed and it exists, move it to the head of the list and update its address (iterator) in the hash table.
    • When inserting a new node and the cache has reached its capacity, delete the tail node and erase the corresponding entry from the hash table, then insert the new node at the head of the list.
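Here is a minimal sketch of the two core moves in isolation (not the full class, which follows later; the names cacheList and cacheMap match the members used there):

#include <list>
#include <unordered_map>

struct CacheNode { int key; int value; };

std::list<CacheNode> cacheList;                                   // head = most recently used
std::unordered_map<int, std::list<CacheNode>::iterator> cacheMap; // key -> node position

// Move an already-present node to the front of the list in O(1).
void touch(int key) {
    cacheList.splice(cacheList.begin(), cacheList, cacheMap[key]);
    cacheMap[key] = cacheList.begin();            // refresh the stored iterator
}

// Evict the least recently used node (the tail) in O(1).
void evictLRU() {
    cacheMap.erase(cacheList.back().key);
    cacheList.pop_back();
}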

We could use std::map for the key-value storage, but since the keys do not need to be ordered, std::unordered_map gives better performance (a std::map, or a hand-written red-black tree, would also work, but a balanced BST only achieves O(log n) rather than O(1), and hand-rolling one under contest time pressure is error-prone).

Storage alone does not solve the timing part, though: we also need to track recency. A list does this naturally: the front holds the newest entries, every access inserts (or moves) the node at the head, and when space runs out the tail is removed.

The doubly linked list is the more convenient choice here, so we use std::list. However, to check whether a key is already in the list, the list's own find is O(n).

So we use an unordered_map from key to list iterator; looking up a key then costs O(1) on average.


#include <list>
#include <unordered_map>
using namespace std;

struct CacheNode {
    int key;
    int value;
    CacheNode(int k, int v) : key(k), value(v) {}
};

class LRUCache {
public:
    LRUCache(int capacity) {
        size = capacity;
    }

    int get(int key) {
        if (cacheMap.find(key) == cacheMap.end())
            return -1;
        // Move the accessed node to the list head and update its address in the map.
        cacheList.splice(cacheList.begin(), cacheList, cacheMap[key]);
        cacheMap[key] = cacheList.begin();
        return cacheMap[key]->value;
    }

    void set(int key, int value) {
        if (cacheMap.find(key) == cacheMap.end()) {
            if ((int)cacheList.size() == size) {
                // Delete the list's tail node (the least recently accessed one).
                cacheMap.erase(cacheList.back().key);
                cacheList.pop_back();
            }
            // Insert the new node at the list head and record it in the map.
            cacheList.push_front(CacheNode(key, value));
            cacheMap[key] = cacheList.begin();
        } else {
            // Update the node's value, move it to the list head,
            // and update its address in the map.
            cacheMap[key]->value = value;
            cacheList.splice(cacheList.begin(), cacheList, cacheMap[key]);
            cacheMap[key] = cacheList.begin();
        }
    }

private:
    list<CacheNode> cacheList;
    unordered_map<int, list<CacheNode>::iterator> cacheMap;
    int size;
};
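Compiled together with the class above, a small driver program (not part of the original post, just an illustrative assumption) exercises the cache:

#include <iostream>
// Assumes the CacheNode / LRUCache definitions above are in the same translation unit.

int main() {
    LRUCache cache(2);                   // capacity 2, chosen for the example
    cache.set(1, 1);
    cache.set(2, 2);
    std::cout << cache.get(1) << '\n';   // prints 1
    cache.set(3, 3);                     // cache is full: key 2 is evicted
    std::cout << cache.get(2) << '\n';   // prints -1 (evicted)
    std::cout << cache.get(3) << '\n';   // prints 3
    return 0;
}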


Copyright notice: This is an original article by the author and may not be reproduced without the author's permission.
