Interview question: implement an LRU cache. LRU stands for Least Recently Used; it is a cache replacement algorithm.

Source: Internet
Author: User

Design and implement a data structure for a Least Recently Used (LRU) cache. It should support the following operations: get and set.

get(key) - Get the value (which will always be positive) of the key if the key exists in the cache; otherwise return -1.
set(key, value) - Set or insert the value if the key is not already present. When the cache reaches its capacity, it should invalidate the least recently used item before inserting the new item.

    • Design a data structure for the LRU cache, which supports two operations:

1) get(key): Returns the corresponding value if key is in the cache; otherwise returns -1.

2) set(key, value): If key is not in the cache, insert (key, value) into the cache (note that if the cache is full, you must first evict the least recently used element); if key is already in the cache, reset its value.
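To make this contract concrete, here is a quick behavioral demo. It uses the JDK's LinkedHashMap in access-order mode purely to illustrate the required get/set semantics; it is not the interview answer, since the point of the exercise is to build the structure by hand.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruSemanticsDemo {
    public static void main(String[] args) {
        final int capacity = 2;
        // accessOrder=true makes iteration order follow recency of access.
        Map<Integer, Integer> cache =
                new LinkedHashMap<Integer, Integer>(capacity, 0.75f, true) {
                    @Override
                    protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
                        return size() > capacity;  // evict LRU entry when over capacity
                    }
                };
        cache.put(1, 1);
        cache.put(2, 2);
        cache.get(1);        // touches key 1, so key 2 is now least recently used
        cache.put(3, 3);     // capacity exceeded: key 2 is evicted
        System.out.println(cache.containsKey(2)); // false
        System.out.println(cache.get(1));         // 1
    }
}
```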

    • Problem-solving ideas: the problem asks us to design an LRU cache, i.e., a cache governed by the LRU replacement policy. Before doing so, we need to understand the core idea of the LRU algorithm. LRU stands for Least Recently Used. In operating-system memory management there is an important family of algorithms, the page replacement algorithms (FIFO, LRU, LFU, and several other common policies). The core idea of a cache replacement algorithm and a page replacement algorithm is the same: given a space of limited size, define a rule for updating and accessing the elements in it. The design principle of LRU is: if a piece of data has not been accessed for a while, it is unlikely to be accessed in the near future. In other words, when the limited space is full, the data that has gone unaccessed for the longest time should be evicted.

And what data structure should implement the LRU algorithm? Most people's first thought: store the items in an array and tag each with an access timestamp. Each time a new item is inserted, increment the timestamps of all existing items, then insert the new item with a timestamp of 0. Each time an item in the array is accessed, reset its timestamp to 0. When the array is full, evict the item with the largest timestamp.

This implementation is simple, but what is the flaw? The access timestamps must be continuously maintained, and in addition, inserting, deleting, and accessing data each take O(n) time.
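The array-and-timestamp idea above can be sketched directly. This is a hypothetical illustration (the class and field names are mine, not from the post); note that it also ages entries inside get, a detail the description glosses over but which is needed for correct ordering. Every operation scans the whole array, so each is O(n).

```java
// Naive LRU cache: parallel arrays plus per-entry age stamps.
// Illustrative sketch only; names are not from the original post.
class NaiveLruCache {
    private final int[] keys, values, stamps;
    private int size;

    NaiveLruCache(int capacity) {
        keys = new int[capacity];
        values = new int[capacity];
        stamps = new int[capacity];
    }

    // Increment every entry's stamp; called on each access. O(n).
    private void age() {
        for (int i = 0; i < size; i++) stamps[i]++;
    }

    int get(int key) {
        for (int i = 0; i < size; i++) {       // O(n) scan
            if (keys[i] == key) {
                age();
                stamps[i] = 0;                 // mark as most recently used
                return values[i];
            }
        }
        return -1;
    }

    void set(int key, int value) {
        age();
        for (int i = 0; i < size; i++) {
            if (keys[i] == key) {              // hit: update in place
                values[i] = value;
                stamps[i] = 0;
                return;
            }
        }
        int slot = size;
        if (size == keys.length) {             // full: evict the largest stamp
            slot = 0;
            for (int i = 1; i < size; i++) {
                if (stamps[i] > stamps[slot]) slot = i;
            }
        } else {
            size++;
        }
        keys[slot] = key;
        values[slot] = value;
        stamps[slot] = 0;
    }
}
```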

So is there a better way to achieve this?

Yes: use a doubly linked list together with a HashMap. When a new item must be inserted: if it already exists in the list (commonly called a hit), move its node to the head of the list; if it does not exist, place a new node at the list head, and if the cache is full, delete the last node of the list. When accessing data: if the item exists in the list, move its node to the head and return its value; otherwise return -1. This way, the node at the tail of the list is always the one that has gone unaccessed the longest.

To summarize: per the problem requirements, the LRU cache supports the following operations:

1) set(key, value): If key is present in the HashMap, reset the corresponding value, then unlink the corresponding node cur from its position in the list and move it to the head. If key is not present in the HashMap, create a new node and place it at the head of the list; when the cache is full, first delete the last node of the list.

2) get(key): If key is present in the HashMap, move the corresponding node to the head of the list and return its value; otherwise return -1.

```java
package com.netesay.interview;

import java.util.HashMap;
import java.util.Map;

/**
 * @author: Weblee
 * @email: [Email protected]
 * @blog: http://www.cnblogs.com/lkzf/
 * @time: October 24, 2014, 6:29:40 PM
 */
public class LRUCache {
    // A doubly linked list records usage order; the map associates each key
    // with its list node. Key and value are stored in the node itself.
    Map<Integer, CacheNode> cacheMap;
    CacheNode head, tail;   // sentinel nodes
    int capacity;

    public LRUCache(int capacity) {
        this.capacity = capacity;
        cacheMap = new HashMap<Integer, CacheNode>(capacity);
        head = new CacheNode(-1, -1);
        tail = new CacheNode(-1, -1);
        head.next = tail;
        tail.pre = head;
    }

    public int get(int key) {
        if (cacheMap.containsKey(key)) {
            CacheNode node = cacheMap.get(key);
            put2Head(node);             // touched: move to the head
            return node.value;
        } else {
            return -1;
        }
    }

    public void set(int key, int value) {
        if (cacheMap.containsKey(key)) {
            CacheNode p = cacheMap.get(key);
            p.value = value;            // hit: update value, move to the head
            put2Head(p);
        } else if (cacheMap.size() < capacity) {
            CacheNode node = new CacheNode(key, value);
            put2Head(node);
            cacheMap.put(key, node);
        } else {
            CacheNode p = new CacheNode(key, value);
            put2Head(p);
            cacheMap.put(key, p);
            int tmpKey = removeEnd();   // evict the least recently used entry
            cacheMap.remove(tmpKey);
        }
    }

    private void put2Head(CacheNode p) {
        if (p.next != null && p.pre != null) {  // unlink if already in the list
            p.pre.next = p.next;
            p.next.pre = p.pre;
        }
        p.pre = head;
        p.next = head.next;
        head.next.pre = p;
        head.next = p;
    }

    private int removeEnd() {
        CacheNode p = tail.pre;         // the node just before the tail sentinel
        p.pre.next = tail;
        tail.pre = p.pre;
        p.pre = null;
        p.next = null;
        return p.key;
    }
}

class CacheNode {
    int key;
    int value;
    CacheNode pre;
    CacheNode next;

    public CacheNode(int key, int value) {
        this.key = key;
        this.value = value;
    }
}
```
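The pointer updates in put2Head and removeEnd are the error-prone part of the listing above. The following standalone sketch (a simplified copy with illustrative names, not the original class) isolates the sentinel-based list manipulation so it can be exercised on its own.

```java
// Standalone sketch of the sentinel-based doubly linked list used by the
// cache above. Node/method names mirror the listing for readability.
class Node {
    int key;
    Node pre, next;
    Node(int key) { this.key = key; }
}

public class DllDemo {
    static Node head = new Node(-1), tail = new Node(-1); // sentinels
    static {
        head.next = tail;
        tail.pre = head;
    }

    static void put2head(Node p) {
        if (p.next != null && p.pre != null) { // unlink if already in the list
            p.pre.next = p.next;
            p.next.pre = p.pre;
        }
        p.pre = head;                          // splice in right after head
        p.next = head.next;
        head.next.pre = p;
        head.next = p;
    }

    static int removeEnd() {                   // drop the LRU node before tail
        Node p = tail.pre;
        p.pre.next = tail;
        tail.pre = p.pre;
        p.pre = null;
        p.next = null;
        return p.key;
    }

    public static void main(String[] args) {
        Node a = new Node(1), b = new Node(2), c = new Node(3);
        put2head(a);           // list: 1
        put2head(b);           // list: 2, 1
        put2head(c);           // list: 3, 2, 1
        put2head(a);           // touching 1 moves it to the head: 1, 3, 2
        System.out.println(removeEnd()); // evicts 2
        System.out.println(removeEnd()); // evicts 3
    }
}
```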


