How to design an LRU Cache


How to design an LRU cache?

Designing a cache comes up in interview questions at both Google and Baidu: what is a cache, and how do you design a simple one? This article collects material on the question and summarizes it.

 Common problem description:

Question:

[1] Design a layer in front of a system that caches the last n requests to the system and the responses to them.

What data structure would you use to implement this cache so that it supports the following operations?

[A] When a request comes in, look it up in the cache; on a hit, return the response from the cache and do not pass the request on to the system.
[B] If the request is not found in the cache, pass it on to the system.
[C] Since the cache can only store the last n requests, when the (n + 1)th request arrives, insert it into the cache and delete the oldest request from the cache.

[D] Design the cache so that all operations (lookup, insert, and delete) can be done in O(1).

 Cache introduction:

A cache is a concept you run into almost everywhere in computing. The cache inside a CPU greatly reduces the time needed to access data and instructions, so that the memory hierarchy (cache + main memory) combines the speed of the cache with the capacity of main memory. The page cache in an operating system keeps frequently read disk files in memory, improving access speed. Database query engines likewise use caches to improve efficiency, and even the DataWindow data processing in PowerBuilder uses a cache-like design. The replacement algorithms most commonly used in cache design are FIFO (first in, first out) and LRU (least recently used). Given the requirements of the question, it is clear that an LRU cache should be designed.

 Solution:

The storage space of a cache is limited. When all the blocks in the cache are in use and new data must be loaded, we need a good algorithm for choosing which block to replace. LRU is built on the observation that recently used data is much more likely to be reused than data that was used long ago.

To be able to quickly delete the item that has not been accessed for the longest time and to insert the most recent item, we link the items in the cache into a doubly linked list, maintained in order from most recently accessed to least recently accessed. Whenever an item is queried, it is moved to the head of the list (an O(1) operation). After a series of lookups, the recently used items gather at the head of the list and the unused ones drift toward the tail. When a replacement is needed, the item at the tail of the list is the least recently used one: we simply place new items at the head and, when the cache is full, evict from the tail.
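As a minimal sketch of this pointer manipulation (the names Node, head, prev, and next are illustrative stand-ins, not the article's final code, which appears further below):

class Node {
    Object key, value;
    Node prev, next;   // doubly-linked-list pointers

    // Move this node to the front of a list delimited by a sentinel head node: O(1).
    void moveToHead(Node head) {
        // unlink from the current position
        prev.next = next;
        next.prev = prev;
        // relink directly after the head sentinel
        prev = head;
        next = head.next;
        head.next.prev = this;
        head.next = this;
    }
}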

Note: the choice of a doubly linked list rests on two considerations. First, cache hits can occur in any order, unrelated to the order in which the blocks were loaded, so items must be reordered arbitrarily. Second, insertion and deletion in a doubly linked list are fast, and the relative order of nodes can be adjusted flexibly, all in O(1) time.

Searching for an element in a linked list, however, takes O(n) time, so every hit would cost O(n) to locate. Without an additional data structure, that is the best we can do, and lookup becomes the bottleneck of the whole algorithm. How do we speed it up? With a hash table: it earns its place in this design precisely because its lookup time is O(1).

To summarize the idea: for each block in the cache we design a node structure that stores the content of the cache block and carries the two pointers of the doubly linked list, next and prev. A hashmap then maps each key to its node, with the key of each entry being the cache key and the value being the node (the cache block object) itself.
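A minimal sketch of this combined structure, using illustrative names (the article's full implementation follows below):

import java.util.HashMap;
import java.util.Map;

class LruStructureSketch<K, V> {
    // Each cache block is a node that lives in both structures at once.
    static class Node<K, V> {
        K key;
        V value;
        Node<K, V> prev, next;   // doubly-linked-list pointers
    }

    // key -> node: the hashmap turns the O(n) list search into an O(1) lookup
    private final Map<K, Node<K, V>> map = new HashMap<>();

    // sentinel head and tail nodes keep insertion and removal free of null checks
    private final Node<K, V> head = new Node<>();
    private final Node<K, V> tail = new Node<>();

    LruStructureSketch() {
        head.next = tail;
        tail.prev = head;
    }
}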

 Cache interface:

Query:

  • Look the key up in the hashmap. On a hit, the node is found; on a miss, return null.
  • Unlink the hit node from the doubly linked list and re-insert it at the head.
  • All of these steps are O(1), as sketched below.
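Continuing the sketch class above, the query path might look like this (the field names map, head, prev, and next come from that sketch, not from the article's final code):

    V get(K key) {
        Node<K, V> node = map.get(key);   // O(1) hashmap lookup
        if (node == null) {
            return null;                  // miss: forward the request to the system
        }
        // unlink the node, then re-insert it right after head: both O(1)
        node.prev.next = node.next;
        node.next.prev = node.prev;
        node.prev = head;
        node.next = head.next;
        head.next.prev = node;
        head.next = node;
        return node.value;
    }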

Insert:

  • Create a new node and add it to the hashmap.
  • If the cache is full, remove the tail node of the doubly linked list and delete its record from the hashmap.
  • Insert the new node at the head of the doubly linked list.

Update:

  • Similar to query: find the node via the hashmap, overwrite its value, and move it to the head of the list.

Delete:

  • Delete the corresponding record from both the doubly linked list and the hashmap; each removal is O(1).

 Java implementation of the LRU cache:
// Cache, Pair, and LRUCache would each go in their own .java file.
import java.io.Serializable;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public interface Cache<K extends Comparable, V> {
    V get(K key);                            // query
    void put(K key, V obj);                  // insert and update
    void put(K key, V obj, long validTime);  // insert and update with a validity period
    void remove(K key);                      // delete
    Pair[] getAll();
    int size();
}

public class Pair<K extends Comparable, V> implements Comparable<Pair> {
    public K key;
    public V value;

    public Pair(K key1, V value1) {
        this.key = key1;
        this.value = value1;
    }

    public boolean equals(Object obj) {
        if (obj instanceof Pair) {
            Pair p = (Pair) obj;
            return key.equals(p.key) && value.equals(p.value);
        }
        return false;
    }

    @SuppressWarnings("unchecked")
    public int compareTo(Pair p) {
        int v = key.compareTo(p.key);
        if (v == 0) {
            if (p.value instanceof Comparable) {
                return ((Comparable) value).compareTo(p.value);
            }
        }
        return v;
    }

    @Override
    public int hashCode() {
        return key.hashCode() ^ value.hashCode();
    }

    @Override
    public String toString() {
        return key + ":" + value;
    }
}

public class LRUCache<K extends Comparable, V> implements Cache<K, V>, Serializable {
    private static final long serialVersionUID = 3674312987828041877L;

    Map<K, Item> m_map = Collections.synchronizedMap(new HashMap<K, Item>());
    Item m_start = new Item();         // sentinel at the head of the list
    Item m_end = new Item();           // sentinel at the tail of the list
    int m_maxSize;
    Object m_listLock = new Object();  // lock guarding concurrent list updates

    static class Item {
        public Comparable key;   // key
        public Object value;     // cached object
        public long expires;     // expiry time
        public Item previous;
        public Item next;

        public Item(Comparable k, Object v, long e) {
            key = k;
            value = v;
            expires = e;
        }

        public Item() {
        }
    }

    void removeItem(Item item) {
        synchronized (m_listLock) {
            item.previous.next = item.next;
            item.next.previous = item.previous;
        }
    }

    void insertHead(Item item) {
        synchronized (m_listLock) {
            item.previous = m_start;
            item.next = m_start.next;
            m_start.next.previous = item;
            m_start.next = item;
        }
    }

    void moveToHead(Item item) {
        synchronized (m_listLock) {
            item.previous.next = item.next;
            item.next.previous = item.previous;
            item.previous = m_start;
            item.next = m_start.next;
            m_start.next.previous = item;
            m_start.next = item;
        }
    }

    public LRUCache(int maxObjects) {
        m_maxSize = maxObjects;
        m_start.next = m_end;
        m_end.previous = m_start;
    }

    @SuppressWarnings("unchecked")
    public Pair[] getAll() {
        Pair p[] = new Pair[m_maxSize];
        int count = 0;
        synchronized (m_listLock) {
            Item cur = m_start.next;
            while (cur != m_end) {
                p[count] = new Pair(cur.key, cur.value);
                ++count;
                cur = cur.next;
            }
        }
        Pair np[] = new Pair[count];
        System.arraycopy(p, 0, np, 0, count);
        return np;
    }

    @SuppressWarnings("unchecked")
    public V get(K key) {
        Item cur = m_map.get(key);
        if (cur == null) {
            return null;
        }
        // drop the entry if its validity period has passed
        if (System.currentTimeMillis() > cur.expires) {
            m_map.remove(cur.key);
            removeItem(cur);
            return null;
        }
        if (cur != m_start.next) {
            moveToHead(cur);   // a hit makes this the most recently used item
        }
        return (V) cur.value;
    }

    public void put(K key, V obj) {
        put(key, obj, -1);
    }

    public void put(K key, V value, long validTime) {
        Item cur = m_map.get(key);
        if (cur != null) {
            // key already cached: update the value and move the item to the head
            cur.value = value;
            if (validTime > 0) {
                cur.expires = System.currentTimeMillis() + validTime;
            } else {
                cur.expires = Long.MAX_VALUE;
            }
            moveToHead(cur);
            return;
        }
        if (m_map.size() >= m_maxSize) {
            // cache full: evict the least recently used item at the tail
            cur = m_end.previous;
            m_map.remove(cur.key);
            removeItem(cur);
        }
        long expires = 0;
        if (validTime > 0) {
            expires = System.currentTimeMillis() + validTime;
        } else {
            expires = Long.MAX_VALUE;
        }
        Item item = new Item(key, value, expires);
        insertHead(item);
        m_map.put(key, item);
    }

    public void remove(K key) {
        Item cur = m_map.get(key);
        if (cur == null) {
            return;
        }
        m_map.remove(key);
        removeItem(cur);
    }

    public int size() {
        return m_map.size();
    }
}
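A quick usage sketch (the capacity and the String keys and values are arbitrary illustrative choices):

public class LruCacheDemo {
    public static void main(String[] args) {
        LRUCache<String, String> cache = new LRUCache<>(2);  // capacity of 2
        cache.put("a", "1");
        cache.put("b", "2");
        cache.get("a");                       // "a" becomes the most recently used entry
        cache.put("c", "3");                  // cache is full, so "b" (the LRU entry) is evicted
        System.out.println(cache.get("b"));   // null: "b" was evicted
        System.out.println(cache.get("a"));   // 1
        System.out.println(cache.get("c"));   // 3
    }
}

For production code, java.util.LinkedHashMap can provide the same behavior out of the box: constructed with accessOrder set to true and with removeEldestEntry overridden, it maintains exactly this kind of access-ordered LRU structure.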
