LeetCode: LRU Cache Design

Source: Internet
Author: User

Description: Design and implement a data structure for a Least Recently Used (LRU) cache. It should support the following operations: get and set.

get(key) - Get the value (will always be positive) of the key if the key exists in the cache, otherwise return -1.
set(key, value) - Set or insert the value if the key is not already present. When the cache reaches its capacity, it should invalidate the least recently used item before inserting a new item.

To approach this design, let's first review some material from operating-systems class. LRU, or Least Recently Used, is a page replacement algorithm used in operating system memory management. The common page replacement algorithms are the optimal replacement algorithm (OPT, an idealized algorithm), first-in-first-out (FIFO), and least recently used (LRU). The optimal replacement algorithm is an ideal that cannot be realized in practice. Its basic idea is this: when a page fault occurs, some of the pages in memory will be accessed soon (one of them contains the next instruction), while others may not be touched until 10, 100, or 1,000 instructions later. Each page could be labeled with the number of instructions that will execute before the page is next accessed, and the optimal algorithm would replace the page with the largest label. However, at the moment of a page fault the operating system has no way of knowing when each page will be accessed next.
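To make the eviction policy concrete before moving to the cache design, here is a minimal sketch that simulates LRU page replacement over a reference string and counts page faults. The class name, frame count, and reference string are made up for illustration; they are not from the original article.

```java
import java.util.LinkedList;

// Simulate LRU page replacement: on a page fault with all frames full,
// evict the page whose most recent access lies furthest in the past.
public class LruPageSim {
    public static int countFaults(int[] refs, int frames) {
        // Front of the list = least recently used, back = most recently used
        LinkedList<Integer> memory = new LinkedList<>();
        int faults = 0;
        for (int page : refs) {
            if (memory.remove((Integer) page)) {
                // Hit: re-append to mark the page as most recently used
                memory.addLast(page);
            } else {
                // Fault: evict the LRU page (front) if all frames are occupied
                faults++;
                if (memory.size() == frames) {
                    memory.removeFirst();
                }
                memory.addLast(page);
            }
        }
        return faults;
    }

    public static void main(String[] args) {
        int[] refs = {1, 2, 3, 2, 1, 4, 1, 2};
        System.out.println(countFaults(refs, 3));  // prints 4
    }
}
```

With 3 frames and the reference string 1 2 3 2 1 4 1 2, the hits on pages 2 and 1 keep them resident, so only the three cold misses plus the fault that evicts page 3 are counted: 4 faults in total.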
This algorithm therefore cannot be implemented, but it can serve as a yardstick for measuring the performance of algorithms that can be. The other two main designs are the LFU cache and the FIFO cache; for details, refer to those write-ups. There are many ways to implement LRU. The traditional ones are:

1. Counter. In the simplest case, each page table entry carries a time field, and a logical clock or counter is added to the CPU. The clock is incremented by 1 on every memory access. Whenever a page is accessed, the contents of the clock register are copied into the time field of the corresponding page. This preserves the "time" of the last access to each page. On replacement, the page with the smallest time value is selected.

2. Stack. A stack retains the page numbers. Every time a page is accessed, it is taken out of the stack and placed on top. The top of the stack then always holds the most recently used page, while the bottom holds the least recently used one. Because items must be removed from the middle of the stack, it is implemented as a doubly linked list with head and tail pointers.

In Java, you can use LinkedHashMap. LinkedHashMap is an ordered hash table that records the insertion order of its entries and can instead maintain them in access order; overriding the removeEldestEntry(Map.Entry) method implements the LRU algorithm. Many jar packages, such as the MySQL JDBC utilities and several Apache libraries, use LinkedHashMap to implement an LRUCache. The following code comes from the mysql-connector-java-5.1.18-bin.jar:

package com.mysql.jdbc.util;

import java.util.LinkedHashMap;
import java.util.Map;

public class LRUCache extends LinkedHashMap {

    public LRUCache(int maxSize) {
        super(maxSize, 0.75F, true);   // true = order entries by access, not insertion
        maxElements = maxSize;
    }

    protected boolean removeEldestEntry(Map.Entry eldest) {
        return size() > maxElements;
    }

    private static final long serialVersionUID = 1L;

    protected int maxElements;
}

However, LeetCode's OJ does not accept this implementation: submitting the code above (suitably modified) produces a Compile Error. Still, it illustrates the idea. LinkedHashMap is implemented by inheriting from HashMap and maintaining a doubly linked list. When a cache entry is hit, it is moved to the head position by adjusting the links of the list, and newly added entries are placed directly at the head. After many cache operations, the recently used entries migrate toward the head of the list, while the tail accumulates the least-hit, longest-unused entries. When the cache is full, the entry at the tail is the one to remove. Two points deserve attention: a get may ask for a key that does not exist, and the cache design requires keys to be unique.

The following code uses a doubly linked list to implement the LRU cache.

import java.util.HashMap;

/**
 * Design and implement a data structure for a Least Recently Used (LRU) cache.
 * It should support the following operations: get and set.
 * get(key)        - Get the value (will always be positive) of the key if the key
 *                   exists in the cache, otherwise return -1.
 * set(key, value) - Set or insert the value if the key is not already present.
 *                   When the cache reaches its capacity, it should invalidate the
 *                   least recently used item before inserting a new item.
 */
public class LRUCache {
    private int cacheSize;                     // cache capacity
    private int currentSize;                   // current number of entries
    private HashMap<Object, CacheNode> nodes;  // cache container
    private CacheNode head;                    // head of the list (most recently used)
    private CacheNode last;                    // tail of the list (least recently used)

    class CacheNode {
        CacheNode prev;   // previous node
        CacheNode next;   // next node
        int value;
        int key;
        CacheNode() {}
    }

    // Initialize the cache
    public LRUCache(int capacity) {
        currentSize = 0;
        cacheSize = capacity;
        nodes = new HashMap<Object, CacheNode>(capacity);
    }

    public int get(int key) {
        CacheNode node = nodes.get(key);
        if (node != null) {
            move(node);
            return node.value;
        } else {
            return -1;   // key not found
        }
    }

    public void set(int key, int value) {
        CacheNode node = nodes.get(key);
        if (node != null) {
            // Duplicate key: update the value and move the node to the head
            node.value = value;
            move(node);
            nodes.put(key, node);
        } else {
            // New key, normal flow: evict the LRU entry if the cache is full
            node = new CacheNode();
            if (currentSize >= cacheSize) {
                if (last != null) {
                    nodes.remove(last.key);
                }
                removeLast();   // drop the tail of the linked list
            } else {
                currentSize++;
            }
            node.key = key;
            node.value = value;
            move(node);
            nodes.put(key, node);
        }
    }

    // Move a node to the head of the linked list
    private void move(CacheNode cacheNode) {
        if (cacheNode == head) return;
        // Unlink the node from its current position
        if (cacheNode.prev != null) cacheNode.prev.next = cacheNode.next;
        if (cacheNode.next != null) cacheNode.next.prev = cacheNode.prev;
        // If the node was the tail, move the tail pointer back
        if (last == cacheNode) last = cacheNode.prev;
        if (head != null) {
            cacheNode.next = head;
            head.prev = cacheNode;
        }
        head = cacheNode;
        cacheNode.prev = null;
        // The list was empty before this move
        if (last == null) last = head;
    }

    // Remove the entry for the given key
    public void remove(int key) {
        CacheNode cacheNode = nodes.get(key);
        if (cacheNode != null) {
            if (cacheNode.prev != null) cacheNode.prev.next = cacheNode.next;
            if (cacheNode.next != null) cacheNode.next.prev = cacheNode.prev;
            if (last == cacheNode) last = cacheNode.prev;
            if (head == cacheNode) head = cacheNode.next;
            nodes.remove(key);
            currentSize--;
        }
    }

    // Delete the tail node, i.e. the least recently used entry
    private void removeLast() {
        if (last != null) {
            if (last.prev != null) {
                last.prev.next = null;
            } else {
                // The cache holds only one entry
                head = null;
            }
            last = last.prev;
        }
    }

    public void clear() {
        head = null;
        last = null;
        nodes.clear();
        currentSize = 0;
    }

    // Test case
    // public static void main(String[] args) {
    //     LRUCache lCache = new LRUCache(2);
    //     lCache.set(2, 1);
    //     lCache.set(1, 1);
    //     lCache.set(2, 3);
    //     lCache.set(4, 1);
    //     System.out.println(lCache.get(1));   // -1 (evicted)
    //     System.out.println(lCache.get(2));   // 3
    // }
}

Below are test cases encountered during submission:

Input: 2, [get(2), set(...), get(1), set(...), set(...), get(1), get(2)]
Expected: [-1, ..., 6]

Input: 2, [set(...), set(...), get(1), get(2)]
Expected: [...]

Input: 1, [get(0)]
Expected: [-1]
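Even though LeetCode's OJ rejects the LinkedHashMap version, it remains the idiomatic choice in real Java projects. Here is a self-contained, generic sketch of that approach (the class name LinkedLruCache is mine, not from the MySQL driver) that replays the same operations as the commented-out test case above:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// LRU cache built on LinkedHashMap's access-order mode:
// the constructor's third argument `true` reorders entries on every access,
// and removeEldestEntry evicts the least recently used entry past capacity.
public class LinkedLruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxElements;

    public LinkedLruCache(int maxSize) {
        super(maxSize, 0.75f, true);   // true = access order, not insertion order
        this.maxElements = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxElements;   // evict eldest once capacity is exceeded
    }

    public static void main(String[] args) {
        LinkedLruCache<Integer, Integer> cache = new LinkedLruCache<>(2);
        cache.put(2, 1);
        cache.put(1, 1);
        cache.put(2, 3);   // update key 2 and mark it most recently used
        cache.put(4, 1);   // capacity exceeded: key 1 is evicted
        System.out.println(cache.getOrDefault(1, -1));  // -1
        System.out.println(cache.getOrDefault(2, -1));  // 3
    }
}
```

The output matches the doubly-linked-list implementation: after inserting key 4 into a full cache, key 1 (the least recently used) is gone, while key 2 survives because the update moved it to the most-recently-used position.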
