Objective
A couple of days ago I ran into an interview problem about page replacement algorithms. I had already learned about them from operating systems textbooks: LRU (Least Recently Used), OPT (optimal page replacement), and FIFO (first-in, first-out page replacement). Today, let's start by implementing LRU.
LRU principle
The core idea of the LRU (Least Recently Used) algorithm is to evict data based on its historical access pattern. The underlying assumption is: "if data has been accessed recently, the chance that it will be accessed again in the future is higher".
Following common write-ups found online, here is an example implementation for reference:
```cpp
#include <iostream>
#include <unordered_map>
#include <list>
#include <utility>
#include <cstdio>   // for getchar()
using namespace std;

class LRUCache {
public:
    LRUCache(int capacity) {
        m_capacity = capacity;
    }

    int Get(int key) {
        int retValue = -1;
        unordered_map<int, list<pair<int, int> >::iterator>::iterator it = cachesMap.find(key);

        // If the key is in the cache, move its record to the front of the linked list
        if (it != cachesMap.end())
        {
            retValue = it->second->second;

            // Move to the front
            list<pair<int, int> >::iterator ptrPair = it->second;
            pair<int, int> tmpPair = *ptrPair;
            caches.erase(ptrPair);
            caches.push_front(tmpPair);

            // Update the iterator stored in the map
            cachesMap[key] = caches.begin();
        }
        return retValue;
    }

    void Set(int key, int value) {
        unordered_map<int, list<pair<int, int> >::iterator>::iterator it = cachesMap.find(key);

        if (it != cachesMap.end())  // Key is already present
        {
            list<pair<int, int> >::iterator ptrPair = it->second;
            ptrPair->second = value;

            // Move to the front
            pair<int, int> tmpPair = *ptrPair;
            caches.erase(ptrPair);
            caches.push_front(tmpPair);

            // Update the map
            cachesMap[key] = caches.begin();
        }
        else  // Key is not present
        {
            pair<int, int> tmpPair = make_pair(key, value);

            if (m_capacity == caches.size())  // Cache is full
            {
                int delKey = caches.back().first;
                caches.pop_back();  // Evict the last (least recently used) entry

                // Delete the corresponding entry in the map
                unordered_map<int, list<pair<int, int> >::iterator>::iterator delIt = cachesMap.find(delKey);
                cachesMap.erase(delIt);
            }

            caches.push_front(tmpPair);
            cachesMap[key] = caches.begin();  // Update the map
        }
    }

private:
    int m_capacity;                                                 // capacity of the cache
    list<pair<int, int> > caches;                                   // cache contents stored in a doubly linked list
    unordered_map<int, list<pair<int, int> >::iterator> cachesMap;  // hash map to speed up lookups
};

int main(int argc, char** argv)
{
    LRUCache s(2);
    s.Set(2, 1);
    s.Set(1, 1);
    cout << s.Get(2) << endl;
    s.Set(4, 1);
    s.Set(5, 2);
    cout << s.Get(5) << endl;
    cout << s.Get(4) << endl;
    getchar();
    return 0;
}
```
Run in VS 2015, the output is:

```
1
2
1
```
Explanation: The program first creates an LRUCache with 2 slots of storage and inserts {2, 1} and {1, 1}. Get(2) then outputs value=1 and moves {2, 1} to the front. Inserting {4, 1} finds the cache full, so the least recently used entry {1, 1} is evicted, leaving {4, 1}, {2, 1}. Inserting {5, 2} then evicts {2, 1}, leaving {5, 2}, {4, 1}. Finally, the values of the two remaining keys are output, each Get re-ordering the list by recency of use.
The knowledge used here is: doubly linked list + hash table. (With std::map, the elements are automatically kept sorted by key; with std::unordered_map there is no internal ordering.)
The principle of the LRU cache mechanism: put newly inserted data, and data that is hit, at the head of the linked list, marking it as most recently used; when the list is full, evict data from the tail. With a linked list alone, however, there is a problem: hitting a key has O(n) time complexity, because each lookup must traverse the list. Introducing a hash table brings the lookup down to O(1), trading space for time.
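As a design note, the erase-and-push_front steps in the implementation above copy the node's pair. An alternative (my own variant, not from the original article) is std::list::splice, which relinks the hit node to the front in O(1) without invalidating its iterator, so the map entry does not need to be rewritten on a hit:

```cpp
#include <list>
#include <unordered_map>
#include <utility>
using namespace std;

class LRUCacheSplice {
public:
    explicit LRUCacheSplice(size_t capacity) : m_capacity(capacity) {}

    int Get(int key) {
        auto it = cachesMap.find(key);
        if (it == cachesMap.end())
            return -1;
        // Relink the hit node to the front in O(1); splicing within the
        // same list keeps iterators valid, so the map needs no update.
        caches.splice(caches.begin(), caches, it->second);
        return it->second->second;
    }

    void Set(int key, int value) {
        auto it = cachesMap.find(key);
        if (it != cachesMap.end()) {            // key already present
            it->second->second = value;
            caches.splice(caches.begin(), caches, it->second);
            return;
        }
        if (caches.size() == m_capacity) {      // cache is full
            cachesMap.erase(caches.back().first);  // evict LRU entry
            caches.pop_back();
        }
        caches.push_front(make_pair(key, value));
        cachesMap[key] = caches.begin();
    }

private:
    size_t m_capacity;                                              // capacity of the cache
    list<pair<int, int> > caches;                                   // doubly linked list of (key, value)
    unordered_map<int, list<pair<int, int> >::iterator> cachesMap;  // key -> list node
};
```

The behavior is the same as the version above; only the hit path changes, avoiding one erase, one copy, and one map write per hit.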