Overview:
In the previous article, "Operating System: Cache Principles Based on Page Replacement Algorithms (Part 1)", we covered the FIFO, LRU and Clock page replacement algorithms. In this article there are three more core algorithms to explain: LFU (Least Frequently Used), LRU-K, and MQ (Multi Queue).
Original article: http://blog.csdn.net/lemon_tree12138/article/details/50475240 -- coding-naga
-- Please indicate the source when reposting
1. LFU
The full name, Least Frequently Used, tells us that this algorithm evicts based on how many times each resource has been accessed. Because the algorithm is very simple, we only give the idea and skip the logic code (the LRU-K and MQ algorithms below are the focus of this article). A schematic of LFU is shown in Figure 1.
Figure 1: LFU replacement algorithm schematic
Algorithm steps:
(1) When a new resource is accessed, it is appended to the tail of the cache queue;
(2) When an already cached resource is accessed, its access count is incremented by 1 and it is moved up to the appropriate position;
(3) Resources with the same access count are ordered among themselves by access time;
(4) When a new resource arrives and the queue is already full, the resource at the tail of the queue is evicted and the new resource is appended to the tail.
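Although the article skips the LFU code, a minimal sketch of these four steps (my own illustration, not the author's implementation) could look like this:

import java.util.LinkedList;

// Minimal LFU sketch: the head of the list holds the most frequently used
// entries and the tail the least frequently used, which is evicted when full.
public class LfuCache<T> {

    private static class Entry<T> {
        final T value;
        int times = 1;                        // access count
        Entry(T value) { this.value = value; }
    }

    private final int capacity;
    private final LinkedList<Entry<T>> queue = new LinkedList<>();

    public LfuCache(int capacity) {
        this.capacity = capacity;
    }

    public void visit(T value) {
        // Steps (2)/(3): an existing entry gets its count bumped and moves up
        for (int i = 0; i < queue.size(); i++) {
            Entry<T> entry = queue.get(i);
            if (entry.value.equals(value)) {
                entry.times++;
                queue.remove(i);
                insertByTimes(entry);
                return;
            }
        }
        // Step (4): evict the tail when full; step (1): append the newcomer
        if (queue.size() == capacity) {
            queue.removeLast();
        }
        queue.addLast(new Entry<>(value));
    }

    // Keep the list sorted by descending access count; among equal counts the
    // just-accessed entry is placed first, so older ties drift toward the tail.
    private void insertByTimes(Entry<T> entry) {
        int index = 0;
        while (index < queue.size() && queue.get(index).times > entry.times) {
            index++;
        }
        queue.add(index, entry);
    }
}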
Algorithm evaluation:
Personally, I do not think this is a particularly good caching algorithm, because it does not reflect the trend of which resources the "user" has been accessing in the recent past.
2. LRU-K
Problems with the LRU algorithm:
As the name suggests, LRU-K is based on the LRU algorithm and is an improvement on it. Let us first look at how plain LRU behaves in two different situations.
First, suppose accesses come in sustained bursts: for example, we access resource A 10 times, then resource B 10 times, then A another 10 times, then resource C 10 times, and so on. Roughly speaking, replacement then happens within a fairly small part of the cache queue, the number of swap-outs stays low, and system performance is good. That is the first case.
The second case: we access resources A, B, C, ..., Y, Z in order and then repeat the whole sequence n times, while our cache queue is shorter than that cycle. Now resources have to be constantly swapped in and out, IO operations increase, and efficiency naturally drops. This situation is also known as cache pollution.
Advantages of LRU-K:
To counter the cache pollution of the second case, we make an adjustment: we add a new history queue. With plain LRU, merely accessing a resource once is enough to put it into the cache, which wastes cache space, since the cache queue is limited after all. In LRU-K, a resource that has only been accessed once is not put into the cache; only once it has been accessed K times is it added to the cache queue and removed from the history queue. Once a resource is in the cache, we no longer need to track its access count. Within the cache queue we update and evict with the LRU algorithm (the history queue can use either FIFO or LRU).
You may well ask: if the new history queue also uses the LRU algorithm, where is the advantage? Adding an extra queue has its own cost, so how does this solve LRU's problem? I had the same thought at first. Then I realised that the history queue only stores access records, and we can make those records very cheap. How? Can you work it out?
Remember that the resources we are talking about may be processes or other fairly large objects. Storing those objects or processes directly in the history queue is clearly not a good deal. So the breakthrough is to make each history entry as small as possible. We can hash the object to an integer so that we never store the original object, and the access count can be stored in a byte (we can assume that within a given period the access count will not be outrageous; of course an int or a long would work just as well). As a result, the history queue can be made very small. If hashing is unfamiliar to you, you should first fill in that background.
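As a sketch, such a lightweight history record could be as small as the following (a hypothetical class; the LruHistory bean used in the code below is assumed to look roughly like this):

// Hypothetical history record: it stores only the resource's hash and a small
// access counter, never the resource itself.
public class LruHistory {

    private int hash;       // hashCode of the resource
    private byte times;     // access count; a byte is enough under our assumption

    public int getHash() { return hash; }
    public void setHash(int hash) { this.hash = hash; }
    public int getTimes() { return times; }
    public void setTimes(int times) { this.times = (byte) times; }
}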
A schematic of the LRU-K algorithm is shown in Figure 2 below.
Figure 2: LRU-K replacement algorithm schematic
Code implementation:
In the code below we make an idealized assumption: adding new resources and accessing existing resources are completely independent operations. When a resource is added, it is assumed not to be in either queue yet; when a resource is accessed, it is assumed to already exist in one of the queues.
Adding a new element (offer):
public void offer(Object object) {
    if (histories == null) {
        throw new NullPointerException();
    }
    // When the history queue is full, clean out old records first
    if (histories.size() == maxHistoryLength) {
        cleanHistory();
    }
    // Record only the hash and an access count, not the object itself
    LruHistory history = new LruHistory();
    history.setHash(object.hashCode());
    history.setTimes(1);
    histories.add(history);
}
Accessing an existing resource (visitting):
public void visitting(Object object) {
    if (histories == null || caches == null) {
        throw new NullPointerException();
    }
    int hashCode = object.hashCode();
    if (inHistory(hashCode)) {
        // Bump the access count; modifyHistory returns true once it reaches K
        boolean offerCache = modifyHistory(hashCode);
        if (!offerCache) {
            return;
        }
        // Promote the resource from the history queue into the cache queue
        offerToCache(object);
    } else if (inCache(object)) {
        displace(object);
    } else {
        throw new NullPointerException("object does not exist");
    }
}
Modifying the history:
boolean offerCache = modifyHistory(hashCode);
if (!offerCache) {
    return;
}
offerToCache(object);
The logic of the snippet above: while updating the history queue we find that a resource has now been accessed often enough to enter the cache, so we add it to the cache queue.
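The helpers inHistory, modifyHistory and offerToCache are not shown in this article. Under the assumption that the history queue holds LruHistory records as sketched earlier and that k is the promotion threshold, modifyHistory might look roughly like this (a hypothetical sketch, not the original source):

// Hypothetical sketch: bump the record's access count and report whether it
// has reached the promotion threshold K; if so, drop it from the history queue.
private boolean modifyHistory(int hashCode) {
    java.util.Iterator<LruHistory> iterator = histories.iterator();
    while (iterator.hasNext()) {
        LruHistory history = iterator.next();
        if (history.getHash() != hashCode) {
            continue;
        }
        history.setTimes(history.getTimes() + 1);
        if (history.getTimes() >= k) {
            iterator.remove();     // leave the history queue; the caller promotes it
            return true;
        }
        return false;
    }
    return false;
}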
Accessing an element of the cache queue:
Because the cache queue in LRU-K is a full LRU, accessing it works exactly the same way as in plain LRU:
private void displace(Object object) {
    // Standard LRU behaviour: remove the hit entry and re-append it at the tail
    for (Object item : caches) {
        if (item.equals(object)) {
            caches.remove(item);
            break;
        }
    }
    caches.add(object);
}
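Putting the pieces together, and assuming the methods above live in a class called LruKCache with the constructor shown (both the class name and the constructor are my assumptions; see the repository linked at the end for the actual source), usage might look like this:

// Hypothetical usage; class name, constructor and K value are assumptions.
LruKCache cache = new LruKCache(100 /* cache size */, 1000 /* history size */, 2 /* K */);

Object page = "page-42";
cache.offer(page);        // first sighting: only the hash is recorded in the history queue
cache.visitting(page);    // second access reaches K = 2, so it is promoted to the cache queue
cache.visitting(page);    // now found in the cache queue and handled by plain LRU (displace)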
3. Multi Queue (MQ)
The problem it solves:
In the LRU-K algorithm above, a resource is added to the cache as soon as its access count reaches a fixed threshold. That is reasonable, but we can extend the idea further: cache resources in tiers according to how often they are accessed. Suppose LRU-K promotes a resource once it has been accessed more than 3 times, and we have two resources A and B, where A has been accessed 20 times and B 4 times. If B is then accessed again (and we ignore any later accesses), B becomes less likely to be evicted than A. Yet overall, A should in theory have a higher priority than B.
That is, in a sense, LRU-K's own form of "cache pollution" (which is not to deny that LRU-K relieves the cache pollution of the plain LRU algorithm), so we look for a way to solve it: the MQ (Multi Queue) algorithm explained here. Below is a schematic of the MQ replacement algorithm:
Figure 3: MQ replacement algorithm schematic
Algorithm steps:
(1) We need a history queue and an array of cache queues. The real cache queues are stored in this array; each cache queue, and the history queue itself, evicts according to the LRU algorithm, and the cache queues are ranked against one another by access count;
(2) When a new resource is accessed, it is added to the lowest-level queue Q0; if that forces an eviction, the evicted resource goes into the history queue;
(3) When a resource already in some cache queue is accessed again, it is moved to the head of that queue. Once its access count crosses the threshold of its current level, it is removed from that queue and added to the head of the next higher cache queue;
(4) To prevent, to some degree (though not absolutely), high-level cache entries from sitting there forever, a resource that has not been accessed for a certain time is demoted to the next lower level (a sketch of such a demotion pass follows this list);
(5) Resources evicted from any queue in the cache queue array are added to the history queue. If a resource in the history queue is accessed again, its access count is recalculated and it re-enters the appropriate cache queue; if it is evicted from the history queue, it is gone for good.
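Step (4) is not covered by the code shown below, so here is a rough sketch of what a periodic demotion pass could look like (field and method names such as snapshot() are assumptions, not the article's API):

// Hypothetical demotion pass for step (4): entries idle longer than expireTime
// drop one level; a fuller version would also push entries leaving Q0 into the
// history queue.
private void demoteIdleEntries(long expireTime) {
    long now = System.currentTimeMillis();
    for (int level = cacheQueueList.size() - 1; level >= 1; level--) {
        CacheQueue queue = cacheQueueList.get(level);
        for (CacheBean bean : queue.snapshot()) {             // iterate over a copy
            if (now - bean.getLastVisitTime() > expireTime) {
                queue.remove(bean);
                cacheQueueList.get(level - 1).offer(bean);    // demote one level
            }
        }
    }
}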
Logical implementation:
Based on the steps above, let us write the code; only the key parts are shown. Adding a new resource (offer):
public void offer(Object object) {
    if (object == null) {
        throw new NullPointerException();
    }
    CacheBean cacheBean = new CacheBean(object);
    cacheBean.setTimes(1);
    cacheBean.setLastVisitTime(System.currentTimeMillis());
    // A new resource always enters the lowest-level queue Q0
    CacheQueue firstQueue = cacheQueueList.get(0);
    CacheBean pollObject = firstQueue.offer(cacheBean);
    if (pollObject == null) {
        return;
    }
    // Whatever Q0 evicts falls back into the history queue
    historyQueue.offer(pollObject);
}
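The MQ code refers to CacheBean, CacheQueue, cacheQueueList and historyQueue, which are not shown in this article. Judging from the calls it makes, a minimal CacheBean might look like the following sketch (a guess at its shape, not the original source):

// Hypothetical bean pairing a cached resource with its access statistics.
public class CacheBean {

    private final Object value;
    private int times;               // total number of accesses so far
    private long lastVisitTime;      // used by the demotion pass in step (4)

    public CacheBean(Object value) {
        this.value = value;
    }

    public int getTimes() { return times; }
    public void setTimes(int times) { this.times = times; }
    public long getLastVisitTime() { return lastVisitTime; }
    public void setLastVisitTime(long lastVisitTime) { this.lastVisitTime = lastVisitTime; }

    // Two beans describe the same cache entry when they wrap the same resource,
    // so cacheQueue.contains(new CacheBean(object)) behaves as the code expects.
    @Override
    public boolean equals(Object other) {
        return other instanceof CacheBean && value.equals(((CacheBean) other).value);
    }

    @Override
    public int hashCode() {
        return value.hashCode();
    }
}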
Accessing a resource (visitting):
public void visitting(Object object) {
    if (object == null) {
        throw new NullPointerException();
    }
    CacheBean cacheBean = new CacheBean(object);
    // First look for the resource in the cache queues
    CacheBean tmpBean = null;
    int currentLevel = 0;
    boolean needUp = false;
    for (CacheQueue cacheQueue : cacheQueueList) {
        if (cacheQueue.contains(cacheBean)) {
            tmpBean = cacheQueue.get(cacheBean);
            if (tmpBean.getTimes() < timesDistance * (currentLevel + 1)) {
                // Still inside this level's range: bump the count and refresh its LRU position
                tmpBean.setTimes(tmpBean.getTimes() + 1);
                cacheQueue.visiting(tmpBean);
                return;
            } else {
                // Crossed this level's threshold: take it out and promote it below
                tmpBean.setTimes(tmpBean.getTimes() + 1);
                cacheQueue.remove(tmpBean);
                needUp = true;
            }
            break;
        }
        currentLevel++;
    }
    // Do we need to upgrade the resource to a higher-level queue?
    if (needUp) {
        int times = tmpBean.getTimes();
        // Clamp the target level so it never runs past the highest-level queue
        int level = Math.min(times / timesDistance, cacheQueueList.size() - 1);
        cacheQueueList.get(level).offer(tmpBean);
        return;
    }
    // If the resource is hit again while in the history queue, recalculate its
    // priority and move it into the matching cache queue
    if (historyQueue.contains(cacheBean)) {
        CacheBean revisitBean = historyQueue.revisiting(cacheBean);
        revisitBean.setTimes(revisitBean.getTimes() + 1);
        System.out.println(revisitBean);
        int times = revisitBean.getTimes();
        int level = Math.min(times / timesDistance, cacheQueueList.size() - 1);
        CacheBean evicted = cacheQueueList.get(level).offer(revisitBean);
        if (evicted != null) {
            historyQueue.offer(evicted);
        }
    }
}
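Finally, assuming the two methods above belong to a class named MultiQueueCache with the constructor shown (the name and parameters are my assumptions; the linked repository may differ), typical usage could be:

// Hypothetical usage; class name and constructor parameters are assumptions.
MultiQueueCache cache = new MultiQueueCache(4 /* queue levels */, 64 /* size per queue */);

Object page = "page-7";
cache.offer(page);             // enters the lowest-level queue Q0
for (int i = 0; i < 10; i++) {
    cache.visitting(page);     // repeated hits raise its count and push it up the queue levels
}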
References:
http://flychao88.iteye.com/blog/1977653
http://www.cs.cmu.edu/~christos/courses/721-resources/p297-o_neil.pdf
http://flychao88.iteye.com/blog/1977642
GitHub source download:
https://github.com/William-Hai/LRU-Cache
https://github.com/William-Hai/MultiQueue-Cache