Cache algorithms (page replacement algorithms): FIFO, LFU, LRU


Transferred from: http://www.cnblogs.com/dolphin0520/

1. FIFO algorithm

FIFO (First In, First Out). The first-in-first-out idea is used in many places in operating system design, such as job scheduling (first come, first served). Why is this principle used so widely? Because it is simple, matches people's intuition, and is fair, and it is easy to implement: it maps directly onto the queue data structure.

In a FIFO cache, the core rule is that the data that entered the cache first should be evicted first. In other words, when the cache is full, the entry that has been in the cache the longest is evicted. A FIFO cache should support the following operations:

get(key): if the key exists in the cache, return the corresponding value; otherwise return -1.

set(key, value): if the key exists in the cache, reset its value; if the key is not present, insert it into the cache, and if the cache is full, first evict the entry that entered the cache earliest.

For example: if the cache size is 3 and the data access sequence is set(1,1), set(2,2), set(3,3), set(4,4), get(2), set(5,5),

then the data in the cache changes as follows:

(1,1)                     set(1,1)

(1,1) (2,2)               set(2,2)

(1,1) (2,2) (3,3)         set(3,3)

(2,2) (3,3) (4,4)         set(4,4)   (the cache is full, so (1,1), which entered first, is evicted)

(2,2) (3,3) (4,4)         get(2)

(3,3) (4,4) (5,5)         set(5,5)   (full again, so (2,2), the oldest remaining entry, is evicted)

So what data structures can be used to implement this?

Here's an idea:

Use a doubly linked list to hold the data: new data is appended to the tail of the list, and if the cache is already full, the node at the head of the list is deleted before the new data is appended. When data is accessed, the corresponding value is returned if it exists in the cache; otherwise -1 is returned. To make lookups efficient, a HashMap can map each key to its node in the linked list.
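As a concrete sketch of this idea (my addition, not code from the original post): in Java, LinkedHashMap already combines a hash table with a doubly linked list kept in insertion order, so overriding removeEldestEntry yields a FIFO cache in a few lines.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// FIFO cache sketch: LinkedHashMap is a hash table backed by a doubly
// linked list. With accessOrder = false the list keeps insertion order,
// so the "eldest" entry is exactly the first one inserted, and
// removeEldestEntry lets us evict it once capacity is exceeded.
class FIFOCache extends LinkedHashMap<Integer, Integer> {
    private final int capacity;

    FIFOCache(int capacity) {
        super(16, 0.75f, false); // false = insertion order, as FIFO requires
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
        return size() > capacity; // drop the head of the list when full
    }

    // get(key): value if present, otherwise -1
    public int fifoGet(int key) {
        return getOrDefault(key, -1);
    }

    // set(key, value): resetting an existing key does NOT move it to the
    // tail of the list, so its eviction order is unchanged (FIFO, not LRU)
    public void fifoSet(int key, int value) {
        put(key, value);
    }
}
```

Replaying the post's example sequence on a cache of size 3, set(1,1) through set(4,4) evicts (1,1), and set(5,5) then evicts (2,2), leaving (3,3), (4,4), (5,5).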

2. LFU algorithm

LFU (Least Frequently Used), the least-frequently-used algorithm. It is based on the idea that "if a piece of data has been used very infrequently in the recent period, then it is also unlikely to be used in the future".

Note the difference between the LFU and LRU algorithms: LRU's eviction rule is based on access time (recency), while LFU's is based on access count (frequency). A simple example:

Suppose the cache size is 3 and the data access sequence is set(2,2), set(1,1), get(2), get(1), get(2), set(3,3), set(4,4).

At set(4,4) the cache is full: key 2 has been accessed three times, key 1 twice, and key 3 only once, so LFU should evict (3,3). LRU, by contrast, should evict (1,1), the entry accessed least recently.

So the operations an LFU cache should support are:

get(key): if the key exists in the cache, return the corresponding value; otherwise return -1.

set(key, value): if the key exists in the cache, reset its value; if the key is not present, insert it into the cache, and if the cache is full, first evict the least frequently accessed entry.

To be able to evict the least-used data, the simplest design for LFU is: store the data items in an array, use a HashMap to map each key to its position in the array, and attach an access-frequency counter to each item. Whenever an item is hit, its counter is incremented; on eviction, the least frequently accessed item is removed. This way, inserting and accessing data both take O(1) time. Evicting data requires a selection scan over the array to find the index of the item with the smallest counter, whose slot is then overwritten with the new item, so eviction takes O(n) time.
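A minimal sketch of this array + HashMap design (my code, assuming integer keys and values and the get/set contract above; whether resetting an existing key counts as an access is my assumption):

```java
import java.util.HashMap;

// LFU cache following the array + HashMap idea from the text: entries
// live in a fixed array, a HashMap maps each key to its slot, and every
// hit bumps a per-slot frequency counter. get and set on an existing key
// are O(1); eviction scans the array for the smallest counter, so O(n).
class LFUCache {
    private final int[] keys, values, freqs;
    private final HashMap<Integer, Integer> index = new HashMap<>(); // key -> slot
    private int size = 0;

    LFUCache(int capacity) {
        keys = new int[capacity];
        values = new int[capacity];
        freqs = new int[capacity];
    }

    public int get(int key) {
        Integer slot = index.get(key);
        if (slot == null) return -1;
        freqs[slot]++;                 // a hit counts as one more access
        return values[slot];
    }

    public void set(int key, int value) {
        Integer slot = index.get(key);
        if (slot != null) {            // existing key: reset value, count the access
            values[slot] = value;
            freqs[slot]++;
            return;
        }
        if (size < keys.length) {      // free slot still available
            slot = size++;
        } else {                       // full: O(n) scan for the least-used slot
            slot = 0;
            for (int i = 1; i < keys.length; i++)
                if (freqs[i] < freqs[slot]) slot = i;
            index.remove(keys[slot]);
        }
        keys[slot] = key;
        values[slot] = value;
        freqs[slot] = 1;               // the insert itself is the first access
        index.put(key, slot);
    }
}
```

Running the LFU-vs-LRU example above through this cache, set(4,4) evicts key 3 (count 1) rather than key 1 (count 2), as LFU requires.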

Another implementation idea is to use a min-heap plus a HashMap. Insertion and deletion in a min-heap take O(log n) time, so this is more efficient than the first approach.
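The post suggests a min-heap; as a runnable stand-in with the same O(log n) bounds (my sketch, not the author's code), the version below groups keys by access count in a TreeMap (Java's ordered map) instead of hand-rolling an indexed heap: the minimum count is always at firstKey(), and moving a key between count buckets costs O(log n).

```java
import java.util.HashMap;
import java.util.LinkedHashSet;
import java.util.TreeMap;

// LFU with O(log n) operations: keys are grouped into buckets by access
// count inside a TreeMap, so the least-frequently-used bucket is the
// first entry. Each bucket is a LinkedHashSet, so ties are broken FIFO.
class LFUCacheLog {
    private final int capacity;
    private final HashMap<Integer, Integer> values = new HashMap<>();
    private final HashMap<Integer, Integer> counts = new HashMap<>();
    // access count -> keys with that count, oldest first
    private final TreeMap<Integer, LinkedHashSet<Integer>> buckets = new TreeMap<>();

    LFUCacheLog(int capacity) { this.capacity = capacity; }

    private void touch(int key) {      // move key from bucket c to bucket c + 1
        int c = counts.get(key);
        LinkedHashSet<Integer> bucket = buckets.get(c);
        bucket.remove(key);
        if (bucket.isEmpty()) buckets.remove(c);
        counts.put(key, c + 1);
        buckets.computeIfAbsent(c + 1, k -> new LinkedHashSet<>()).add(key);
    }

    public int get(int key) {
        if (!values.containsKey(key)) return -1;
        touch(key);
        return values.get(key);
    }

    public void set(int key, int value) {
        if (values.containsKey(key)) { values.put(key, value); touch(key); return; }
        if (values.size() == capacity) {   // evict oldest key in the lowest bucket
            LinkedHashSet<Integer> lowest = buckets.firstEntry().getValue();
            int victim = lowest.iterator().next();
            lowest.remove(victim);
            if (lowest.isEmpty()) buckets.remove(buckets.firstKey());
            values.remove(victim);
            counts.remove(victim);
        }
        values.put(key, value);
        counts.put(key, 1);
        buckets.computeIfAbsent(1, k -> new LinkedHashSet<>()).add(key);
    }
}
```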

If anyone has a more efficient approach (such as one with O(1) time complexity), I would appreciate hearing about it.

The principle and implementation of the LRU algorithm were discussed in a previous post and are not repeated here:

http://www.cnblogs.com/dolphin0520/p/3741519.html

Reference links:

http://blog.csdn.net/hexinuaa/article/details/6630384

http://blog.csdn.net/beiyetengqing/article/details/7855933

http://outofmemory.cn/wr/?u=http%3A%2F%2Fblog.csdn.net%2Fyunhua_lee%2Farticle%2Fdetails%2F7648549

http://blog.csdn.net/alexander_xfl/article/details/12993565



