Cache eviction algorithm series 1: LRU
1. LRU
1.1. Principle
The LRU (Least Recently Used) algorithm evicts data based on its historical access record. The core idea is: "If data has been accessed recently, the chance that it will be accessed again in the future is also higher."
1.2. Implementation
The most common implementation is to keep the cached data in a linked list. The detailed algorithm is as follows (a minimal code sketch follows these steps):
1. Insert new data into the head of the linked list;
2. When the cache hits (that is, the cache data is accessed), the data is moved to the head of the linked list;
3. When the linked list is full, the data at the end of the linked list is discarded.
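As an illustration only, here is a minimal Python sketch of these three steps. It uses `collections.OrderedDict` in place of an explicit linked list, and the `LRUCache` name and `get`/`put` interface are assumptions made for the example, not part of the original description.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: the head of the ordering is the most recently used entry."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        # Step 2: on a cache hit, move the entry to the head.
        self.data.move_to_end(key, last=False)
        return self.data[key]

    def put(self, key, value):
        # Step 1: new (or updated) entries go to the head.
        self.data[key] = value
        self.data.move_to_end(key, last=False)
        # Step 3: when the cache is full, discard the entry at the tail.
        if len(self.data) > self.capacity:
            self.data.popitem(last=True)
```

With this structure, a `get` on a cached key moves it to the head, so the next eviction removes whichever key has gone unused the longest.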
1.3. Analysis
[Hit rate]
When there is hot data, LRU works well, but sporadic or periodic batch operations cause the LRU hit rate to drop sharply and lead to serious cache pollution.
[Complexity]
Easy to implement.
[Cost]
On a hit, the linked list must be traversed to find the index of the hit data block, and the data must then be moved to the head of the list.
2. LRU-K
2.1. Principle
The K in LRU-K stands for the number of recent accesses, so LRU can be regarded as LRU-1. The main purpose of LRU-K is to solve the "cache pollution" problem of LRU; its core idea is to extend the criterion of "used once recently" to "used K times recently".
2.2. Implementation
Compared with LRU, LRU-K needs an additional queue to record the access history of all cached data. Data is moved into the cache only when its access count reaches K. When eviction is needed, LRU-K evicts the data whose K-th most recent access is furthest from the current time. The detailed implementation is as follows (a code sketch follows the steps):
1. Data is accessed for the first time and added to the access history list;
2. If the data in the access history list has not yet reached K accesses, it is evicted from the history according to a configured rule (FIFO or LRU);
3. When the access count of data in the history queue reaches K, the data index is removed from the history queue, the data is moved into the cache queue and cached, and the cache queue is re-sorted by access time;
4. When data in the cache queue is accessed again, the queue is re-ordered;
5. When eviction is needed, the data at the tail of the cache queue is evicted, i.e. the data whose K-th most recent access is the oldest.
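The sketch below is one possible reading of these steps in Python. The class name `LRUKCache`, the use of monotonic timestamps, and the `history_capacity` limit are illustrative assumptions; the history queue here is managed by LRU, which is only one of the rules mentioned in step 2.

```python
import time
from collections import OrderedDict, deque

class LRUKCache:
    """Sketch of LRU-K: a key is promoted into the cache only after K accesses,
    and the victim is the cached key whose K-th most recent access is oldest."""

    def __init__(self, capacity, k=2, history_capacity=1024):
        self.capacity = capacity
        self.k = k
        self.history = OrderedDict()        # not-yet-cached keys -> access timestamps
        self.history_capacity = history_capacity
        self.cache = {}                     # cached keys -> (value, last K access timestamps)

    def get(self, key):
        if key in self.cache:
            value, stamps = self.cache[key]
            stamps.append(time.monotonic())  # deque(maxlen=k) keeps only the last K accesses
            return value
        return None                          # miss: caller is expected to call put()

    def put(self, key, value):
        now = time.monotonic()
        if key in self.cache:
            stamps = self.cache[key][1]
            stamps.append(now)
            self.cache[key] = (value, stamps)
            return
        # Steps 1-2: record the access in the history queue (managed here by LRU).
        stamps = self.history.setdefault(key, deque(maxlen=self.k))
        stamps.append(now)
        self.history.move_to_end(key)
        if len(stamps) < self.k:
            if len(self.history) > self.history_capacity:
                self.history.popitem(last=False)   # evict the least recently used history entry
            return
        # Step 3: the K-th access has been reached; move the key into the cache.
        del self.history[key]
        if len(self.cache) >= self.capacity:
            # Step 5: evict the key whose K-th most recent access lies furthest in the past.
            victim = min(self.cache, key=lambda c: self.cache[c][1][0])
            del self.cache[victim]
        self.cache[key] = (value, stamps)
```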
LRU-K keeps the advantages of LRU while avoiding its shortcomings. In practice, LRU-2 is usually the best choice after weighing various factors; LRU-3 or a larger K yields a higher hit rate but adapts more poorly, because a large number of accesses is needed before old history records are cleared.
2.3. Analysis
[Hit rate]
LRU-K reduces the problem of "cache pollution", with a higher hit rate than LRU.
[Complexity]
The LRU-K cache queue is a priority queue, so the algorithm complexity and cost are relatively high.
[Cost]
Since LRU-K also needs to record objects that have been accessed but not yet put into the cache, it consumes more memory than LRU; with a large amount of data, the memory consumption can be considerable.
LRU-K also needs to sort data by time (either lazily at eviction time or continuously in real time), so its CPU consumption is higher than LRU's.
3. Two queues (2Q)
3.1. Principle
The Two Queues algorithm (referred to as 2Q below) is similar to LRU-2, except that 2Q replaces the access history queue of LRU-2 (note that this queue does not hold cached data) with a FIFO cache queue. In other words, the 2Q algorithm has two cache queues: a FIFO queue and an LRU queue.
3.2. Implementation
When data is accessed for the first time, the 2Q algorithm caches it in the FIFO queue; when it is accessed a second time, it is moved from the FIFO queue to the LRU queue. Each queue evicts data in its own way. The detailed implementation is as follows (see the sketch after the note below):
1. Newly accessed data is inserted into the FIFO queue;
2. If the data has not been accessed again in the FIFO queue, the data will be eliminated according to the FIFO rules;
3. If the data is accessed again in the FIFO queue, move the data to the LRU queue header;
4. If the data is accessed again in the LRU queue, move the data to the LRU queue header;
5. Data at the tail of the LRU queue is evicted.
Note: the FIFO queue is typically shorter than the LRU queue, but this is not a requirement of the algorithm; in practice there is no hard limit on the ratio between the two.
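A minimal Python sketch of the two-queue scheme follows. The `TwoQueueCache` name, the separate `fifo_capacity`/`lru_capacity` parameters, and the `get`/`put` interface are assumptions made for the example.

```python
from collections import OrderedDict

class TwoQueueCache:
    """Sketch of 2Q: a FIFO queue for first-time entries and an LRU queue
    for entries that have been accessed again."""

    def __init__(self, fifo_capacity, lru_capacity):
        self.fifo_capacity = fifo_capacity
        self.lru_capacity = lru_capacity
        self.fifo = OrderedDict()   # first-time entries, evicted in insertion order
        self.lru = OrderedDict()    # re-accessed entries, head = most recently used

    def get(self, key):
        if key in self.lru:
            # Step 4: a hit in the LRU queue moves the entry to the LRU head.
            self.lru.move_to_end(key, last=False)
            return self.lru[key]
        if key in self.fifo:
            # Step 3: a second access promotes the entry from FIFO to the LRU head.
            value = self.fifo.pop(key)
            self.lru[key] = value
            self.lru.move_to_end(key, last=False)
            if len(self.lru) > self.lru_capacity:
                self.lru.popitem(last=True)          # step 5: evict the LRU tail
            return value
        return None

    def put(self, key, value):
        if key in self.lru:
            self.lru[key] = value
            self.lru.move_to_end(key, last=False)
            return
        if key in self.fifo:
            self.fifo[key] = value                   # update in place, keep FIFO order
            return
        # Step 1: a newly accessed entry is inserted at the tail of the FIFO queue.
        self.fifo[key] = value
        if len(self.fifo) > self.fifo_capacity:
            self.fifo.popitem(last=False)            # step 2: plain FIFO eviction
```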
3.3. Analysis
[Hit rate]
The 2Q algorithm has a higher hit rate than LRU.
[Complexity]
Two queues are required, but both queues are relatively simple.
[Cost]
The sum of the cost of FIFO and LRU.
The hit rates of 2Q and LRU-2 are similar, and their memory consumption is also close; but because 2Q keeps the data itself (not just an access record) in its first-level queue, the data that finally gets cached is spared one extra read from the original storage or one recomputation.
4. Multi Queue (MQ)
4.1. Principle
The MQ algorithm divides data into multiple queues based on access frequency; different queues have different access priorities. The core idea is to preferentially cache data that is accessed many times.
4.2. Implementation
The MQ algorithm divides the cache into multiple LRU queues, each with a different access priority. The priority is computed from the access count (for example, rising stepwise as the count grows).
The algorithm structure is as follows: Q0, Q1, ..., Qk denote queues of different priorities, and Q-history denotes a queue that records the index and reference count of data removed from the cache. The algorithm works as follows (a minimal code sketch follows the steps):
1. Add the newly inserted data to Q0;
2. Each queue manages data according to LRU;
3. When the number of Data Accesses reaches a certain level and the priority needs to be increased, delete the data from the current queue and add it to the header of the higher-level queue;
4. To prevent high-priority data from never being evicted, when data has not been accessed within a specified time, its priority is lowered: it is removed from its current queue and added to the head of the next lower queue;
5. When eviction is needed, data is evicted from the lowest-level queue according to LRU. Whenever data is evicted from any queue, it is removed from the cache and its index is added to the head of Q-history;
6. If the data is re-accessed in Q-history, the priority is recalculated and moved to the header of the target queue;
7. Q-history removes data indexes based on LRU.
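Below is a rough Python sketch of the multi-queue structure. The number of levels, the floor(log2(count)) priority mapping, the `history_capacity` limit, and the omission of time-based demotion (step 4) are all simplifying assumptions made for illustration.

```python
from collections import OrderedDict

class MultiQueueCache:
    """Sketch of MQ with a fixed number of priority levels; time-based
    demotion (step 4) is omitted to keep the example short."""

    def __init__(self, capacity, levels=4, history_capacity=1024):
        self.capacity = capacity
        self.history_capacity = history_capacity
        self.queues = [OrderedDict() for _ in range(levels)]   # level -> {key: (value, count)}
        self.history = OrderedDict()                           # Q-history: evicted key -> count
        self.size = 0

    def _level(self, count):
        # Priority grows with the access count (here: floor(log2(count)), capped).
        level = 0
        while count >> (level + 1) and level < len(self.queues) - 1:
            level += 1
        return level

    def get(self, key):
        for queue in self.queues:
            if key in queue:
                value, count = queue.pop(key)
                count += 1
                # Steps 2-3: re-insert at the head of the queue matching the new priority.
                target = self.queues[self._level(count)]
                target[key] = (value, count)
                target.move_to_end(key, last=False)
                return value
        return None

    def put(self, key, value):
        # Sketch assumption: put() is only called after a get() miss.
        count = self.history.pop(key, 0) + 1        # step 6: resume the count from Q-history
        if self.size >= self.capacity:
            self._evict()
        target = self.queues[self._level(count)]
        target[key] = (value, count)
        target.move_to_end(key, last=False)
        self.size += 1

    def _evict(self):
        # Step 5: evict from the lowest non-empty queue, LRU within that queue.
        for queue in self.queues:
            if queue:
                key, (_, count) = queue.popitem(last=True)
                self.history[key] = count
                if len(self.history) > self.history_capacity:
                    self.history.popitem(last=False)   # step 7: Q-history trimmed by LRU
                self.size -= 1
                return
```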
4.3. Analysis
[Hit rate]
MQ reduces the problems caused by "cache pollution", and the hit rate is higher than that of LRU.
[Complexity]
MQ needs to maintain multiple queues and maintain the access time of each data, which is more complex than LRU.
[Cost]
MQ needs to record the access time of each piece of data and periodically scan all queues, so its cost is higher than LRU's.
Note: although MQ appears to have many queues, the total size of all queues is bounded by the cache capacity, so the sum of the queue lengths equals the length of a single LRU queue, and queue scanning performance is therefore similar.
5. Comparison of LRU Algorithms
Since the hit rate varies greatly with the access pattern, the comparison here is only a theoretical, qualitative analysis, not a quantitative one.
Comparison item | Ranking
--------------- | -------------------------
Hit rate        | LRU-2 > MQ(2) > 2Q > LRU
Complexity      | LRU-2 > MQ(2) > 2Q > LRU
Cost            | LRU-2 > MQ(2) > 2Q > LRU
In practice, the choice depends on business requirements and data access patterns; a higher hit rate is not always better. For example, although LRU appears to have a lower hit rate and suffers from the "cache pollution" problem, it may actually be used more widely in practice because of its simplicity and low cost.