LRU
LRU stands for Least Recently Used. An LRU cache evicts the data that has gone unused for the longest time to make room for newly read data. The assumption is that recently accessed data is also the most likely to be accessed again, so an LRU cache can improve system performance.
LRU implementation
1. Insert new data at the head of the list;
2. Whenever the cache hits (that is, cached data is accessed), move that data to the head of the list;
3. When the list is full, discard the data at the tail of the list.
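The three steps above can be sketched in Python, using collections.OrderedDict as a stand-in for the linked list (the class and method names here are illustrative, not from the article):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU sketch: most recently used entries sit at the end
    of the OrderedDict, the least recently used at the front."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)          # cache hit: move to the "head"
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)      # refresh existing entry
        elif len(self.data) >= self.capacity:
            self.data.popitem(last=False)   # full: discard the "tail"
        self.data[key] = value
```

Because OrderedDict combines a hash table with a linked list, both get and put run in O(1) time.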
LRU Analysis
Hit rate
LRU works very well when there is hot data, but occasional, periodic batch operations cause the LRU hit rate to drop sharply, and cache pollution becomes severe.
Complexity
Simple to implement.
Cost
A hit requires traversing the linked list to find the hit block, then moving that data to the head; pairing the list with a hash map (as most implementations do) removes the traversal and makes lookups O(1).
LRU-K
The K in LRU-K stands for the number of recent accesses, so LRU can be regarded as LRU-1. The main purpose of LRU-K is to solve the "cache pollution" problem of the LRU algorithm. Its core idea is to extend the criterion from "accessed once recently" to "accessed K times recently".
Implementation
Compared with LRU, LRU-K needs to maintain one additional queue to record the access history of all cached data. Data is placed in the cache only after it has been accessed K times. When data must be evicted, LRU-K evicts the data whose K-th most recent access is furthest in the past. The detailed implementation is as follows:
1. When data is accessed for the first time, it is added to the access-history list;
2. If data in the history list has not yet reached K accesses, it is evicted according to a given policy (FIFO, LRU);
3. When the number of accesses for an entry in the history queue reaches K, its index is removed from the history queue, the data is moved into the cache queue and cached, and the cache queue is re-sorted by time;
4. When data in the cache queue is accessed again, it is reordered;
5. When data must be evicted, the data at the tail of the cache queue is evicted, that is, the data whose K-th most recent access is the oldest.
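A minimal sketch of these five steps, assuming a FIFO-evicted history queue and an OrderedDict cache queue ordered by last access; all names and the history_capacity parameter are illustrative choices, not from the article:

```python
from collections import OrderedDict

class LRUKCache:
    """LRU-K sketch: entries are promoted from a history queue into
    the cache queue only after K accesses."""

    def __init__(self, capacity, k=2, history_capacity=100):
        self.capacity = capacity
        self.k = k
        self.history_capacity = history_capacity
        self.history = OrderedDict()   # key -> access count, FIFO order
        self.cache = OrderedDict()     # key -> value, most recent at the end

    def get(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)        # step 4: reorder on re-access
            return self.cache[key]
        return None

    def put(self, key, value):
        if key in self.cache:
            self.cache[key] = value
            self.cache.move_to_end(key)
            return
        # steps 1-2: record the access; evict cold history entries FIFO
        self.history[key] = self.history.get(key, 0) + 1
        if self.history[key] < self.k:
            if len(self.history) > self.history_capacity:
                self.history.popitem(last=False)
            return
        # step 3: reached K accesses, promote from history to cache
        del self.history[key]
        if len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)     # step 5: evict cache tail
        self.cache[key] = value
```

With k=2, a key is cached only on its second write, so one-off batch reads never displace hot data.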
LRU-K retains the advantages of LRU while avoiding its shortcomings. In practice, LRU-2 is usually the optimal choice after weighing various factors; LRU-3 or larger K values yield a higher hit rate but adapt poorly, since a large number of new accesses is needed before stale access history is flushed out.
Analysis
Hit rate
LRU-K reduces the problem caused by "cache pollution", so its hit rate is higher than LRU's.
Complexity
The LRU-K cache queue is a priority queue, so the algorithm's complexity and cost are relatively high.
Cost
Because LRU-K also needs to track objects that have been accessed but not yet cached, it consumes more memory than LRU, and memory consumption can be significant when the amount of data is large.
LRU-K also needs to sort by time (either at eviction time or immediately on each access), so its CPU consumption is higher than LRU's.