1. The LRU (Least Recently Used) algorithm
"Implementation": the most common is to use a linked list to save cached data
1. New data is inserted at the head of the list;
2. Whenever there is a cache hit (i.e., cached data is accessed), that data is moved to the head of the list;
3. When the list is full, the data at the tail of the linked list is discarded.
Cost
Every hit requires traversing the linked list to find the accessed entry, and the entry then still has to be moved to the head, so each hit costs O(n).
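A rough Java sketch of this naive version (class and method names are illustrative, not from the original):

import java.util.Iterator;
import java.util.LinkedList;

// Naive LRU sketch: a plain linked list of entries.
// The hit path must scan the list to find the key, so every hit is O(n).
class NaiveLruCache<K, V> {
    private static class Entry<K, V> {
        final K key;
        V value;
        Entry(K key, V value) { this.key = key; this.value = value; }
    }

    private final int capacity;
    private final LinkedList<Entry<K, V>> list = new LinkedList<>();

    NaiveLruCache(int capacity) { this.capacity = capacity; }

    V get(K key) {
        Iterator<Entry<K, V>> it = list.iterator();
        while (it.hasNext()) {                  // O(n) traversal on every hit
            Entry<K, V> e = it.next();
            if (e.key.equals(key)) {
                it.remove();                    // unlink the hit entry ...
                list.addFirst(e);               // ... and move it to the head
                return e.value;
            }
        }
        return null;                            // cache miss
    }

    void put(K key, V value) {
        if (list.size() >= capacity) {
            list.removeLast();                  // full: discard the tail
        }
        list.addFirst(new Entry<>(key, value)); // new data goes to the head
    }
}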
Change
To eliminate these costs, we change the structure to a doubly linked list (that is, each node has a prev and a next pointer) and additionally maintain a map that holds a reference from each key to its cached node:
1. New data is inserted at the head of the list, and a reference to it is put into the map;
2. Whenever the cache is used, the key is first looked up in the map. On a hit, the node is moved to the head of the list. This move is cheap: assign the node's prev node's next pointer to the node's next node, assign the node's next node's prev pointer to the node's prev node, and then place the node at the head of the list;
3. When the list is full, the data at the tail of the linked list is discarded and the corresponding entry is deleted from the map.
Results
With this change, the LRU algorithm completely removes the drawback of traversing the linked list on a cache hit: both the lookup (via the map) and the move (via the doubly linked pointers) are O(1), so performance improves greatly.
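A minimal Java sketch of this design (the names are mine, not from the original):

import java.util.HashMap;
import java.util.Map;

// LRU cache backed by a HashMap plus a doubly linked list.
// Map lookup is O(1); unlinking/relinking a node is O(1).
class LruCache<K, V> {
    private static class Node<K, V> {
        K key;
        V value;
        Node<K, V> prev, next;
    }

    private final int capacity;
    private final Map<K, Node<K, V>> map = new HashMap<>();
    private final Node<K, V> head = new Node<>(); // sentinel: head.next is the most recent entry
    private final Node<K, V> tail = new Node<>(); // sentinel: tail.prev is the least recent entry

    LruCache(int capacity) {
        this.capacity = capacity;
        head.next = tail;
        tail.prev = head;
    }

    V get(K key) {
        Node<K, V> node = map.get(key);         // O(1) lookup, no traversal
        if (node == null) return null;
        unlink(node);                           // splice the node out ...
        linkAtHead(node);                       // ... and move it to the head
        return node.value;
    }

    void put(K key, V value) {
        Node<K, V> node = map.get(key);
        if (node != null) {                     // key already cached: update and move up
            node.value = value;
            unlink(node);
            linkAtHead(node);
            return;
        }
        if (map.size() >= capacity) {           // full: evict from the tail
            Node<K, V> lru = tail.prev;
            unlink(lru);
            map.remove(lru.key);                // also delete the map entry
        }
        node = new Node<>();
        node.key = key;
        node.value = value;
        linkAtHead(node);
        map.put(key, node);
    }

    private void unlink(Node<K, V> node) {
        node.prev.next = node.next;             // prev node's next -> node's next
        node.next.prev = node.prev;             // next node's prev -> node's prev
    }

    private void linkAtHead(Node<K, V> node) {
        node.next = head.next;
        node.prev = head;
        head.next.prev = node;
        head.next = node;
    }
}

For real projects, java.util.LinkedHashMap constructed with accessOrder = true, combined with an overridden removeEldestEntry, provides the same behavior out of the box.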
2. Using Redis to cache data: keeping hot data cached, and the principle behind it
One point up front: as long as you cap the memory Redis may occupy, Redis will keep hot data in memory according to its own eviction policy, as in the configuration sketch below.
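For example, in redis.conf (the 100mb value is a placeholder; note that Redis's built-in LRU is approximate, since it samples keys rather than maintaining an exact list):

# Cap the memory Redis may use; when the cap is reached,
# evict the (approximately) least recently used keys.
maxmemory 100mb
maxmemory-policy allkeys-lru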
"Implementation":
Cache hot data by setting an expiration time on the keys in Redis itself:
1. Each time the cache is hit, the data's expiration time is reset;
2. Data that is hit often therefore never expires and is never deleted, while non-hot data reaches its expiration time and is removed, ensuring that the hot data always stays in Redis (see the sketch after this list).
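A minimal sketch of this pattern with the Jedis client (host, key names, and TTL are illustrative; a production setup would use a JedisPool):

import redis.clients.jedis.Jedis;

public class HotDataCache {
    private static final int TTL_SECONDS = 300; // illustrative expiration time

    private final Jedis jedis = new Jedis("localhost", 6379);

    public String get(String key) {
        String value = jedis.get(key);
        if (value != null) {
            jedis.expire(key, TTL_SECONDS);     // hit: reset the expiration time
        }
        return value;
    }

    public void put(String key, String value) {
        jedis.setex(key, TTL_SECONDS, value);   // store with an expiration time
    }
}

Redis 6.2+ also offers the GETEX command, which reads a key and refreshes its TTL in a single round trip.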
Principle
1. The principle is essentially that of Java's delay blocking queue, DelayQueue.
2. Setting an expiration time on cached data in Redis is equivalent to putting that cached data into a DelayQueue maintained inside Redis.
3. The DelayQueue sorts the cached data by expiration time: entries that expire soon sit at the front of the queue, entries that expire later sit at the back.
4. One or more threads poll the DelayQueue in a loop; once an element can be taken from the queue, its cached data has expired, so the element is removed and the data deleted.
5. When multiple threads query the DelayQueue at the same time, only one thread can take the head element while the others block. When the head element is removed, all blocked threads are woken up and compete for the new head; the winning thread reads the head element's remaining delay, marks the head as occupied by itself, and waits for that delay to elapse. After it finally takes the element, it wakes the other blocked threads (see the sketch after this list).
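The same mechanism expressed directly in Java (a sketch; class names are mine): elements implement Delayed, the queue orders them by remaining delay, and take() blocks until the head element has expired:

import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

// Each cached key is wrapped in a Delayed element; take() only returns
// an element once its delay has elapsed, i.e. once the entry has expired.
class ExpiringKey implements Delayed {
    final String key;
    final long expireAtMillis;

    ExpiringKey(String key, long ttlMillis) {
        this.key = key;
        this.expireAtMillis = System.currentTimeMillis() + ttlMillis;
    }

    @Override
    public long getDelay(TimeUnit unit) {
        return unit.convert(expireAtMillis - System.currentTimeMillis(), TimeUnit.MILLISECONDS);
    }

    @Override
    public int compareTo(Delayed other) {       // shorter remaining delay sorts first
        return Long.compare(getDelay(TimeUnit.MILLISECONDS), other.getDelay(TimeUnit.MILLISECONDS));
    }
}

public class ExpirationDemo {
    public static void main(String[] args) throws InterruptedException {
        DelayQueue<ExpiringKey> queue = new DelayQueue<>();
        queue.put(new ExpiringKey("user:1", 1000)); // expires after 1s
        queue.put(new ExpiringKey("user:2", 2000)); // expires after 2s

        // Reaper loop: take() blocks until the head element's delay elapses.
        while (!queue.isEmpty()) {
            ExpiringKey expired = queue.take();
            System.out.println("expired, deleting: " + expired.key);
        }
    }
}

Step 5 above describes DelayQueue's internal leader-follower scheme: one waiting thread (the leader) times its wait to the head element's remaining delay, while the rest wait until they are signalled after a take.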