Implementing an LRU cache algorithm in Java
LinkedHashMap inherits from HashMap and provides a removeEldestEntry method, which is the key to implementing an LRU policy. In addition, HashMap declares three dedicated callback hooks for LinkedHashMap: afterNodeAccess, afterNodeInsertion, and afterNodeRemoval. As their names suggest, they run after a node is accessed, inserted, or removed, and LinkedHashMap overrides them to maintain its internal linked list. Based on this behavior of LinkedHashMap, you can implement an LRUCache.
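To see the insertion hook in action, here is a minimal runnable sketch (the class name HookDemo and the println are ours, for illustration only): it overrides removeEldestEntry just to observe that LinkedHashMap consults it after every insertion.

import java.util.LinkedHashMap;
import java.util.Map;

public class HookDemo {
    public static void main(String[] args) {
        // removeEldestEntry is consulted after each insertion (driven by the
        // afterNodeInsertion hook); returning true would evict the eldest entry.
        Map<String, Integer> m = new LinkedHashMap<String, Integer>() {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Integer> eldest) {
                System.out.println("eldest checked: " + eldest.getKey());
                return false; // never evict here; we only observe the callback
            }
        };
        m.put("a", 1); // prints "eldest checked: a"
        m.put("b", 2); // prints "eldest checked: a" (insertion order: "a" is still eldest)
    }
}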
Eldest: the word eldest in LinkedHashMap literally means the oldest entry. LinkedHashMap has a field named accessOrder. When accessOrder is true, the entries in the LinkedHashMap are ordered by access recency, and the eldest entry is the least recently accessed one. When accessOrder is false, the entries are ordered by insertion, and the eldest entry is the one inserted first. The default value is false.
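A quick runnable sketch of the difference (the class name AccessOrderDemo is ours): with accessOrder set to true, reading an entry moves it to the end of the iteration order.

import java.util.LinkedHashMap;
import java.util.Map;

public class AccessOrderDemo {
    public static void main(String[] args) {
        // Third constructor argument accessOrder = true: order follows access recency.
        Map<String, Integer> m = new LinkedHashMap<>(16, 0.75f, true);
        m.put("a", 1);
        m.put("b", 2);
        m.put("c", 3);
        m.get("a");                      // touching "a" moves it to the end
        System.out.println(m.keySet());  // prints [b, c, a] -- "b" is now the eldest
    }
}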
Code implementation
To implement an LRUCache yourself, you only need to extend LinkedHashMap and override the removeEldestEntry method. The code is as follows:
// Declared as a nested class; the enclosing file needs java.util.LinkedHashMap
// and java.util.Map imported.
private static class LRUCache<K, V> extends LinkedHashMap<K, V> {
    private static final long serialVersionUID = -9111855653176630846L;

    // Maximum number of entries; an instance field so each cache has its own cap.
    private final int maxElements;

    public LRUCache(int initCap, int maxSize) {
        super(initCap, 0.75f, true); // accessOrder = true: order by access recency
        if (maxSize < 0)
            throw new IllegalArgumentException();
        this.maxElements = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Called after each insertion; evict the eldest entry once we exceed the cap.
        return size() > maxElements;
    }
}
The above code uses a maxElements field to cap the number of stored entries. When an entry is inserted and the current size exceeds this cap, one entry is removed according to the LRU policy. Note that by default LinkedHashMap maintains insertion order, so the evicted entry would simply be the first one inserted. That is why we pass true to the superclass constructor: this third parameter (accessOrder) determines how the entries in the LinkedHashMap are ordered. If it is true, entries are ordered by most recent access; if it is false, they are ordered by insertion. With accessOrder set to true, the eldest entry is the least recently used one, and a simple LRUCache implementation is complete.
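For illustration, a quick usage sketch (assuming the LRUCache class above is visible from the calling code; the capacity of 3 is chosen arbitrarily):

LRUCache<String, String> cache = new LRUCache<>(16, 3);
cache.put("a", "1");
cache.put("b", "2");
cache.put("c", "3");
cache.get("a");                     // touch "a": it becomes the most recently used
cache.put("d", "4");                // size would exceed 3, so the LRU entry "b" is evicted
System.out.println(cache.keySet()); // prints [c, a, d]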
Note:
Because LinkedHashMap itself is not thread-safe, this LRUCache is not thread-safe either. If you need to access it from multiple threads, you can wrap it like this: Map<K, V> cache = Collections.synchronizedMap(new LRUCache<>(10, 10)). The wrapped cache can then execute get/put operations from multiple threads. However, iteration over a map obtained this way is still not thread-safe, so the cache cannot be traversed concurrently without extra care: when traversing a synchronizedMap, you must synchronize on the map itself.
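A sketch of the wrapping and of safe traversal (the class name SynchronizedCacheDemo and the String key/value types are ours, and we assume the LRUCache above is visible here):

import java.util.Collections;
import java.util.Map;

public class SynchronizedCacheDemo {
    public static void main(String[] args) {
        Map<String, String> cache = Collections.synchronizedMap(new LRUCache<>(10, 10));
        cache.put("key", "value"); // individual get/put calls are synchronized by the wrapper

        // Iteration is NOT covered by the wrapper; hold the map's own lock while traversing.
        synchronized (cache) {
            for (Map.Entry<String, String> e : cache.entrySet()) {
                System.out.println(e.getKey() + " = " + e.getValue());
            }
        }
    }
}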
Implementing a simple LRU cache in Java
Applications often need to cache some data in memory. The most commonly used classes in Java are HashMap and Hashtable. If you need more sophisticated caching, you can use JBoss Cache, OSCache, or EHCache. Even with such caching systems, you may still want to cache some objects locally for quick access. When building these caches, you often run into an annoying problem: you must carefully control the cache size to prevent it from occupying too much memory, because if the cache keeps growing, program performance will suffer.
A simple solution is to set a maximum size for the in-memory cache and use the LRU (least recently used) replacement algorithm. This keeps memory usage predictable and retains only recently used data in the cache.
JDK 1.4 introduced a new collection class named LinkedHashMap, which has several advantages:
- It can maintain the order of entries. Its copy constructor, LinkedHashMap(Map<? extends K, ? extends V> m), produces a map whose traversal order matches the insertion order of the source map; TreeMap is more expensive for this purpose (see the sketch after this list).
- It has a removeEldestEntry(Map.Entry) method that can be overridden to define a replacement policy. This is the main method we use to build the LRU cache.
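A minimal sketch of the first point (the class name CopyOrderDemo is ours): copying a map into a LinkedHashMap preserves the source's iteration order.

import java.util.LinkedHashMap;
import java.util.Map;

public class CopyOrderDemo {
    public static void main(String[] args) {
        Map<String, Integer> src = new LinkedHashMap<>();
        src.put("first", 1);
        src.put("second", 2);
        src.put("third", 3);

        // The copy constructor re-inserts entries in the source's iteration order.
        Map<String, Integer> copy = new LinkedHashMap<>(src);
        System.out.println(copy.keySet()); // prints [first, second, third]
    }
}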
OK. The following is an LRU cache implemented using LinkedHashMap:
import java.util.*;

public class SimpleLRU {
    private static final int MAX_ENTRIES = 50;

    // accessOrder = true keeps entries ordered by access recency;
    // removeEldestEntry evicts the LRU entry once the cap is exceeded.
    private Map<String, String> mCache = new LinkedHashMap<String, String>(MAX_ENTRIES, .75F, true) {
        protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
            return size() > MAX_ENTRIES;
        }
    };

    public SimpleLRU() {
        for (int i = 0; i < 100; i++) {
            String numberStr = String.valueOf(i);
            mCache.put(numberStr, numberStr);
            System.out.print("\rSize = " + mCache.size()
                    + "\tCurrent value = " + i
                    + "\tLast Value in cache = " + mCache.get(numberStr));
            try {
                Thread.sleep(10);
            } catch (InterruptedException ex) {
            }
        }
        System.out.println("");
    }

    public static void main(String[] args) {
        new SimpleLRU();
    }
}
This code creates a simple LRU cache with a capacity of 50 entries. The key parts are the true argument passed when creating the LinkedHashMap, which maintains access order, and the overridden removeEldestEntry method. When you run the program, you can see that the cache size grows until it reaches 50 and then stops growing; from that point on, each insertion evicts the least recently used value. The final output looks like this:
Size = 50 Current value = 99 Last Value in cache = 99
Now you can use it with confidence: once you set a maximum size for the cache, you no longer need to worry about it growing without bound.