Deep Source Analysis: LruCache

Source: Internet
Author: User
Tags: garbage collection, map, class

Introduction: Recently, many bloggers have mentioned being asked "What is the principle of LruCache?" in interviews. I realized I had never really studied this topic, so I first read some blog posts to pick up the relevant background, and then went through the source code. Now that I roughly understand what LruCache is, I'm writing this post as my study notes.

The Past and Present of LruCache: What Exactly Is LruCache?

I don't like informal, second-hand definitions, so here is the official Android definition of LruCache:

A cache that holds strong references to a limited number of values. Each time a value is accessed, it is moved to the head of a queue. When a value is added to a full cache, the value at the end of that queue is evicted and may become eligible for garbage collection.

In other words: LruCache is a cache that holds strong references to a limited number of cached objects. Each time a cached object is accessed, it is moved to the head of a queue. When an object is added to an LruCache that has reached its limit, the object at the end of the queue is removed and may be reclaimed by the garbage collector.

From this definition we know that LruCache stores its objects in a queue, and that when LruCache reaches its storage limit, it makes room for new elements by removing the element at the tail of the queue.

What is the point of the queue in LruCache?

The first thing to know is that the "Lru" in LruCache stands for "Least Recently Used", an eviction algorithm. That means LruCache must be able to determine which cached object was least recently used, and that is exactly why a queue is introduced.

The point of the queue is that every time a cached object in the LruCache is accessed, its position in the queue changes: it moves to the head. Suppose the queue holds n elements, and n-1 of them have been accessed at various frequencies while element A has never been accessed; then A must end up at the tail of the queue and becomes the "least recently used element".
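Before diving into the Android source, the LRU policy itself can be sketched in a few lines of plain Java. The following is a minimal, hypothetical `SimpleLruCache` (not the Android class analyzed below), built on `java.util.LinkedHashMap` with `accessOrder = true` and an overridden `removeEldestEntry`, so the least recently used entry is evicted once capacity is exceeded:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache sketch: accessOrder = true keeps entries in
// least-recently-used-first iteration order; removeEldestEntry evicts
// the eldest entry whenever the cache grows past maxEntries.
public class SimpleLruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public SimpleLruCache(int maxEntries) {
        super(16, 0.75f, true); // third argument: access order
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;
    }

    public static void main(String[] args) {
        SimpleLruCache<String, String> cache = new SimpleLruCache<>(2);
        cache.put("a", "A");
        cache.put("b", "B");
        cache.get("a");      // "a" is now the most recently used
        cache.put("c", "C"); // evicts "b", the least recently used
        System.out.println(cache.keySet()); // prints [a, c]
    }
}
```

Note that "b" is evicted rather than "a": accessing "a" refreshed its position, which is exactly the behavior the queue description above is about.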

The Principle of LruCache

A Preliminary Look at the LruCache Source

There is too much code to paste all of it at once, so I will paste it piece by piece, following my train of thought.

    public class LruCache<K, V> {
        private final LinkedHashMap<K, V> map;

        /** Size of this cache in units. Not necessarily the number of elements. */
        private int size;
        private int maxSize;

        private int putCount;
        private int createCount;
        private int evictionCount;
        private int hitCount;
        private int missCount;

From the code we can tell that LruCache does not extend any other class, and that it only involves one LinkedHashMap object plus a handful of int counters. In other words, the key to how LruCache implements its core logic should be LinkedHashMap.

So let's take a look at the source code of LinkedHashMap first.

Anatomy of LinkedHashMap

What Is LinkedHashMap?

Again, quoting the official explanation:

LinkedHashMap is an implementation of Map that guarantees iteration order. All optional operations are supported.

Entries are kept in a doubly-linked list. The iteration order is, by default, the order in which keys were inserted. Reinserting an already-present key doesn't change the order. If the three-argument constructor is used, and accessOrder is specified as true, the iteration will be in the order that entries were accessed. The access order is affected by put, get, and putAll operations, but not by operations on the collection views.

LinkedHashMap is an implementation of Map (it is a subclass of HashMap, which in turn extends AbstractMap, which implements the Map interface) that guarantees iteration order. The entries in a LinkedHashMap are kept in a doubly-linked list. By default, the iteration order is the insertion order of the keys, and re-inserting a key that already exists does not change that order.

If the three-argument constructor is used and its third parameter, boolean accessOrder, is true, then the iteration order is the order in which entries were accessed. That order is affected by the put, get, and putAll methods, but not by operations on the collection views.

LinkedHashMap's documentation confirms our earlier speculation that the key to LruCache's core logic is LinkedHashMap. Once we understand how LinkedHashMap works, the principle of LruCache is no longer a problem.

A LinkedHashMap Usage Example

Before analyzing LinkedHashMap, let's see what it can do; after all, it is easier to analyze the internal logic once we have seen the external behavior. Here is a simple piece of Java code:

    public class Demo {
        public static void main(String[] args) {
            LinkedHashMap<String, String> accessOrderMap =
                    new LinkedHashMap<>(10, 0.75F, true);
            System.out.println("Before processing");
            // Populate the LinkedHashMap whose order is affected by access
            accessOrderMap.put("one", "one");
            accessOrderMap.put("two", "two");
            accessOrderMap.put("three", "three");
            // Output without any processing
            for (Entry<String, String> entry : accessOrderMap.entrySet()) {
                System.out.println(entry.getValue());
            }
            System.out.println("After processing");
            // Access an element in the map to change the order
            accessOrderMap.get("one");
            // Output after processing
            for (Entry<String, String> entry : accessOrderMap.entrySet()) {
                System.out.println(entry.getValue());
            }
        }
    }

Here is the output:

    Before processing
    one
    two
    three
    After processing
    two
    three
    one

As we can see, the position of "one" has clearly changed after it was accessed: it now comes after "two" and "three" in the iteration order, meaning it has been moved to the most-recently-used end of the list.

The Implementation Principle of LinkedHashMap

Let's look directly at the third constructor of LinkedHashMap:

    public LinkedHashMap(
            int initialCapacity, float loadFactor, boolean accessOrder) {
        super(initialCapacity, loadFactor);
        init();
        this.accessOrder = accessOrder;
    }

    @Override
    void init() {
        header = new LinkedEntry<K, V>();
    }

As we can see, this constructor invokes the parent class's constructor, creates a LinkedEntry object as the list header, and sets accessOrder to the value passed in. So what is this LinkedEntry?

    /** LinkedEntry adds nxt/prv double-links to plain HashMapEntry. */
    static class LinkedEntry<K, V> extends HashMapEntry<K, V> {
        LinkedEntry<K, V> nxt;
        LinkedEntry<K, V> prv;

        /** Create the header entry */
        LinkedEntry() {
            super(null, null, 0, null);
            nxt = prv = this;
        }

        /** Create a normal entry */
        LinkedEntry(K key, V value, int hash, HashMapEntry<K, V> next,
                LinkedEntry<K, V> nxt, LinkedEntry<K, V> prv) {
            super(key, value, hash, next);
            this.nxt = nxt;
            this.prv = prv;
        }
    }

As we can see, LinkedEntry is a node of the doubly-linked list used to store the data. Now we just need to find out how LinkedHashMap, depending on the value of accessOrder, manipulates the LinkedEntry list when the program calls get, put, and the other methods, and then the whole implementation of LinkedHashMap will be clear.

It is easy to trace how the get method checks accessOrder and reacts accordingly:

    /**
     * Returns the value of the mapping with the specified key.
     *
     * @param key the key.
     * @return the value of the mapping with the specified key, or {@code null}
     *         if no mapping for the specified key is found.
     */
    @Override
    public V get(Object key) {
        /*
         * This method is overridden to eliminate the need for a polymorphic
         * invocation on the superclass at the expense of code duplication.
         */
        if (key == null) {
            HashMapEntry<K, V> e = entryForNullKey;
            if (e == null)
                return null;
            if (accessOrder)
                makeTail((LinkedEntry<K, V>) e);
            return e.value;
        }

        int hash = Collections.secondaryHash(key);
        HashMapEntry<K, V>[] tab = table;
        for (HashMapEntry<K, V> e = tab[hash & (tab.length - 1)];
                e != null; e = e.next) {
            K eKey = e.key;
            if (eKey == key || (e.hash == hash && key.equals(eKey))) {
                if (accessOrder)
                    makeTail((LinkedEntry<K, V>) e);
                return e.value;
            }
        }
        return null;
    }


As can be seen from the code, as long as accessOrder is true, the makeTail method is invoked on the entry that was just accessed, so let's step into makeTail:

    /**
     * Relinks the given entry to the tail of the list. Under access ordering,
     * this method is invoked whenever the value of a pre-existing entry is
     * read by Map.get or modified by Map.put.
     */
    private void makeTail(LinkedEntry<K, V> e) {
        // Unlink e
        e.prv.nxt = e.nxt;
        e.nxt.prv = e.prv;

        // Relink e as tail
        LinkedEntry<K, V> header = this.header;
        LinkedEntry<K, V> oldTail = header.prv;
        e.nxt = header;
        e.prv = oldTail;
        oldTail.nxt = header.prv = e;
        modCount++;
    }

Anyone who has studied a little data structures will understand the logic here: it changes the position of a node, moving it to the tail of the doubly-linked list, which is the most-recently-used position.
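The relinking above can be reproduced in isolation. The following is a minimal sketch (a hypothetical `MakeTailDemo` with a bare `Node` class, not the real map entry) of the same operation: unlink a node from a circular doubly-linked list with a sentinel header, then re-insert it just before the header, i.e. at the tail:

```java
// Circular doubly-linked list with a sentinel header, mirroring the
// nxt/prv links of LinkedEntry: header.nxt is the eldest node,
// header.prv is the tail (most recently used).
public class MakeTailDemo {
    static class Node {
        String key;
        Node nxt, prv;
        Node(String key) { this.key = key; }
    }

    final Node header = new Node(null); // sentinel node

    MakeTailDemo() { header.nxt = header.prv = header; }

    void addTail(Node e) { // link a new node just before the header
        Node oldTail = header.prv;
        e.nxt = header;
        e.prv = oldTail;
        oldTail.nxt = header.prv = e;
    }

    void makeTail(Node e) { // same relinking as LinkedHashMap.makeTail
        e.prv.nxt = e.nxt;  // unlink e
        e.nxt.prv = e.prv;
        addTail(e);         // relink e as tail
    }

    java.util.List<String> order() { // iteration order, eldest first
        java.util.List<String> keys = new java.util.ArrayList<>();
        for (Node n = header.nxt; n != header; n = n.nxt) keys.add(n.key);
        return keys;
    }

    public static void main(String[] args) {
        MakeTailDemo list = new MakeTailDemo();
        Node a = new Node("a"), b = new Node("b"), c = new Node("c");
        list.addTail(a); list.addTail(b); list.addTail(c);
        list.makeTail(a); // "a" was accessed: move it to the tail
        System.out.println(list.order()); // prints [b, c, a]
    }
}
```

After `makeTail(a)`, "a" sits at the tail while "b" has become the eldest node, which is exactly what happens to an entry accessed via get.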

Now we know how get changes the queue; what about put? Searching for put, we find there is no such method in this class. Strange... but don't panic: if we patiently go through the code, we will find the addNewEntry(K key, V value, int hash, int index) and addNewEntryForNullKey(V value) methods, and these two are where put's queue-changing logic lives. Let's take a look:

    @Override
    void addNewEntry(K key, V value, int hash, int index) {
        LinkedEntry<K, V> header = this.header;

        // Remove eldest entry if instructed to do so.
        LinkedEntry<K, V> eldest = header.nxt;
        if (eldest != header && removeEldestEntry(eldest)) {
            remove(eldest.key);
        }

        // Create new entry, link it on to list, and put it into table
        LinkedEntry<K, V> oldTail = header.prv;
        LinkedEntry<K, V> newTail = new LinkedEntry<K, V>(
                key, value, hash, table[index], header, oldTail);
        table[index] = oldTail.nxt = header.prv = newTail;
    }

    @Override
    void addNewEntryForNullKey(V value) {
        LinkedEntry<K, V> header = this.header;

        // Remove eldest entry if instructed to do so.
        LinkedEntry<K, V> eldest = header.nxt;
        if (eldest != header && removeEldestEntry(eldest)) {
            remove(eldest.key);
        }

        // Create new entry, link it on to list, and put it into table
        LinkedEntry<K, V> oldTail = header.prv;
        LinkedEntry<K, V> newTail = new LinkedEntry<K, V>(
                null, value, 0, null, header, oldTail);
        entryForNullKey = oldTail.nxt = header.prv = newTail;
    }

In the addNewEntry method, when a new element is added, a check decides whether to remove the eldest entry. But when we step into removeEldestEntry, we find that its return value is the constant false, which means the eldest entry is never removed here. What does that mean?

Actually, that makes sense. LinkedHashMap is a storage structure that does not limit its own capacity (its implementation is based on a doubly-linked list), so there is no reason for its default implementation to remove the eldest element. The corresponding eviction logic is instead implemented in LruCache.

Exploring LruCache Again

Through the analysis above, we already know how LruCache's core, LinkedHashMap, identifies the "least recently used object". So how does LruCache make use of this property of LinkedHashMap? Let's find out.

From the get/put methods we can see that LruCache controls the number of elements via trimToSize: when the cache exceeds its capacity, adding an element triggers eviction:

    private void trimToSize(int maxSize) {
        while (true) {
            K key;
            V value;
            synchronized (this) {
                if (size < 0 || (map.isEmpty() && size != 0)) {
                    throw new IllegalStateException(getClass().getName()
                            + ".sizeOf() is reporting inconsistent results!");
                }

                if (size <= maxSize) {
                    break;
                }

                // BEGIN layoutlib change
                // Get the last item in the linked list.
                // This is not efficient, the goal here is to minimize the changes
                // compared to the platform version.
                Map.Entry<K, V> toEvict = null;
                for (Map.Entry<K, V> entry : map.entrySet()) {
                    toEvict = entry;
                }
                // END layoutlib change

                if (toEvict == null) {
                    break;
                }

                key = toEvict.getKey();
                value = toEvict.getValue();
                map.remove(key);
                size -= safeSizeOf(key, value);
                evictionCount++;
            }

            entryRemoved(true, key, value, null);
        }
    }

The code first performs some sanity checks, and only when size > maxSize does it execute the eviction logic: it traverses the LinkedHashMap's entries to find the entry to evict (what we called the expired element) and, if it is not null, removes it, looping until the cache fits within maxSize again.
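The eviction loop can be sketched in plain Java. Below is a minimal, hypothetical `trimToSize` helper (not the Android method), simplified so that each entry counts as size 1 and the least recently used entry, which comes first in an access-ordered `java.util.LinkedHashMap`'s iteration order, is removed until the map fits:

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Simplified trim loop: while the map is over capacity, remove the
// eldest entry (the first one in access-order iteration) and count it.
public class TrimDemo {
    static <K, V> int trimToSize(LinkedHashMap<K, V> map, int maxSize) {
        int evictionCount = 0;
        while (map.size() > maxSize) {
            Iterator<Map.Entry<K, V>> it = map.entrySet().iterator();
            it.next();   // the eldest (least recently used) entry
            it.remove();
            evictionCount++;
        }
        return evictionCount;
    }

    public static void main(String[] args) {
        LinkedHashMap<String, String> map = new LinkedHashMap<>(16, 0.75f, true);
        map.put("a", "A");
        map.put("b", "B");
        map.put("c", "C");
        map.get("a"); // "a" becomes most recently used
        int evicted = trimToSize(map, 2);
        System.out.println(evicted + " " + map.keySet()); // prints 1 [c, a]
    }
}
```

Here "b" is evicted because accessing "a" refreshed it; the real LruCache additionally tracks sizes via sizeOf/safeSizeOf and notifies entryRemoved for each eviction.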


Finally, a few points worth noting when using LruCache:
1. If your cached objects hold resources that need to be explicitly released, override the entryRemoved method and release them there.

2. By default, the size of the cache is measured in number of entries, but you can override the sizeOf method to measure size in different units. Example: a bitmap cache with a maximum total size of 4 MiB:

    int cacheSize = 4 * 1024 * 1024; // 4 MiB
    LruCache<String, Bitmap> bitmapCache = new LruCache<String, Bitmap>(cacheSize) {
        @Override
        protected int sizeOf(String key, Bitmap value) {
            return value.getByteCount();
        }
    };

3. Keys and values passed into LruCache cannot be null; otherwise the meaning of the get/put/remove methods would become ambiguous.
