Android LRU Cache Algorithm Implementation: Learning Notes (1)

Source: Internet
Author: User
Tags: ConcurrentModificationException


When developing mobile apps, we often have to handle large amounts of data, and two constraints shape the design: 1. the limited memory of a phone, while still keeping the application responsive; 2. minimizing network traffic, because an application that wastes the user's data plan will be uninstalled without hesitation. When accessing large amounts of data, a cache is the natural solution, and a good cache must consider: 1. access speed; 2. an eviction policy for stale entries; 3. ideally, safe concurrent access. In this article we discuss a cache built on the LRU (least recently used) policy, using an image cache as the running example of cache implementation in Android application development.

First, let's look at the caches Google officially recommends: the LruCache class added in Android 3.0 and DiskLruCache (its disk-based counterpart). Reading their code shows that both are built on the JDK's LinkedHashMap, so we will start with a source-code analysis of LinkedHashMap.

From the source we can see that LinkedHashMap extends HashMap. Besides storing elements in the HashMap's table, it extends HashMap.Entry so that the entries are also woven into a doubly linked list connecting them to one another. Let's first look at the node implementation of LinkedHashMap:

    /**
     * LinkedHashMap entry.
     */
    private static class Entry<K,V> extends HashMap.Entry<K,V> {
        // These fields comprise the doubly linked list used for iteration.
        Entry<K,V> before, after;
        ...
    }
The Entry node of LinkedHashMap inherits from HashMap.Entry and adds two references, before and after, pointing to the previous and next elements in the list. In other words, LinkedHashMap is a HashMap with a doubly linked list layered on top of it.
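The doubly-linked-list trick can be sketched in isolation. The following is a minimal standalone illustration, not the JDK's actual private types: the Node class and its field names mirror LinkedHashMap.Entry, including the addBefore splice shown later in this article.

```java
public class AddBeforeDemo {
    static class Node {
        String key;
        Node before, after;
        Node(String key) { this.key = key; }

        // Splice this node into the list just before existingEntry,
        // mirroring LinkedHashMap.Entry.addBefore.
        void addBefore(Node existingEntry) {
            after = existingEntry;
            before = existingEntry.before;
            before.after = this;
            after.before = this;
        }
    }

    public static void main(String[] args) {
        // header is a sentinel that points to itself when the list is empty.
        Node header = new Node("header");
        header.before = header.after = header;

        new Node("a").addBefore(header); // list: header <-> a
        new Node("b").addBefore(header); // list: header <-> a <-> b

        // Walk the after pointers: eldest first, header terminates the loop.
        StringBuilder sb = new StringBuilder();
        for (Node n = header.after; n != header; n = n.after) {
            sb.append(n.key).append(' ');
        }
        System.out.println(sb.toString().trim()); // a b
    }
}
```

Because the list is circular through the sentinel, inserting "before the header" always means appending at the tail, with no special cases for an empty list.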

Now let's look at the initialization of LinkedHashMap, using the constructor with the most parameters:

    /**
     * Constructs an empty <tt>LinkedHashMap</tt> instance with the
     * specified initial capacity, load factor and ordering mode.
     *
     * @param  initialCapacity the initial capacity
     * @param  loadFactor      the load factor
     * @param  accessOrder     the ordering mode - <tt>true</tt> for
     *         access-order, <tt>false</tt> for insertion-order
     * @throws IllegalArgumentException if the initial capacity is negative
     *         or the load factor is nonpositive
     */
    public LinkedHashMap(int initialCapacity,
                         float loadFactor,
                         boolean accessOrder) {
        super(initialCapacity, loadFactor);
        // accessOrder selects the ordering: false (the default) keeps
        // insertion order, true keeps access order.
        this.accessOrder = accessOrder;
    }
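A quick sketch of what the accessOrder flag changes in practice (a minimal standalone example, not part of the JDK source being analyzed):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class OrderDemo {
    public static void main(String[] args) {
        // accessOrder = true: iteration follows access order,
        // least-recently-used entry first.
        Map<String, String> access =
                new LinkedHashMap<>(16, 0.75f, true);
        access.put("a", "1");
        access.put("b", "2");
        access.put("c", "3");
        access.get("a");                     // "a" moves to the tail
        System.out.println(access.keySet()); // [b, c, a]

        // accessOrder = false (the default): insertion order is kept.
        Map<String, String> insertion = new LinkedHashMap<>();
        insertion.put("a", "1");
        insertion.put("b", "2");
        insertion.put("c", "3");
        insertion.get("a");                     // no reordering
        System.out.println(insertion.keySet()); // [a, b, c]
    }
}
```

The access-ordered variant is the one relevant to an LRU cache: the entry at the head of the iteration is always the least recently used.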
LinkedHashMap also overrides the init() method of the base class:

    /**
     * Called by superclass constructors and pseudoconstructors (clone,
     * readObject) before any entries are inserted into the map.
     * Initializes the chain.
     */
    void init() {
        header = new Entry<K,V>(-1, null, null, null);
        header.before = header.after = header;
    }
init() creates the header sentinel and points it at itself, forming an empty circular list. Next we focus on the two most important methods, put and get. LinkedHashMap does not override HashMap's put method itself; it only overrides addEntry, which put calls whenever a new node is inserted:

    /**
     * This override alters behavior of superclass put method. It causes newly
     * allocated entry to get inserted at the end of the linked list and
     * removes the eldest entry if appropriate.
     */
    void addEntry(int hash, K key, V value, int bucketIndex) {
        createEntry(hash, key, value, bucketIndex);

        // Remove eldest entry if instructed, else grow capacity if appropriate
        Entry<K,V> eldest = header.after;
        if (removeEldestEntry(eldest)) {
            removeEntryForKey(eldest.key);
        } else {
            if (size >= threshold)
                resize(2 * table.length);
        }
    }

    /**
     * This override differs from addEntry in that it doesn't resize the
     * table or remove the eldest entry.
     */
    void createEntry(int hash, K key, V value, int bucketIndex) {
        HashMap.Entry<K,V> old = table[bucketIndex];
        Entry<K,V> e = new Entry<K,V>(hash, key, value, old);
        table[bucketIndex] = e;
        e.addBefore(header);
        size++;
    }
Look at createEntry, which builds the new node: it first places the entry at the head of its hash bucket, then links the new Entry into the list by calling e.addBefore(header), whose code is as follows:

        /**
         * Inserts this entry before the specified existing entry in the list.
         */
        private void addBefore(Entry<K,V> existingEntry) {
            after  = existingEntry;
            before = existingEntry.before;
            before.after = this;
            after.before = this;
        }
This code splices the new node in just before the header, i.e. at the tail of the list. Going back to addEntry, note the lines Entry<K,V> eldest = header.after; if (removeEldestEntry(eldest)): the node right after the header is the eldest one. The removeEldestEntry(eldest) method is implemented as:

    protected boolean removeEldestEntry(Map.Entry<K,V> eldest) {
        return false;
    }

removeEldestEntry returns false by default, which means the eldest node is never removed. By overriding removeEldestEntry in a subclass, we can implement an LRU eviction policy. The following figures show what happens when the put method executes this code:

   map.put("22", "xxx"); 
First, the structure of the map before the call is as follows:

[Figure: the map's hash table and circular linked list before put("22", "xxx")]
To keep the reference arrows readable, the before and after references of most Entry nodes in the HashMap are omitted; we only focus on the header's references. After map.put("22", "xxx") the data structure becomes:

[Figure: the map after put("22", "xxx"), with the new node linked in before the header]
Via the hash algorithm, the node with key "22" is placed into its bucket in the table, and it is linked in at the tail of the list: header.before now points to the node whose key is "22". Having analyzed put, let's analyze the implementation of the get method. The code is as follows:

    public V get(Object key) {
        Entry<K,V> e = (Entry<K,V>)getEntry(key);
        if (e == null)
            return null;
        e.recordAccess(this);
        return e.value;
    }
The get method is very simple. It first looks up the entry for the key; if it does not exist, it returns null, otherwise it runs e.recordAccess(this):

    /**
     * This method is invoked by the superclass whenever the value
     * of a pre-existing entry is read by Map.get or modified by Map.set.
     * If the enclosing Map is access-ordered, it moves the entry
     * to the end of the list; otherwise, it does nothing.
     */
    void recordAccess(HashMap<K,V> m) {
        LinkedHashMap<K,V> lm = (LinkedHashMap<K,V>)m;
        if (lm.accessOrder) {   // check the ordering mode of this LinkedHashMap
            lm.modCount++;
            remove();
            addBefore(lm.header);
        }
    }
In recordAccess, when accessOrder is false (insertion order) the method does nothing and returns immediately. When the map is access-ordered, remove() unlinks the node from the list and addBefore(lm.header) reinserts it just before the header, i.e. at the tail. In this way, every access reorders the list by recency.
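Putting the two pieces together (an access-ordered map plus a removeEldestEntry override) yields a minimal LRU cache. This is an illustrative sketch, not Android's android.util.LruCache; the class name and the maxEntries parameter are my own choices:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public LruCache(int maxEntries) {
        super(16, 0.75f, true);   // true = access order
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // eldest is header.after, the least-recently-used entry;
        // returning true here makes addEntry evict it.
        return size() > maxEntries;
    }

    public static void main(String[] args) {
        LruCache<String, String> cache = new LruCache<>(2);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.get("a");          // touch "a" so "b" becomes the eldest
        cache.put("c", "3");     // exceeds maxEntries, evicts "b"
        System.out.println(cache.keySet()); // [a, c]
    }
}
```

Note that this sketch caps the number of entries; a real image cache such as Android's LruCache instead bounds the total byte size of the cached bitmaps.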

The following example illustrates how the data structure of an access-ordered LinkedHashMap changes during a get operation. The code we run is:

    map.get("15"); // assume that the value corresponding to key "15" is "bbb"

Starting from the state after map.put("22", "xxx"), executing map.get("15") changes the structure as follows:

[Figure: the list order after map.get("15"), with the "bbb" node moved before the header]
After get executes, the node holding "bbb" itself is unchanged; only the pointers of the linked list have changed. Consequently, an iterator over the map will now visit the entries in a different order.

From this we know that the LRU node is always the one right after the header, so when we override removeEldestEntry to implement an LRU policy, the node pointed to by header.after is the first to be evicted. Next, let's look at the implementation of the LinkedHashMap iterator:

    private abstract class LinkedHashIterator<T> implements Iterator<T> {
        Entry<K,V> nextEntry    = header.after;
        Entry<K,V> lastReturned = null;

        /**
         * The modCount value that the iterator believes that the backing
         * List should have.  If this expectation is violated, the iterator
         * has detected concurrent modification.
         */
        int expectedModCount = modCount;

        public boolean hasNext() {
            return nextEntry != header;
        }

        public void remove() {
            if (lastReturned == null)
                throw new IllegalStateException();
            if (modCount != expectedModCount)
                throw new ConcurrentModificationException();

            LinkedHashMap.this.remove(lastReturned.key);
            lastReturned = null;
            expectedModCount = modCount;
        }

        Entry<K,V> nextEntry() {
            if (modCount != expectedModCount)
                throw new ConcurrentModificationException();
            if (nextEntry == header)
                throw new NoSuchElementException();

            // The iterator follows the after references, so elements are
            // visited in linked-list order starting from header.after.
            Entry<K,V> e = lastReturned = nextEntry;
            nextEntry = e.after;
            return e;
        }
    }
From the LinkedHashIterator inner class we can see that the iterator visits elements starting at the node after the header and following the after references. That concludes this brief analysis of LinkedHashMap. In the next part, based on these structural features, we will explain why LinkedHashMap is well suited for a fast-access cache, and combine it with common open-source implementations to achieve a degree of concurrency in an Android cache.
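The modCount check in the iterator above also explains a classic trap (and this article's ConcurrentModificationException tag): on an access-ordered map, even a plain get() is a structural reordering that bumps modCount, so calling it while iterating fails fast. A minimal sketch:

```java
import java.util.ConcurrentModificationException;
import java.util.LinkedHashMap;
import java.util.Map;

public class CmeDemo {
    public static void main(String[] args) {
        // accessOrder = true: get() reorders the list and bumps modCount.
        Map<String, String> map = new LinkedHashMap<>(16, 0.75f, true);
        map.put("a", "1");
        map.put("b", "2");
        try {
            for (String key : map.keySet()) {
                // The iterator's next step sees modCount != expectedModCount
                // and throws ConcurrentModificationException.
                map.get(key);
            }
        } catch (ConcurrentModificationException e) {
            System.out.println("CME caught");
        }
    }
}
```

With accessOrder set to false, the same loop would be safe, because get() would never call modCount++ in recordAccess.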








