Android Source Code Parsing: LruCache


LRU

Before we read the LruCache source code, let's first look at what LRU is. LRU stands for Least Recently Used and is a cache replacement algorithm. Cache capacity is limited, which raises a problem: when new content needs to be added to the cache but there is no free space left, how do we discard part of the existing content to make room? Many algorithms address this problem, such as LRU, LFU, FIFO, and so on.
It is important to distinguish between LRU and LFU. The former evicts the least recently used entry, that is, the one that has gone unused for the longest time; the latter evicts the least frequently used entry, that is, the one used the fewest times over a period. For example, suppose the order in which we access cached objects is a b c b d a c a. When an object must be evicted, the LRU algorithm evicts b, because it has gone unused the longest; the LFU algorithm evicts d, because it was used only once and is the least frequently used in this period.
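
As a quick illustration (a sketch of my own, not part of the LruCache source), java.util.LinkedHashMap with accessOrder set to true keeps entries ordered from least to most recently used, so replaying the sequence above shows directly that b is the LRU victim:

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class LruOrderDemo {
        public static void main(String[] args) {
            // accessOrder = true: iteration runs from least to most recently used
            Map<String, Integer> lru = new LinkedHashMap<>(0, 0.75f, true);
            for (String k : new String[] {"a", "b", "c", "b", "d", "a", "c", "a"}) {
                Integer count = lru.get(k);                 // a hit moves the entry to the tail
                lru.put(k, count == null ? 1 : count + 1);  // a put also counts as an access
            }
            // The head of the iteration order is the next eviction candidate: b
            System.out.println(lru.keySet()); // prints [b, d, c, a]
        }
    }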
Now that we know what LRU is, let's look at how LruCache is implemented.

LinkedHashMap

Let's look at the structure of LruCache first. Its member variables and constructor are defined as follows (this is the code from android-23):

    private final LinkedHashMap<K, V> map;

    /** Current size of the cached contents. Not necessarily the number of
        elements; for an image cache, it is usually the image memory size. */
    private int size;
    /** Maximum cacheable size. */
    private int maxSize;

    /** Number of times put() was called. */
    private int putCount;
    /** Number of times create(Object) was called. */
    private int createCount;
    /** Number of entries that have been evicted. */
    private int evictionCount;
    /** Number of times get() hit an element in the cache. */
    private int hitCount;
    /** Number of times get() missed. */
    private int missCount;

    public LruCache(int maxSize) {
        if (maxSize <= 0) {
            throw new IllegalArgumentException("maxSize <= 0");
        }
        this.maxSize = maxSize;
        this.map = new LinkedHashMap<K, V>(0, 0.75f, true);
    }

From the definitions above, we can see that LruCache stores the cached content in a LinkedHashMap object. So what is LinkedHashMap, and how does it implement the LRU caching strategy?

LinkedHashMap inherits from HashMap; the difference is that its entries additionally form a doubly linked circular list: each data node has two pointers, pointing to its direct predecessor and direct successor. We can see this in its inner class LinkedEntry, which is defined as follows:

    static class LinkedEntry<K, V> extends HashMapEntry<K, V> {
        LinkedEntry<K, V> nxt;
        LinkedEntry<K, V> prv;

        /** Create the header entry */
        LinkedEntry() {
            super(null, null, 0, null);
            nxt = prv = this;
        }

        /** Create a normal entry */
        LinkedEntry(K key, V value, int hash, HashMapEntry<K, V> next,
                LinkedEntry<K, V> nxt, LinkedEntry<K, V> prv) {
            super(key, value, hash, next);
            this.nxt = nxt;
            this.prv = prv;
        }
    }

LinkedHashMap implements the doubly linked circular list on top of this; the class itself is defined as follows:

    public class LinkedHashMap<K, V> extends HashMap<K, V> {
        transient LinkedEntry<K, V> header;
        private final boolean accessOrder;
    }

When the linked list is not empty, header.nxt points to the first node and header.prv points to the last node; when the list is empty, header.nxt and header.prv point to the header itself.
accessOrder specifies how entries are ordered. When it is false, entries are kept in insertion order only, that is, new entries are placed at the tail of the list; when it is true, updating or accessing a node's data also moves the corresponding node to the tail. It is assigned via the constructor public LinkedHashMap(int initialCapacity, float loadFactor, boolean accessOrder).
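
A small sketch of my own (using the public java.util.LinkedHashMap API rather than Android's internals) makes the difference between the two orderings visible:

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class AccessOrderDemo {
        public static void main(String[] args) {
            Map<String, Integer> insertion = new LinkedHashMap<>(0, 0.75f, false);
            Map<String, Integer> access = new LinkedHashMap<>(0, 0.75f, true);
            for (String k : new String[] {"a", "b", "c"}) {
                insertion.put(k, 1);
                access.put(k, 1);
            }
            insertion.get("a");                     // no effect on insertion order
            access.get("a");                        // moves "a" to the tail
            System.out.println(insertion.keySet()); // [a, b, c]
            System.out.println(access.keySet());    // [b, c, a]
        }
    }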
Let's look at the method that executes when a new node is added:

    @Override
    void addNewEntry(K key, V value, int hash, int index) {
        LinkedEntry<K, V> header = this.header;

        // Remove eldest entry if instructed to do so.
        LinkedEntry<K, V> eldest = header.nxt;
        if (eldest != header && removeEldestEntry(eldest)) {
            remove(eldest.key);
        }

        // Create new entry, link it on to list, and put it into table
        LinkedEntry<K, V> oldTail = header.prv;
        LinkedEntry<K, V> newTail = new LinkedEntry<K, V>(
                key, value, hash, table[index], header, oldTail);
        table[index] = oldTail.nxt = header.prv = newTail;
    }

As you can see, when a new node is added, the structure is as follows:


[Figure: the circular list after the new node is linked in as the tail]


When accessOrder is true and a node is updated or accessed, the node is moved to the tail. The corresponding code is as follows:

    private void makeTail(LinkedEntry<K, V> e) {
        // Unlink e
        e.prv.nxt = e.nxt;
        e.nxt.prv = e.prv;

        // Relink e as tail
        LinkedEntry<K, V> header = this.header;
        LinkedEntry<K, V> oldTail = header.prv;
        e.nxt = header;
        e.prv = oldTail;
        oldTail.nxt = header.prv = e;
        modCount++;
    }

The code above works in two steps. The first step removes the node from the list (unlink e):


[Figure: the list after node e is unlinked]


The second step moves this node to the tail (relink e as tail): the old tail's nxt and the header's prv are pointed at it, its nxt is pointed at the header, and its prv at the old tail:


[Figure: the list after node e is relinked as the tail]

In addition, LinkedHashMap provides a method public Entry<K, V> eldest() that returns the oldest node; when accessOrder is true, this is the least recently used node.
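
Note that eldest() is internal to Android's LinkedHashMap; on the public java.util.LinkedHashMap API the same node is simply the first entry in iteration order, so a rough equivalent (my own sketch) is:

    import java.util.LinkedHashMap;
    import java.util.Map;

    final class Maps {
        /** Rough equivalent of Android's internal eldest(): the first entry in
            iteration order, which with accessOrder = true is the LRU entry. */
        static <K, V> Map.Entry<K, V> eldest(LinkedHashMap<K, V> map) {
            return map.isEmpty() ? null : map.entrySet().iterator().next();
        }
    }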

LruCache

Now that we are familiar with LinkedHashMap, implementing the LRU algorithm on top of it becomes a matter of course. All we need to do is define the maximum size of the cache, record its current size, and check whether the maximum size is exceeded whenever new data is put in. Accordingly, LruCache defines the following three required member variables:

    private final LinkedHashMap<K, V> map;

    /** Size of this cache in units. Not necessarily the number of elements. */
    private int size;
    private int maxSize;

Then let's read its get method:

    public final V get(K key) {
        if (key == null) {
            throw new NullPointerException("key == null");
        }

        V mapValue;
        synchronized (this) {
            mapValue = map.get(key);
            if (mapValue != null) {
                // The key was found in the cache: count the hit and return the value
                hitCount++;
                return mapValue;
            }
            missCount++;
        }

        /*
         * Attempt to create a value. This may take a long time, and the map
         * may be different when create() returns. If a conflicting value is
         * added to the map while create() was working, we leave that value in
         * the map and release the created value.
         */
        // The default implementation of create() simply returns null. By
        // design, the map may have changed by the time this call returns.
        V createdValue = create(key);
        if (createdValue == null) {
            // No value can be created for the missing key; return null directly
            return null;
        }

        synchronized (this) {
            createCount++;
            // Put the created value into the map. If some other caller put a
            // value for this key in the meantime, put() returns that value.
            mapValue = map.put(key, createdValue);

            if (mapValue != null) {
                // There was a conflict, so undo that last put: keep the
                // conflicting value and discard the one we created
                map.put(key, mapValue);
            } else {
                // No conflict: our created value is now cached, so update the size
                size += safeSizeOf(key, createdValue);
            }
        }

        if (mapValue != null) {
            // Notify that the value we created was immediately replaced
            entryRemoved(false, key, createdValue, mapValue);
            return mapValue;
        } else {
            trimToSize(maxSize);
            return createdValue;
        }
    }

LruCache is likely to be accessed by multiple threads at the same time, so reads and writes of the map are done under a lock. When no value is found for the key, the create(K key) method is called; it is meant to compute a value for a key on a cache miss, and its default implementation simply returns null. This method is not called under the lock, so by the time the value has been created, the map may have changed.
So in the get method, if create(key) returns a non-null value, it is put into the map, and we check whether another caller created and inserted a value for the same key while ours was being created. If so, the value we created is discarded and the earlier one is kept; otherwise, since we have put a newly created value into the map, we recompute the current size and call trimToSize.
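
For example (a sketch of my own, not from the source), a cache that can synthesize missing values might override create() like this; the expensiveLookup() helper is hypothetical and stands in for any slow computation:

    import android.util.LruCache;

    public class ComputingCache extends LruCache<String, String> {
        public ComputingCache(int maxSize) {
            super(maxSize);
        }

        @Override
        protected String create(String key) {
            // Called on a miss, outside the cache's lock, so it may race with
            // puts for the same key; get() resolves such conflicts as above.
            return expensiveLookup(key);
        }

        private String expensiveLookup(String key) {
            return key.toUpperCase(); // hypothetical placeholder for real work
        }
    }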
The trimToSize method takes a maxSize argument; while the current size exceeds it, the oldest node is removed, until the size no longer exceeds maxSize. The code is as follows:

    public void trimToSize(int maxSize) {
        while (true) {
            K key;
            V value;
            synchronized (this) {
                if (size < 0 || (map.isEmpty() && size != 0)) {
                    throw new IllegalStateException(getClass().getName()
                            + ".sizeOf() is reporting inconsistent results!");
                }

                if (size <= maxSize) {
                    break;
                }

                Map.Entry<K, V> toEvict = map.eldest();
                if (toEvict == null) {
                    break;
                }

                key = toEvict.getKey();
                value = toEvict.getValue();
                map.remove(key);
                size -= safeSizeOf(key, value);
                evictionCount++;
            }

            entryRemoved(true, key, value, null);
        }
    }

Next, let's look at the put method, and its code is simple:

    public final V put(K key, V value) {
        if (key == null || value == null) {
            throw new NullPointerException("key == null || value == null");
        }

        V previous;
        synchronized (this) {
            putCount++;
            size += safeSizeOf(key, value);
            previous = map.put(key, value);
            if (previous != null) {
                size -= safeSizeOf(key, previous);
            }
        }

        if (previous != null) {
            entryRemoved(false, key, previous, value);
        }

        trimToSize(maxSize);
        return previous;
    }

The main logic is: compute the size of the new entry and add it to the total, then put the key-value pair into the map. If old data was replaced (map.put(key, value) returns the previous value), subtract the old data's size and call entryRemoved(false, key, previous, value) to notify that the old value has been replaced by the new one; finally, call trimToSize(maxSize) to trim the cache back to size.
The remaining methods, such as removing entries or resizing the cache, are logically similar to those above and are skipped here. LruCache also defines a number of variables for computing cache hit-rate statistics, which are not discussed here.
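
For completeness, the remove method follows the same pattern; the sketch below is written from memory of the android-23 source, so treat it as a close approximation rather than a verbatim copy:

    public final V remove(K key) {
        if (key == null) {
            throw new NullPointerException("key == null");
        }

        V previous;
        synchronized (this) {
            previous = map.remove(key);
            if (previous != null) {
                // The removed entry no longer counts toward the cache size
                size -= safeSizeOf(key, previous);
            }
        }

        if (previous != null) {
            // evicted = false: the entry was removed explicitly, not evicted
            entryRemoved(false, key, previous, null);
        }

        return previous;
    }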

Conclusion

This concludes our analysis of the LruCache source; its LRU algorithm is implemented mainly through LinkedHashMap. Note that to use the LRU algorithm we must set a maximum size for the cache, and the way the size of a cached object is computed differs between cache types. This computation is done by protected int sizeOf(K key, V value), whose default implementation simply counts stored elements. If, for example, we were caching Bitmap objects, we would need to override this method and return the total memory occupied by the bitmap's pixels. Also, LruCache's implementation takes multi-threaded access into account, so updates to the map are made under a synchronized lock.
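
For instance, a bitmap memory cache following this pattern might look like the sketch below; the 4 MiB budget is an assumption for illustration and should be tuned per app:

    import android.graphics.Bitmap;
    import android.util.LruCache;

    public class BitmapCache {
        // Assumed budget of 4 MiB, expressed in kilobytes
        private static final int CACHE_SIZE_KB = 4 * 1024;

        private final LruCache<String, Bitmap> cache =
                new LruCache<String, Bitmap>(CACHE_SIZE_KB) {
                    @Override
                    protected int sizeOf(String key, Bitmap value) {
                        // Measure entries by pixel memory in KB, not by count
                        return value.getByteCount() / 1024;
                    }
                };

        public void put(String key, Bitmap bitmap) {
            cache.put(key, bitmap);
        }

        public Bitmap get(String key) {
            return cache.get(key);
        }
    }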

LruCache is an in-memory cache implementing the LRU policy; on top of it we can build our own image caches or other caches. Besides this in-memory LRU implementation, Google later added a disk cache based on the same algorithm to the system source; in the android-23 sample displayingbitmaps there is corresponding source code, DiskLruCache.java. Incidentally, concrete code showing how to use LruCache to implement an image memory cache can also be found in Google's sample ImageCache.java (browsing the sample code online may require a proxy: https://android.googlesource.com/platform/developers/samples/android/+/master/ui/graphics/displayingbitmaps/application/src/main/java/com/example/android/displayingbitmaps/util/).

One final, slightly long-winded note: LRU is not the only policy for image caching. Weak references and soft references used to be popular memory-caching techniques in various image frameworks, but the garbage collector has become more aggressive about reclaiming weakly and softly referenced objects, so they are no longer suitable.


