LRUCache Source Code Analysis


In Android development, we typically use caching to reduce the traffic an app consumes and to make the experience smoother. The cache is usually divided into two levels. The first level is the memory cache: its advantage is that reads and writes are very fast; its disadvantage is that using too much of it makes the whole app slow because running memory runs short, and in the worst case causes an OOM. The second level is the file cache (files, SQLite, and so on): it reads and writes less efficiently than the memory cache, but space is far more plentiful.

Because the first-level cache has limited space, we usually set a size for it; when that size is exceeded, the cache evicts its least used content.

Android provides a handy container to handle this caching problem: LruCache. Here I read through its source code and take notes.
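Before diving into the source, a minimal usage sketch may help; the capacity of 4 and the string keys and values here are purely illustrative:

    import android.util.LruCache;

    public class LruCacheDemo {
        public static void main(String[] args) {
            // A cache holding at most 4 entries; with the default sizeOf(),
            // "size" simply means the number of entries.
            LruCache<String, String> cache = new LruCache<String, String>(4);

            cache.put("a", "1");
            cache.put("b", "2");

            String hit = cache.get("a");   // "1"; "a" becomes the most recently used entry
            String miss = cache.get("x");  // null: not cached, and create() returns null by default
            System.out.println(hit + ", " + miss);
        }
    }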

  

public class LruCache<K, V> {
    private final LinkedHashMap<K, V> map;

    /** Size of this cache in units. Not necessarily the number of elements. */
    private int size;
    private int maxSize;

    private int putCount;
    private int createCount;
    private int evictionCount;
    private int hitCount;
    private int missCount;

From this it is easy to see that LruCache is essentially built on top of a LinkedHashMap, which it controls to achieve the behavior described above. Let's look at the fields of this class.

size is the current size of the cache.

maxSize is the maximum size of the cache.

The remaining fields (putCount, createCount, evictionCount, hitCount, missCount) are counters whose meaning will become clear as we read on.

Next, let's look at the constructor of LruCache.

    /**
     * @param maxSize for caches that do not override {@link #sizeOf}, this is
     *     the maximum number of entries in the cache. For all other caches,
     *     this is the maximum sum of the sizes of the entries in this cache.
     */
    public LruCache(int maxSize) {
        if (maxSize <= 0) {
            throw new IllegalArgumentException("maxSize <= 0");
        }
        this.maxSize = maxSize;
        this.map = new LinkedHashMap<K, V>(0, 0.75f, true);
    }

This piece of code is simple: it checks the input for legality, throwing an exception if maxSize is not greater than 0, then records the maximum size of the cache and creates a new LinkedHashMap object.
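The third constructor argument, accessOrder = true, is what gives the map its LRU character: every access moves the touched entry to the most recently used end of the iteration order. A small sketch with plain JDK classes (the keys and values are made up):

    import java.util.LinkedHashMap;

    public class AccessOrderDemo {
        public static void main(String[] args) {
            // accessOrder = true: iteration runs from least to most recently accessed
            LinkedHashMap<String, Integer> map = new LinkedHashMap<String, Integer>(0, 0.75f, true);
            map.put("a", 1);
            map.put("b", 2);
            map.put("c", 3);

            map.get("a"); // touching "a" moves it to the most recently used end

            // Prints {b=2, c=3, a=1}: the order now reflects recency of access,
            // with "b" the least recently used entry. LruCache relies on this
            // ordering when deciding what to evict.
            System.out.println(map);
        }
    }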

Next comes the method for resetting maxSize.

    /**
     * Sets the size of the cache.
     *
     * @param maxSize the new maximum size.
     *
     * @hide
     */
    public void resize(int maxSize) {
        if (maxSize <= 0) {
            throw new IllegalArgumentException("maxSize <= 0");
        }

        synchronized (this) {
            this.maxSize = maxSize;
        }
        trimToSize(maxSize);
    }

In addition to resetting maxSize, this method finishes by trimming the existing cache. When the new maximum is smaller than what the cache currently holds, part of the existing cache has to be cleaned out. So let's look at the cleanup method.
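Note that resize is marked @hide in this copy of the source; in the public platform API it became available around API level 21. Assuming a version where it is callable, a short sketch of the trimming it triggers (the sizes and keys are arbitrary):

    import android.util.LruCache;

    public class ResizeDemo {
        public static void main(String[] args) {
            LruCache<String, String> cache = new LruCache<String, String>(4);
            cache.put("a", "1");
            cache.put("b", "2");
            cache.put("c", "3");

            // Shrinking the limit below the current size (3 entries) makes resize()
            // call trimToSize(2), which evicts entries until size <= maxSize.
            cache.resize(2);

            System.out.println(cache.size()); // 2
        }
    }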

    /**
     * @param maxSize the maximum size of the cache before returning. May be -1
     *     to evict even 0-sized elements.
     */
    private void trimToSize(int maxSize) {
        while (true) {
            K key;
            V value;
            synchronized (this) {
                if (size < 0 || (map.isEmpty() && size != 0)) {
                    throw new IllegalStateException(getClass().getName()
                            + ".sizeOf() is reporting inconsistent results!");
                }

                if (size <= maxSize) {
                    break;
                }

                // BEGIN layoutlib change
                // get the last item in the linked list.
                // This is not efficient, the goal here is to minimize the changes
                // compared to the platform version.
                Map.Entry<K, V> toEvict = null;
                for (Map.Entry<K, V> entry : map.entrySet()) {
                    toEvict = entry;
                }
                // END layoutlib change

                if (toEvict == null) {
                    break;
                }

                key = toEvict.getKey();
                value = toEvict.getValue();
                map.remove(key);
                size -= safeSizeOf(key, value);
                evictionCount++;
            }

            entryRemoved(true, key, value, null);
        }
    }

The whole method is an endless loop. Before any cleaning, the state is sanity-checked, and trimming only proceeds while size > maxSize. In each iteration, a for loop walks the LinkedHashMap to find its last entry. As the comment notes, this way of finding the last entry is not efficient; it is written this way to stay close to the platform version. Once that entry is found, its key and value are taken, it is removed from the LinkedHashMap, the space it releases is subtracted from size, and the empty hook method entryRemoved is called (we can override it in a subclass to do some extension). The loop then checks again whether size is within maxSize; if not, it keeps cleaning. safeSizeOf is the method that computes the size of a single cache entry; let's look at it next:
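entryRemoved is a natural place to release resources tied to evicted values. A minimal sketch of a hypothetical subclass that just logs removals (the class name and log tag are made up):

    import android.util.Log;
    import android.util.LruCache;

    // A hypothetical cache that logs every removal. "evicted" is true when the
    // entry was removed by trimToSize(), false when it was replaced or removed
    // explicitly.
    public class LoggingCache extends LruCache<String, byte[]> {
        public LoggingCache(int maxSize) {
            super(maxSize);
        }

        @Override
        protected void entryRemoved(boolean evicted, String key, byte[] oldValue, byte[] newValue) {
            Log.d("LoggingCache", "removed key=" + key + ", evicted=" + evicted
                    + (newValue != null ? " (replaced)" : ""));
        }
    }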

  

    private int safeSizeOf(K key, V value) {
        int result = sizeOf(key, value);
        if (result < 0) {
            throw new IllegalStateException("Negative size: " + key + "=" + value);
        }
        return result;
    }

    /**
     * Returns the size of the entry for {@code key} and {@code value} in
     * user-defined units. The default implementation returns 1 so that size
     * is the number of entries and max size is the maximum number of entries.
     *
     * <p>An entry's size must not change while it is in the cache.
     */
    protected int sizeOf(K key, V value) {
        return 1;
    }

As you can see from the code above, by default LruCache treats every entry as having size 1, regardless of what we store. So when we set maxSize we are really setting the number of entries we want to keep. If we want the size to genuinely reflect how much space the cached content occupies, we need to subclass and override the sizeOf method.
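For example, a common pattern is a bitmap memory cache measured in bytes; this is only a sketch, and the 4 MiB budget is arbitrary:

    import android.graphics.Bitmap;
    import android.util.LruCache;

    // A sketch of a bitmap cache whose "size" is measured in bytes rather than
    // in number of entries.
    public class BitmapCache extends LruCache<String, Bitmap> {
        public BitmapCache() {
            super(4 * 1024 * 1024); // maxSize is now a byte budget
        }

        @Override
        protected int sizeOf(String key, Bitmap value) {
            // getByteCount() is the number of bytes used to store the bitmap's pixels
            return value.getByteCount();
        }
    }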

At this point we have read through the initialization of LruCache. Next, let's look at how it stores data.

  

    /**
     * Caches {@code value} for {@code key}. The value is moved to the head of
     * the queue.
     *
     * @return the previous value mapped by {@code key}.
     */
    public final V put(K key, V value) {
        if (key == null || value == null) {
            throw new NullPointerException("key == null || value == null");
        }

        V previous;
        synchronized (this) {
            putCount++;
            size += safeSizeOf(key, value);
            previous = map.put(key, value);
            if (previous != null) {
                size -= safeSizeOf(key, previous);
            }
        }

        if (previous != null) {
            entryRemoved(false, key, previous, value);
        }

        trimToSize(maxSize);
        return previous;
    }

As usual, the input is checked for legality first. Then the size of the new value is added to the total size of the LruCache, the new value is put into the LinkedHashMap, and if an old value was replaced, its size is subtracted from the total. Next the empty hook method entryRemoved is called for the replaced value, and finally the cache is trimmed back within its size limit. The replaced value, if any, is returned.

As you can see, LruCache's put method is little more than a thin wrapper around LinkedHashMap.put with three extra parts: the input legality check, the size accounting, and the trimming of the cache. The key point is that because LinkedHashMap is an ordered map built with access order enabled, the most recently inserted or updated entry is moved to the most recently used end of the queue.
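A quick sketch of the return-value behavior described above (keys and values are illustrative):

    import android.util.LruCache;

    public class PutDemo {
        public static void main(String[] args) {
            LruCache<String, String> cache = new LruCache<String, String>(2);

            String first = cache.put("k", "old");   // null: nothing was mapped to "k" before
            String second = cache.put("k", "new");  // "old": the replaced value is returned

            // The replacement also triggered entryRemoved(false, "k", "old", "new").
            System.out.println(first + " / " + second + " / " + cache.get("k")); // null / old / new
        }
    }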

  

Then, let's look at the get method of LruCache:

    /**
     * Returns the value for {@code key} if it exists in the cache or can be
     * created by {@code #create}. If a value was returned, it is moved to the
     * head of the queue. This returns null if a value is not cached and cannot
     * be created.
     */
    public final V get(K key) {
        if (key == null) {
            throw new NullPointerException("key == null");
        }

        V mapValue;
        synchronized (this) {
            mapValue = map.get(key);
            if (mapValue != null) {
                hitCount++;
                return mapValue;
            }
            missCount++;
        }

        /*
         * Attempt to create a value. This may take a long time, and the map
         * may be different when create() returns. If a conflicting value was
         * added to the map while create() was working, we leave that value in
         * the map and release the created value.
         */
        V createdValue = create(key);
        if (createdValue == null) {
            return null;
        }

        synchronized (this) {
            createCount++;
            mapValue = map.put(key, createdValue);

            if (mapValue != null) {
                // There was a conflict so undo that last put
                map.put(key, mapValue);
            } else {
                size += safeSizeOf(key, createdValue);
            }
        }

        if (mapValue != null) {
            entryRemoved(false, key, createdValue, mapValue);
            return mapValue;
        } else {
            trimToSize(maxSize);
            return createdValue;
        }
    }

This method is relatively long and can be divided into two parts. The first part is the ordinary path of reading the value out of the LinkedHashMap; the second part handles the case where the value does not exist in the map, in which case a new value is created. The create method used there is effectively empty in the original class (it just returns null), so we have to subclass and override it ourselves.
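A sketch of a hypothetical subclass whose create() computes a default value on a miss (the computation is made up):

    import android.util.LruCache;

    // A hypothetical cache that computes a value on a miss instead of returning null.
    // With this override, get(key) only returns null if create() itself returns null.
    public class SquareCache extends LruCache<Integer, Long> {
        public SquareCache(int maxSize) {
            super(maxSize);
        }

        @Override
        protected Long create(Integer key) {
            // Called without holding the cache's lock; it may take a long time.
            return (long) key * key;
        }
    }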

In conclusion, we have analyzed most of the methods of the LruCache class. Its working mode is: newly inserted or updated data is moved to the most recently used position; after every operation the size is checked, and if size exceeds maxSize, entries are evicted from the other end of the queue until the size is back within range. By subclassing, we can easily extend the following:

1. How the size occupied by each piece of data is calculated (override sizeOf).

2. What to do after an entry is removed from the cache (override entryRemoved).

3. How to create a value when get is called for a key that is not yet in the cache (override create).

done~

  
