LruCache Explained: Android Memory Optimization


Concept:

What is LruCache?
What is the LruCache implementation principle?

These two questions really amount to one: once you know what LruCache is, you know how it is implemented. LRU stands for "least recently used". From that, we can infer the principle behind LruCache: remove the least recently used data from the cache and keep the data that is used most. To see how this is done in code, let's go into the source.
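Before reading the source, the core mechanism can be sketched in a few lines of plain Java. This is a hypothetical demo class of my own (LruOrderDemo), not the Android class: a LinkedHashMap constructed with accessOrder = true keeps its entries ordered from least to most recently accessed, which is exactly what LruCache builds on.

```java
import java.util.LinkedHashMap;

// Minimal sketch of the LRU ordering that LruCache relies on:
// with accessOrder = true, iteration order runs from least to most
// recently accessed, so the eviction candidate is always first.
public class LruOrderDemo {
    public static String leastRecentlyUsed(LinkedHashMap<String, Integer> map) {
        // The first entry in iteration order is the least recently used.
        return map.entrySet().iterator().next().getKey();
    }

    public static void main(String[] args) {
        LinkedHashMap<String, Integer> map =
                new LinkedHashMap<>(0, 0.75f, true); // accessOrder = true
        map.put("a", 1);
        map.put("b", 2);
        map.put("c", 3);
        map.get("a"); // touching "a" moves it to the most-recently-used end
        System.out.println(leastRecentlyUsed(map)); // "b" is now least recently used
    }
}
```

Without the third constructor argument the map would keep insertion order instead, and the "least recently used" entry could not be found this way.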

LRUCache Source Code Analysis
import java.util.LinkedHashMap;
import java.util.Map;

public class LruCache<K, V> {
    // Backing map. A LinkedHashMap is used because, constructed with
    // accessOrder = true, it reorders entries on every access so that
    // the least recently used value is the first eviction candidate.
    private final LinkedHashMap<K, V> map;

    // Current size of the cache.
    private int size;
    // Maximum size of the cache.
    private int maxSize;

    private int putCount;      // number of put() calls
    private int createCount;   // number of values created by create()
    private int evictionCount; // number of evicted entries
    private int hitCount;      // number of cache hits
    private int missCount;     // number of cache misses

    // Instantiating the LruCache requires a maximum size. This budget is
    // unit-free: it can be a count of entries (number of objects), or an
    // amount of memory, e.g. at most 5 MB of cached data.
    public LruCache(int maxSize) {
        if (maxSize <= 0) {
            throw new IllegalArgumentException("maxSize <= 0");
        }
        this.maxSize = maxSize;
        this.map = new LinkedHashMap<K, V>(0, 0.75f, true);
    }

    // Reset the maximum cache size.
    public void resize(int maxSize) {
        if (maxSize <= 0) {
            throw new IllegalArgumentException("maxSize <= 0");
        }
        synchronized (this) {
            this.maxSize = maxSize;
        }
        trimToSize(maxSize);
    }

    // Look up a cached value by key.
    public final V get(K key) {
        if (key == null) {
            throw new NullPointerException("key == null");
        }

        V mapValue;
        synchronized (this) {
            mapValue = map.get(key);
            if (mapValue != null) {
                hitCount++;
                return mapValue;
            }
            missCount++;
        }

        // On a miss, give the subclass a chance to create the value.
        V createdValue = create(key);
        if (createdValue == null) {
            return null;
        }

        synchronized (this) {
            createCount++;
            mapValue = map.put(key, createdValue);
            if (mapValue != null) {
                // There was a conflict, so undo that last put.
                map.put(key, mapValue);
            } else {
                // The cache size changed.
                size += safeSizeOf(key, createdValue);
            }
        }

        if (mapValue != null) {
            // Nothing was removed here; the entry only changed position.
            entryRemoved(false, key, createdValue, mapValue);
            return mapValue;
        } else {
            // Check whether the cache has grown past its limit.
            trimToSize(maxSize);
            return createdValue;
        }
    }

    // Add a value to the cache; mirrors the code after create() in get().
    public final V put(K key, V value) {
        if (key == null || value == null) {
            throw new NullPointerException("key == null || value == null");
        }

        V previous;
        synchronized (this) {
            putCount++;
            size += safeSizeOf(key, value);
            previous = map.put(key, value);
            if (previous != null) {
                size -= safeSizeOf(key, previous);
            }
        }

        if (previous != null) {
            entryRemoved(false, key, previous, value);
        }

        trimToSize(maxSize);
        return previous;
    }

    // Evict entries until the cache is back within its limit.
    private void trimToSize(int maxSize) {
        while (true) {
            K key;
            V value;
            synchronized (this) {
                if (size < 0 || (map.isEmpty() && size != 0)) {
                    throw new IllegalStateException(getClass().getName()
                            + ".sizeOf() is reporting inconsistent results!");
                }

                // Within the limit: nothing to do.
                if (size <= maxSize) {
                    break;
                }

                // Can happen when called from evictAll() with maxSize = -1.
                if (map.isEmpty()) {
                    break;
                }

                // Past the limit: evict the eldest entry. With
                // accessOrder = true, the first entry in iteration order
                // is the least recently used one.
                Map.Entry<K, V> toEvict = map.entrySet().iterator().next();
                key = toEvict.getKey();
                value = toEvict.getValue();
                map.remove(key);
                size -= safeSizeOf(key, value);
                evictionCount++;
            }

            entryRemoved(true, key, value, null);
        }
    }

    // Manual removal, called by the user.
    public final V remove(K key) {
        if (key == null) {
            throw new NullPointerException("key == null");
        }

        V previous;
        synchronized (this) {
            previous = map.remove(key);
            if (previous != null) {
                size -= safeSizeOf(key, previous);
            }
        }

        if (previous != null) {
            entryRemoved(false, key, previous, null);
        }

        return previous;
    }

    // The user can override this to release data or memory on removal.
    protected void entryRemoved(boolean evicted, K key, V oldValue, V newValue) {}

    protected V create(K key) {
        return null;
    }

    private int safeSizeOf(K key, V value) {
        int result = sizeOf(key, value);
        if (result < 0) {
            throw new IllegalStateException("Negative size: " + key + "=" + value);
        }
        return result;
    }

    // This method deserves special attention: it must match the maxSize used
    // to instantiate the LruCache. If maxSize is a number of cached entries,
    // returning 1 here is fine. If maxSize is an amount of memory, say 5 MB,
    // it cannot be a count; it must return the size of each cached value --
    // for a Bitmap, that should be bitmap.getByteCount().
    protected int sizeOf(K key, V value) {
        return 1;
    }

    // Empty the cache.
    public final void evictAll() {
        trimToSize(-1); // -1 will evict 0-sized elements
    }

    public synchronized final int size() {
        return size;
    }

    public synchronized final int maxSize() {
        return maxSize;
    }

    public synchronized final int hitCount() {
        return hitCount;
    }

    public synchronized final int missCount() {
        return missCount;
    }

    public synchronized final int createCount() {
        return createCount;
    }

    public synchronized final int putCount() {
        return putCount;
    }

    public synchronized final int evictionCount() {
        return evictionCount;
    }

    public synchronized final Map<K, V> snapshot() {
        return new LinkedHashMap<K, V>(map);
    }
}
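To see the put()/trimToSize() interplay analyzed above in isolation, here is a stripped-down sketch of my own (SizeTrimSketch, not android.util.LruCache itself), where sizeOf() measures string length and eviction runs until the total size is back under the budget:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustration of size-based trimming: the cache tracks a running total
// computed by sizeOf(), and trimToSize() evicts least-recently-used
// entries until the total fits the budget again.
public class SizeTrimSketch {
    private final LinkedHashMap<String, String> map =
            new LinkedHashMap<>(0, 0.75f, true); // access order, like LruCache
    private int size;
    private final int maxSize;

    public SizeTrimSketch(int maxSize) { this.maxSize = maxSize; }

    private int sizeOf(String value) { return value.length(); }

    public void put(String key, String value) {
        String previous = map.put(key, value);
        size += sizeOf(value);
        if (previous != null) size -= sizeOf(previous);
        trimToSize(maxSize);
    }

    public String get(String key) { return map.get(key); }

    public boolean contains(String key) { return map.containsKey(key); }

    private void trimToSize(int maxSize) {
        while (size > maxSize && !map.isEmpty()) {
            // First entry in access order = least recently used.
            Map.Entry<String, String> eldest = map.entrySet().iterator().next();
            map.remove(eldest.getKey());
            size -= sizeOf(eldest.getValue());
        }
    }

    public static void main(String[] args) {
        SizeTrimSketch cache = new SizeTrimSketch(10); // budget: 10 chars
        cache.put("a", "12345");
        cache.put("b", "12345"); // total = 10, still fits
        cache.put("c", "123");   // total = 13 > 10, evicts "a"
        System.out.println(cache.contains("a")); // false
        System.out.println(cache.contains("b")); // true
    }
}
```

Note that nothing is evicted until the budget is actually exceeded; this mirrors the real trimToSize(), which returns immediately while size <= maxSize.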
Using LruCache

Let's take a look at two diagrams of memory usage

(Figure 1: memory usage without LruCache)

(Figure 2: memory usage with LruCache)

Both memory profiles come from the same application and the same data; the only difference is that Figure 1 does not use LruCache while Figure 2 does. Figure 1's memory usage is clearly larger, hovering around 30 MB, while Figure 2 stays around 20 MB. That is a saving of nearly 10 MB of memory.

OK, the implementation code is below:

/**
 * Created by Gyzhong on 15/4/5.
 */
public class LruPageAdapter extends PagerAdapter {
    private List<String> mData;
    private LruCache<String, Bitmap> mLruCache;
    private int mTotalSize = (int) Runtime.getRuntime().totalMemory();
    private ViewPager mViewPager;

    public LruPageAdapter(ViewPager viewPager, List<String> data) {
        mData = data;
        mViewPager = viewPager;
        // Instantiate the LruCache with one fifth of the current heap size.
        mLruCache = new LruCache<String, Bitmap>(mTotalSize / 5) {
            // Called when the cache grows past the maximum we set;
            // we can use it to release memory.
            @Override
            protected void entryRemoved(boolean evicted, String key,
                    Bitmap oldValue, Bitmap newValue) {
                super.entryRemoved(evicted, key, oldValue, newValue);
                if (evicted && oldValue != null) {
                    oldValue.recycle();
                }
            }

            // Create a bitmap on a cache miss.
            @Override
            protected Bitmap create(String key) {
                final int resId = mViewPager.getResources().getIdentifier(key,
                        "drawable", mViewPager.getContext().getPackageName());
                return BitmapFactory.decodeResource(mViewPager.getResources(), resId);
            }

            // Measure each value by its byte count.
            @Override
            protected int sizeOf(String key, Bitmap value) {
                return value.getByteCount();
            }
        };
    }

    @Override
    public Object instantiateItem(ViewGroup container, int position) {
        View view = LayoutInflater.from(container.getContext())
                .inflate(R.layout.view_pager_item, null);
        ImageView imageView = (ImageView) view.findViewById(R.id.id_view_pager_item);
        Bitmap bitmap = mLruCache.get(mData.get(position));
        imageView.setImageBitmap(bitmap);
        container.addView(view);
        return view;
    }

    @Override
    public void destroyItem(ViewGroup container, int position, Object object) {
        container.removeView((View) object);
    }

    @Override
    public int getCount() {
        return mData.size();
    }

    @Override
    public boolean isViewFromObject(View view, Object object) {
        return view == object;
    }
}
Summary

1. LruCache is a caching mechanism based on the LRU algorithm.
2. The LRU algorithm removes the least recently used data, but only once the current amount of data exceeds the configured maximum.
3. LruCache does not actually free memory; it only removes entries from its map. Actually releasing the memory is up to the user.
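Point 3 can be demonstrated with a small sketch. The Resource class and newCache() helper below are hypothetical stand-ins of mine (in real code they would be a Bitmap and an LruCache subclass); the point is that the cache only drops its map reference, and the actual release has to happen in a hook like entryRemoved():

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: eviction removes the map entry, and a release hook is where
// the user frees the underlying resource (e.g. Bitmap.recycle()).
public class EvictionHookSketch {
    static class Resource {
        boolean recycled = false;
        void recycle() { recycled = true; }
    }

    // LinkedHashMap with removeEldestEntry gives a capacity-bounded LRU;
    // the override is where the release hook is attached.
    static LinkedHashMap<String, Resource> newCache(int maxEntries) {
        return new LinkedHashMap<String, Resource>(0, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Resource> eldest) {
                if (size() > maxEntries) {
                    eldest.getValue().recycle(); // release before eviction
                    return true;                 // evict the eldest entry
                }
                return false;
            }
        };
    }

    public static void main(String[] args) {
        LinkedHashMap<String, Resource> cache = newCache(2);
        Resource first = new Resource();
        cache.put("a", first);
        cache.put("b", new Resource());
        cache.put("c", new Resource()); // evicts "a" and recycles it
        System.out.println(first.recycled); // true
    }
}
```

Without the recycle() call inside the hook, the evicted Resource would simply wait for garbage collection; the cache itself never frees anything.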

Demo Source Download

LRUCache detailed Android memory optimization
