Detailed explanation of LruCache: Android memory optimization

Source: Internet
Author: User

Concept:

LruCache
What is LruCache?
What is the implementation principle of LruCache?

These two questions can actually be answered as one: once you know what LruCache is, you also know its implementation principle. LRU stands for Least Recently Used. From the name alone we can infer how LruCache works: when the cache is full, it removes the least recently used data and keeps the most recently used data. To see how this is implemented in code, let's go to the source.
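Before diving into the source, here is a minimal sketch (our own demo, not the Android class) of the one property LruCache is built on: a LinkedHashMap constructed with accessOrder = true keeps its entries ordered from least recently used to most recently used, so the first entry in iteration order is always the eviction candidate.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;

// Demonstrates the access-ordered LinkedHashMap that LruCache uses internally.
public class AccessOrderDemo {
    public static List<String> keysAfterAccess() {
        // accessOrder = true, exactly as in LruCache's constructor
        LinkedHashMap<String, Integer> map = new LinkedHashMap<>(0, 0.75f, true);
        map.put("a", 1);
        map.put("b", 2);
        map.put("c", 3);
        map.get("a"); // touching "a" moves it to the most-recently-used end
        return new ArrayList<>(map.keySet());
    }

    public static void main(String[] args) {
        // iteration order is now b, c, a -- "b" is the least recently used entry
        System.out.println(keysAfterAccess());
    }
}
```

With insertion order (the default), "a" would still be first; access order is what lets the cache find its victim in O(1).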

LruCache source code analysis
```java
public class LruCache<K, V> {
    // The backing map. Why a LinkedHashMap? Because it re-sorts entries on
    // every access, which guarantees that the least recently used entry is
    // the next one to be removed.
    private final LinkedHashMap<K, V> map;

    private int size;          // current cached size
    private int maxSize;       // maximum cache size

    private int putCount;      // number of values added to the cache
    private int createCount;   // number of values created by create()
    private int evictionCount; // number of entries evicted
    private int hitCount;      // number of cache hits
    private int missCount;     // number of cache misses

    // To instantiate an LruCache you must supply the maximum cache size.
    // The unit is up to you: a number of entries, or a memory size --
    // for example, at most 5 MB of cached data.
    public LruCache(int maxSize) {
        if (maxSize <= 0) {
            throw new IllegalArgumentException("maxSize <= 0");
        }
        this.maxSize = maxSize;
        this.map = new LinkedHashMap<K, V>(0, 0.75f, true);
    }

    // Reset the maximum cache size.
    public void resize(int maxSize) {
        if (maxSize <= 0) {
            throw new IllegalArgumentException("maxSize <= 0");
        }
        synchronized (this) {
            this.maxSize = maxSize;
        }
        trimToSize(maxSize);
    }

    // Look up a cached value by key.
    public final V get(K key) {
        if (key == null) {
            throw new NullPointerException("key == null");
        }
        V mapValue;
        synchronized (this) {
            mapValue = map.get(key);
            if (mapValue != null) {
                hitCount++;
                return mapValue;
            }
            missCount++;
        }

        // If there is no cached value, try to create one.
        V createdValue = create(key);
        if (createdValue == null) {
            return null;
        }

        synchronized (this) {
            createCount++;
            mapValue = map.put(key, createdValue);
            if (mapValue != null) {
                // There was a conflict, so undo that last put.
                map.put(key, mapValue);
            } else {
                // The cache size grows by the size of the created value.
                size += safeSizeOf(key, createdValue);
            }
        }

        if (mapValue != null) {
            entryRemoved(false, key, createdValue, mapValue);
            return mapValue;
        } else {
            // Check whether the cache is now over its limit.
            trimToSize(maxSize);
            return createdValue;
        }
    }

    // Add a value to the cache; the bookkeeping mirrors the create() branch
    // of get() above.
    public final V put(K key, V value) {
        if (key == null || value == null) {
            throw new NullPointerException("key == null || value == null");
        }
        V previous;
        synchronized (this) {
            putCount++;
            size += safeSizeOf(key, value);
            previous = map.put(key, value);
            if (previous != null) {
                size -= safeSizeOf(key, previous);
            }
        }
        if (previous != null) {
            entryRemoved(false, key, previous, value);
        }
        trimToSize(maxSize);
        return previous;
    }

    // Remove entries until the cache is within its size limit.
    private void trimToSize(int maxSize) {
        while (true) {
            K key;
            V value;
            synchronized (this) {
                if (size < 0 || (map.isEmpty() && size != 0)) {
                    throw new IllegalStateException(getClass().getName()
                            + ".sizeOf() is reporting inconsistent results!");
                }
                // If we are within the limit, we are done.
                if (size <= maxSize || map.isEmpty()) {
                    break;
                }
                // Past this point the cache exceeds its maximum size.
                // Remove the eldest entry, i.e. the least recently used one
                // (first in iteration order of the access-ordered map).
                Map.Entry<K, V> toEvict = map.entrySet().iterator().next();
                key = toEvict.getKey();
                value = toEvict.getValue();
                map.remove(key);
                size -= safeSizeOf(key, value);
                evictionCount++;
            }
            entryRemoved(true, key, value, null);
        }
    }

    // Manual removal, called by the user.
    public final V remove(K key) {
        if (key == null) {
            throw new NullPointerException("key == null");
        }
        V previous;
        synchronized (this) {
            previous = map.remove(key);
            if (previous != null) {
                size -= safeSizeOf(key, previous);
            }
        }
        if (previous != null) {
            entryRemoved(false, key, previous, null);
        }
        return previous;
    }

    // You can override this to be notified of removals (e.g. to release
    // resources).
    protected void entryRemoved(boolean evicted, K key, V oldValue, V newValue) {}

    // Called on a cache miss; override it to compute a value on demand.
    protected V create(K key) {
        return null;
    }

    private int safeSizeOf(K key, V value) {
        int result = sizeOf(key, value);
        if (result < 0) {
            throw new IllegalStateException("Negative size: " + key + "=" + value);
        }
        return result;
    }

    // Pay special attention to this method: it must match the unit of the
    // maxSize passed to the constructor. If maxSize is a number of entries,
    // returning 1 is fine. If maxSize is a memory size such as 5 MB, this
    // must instead return the size of each cached value; for a Bitmap that
    // would be bitmap.getByteCount().
    protected int sizeOf(K key, V value) {
        return 1;
    }

    // Clear the cache.
    public final void evictAll() {
        trimToSize(-1); // -1 will evict 0-sized elements
    }

    public synchronized final int size() { return size; }
    public synchronized final int maxSize() { return maxSize; }
    public synchronized final int hitCount() { return hitCount; }
    public synchronized final int missCount() { return missCount; }
    public synchronized final int createCount() { return createCount; }
    public synchronized final int putCount() { return putCount; }
    public synchronized final int evictionCount() { return evictionCount; }

    public synchronized final Map<K, V> snapshot() {
        return new LinkedHashMap<K, V>(map);
    }
}
```
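To see the behavior analyzed above in action without the Android framework, here is a hypothetical simplified sketch (class and method names are ours, not Android's). It is count-based only, but it reproduces the core mechanics: an access-ordered LinkedHashMap plus eviction of the eldest entry once the limit is exceeded, via the standard removeEldestEntry hook.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Simplified, count-based stand-in for LruCache, for illustration only.
public class TinyLruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;
    private int evictionCount;

    public TinyLruCache(int maxEntries) {
        super(0, 0.75f, true); // access order, like LruCache's backing map
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        boolean evict = size() > maxEntries;
        if (evict) {
            evictionCount++; // mirrors LruCache's evictionCount bookkeeping
        }
        return evict;
    }

    public int evictionCount() {
        return evictionCount;
    }

    public static void main(String[] args) {
        TinyLruCache<String, String> cache = new TinyLruCache<>(2);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.get("a");      // "a" becomes the most recently used entry
        cache.put("c", "3"); // over capacity: "b", the LRU entry, is evicted
        System.out.println(cache.containsKey("b")); // false
        System.out.println(cache.keySet());         // [a, c]
    }
}
```

The real LruCache cannot simply use removeEldestEntry because its limit can be measured in bytes via sizeOf(), so a single put may need to evict several entries; that is why it loops in trimToSize() instead.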
Use LruCache

First, let's look at the two memory usage diagrams.

Figure 1

Figure 2

The two memory analysis graphs above come from the same application; the only difference is that Figure 1 does not use LruCache, while Figure 2 does. Clearly, memory usage in Figure 1 is too high, hovering around 30 MB, while in Figure 2 it stays around 20 MB. That saves nearly 10 MB of memory!

OK. The implementation code will be posted below.

```java
/**
 * Created by gyzhong on 15/4/5.
 */
public class LruPageAdapter extends PagerAdapter {
    private List<String> mData;
    private LruCache<String, Bitmap> mLruCache;
    private int mTotalSize = (int) Runtime.getRuntime().totalMemory();
    private ViewPager mViewPager;

    public LruPageAdapter(ViewPager viewPager, List<String> data) {
        mData = data;
        mViewPager = viewPager;
        /* Instantiate LruCache */
        mLruCache = new LruCache<String, Bitmap>(mTotalSize / 5) {
            /* Called when an entry is removed from the cache;
               we can use it to release memory. */
            @Override
            protected void entryRemoved(boolean evicted, String key,
                                        Bitmap oldValue, Bitmap newValue) {
                super.entryRemoved(evicted, key, oldValue, newValue);
                if (evicted && oldValue != null) {
                    oldValue.recycle();
                }
            }

            /* Create a bitmap on a cache miss. */
            @Override
            protected Bitmap create(String key) {
                final int resId = mViewPager.getResources().getIdentifier(
                        key, "drawable", mViewPager.getContext().getPackageName());
                return BitmapFactory.decodeResource(mViewPager.getResources(), resId);
            }

            /* Report the size of each value. */
            @Override
            protected int sizeOf(String key, Bitmap value) {
                return value.getByteCount();
            }
        };
    }

    @Override
    public Object instantiateItem(ViewGroup container, int position) {
        View view = LayoutInflater.from(container.getContext())
                .inflate(R.layout.view_pager_item, null);
        ImageView imageView = (ImageView) view.findViewById(R.id.id_view_pager_item);
        Bitmap bitmap = mLruCache.get(mData.get(position));
        imageView.setImageBitmap(bitmap);
        container.addView(view);
        return view;
    }

    @Override
    public void destroyItem(ViewGroup container, int position, Object object) {
        container.removeView((View) object);
    }

    @Override
    public int getCount() {
        return mData.size();
    }

    @Override
    public boolean isViewFromObject(View view, Object object) {
        return view == object;
    }
}
```
Summary

1. LruCache is a cache mechanism based on the LRU (Least Recently Used) algorithm.
2. The LRU algorithm removes the least recently used data, but only once the current cached size exceeds the configured maximum.
3. LruCache does not actually release memory; it only removes entries from its map. To release the underlying memory, you must do so yourself, for example in an entryRemoved override.
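Summary point 3 is worth demonstrating: eviction only drops the map's reference, so if the value wraps a real resource (a Bitmap on Android), releasing it is the caller's job, which is exactly what the entryRemoved hook is for. Here is a hypothetical sketch (our own names; a Resource class stands in for Bitmap) where the cleanup happens at eviction time.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Shows that evicting from the map does not free the resource;
// an explicit release call (like Bitmap.recycle()) is needed.
public class EvictionCleanupDemo {
    static class Resource {
        final String name;
        boolean released;
        Resource(String name) { this.name = name; }
        void release() { released = true; } // stands in for Bitmap.recycle()
    }

    public static List<String> releasedAfterOverflow() {
        List<String> released = new ArrayList<>();
        LinkedHashMap<String, Resource> cache =
                new LinkedHashMap<String, Resource>(0, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Resource> eldest) {
                if (size() > 2) { // a maximum of 2 entries, as an example
                    // The cleanup that LruCache delegates to entryRemoved:
                    eldest.getValue().release();
                    released.add(eldest.getKey());
                    return true;
                }
                return false;
            }
        };
        cache.put("a", new Resource("a"));
        cache.put("b", new Resource("b"));
        cache.put("c", new Resource("c")); // evicts and releases "a"
        return released;
    }

    public static void main(String[] args) {
        System.out.println(releasedAfterOverflow());
    }
}
```

Without the release() call, the evicted Resource would simply wait for garbage collection, and a pinned reference elsewhere would keep its memory alive indefinitely.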

Download demo source code
