LruCache Explained: Android Memory Optimization (Java/Android)

Source: Internet
Author: User
Concept:

LruCache
What is LruCache?
What is the principle behind LruCache's implementation?

These two questions are really one: once you know what LruCache is, you know the principle behind its implementation. LRU stands for Least Recently Used. From that we can infer how LruCache works: it removes the least recently used entries from the cache, keeping the most frequently used data. To see how the code actually achieves this, let's step into the source.

LruCache Source Analysis
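Before diving in, the ordering property that LruCache builds on is easy to see in plain Java: a LinkedHashMap constructed with accessOrder = true iterates its entries from least recently used to most recently used. A minimal sketch (ordinary JDK code, not Android):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class AccessOrderDemo {
    public static void main(String[] args) {
        // accessOrder = true: iteration order is least recently used first.
        Map<String, Integer> map = new LinkedHashMap<>(0, 0.75f, true);
        map.put("a", 1);
        map.put("b", 2);
        map.put("c", 3);
        map.get("a"); // touching "a" moves it to the most recently used end
        // An LRU eviction would now remove "b", the eldest entry.
        System.out.println(map.keySet()); // prints [b, c, a]
    }
}
```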

public class LruCache<K, V> {
    // The cache is backed by a LinkedHashMap in access order: every access
    // re-sorts the entries, so the eldest entry is always the least
    // recently used one and is the next candidate for removal.
    private final LinkedHashMap<K, V> map;

    private int size;          // current size of the cache
    private int maxSize;       // maximum size of the cache

    private int putCount;      // number of additions to the cache
    private int createCount;   // number of values created by create()
    private int evictionCount; // number of entries evicted
    private int hitCount;      // number of cache hits
    private int missCount;     // number of cache misses

    // To instantiate the cache you must pass in its maximum size. The
    // maximum can be a count (e.g. a number of objects) or a memory budget
    // (e.g. "cache at most 5 MB"), depending on what sizeOf() returns.
    public LruCache(int maxSize) {
        if (maxSize <= 0) {
            throw new IllegalArgumentException("maxSize <= 0");
        }
        this.maxSize = maxSize;
        this.map = new LinkedHashMap<K, V>(0, 0.75f, true);
    }

    // Resets the maximum cache size.
    public void resize(int maxSize) {
        if (maxSize <= 0) {
            throw new IllegalArgumentException("maxSize <= 0");
        }
        synchronized (this) {
            this.maxSize = maxSize;
        }
        trimToSize(maxSize);
    }

    // Gets the cached value for key.
    public final V get(K key) {
        if (key == null) {
            throw new NullPointerException("key == null");
        }

        V mapValue;
        synchronized (this) {
            mapValue = map.get(key);
            if (mapValue != null) {
                hitCount++;
                return mapValue;
            }
            missCount++;
        }

        // On a miss, the user's create() gets a chance to build the value.
        V createdValue = create(key);
        if (createdValue == null) {
            return null;
        }

        synchronized (this) {
            createCount++;
            mapValue = map.put(key, createdValue);
            if (mapValue != null) {
                // There was a conflict, so undo that last put.
                map.put(key, mapValue);
            } else {
                // The cache size has changed.
                size += safeSizeOf(key, createdValue);
            }
        }

        if (mapValue != null) {
            // Nothing was removed here; the entry only changed position.
            entryRemoved(false, key, createdValue, mapValue);
            return mapValue;
        } else {
            // Check whether the cache has gone over its limit.
            trimToSize(maxSize);
            return createdValue;
        }
    }

    // Adds a value to the cache.
    public final V put(K key, V value) {
        if (key == null || value == null) {
            throw new NullPointerException("key == null || value == null");
        }

        V previous;
        synchronized (this) {
            putCount++;
            size += safeSizeOf(key, value);
            previous = map.put(key, value);
            if (previous != null) {
                size -= safeSizeOf(key, previous);
            }
        }

        if (previous != null) {
            entryRemoved(false, key, previous, value);
        }

        trimToSize(maxSize);
        return previous;
    }

    // Detects whether the cache is over its limit and evicts until it fits.
    private void trimToSize(int maxSize) {
        while (true) {
            K key;
            V value;
            synchronized (this) {
                if (size < 0 || (map.isEmpty() && size != 0)) {
                    throw new IllegalStateException(getClass().getName()
                            + ".sizeOf() is reporting inconsistent results!");
                }

                // If we are within the limit, we are done.
                if (size <= maxSize) {
                    break;
                }

                // Past this point the cache has exceeded its maximum size.
                // With accessOrder = true, the first entry in iteration
                // order is the least recently used one, so evict it.
                Map.Entry<K, V> toEvict = null;
                if (!map.isEmpty()) {
                    toEvict = map.entrySet().iterator().next();
                }
                if (toEvict == null) {
                    break;
                }

                key = toEvict.getKey();
                value = toEvict.getValue();
                map.remove(key);
                size -= safeSizeOf(key, value);
                evictionCount++;
            }
            entryRemoved(true, key, value, null);
        }
    }

    // Manual removal, called by the user.
    public final V remove(K key) {
        if (key == null) {
            throw new NullPointerException("key == null");
        }

        V previous;
        synchronized (this) {
            previous = map.remove(key);
            if (previous != null) {
                size -= safeSizeOf(key, previous);
            }
        }

        if (previous != null) {
            entryRemoved(false, key, previous, null);
        }
        return previous;
    }

    // Users can override this to implement their own cleanup, e.g.
    // releasing the memory behind an evicted value.
    protected void entryRemoved(boolean evicted, K key, V oldValue, V newValue) {}

    protected V create(K key) {
        return null;
    }

    private int safeSizeOf(K key, V value) {
        int result = sizeOf(key, value);
        if (result < 0) {
            throw new IllegalStateException("Negative size: " + key + "=" + value);
        }
        return result;
    }

    // This method deserves special attention: it must echo the maxSize
    // passed to the constructor. If maxSize is a number of entries,
    // returning 1 here is fine; but if maxSize is a memory budget such as
    // 5 MB, this must return the size of each cached value instead --
    // for a Bitmap, that would be bitmap.getByteCount().
    protected int sizeOf(K key, V value) {
        return 1;
    }

    // Empties the cache.
    public final void evictAll() {
        trimToSize(-1); // -1 will evict 0-sized elements
    }

    public synchronized final int size() { return size; }
    public synchronized final int maxSize() { return maxSize; }
    public synchronized final int hitCount() { return hitCount; }
    public synchronized final int missCount() { return missCount; }
    public synchronized final int createCount() { return createCount; }
    public synchronized final int putCount() { return putCount; }
    public synchronized final int evictionCount() { return evictionCount; }

    public synchronized final Map<K, V> snapshot() {
        return new LinkedHashMap<K, V>(map);
    }
}
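The eviction policy can also be reproduced outside Android with java.util.LinkedHashMap.removeEldestEntry, which is a convenient way to sanity-check the behavior described above in a plain JVM. This is an illustrative sketch, not the Android API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class EvictionDemo {
    // Access-ordered map that drops its eldest entry past maxEntries,
    // mirroring what LruCache.trimToSize() does when sizeOf() returns 1.
    static <K, V> Map<K, V> lruMap(final int maxEntries) {
        return new LinkedHashMap<K, V>(0, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxEntries;
            }
        };
    }

    public static void main(String[] args) {
        Map<String, Integer> cache = lruMap(3);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.put("c", 3);
        cache.get("a");    // "a" becomes the most recently used entry
        cache.put("d", 4); // evicts "b", the least recently used entry
        System.out.println(cache.keySet()); // prints [c, a, d]
    }
}
```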
LruCache Use

Let's take a look at two graphs of memory usage:

                             Figure-1

                            Figure-2

Both memory-analysis graphs come from the same application; the only difference is that Figure-1 does not use LruCache while Figure-2 does. The difference is obvious: Figure-1's memory usage is noticeably higher, hovering around 30 MB, while Figure-2 stays around 20 MB, a saving of nearly 10 MB.

OK, let's post the implementation code below.

/**
 * Created by Gyzhong on 15/4/5.
 */
public class LruPageAdapter extends PagerAdapter {
    private List<String> mData;
    private LruCache<String, Bitmap> mLruCache;
    private int mTotalSize = (int) Runtime.getRuntime().totalMemory();
    private ViewPager mViewPager;

    public LruPageAdapter(ViewPager viewPager, List<String> data) {
        mData = data;
        mViewPager = viewPager;
        /* Instantiate the LruCache. */
        mLruCache = new LruCache<String, Bitmap>(mTotalSize / 5) {
            /* Called when an entry is dropped because the cache grew past
               the maximum we set; we can use it to release memory. */
            @Override
            protected void entryRemoved(boolean evicted, String key,
                    Bitmap oldValue, Bitmap newValue) {
                super.entryRemoved(evicted, key, oldValue, newValue);
                if (evicted && oldValue != null) {
                    oldValue.recycle();
                }
            }

            /* Creates the Bitmap on a cache miss. */
            @Override
            protected Bitmap create(String key) {
                final int resId = mViewPager.getResources().getIdentifier(key,
                        "drawable", mViewPager.getContext().getPackageName());
                return BitmapFactory.decodeResource(mViewPager.getResources(), resId);
            }

            /* Gets the size of each value, in bytes. */
            @Override
            protected int sizeOf(String key, Bitmap value) {
                return value.getByteCount();
            }
        };
    }

    @Override
    public Object instantiateItem(ViewGroup container, int position) {
        View view = LayoutInflater.from(container.getContext())
                .inflate(R.layout.view_pager_item, null);
        ImageView imageView = (ImageView) view.findViewById(R.id.id_view_pager_item);
        Bitmap bitmap = mLruCache.get(mData.get(position));
        imageView.setImageBitmap(bitmap);
        container.addView(view);
        return view;
    }

    @Override
    public void destroyItem(ViewGroup container, int position, Object object) {
        container.removeView((View) object);
    }

    @Override
    public int getCount() { return mData.size(); }

    @Override
    public boolean isViewFromObject(View view, Object object) { return view == object; }
}
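One caveat on sizing: the sample divides Runtime.totalMemory() by five, but totalMemory() is only the heap allocated so far and can still grow. A common alternative is to budget a fraction of maxMemory(), the hard heap ceiling. A sketch (the one-eighth fraction is just a conventional choice, not a requirement):

```java
public class CacheSizeDemo {
    public static void main(String[] args) {
        // maxMemory() is the hard ceiling the VM will ever use, unlike
        // totalMemory(), which is only the heap allocated so far.
        int cacheSize = (int) (Runtime.getRuntime().maxMemory() / 8);
        System.out.println(cacheSize > 0);
    }
}
```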
Summary

1. LruCache is a caching mechanism based on the LRU (least recently used) algorithm.
2. The LRU algorithm removes the least recently used data, and only does so once the amount of cached data exceeds the configured maximum.
3. LruCache does not actually free memory; it only removes entries from its map. Really releasing the memory is still up to the user.
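Point 3 is worth demonstrating: eviction only drops the map's reference, and the actual cleanup belongs in a hook like entryRemoved(). The sketch below mimics that in plain Java with a hypothetical Resource type standing in for a Bitmap and its recycle() call:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ReleaseDemo {
    // Hypothetical resource with an explicit release() call,
    // standing in for Bitmap.recycle().
    static class Resource {
        boolean released = false;
        void release() { released = true; }
    }

    static Map<String, Resource> cacheOf(final int maxEntries) {
        return new LinkedHashMap<String, Resource>(0, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Resource> eldest) {
                if (size() > maxEntries) {
                    // The map only drops its reference here; actually
                    // freeing the resource is our job, just as LruCache
                    // leaves it to the user in entryRemoved().
                    eldest.getValue().release();
                    return true;
                }
                return false;
            }
        };
    }

    public static void main(String[] args) {
        Map<String, Resource> cache = cacheOf(2);
        Resource a = new Resource();
        cache.put("a", a);
        cache.put("b", new Resource());
        cache.put("c", new Resource()); // evicts "a" and releases it
        System.out.println(a.released); // prints true
    }
}
```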
