Android LRU Cache Algorithm Implementation Notes (II): Applying LRU

In the previous article, Android LRU Cache Algorithm Implementation Notes (I), we introduced LinkedHashMap, the data structure most commonly used to implement an LRU cache. In this article we will use the characteristics of LinkedHashMap to build a cache structure, and then study how the Android source code and real projects refine that cache.

As discussed in the previous article, a cache implementation has to consider the following points: 1. access speed; 2. a policy for evicting old entries; 3. ideally, some degree of concurrency. LinkedHashMap is backed by a hash table, which gives our cache fast access. From the source code we also know that by default a LinkedHashMap has unlimited capacity and its entries never expire. In mobile development, however, memory is precious and every byte counts, so in an Android app we must override the eviction policy. A simple eviction policy implementation of my own looks like this:

import java.util.LinkedHashMap;
import java.util.Map;

public class LruCache<K, V> extends LinkedHashMap<K, V> {

    private static final long serialVersionUID = 1L;

    /** Maximum data storage capacity */
    private static final int LRU_MAX_CAPACITY = 1024;

    /** Storage data capacity */
    private int capacity;

    /** Default constructor (delegates so that capacity and access order are initialized) */
    public LruCache() {
        this(16, 0.75f, true);
    }

    /** The default maximum cache size is LRU_MAX_CAPACITY */
    public LruCache(int initialCapacity, float loadFactor, boolean isLRU) {
        super(initialCapacity, loadFactor, isLRU);
        capacity = LRU_MAX_CAPACITY;
    }

    public LruCache(int initialCapacity, float loadFactor, boolean isLRU, int lruCapacity) {
        super(initialCapacity, loadFactor, isLRU);
        this.capacity = lruCapacity;
    }

    /**
     * Override removeEldestEntry to replace the default eviction policy
     * (by default a LinkedHashMap entry never expires).
     */
    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }
}
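A short usage sketch (my own illustration; the key and value types are arbitrary) showing the eviction policy at work with a capacity of three and access ordering enabled:

public class LruCacheDemo {
    public static void main(String[] args) {
        // Capacity of 3, ordered by access (isLRU = true).
        LruCache<String, String> cache = new LruCache<String, String>(16, 0.75f, true, 3);

        cache.put("a", "1");
        cache.put("b", "2");
        cache.put("c", "3");
        cache.get("a");       // touch "a" so it becomes the most recently used entry
        cache.put("d", "4");  // exceeds the capacity, so the eldest entry ("b") is evicted

        System.out.println(cache.keySet()); // prints [c, a, d]
    }
}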
The LruCache above may be problematic in a multithreaded environment, because the map is a resource shared by multiple threads, so access to it must be synchronized. In a multithreaded environment we can use Collections.synchronizedMap() to make our LruCache thread-safe.
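A minimal sketch of that approach, assuming the inheritance-based LruCache defined above:

import java.util.Collections;
import java.util.Map;

public class SynchronizedLruCacheDemo {
    public static void main(String[] args) {
        // Every call on the wrapper synchronizes on an internal mutex,
        // so the access-order updates inside get() are also serialized.
        Map<String, String> cache =
                Collections.synchronizedMap(new LruCache<String, String>(16, 0.75f, true, 3));

        cache.put("a", "1");
        cache.put("b", "2");
        System.out.println(cache.get("a"));

        // Iteration still has to be synchronized manually on the wrapper itself.
        synchronized (cache) {
            for (Map.Entry<String, String> entry : cache.entrySet()) {
                System.out.println(entry.getKey() + " -> " + entry.getValue());
            }
        }
    }
}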

The code above can also be written another way: instead of extending LinkedHashMap, we can use delegation (personally I think aggregation is the more accurate term), and implement thread-safe access to the map ourselves.

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public class LruCache<K, V> {

    /** Maximum data storage capacity */
    private static final int LRU_MAX_CAPACITY = 1024;

    LinkedHashMap<K, V> map;

    /** Storage data capacity */
    private int capacity;

    /** Default constructor (delegates so that the map is actually created) */
    public LruCache() {
        this(16, 0.75f, true);
    }

    /** The default maximum cache size is LRU_MAX_CAPACITY */
    public LruCache(int initialCapacity, float loadFactor, boolean isLRU) {
        capacity = LRU_MAX_CAPACITY;
        map = new LinkedHashMap<K, V>(initialCapacity, loadFactor, isLRU) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > capacity;
            }
        };
    }

    public LruCache(int initialCapacity, float loadFactor, boolean isLRU, int lruCapacity) {
        this.capacity = lruCapacity;
        map = new LinkedHashMap<K, V>(initialCapacity, loadFactor, isLRU) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > capacity;
            }
        };
    }

    public synchronized void put(K key, V value) {
        map.put(key, value);
    }

    public synchronized V get(K key) {
        return map.get(key);
    }

    public synchronized void remove(K key) {
        map.remove(key);
    }

    public synchronized Set<Map.Entry<K, V>> getAll() {
        return map.entrySet();
    }

    public synchronized int size() {
        return map.size();
    }

    public synchronized void clear() {
        map.clear();
    }
}

The above gives us a thread-safe cache structure built on LinkedHashMap. Can we improve its concurrency further? Based on experience, read-write locks come to mind: by applying different lock levels to reads and writes, multiple threads can read at the same time while writes remain exclusive, which should raise the cache's concurrency. We could write the following code:

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LruCache<K, V> {

    /** Maximum data storage capacity */
    private static final int LRU_MAX_CAPACITY = 1024;

    LinkedHashMap<K, V> map;

    private final ReadWriteLock rwLock = new ReentrantReadWriteLock();
    private final Lock readLock = rwLock.readLock();
    private final Lock writeLock = rwLock.writeLock();

    /** Storage data capacity */
    private int capacity;

    /** Default constructor (delegates so that the map is actually created) */
    public LruCache() {
        this(16, 0.75f, true);
    }

    /** The default maximum cache size is LRU_MAX_CAPACITY */
    public LruCache(int initialCapacity, float loadFactor, boolean isLRU) {
        capacity = LRU_MAX_CAPACITY;
        map = new LinkedHashMap<K, V>(initialCapacity, loadFactor, isLRU) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > capacity;
            }
        };
    }

    public LruCache(int initialCapacity, float loadFactor, boolean isLRU, int lruCapacity) {
        this.capacity = lruCapacity;
        map = new LinkedHashMap<K, V>(initialCapacity, loadFactor, isLRU) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > capacity;
            }
        };
    }

    public void put(K key, V value) {
        writeLock.lock();
        try {
            map.put(key, value);
        } finally {
            writeLock.unlock();
        }
    }

    public V get(K key) {
        // Note: as discussed below, a read lock is not actually enough here when
        // the map is access-ordered, because get() also modifies the linked list.
        readLock.lock();
        try {
            return map.get(key);
        } finally {
            readLock.unlock();
        }
    }

    public void remove(K key) {
        writeLock.lock();
        try {
            map.remove(key);
        } finally {
            writeLock.unlock();
        }
    }

    public Set<Map.Entry<K, V>> getAll() {
        readLock.lock();
        try {
            return map.entrySet();
        } finally {
            readLock.unlock();
        }
    }

    public int size() {
        readLock.lock();
        try {
            return map.size();
        } finally {
            readLock.unlock();
        }
    }

    public void clear() {
        writeLock.lock();
        try {
            map.clear();
        } finally {
            writeLock.unlock();
        }
    }
}
In a multithreaded environment the code above has a problem with how the read-write lock is split between get and put. Suppose our LruCache is accessed by multiple threads and several of them execute the get method (under the read lock) at the same time. Let's look at the get method:

public V get(Object key) {
    Entry<K,V> e = (Entry<K,V>) getEntry(key);
    if (e == null)
        return null;
    e.recordAccess(this); // if the enclosing map is access-ordered, this moves the
                          // entry to the end of the list; otherwise it does nothing
    return e.value;
}
When our LinkedHashMap is ordered by access, get moves the accessed node to the position just before the header of LinkedHashMap's circular doubly linked list, i.e. to the most recently used end. So when several threads execute get at the same time, we cannot guarantee that each call runs recordAccess to completion atomically, and the linked-list structure may be corrupted. Let's look at the recordAccess method:
void recordAccess(HashMap<K,V> m) {
    LinkedHashMap<K,V> lm = (LinkedHashMap<K,V>) m;
    if (lm.accessOrder) {
        lm.modCount++;
        remove();
        addBefore(lm.header);
    }
}
So our get method is not a pure read: it also changes LinkedHashMap's internal data structure, and under a read lock that modification is not exclusive when multiple readers run at once. We therefore cannot use read-write locks to separate reads from writes and improve the concurrency of LinkedHashMap.
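A small single-threaded illustration of this point (my own sketch, not from the original article): because get() counts as a structural change when accessOrder is true, even iterating while calling get() on the same map fails fast:

import java.util.LinkedHashMap;
import java.util.Map;

public class AccessOrderGetDemo {
    public static void main(String[] args) {
        // accessOrder = true: get() relinks the accessed entry to the tail.
        Map<String, String> map = new LinkedHashMap<String, String>(16, 0.75f, true);
        map.put("a", "1");
        map.put("b", "2");
        map.put("c", "3");

        // get() bumps the modification count, so the iterator detects a
        // structural change and throws ConcurrentModificationException.
        for (String key : map.keySet()) {
            map.get(key);
        }
    }
}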

Since read-write locks cannot improve the map's concurrency, we naturally think of the clever concurrency design of ConcurrentHashMap in the java.util.concurrent package introduced in JDK 1.5 (if you are unfamiliar with it, see my other article, Java Multithreading Study Notes: Starting from Map, on synchronization and concurrency). We can borrow from ConcurrentHashMap's design to improve the concurrency of our map. We know that LinkedHashMap is actually implemented by inheriting from HashMap and adding before and after references to HashMap's node:

private static class Entry<K,V> extends HashMap.Entry<K,V> {
    // These fields comprise the doubly linked list used for iteration.
    Entry<K,V> before, after;
    // ...
}
Similarly, we could implement a highly concurrent cache by following ConcurrentHashMap's approach. Because of my own limited level, I have not fully grasped the essence of ConcurrentHashMap and could not work the whole design out. I have seen someone online use ConcurrentHashMap's design to implement a high-concurrency LRU cache (see the Concurrenthasplruhashmap implementation).

Let's go back to Google's design of the Android cache and first look at the memory cache Google recommends, LruCache. Again we consider the same aspects of a cache: 1. access speed; 2. the policy for evicting old entries; 3. ideally, some degree of concurrency. Access speed is determined mainly by the data structure, and LruCache guarantees fast access to its entries by delegating to a LinkedHashMap. As for the eviction policy and concurrency, the LruCache source shows that LruCache does not simply call through to LinkedHashMap's get and put; it re-implements put and get itself. The put implementation looks like this:

public final V put(K key, V value) {
    if (key == null || value == null) {
        throw new NullPointerException("key == null || value == null");
    }

    V previous;
    synchronized (this) {
        putCount++;
        size += safeSizeOf(key, value);
        previous = map.put(key, value);
        if (previous != null) {
            // By default the "size" of an entry is not its byte size: the default
            // accounting counts every entry as 1. If we need to evict the eldest
            // entry based on actual memory usage, we must override sizeOf(),
            // which safeSizeOf() calls.
            size -= safeSizeOf(key, previous);
        }
    }

    if (previous != null) {
        // Called when an entry is removed; by default it does nothing, and we can
        // override it in a subclass. For example, on Android 2.3.3 (API 10) and
        // earlier a Bitmap object is stored separately from its pixel data: the
        // Bitmap object lives on the heap while its pixel data lives in native
        // memory, so when a Bitmap is evicted from the cache we also need to
        // release it manually. That is where overriding entryRemoved() comes in.
        entryRemoved(false, key, previous, value);
    }

    // By checking whether the cache has reached its maximum size,
    // this call attempts to evict entries.
    trimToSize(maxSize);
    return previous;
}
Looking at LruCache's code, we find that its put implementation makes no use of LinkedHashMap's removeEldestEntry to decide the caching policy; instead it evicts entries through the trimToSize method. So LruCache's eviction policy is no longer expressed as the template-method pattern of a subclass overriding removeEldestEntry; the policy is driven by trimToSize together with an overridden sizeOf. The implementation of trimToSize is as follows:

public void trimToSize(int maxSize) {
    while (true) {
        K key;
        V value;
        synchronized (this) {
            if (size < 0 || (map.isEmpty() && size != 0)) {
                throw new IllegalStateException(getClass().getName()
                        + ".sizeOf() is reporting inconsistent results!");
            }

            // When the cache size is no larger than maxSize, nothing is evicted.
            if (size <= maxSize || map.isEmpty()) {
                break;
            }

            // Eviction policy: while the cache size exceeds maxSize, take the
            // eldest entry (the first one returned by the iterator) and remove
            // it from the map.
            Map.Entry<K, V> toEvict = map.entrySet().iterator().next();
            key = toEvict.getKey();
            value = toEvict.getValue();
            map.remove(key);
            size -= safeSizeOf(key, value);
            evictionCount++;
        }

        entryRemoved(true, key, value, null);
    }
}
Now let's look at the implementation of the get method:

public final V get(K key) {
    if (key == null) {
        throw new NullPointerException("key == null");
    }

    V mapValue;
    synchronized (this) {
        mapValue = map.get(key);
        if (mapValue != null) {
            hitCount++;
            return mapValue;
        }
        missCount++;
    }

    /*
     * Attempt to create a value. This may take a long time, and the map
     * may be different when create() returns. If a conflicting value is
     * added to the map while create() is working, we leave that value in
     * the map and release the created value.
     */
    // create() returns null by default, so when get() finds nothing
    // the default result is null.
    V createdValue = create(key);
    if (createdValue == null) {
        return null;
    }

    synchronized (this) {
        createCount++;
        mapValue = map.put(key, createdValue);
        if (mapValue != null) {
            // There was a conflict, so undo that last put
            map.put(key, mapValue);
        } else {
            size += safeSizeOf(key, createdValue);
        }
    }

    if (mapValue != null) {
        entryRemoved(false, key, createdValue, mapValue);
        return mapValue;
    } else {
        trimToSize(maxSize);
        return createdValue;
    }
}
From the LruCache source we can see that there is nothing particularly surprising in the implementation. Personally I think the point of wrapping LinkedHashMap this way is that when an entry is released from the map, the entryRemoved hook gives us a place to do cleanup work.
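As a small illustration of that design (a sketch of my own using android.util.LruCache, not code from the article; sizing the cache to one eighth of the VM memory is only a common rule of thumb), a bitmap memory cache that measures entries in kilobytes and hooks entryRemoved for cleanup might look like this:

import android.graphics.Bitmap;
import android.util.LruCache;

public class BitmapMemoryCache {

    private final LruCache<String, Bitmap> mCache;

    public BitmapMemoryCache() {
        // Assumption: use one eighth of the available VM memory, measured in KB.
        int maxMemoryKb = (int) (Runtime.getRuntime().maxMemory() / 1024);
        mCache = new LruCache<String, Bitmap>(maxMemoryKb / 8) {
            @Override
            protected int sizeOf(String key, Bitmap value) {
                // Measure each entry by its pixel data size in KB instead of
                // the default of counting every entry as 1.
                return value.getRowBytes() * value.getHeight() / 1024;
            }

            @Override
            protected void entryRemoved(boolean evicted, String key,
                                        Bitmap oldValue, Bitmap newValue) {
                // Cleanup hook: e.g. on API 10 and below the evicted bitmap's
                // native pixel memory could be released here.
            }
        };
    }

    public void put(String url, Bitmap bitmap) {
        mCache.put(url, bitmap);
    }

    public Bitmap get(String url) {
        return mCache.get(url);
    }
}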

In actual development we often combine LruCache with a Set<SoftReference<Bitmap>> to implement a cache. We know that since Android 2.3, using SoftReference or WeakReference for image caching is no longer recommended, because the Dalvik GC reclaims SoftReference and WeakReference objects much more aggressively; we can no longer rely on a collection of SoftReferences as the cache itself, but SoftReference can still serve as a secondary cache. Here we study how a strongly referenced LruCache and a Set<SoftReference<Bitmap>> implement a memory cache together, using the open-source android-bitmapcache project on GitHub.

final class BitmapMemoryLruCache extends LruCache<String, CacheableBitmapDrawable> {

    // This Set of SoftReferences holds the entries handed back by entryRemoved()
    // when the LruCache evicts a node.
    private final Set<SoftReference<CacheableBitmapDrawable>> mRemovedEntries;

    // The recycle policy decides whether the current bitmap should be recycled manually.
    private final BitmapLruCache.RecyclePolicy mRecyclePolicy;

    BitmapMemoryLruCache(int maxSize, BitmapLruCache.RecyclePolicy policy) {
        super(maxSize);

        mRecyclePolicy = policy;
        mRemovedEntries = policy.canInBitmap()
                ? Collections.synchronizedSet(new HashSet<SoftReference<CacheableBitmapDrawable>>())
                : null;
    }

    CacheableBitmapDrawable put(CacheableBitmapDrawable value) {
        if (null != value) {
            value.setCached(true);
            return put(value.getUrl(), value);
        }
        return null;
    }

    BitmapLruCache.RecyclePolicy getRecyclePolicy() {
        return mRecyclePolicy;
    }

    @Override
    protected int sizeOf(String key, CacheableBitmapDrawable value) {
        // By overriding this method we get the exact size of the bitmap, so the
        // eviction policy reacts to memory size rather than the default policy
        // of counting entries.
        return value.getMemorySize();
    }

    @Override
    protected void entryRemoved(boolean evicted, String key, CacheableBitmapDrawable oldValue,
            CacheableBitmapDrawable newValue) {
        // When a node is evicted, put it into our set of soft references.
        // Notify the wrapper that it's no longer being cached.
        oldValue.setCached(false);

        if (mRemovedEntries != null && oldValue.isBitmapValid() && oldValue.isBitmapMutable()) {
            synchronized (mRemovedEntries) {
                mRemovedEntries.add(new SoftReference<CacheableBitmapDrawable>(oldValue));
            }
        }
    }

    Bitmap getBitmapFromRemoved(final int width, final int height) {
        // Fetch a value from the set of soft references evicted by the LruCache.
        if (mRemovedEntries == null) {
            return null;
        }

        Bitmap result = null;

        synchronized (mRemovedEntries) {
            final Iterator<SoftReference<CacheableBitmapDrawable>> it = mRemovedEntries.iterator();

            while (it.hasNext()) {
                CacheableBitmapDrawable value = it.next().get();

                if (value != null && value.isBitmapValid() && value.isBitmapMutable()) {
                    if (value.getIntrinsicWidth() == width
                            && value.getIntrinsicHeight() == height) {
                        it.remove();
                        result = value.getBitmap();
                        break;
                    }
                } else {
                    it.remove();
                }
            }
        }

        return result;
    }

    void trimMemory() {
        final Set<Map.Entry<String, CacheableBitmapDrawable>> values = snapshot().entrySet();

        for (Map.Entry<String, CacheableBitmapDrawable> entry : values) {
            CacheableBitmapDrawable value = entry.getValue();
            if (null == value || !value.isBeingDisplayed()) {
                remove(entry.getKey());
            }
        }
    }
}
Readers interested in the full code can study the complete android-bitmapcache source themselves (https://github.com/chrisbanes/Android-BitmapCache).
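For context, the soft-reference set kept by BitmapMemoryLruCache exists so that evicted, mutable bitmaps can be reused when decoding new images. A rough sketch of how that might be wired up (my own assumption about the usage, ignoring package visibility; decodeWithReuse is a hypothetical helper, not part of the library):

import android.content.res.Resources;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

public final class ReuseDecodeHelper {

    // Hypothetical helper: decode a resource, reusing an evicted bitmap of
    // matching dimensions from the cache's soft-reference set when available.
    static Bitmap decodeWithReuse(Resources res, int resId,
                                  BitmapMemoryLruCache cache, int width, int height) {
        BitmapFactory.Options options = new BitmapFactory.Options();
        options.inMutable = true;

        Bitmap reusable = cache.getBitmapFromRemoved(width, height);
        if (reusable != null) {
            // inBitmap (API 11+) lets the decoder reuse the old bitmap's pixel
            // memory; on older platforms the dimensions must match exactly.
            options.inBitmap = reusable;
        }

        return BitmapFactory.decodeResource(res, resId, options);
    }
}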

For reprints, please credit the source: http://blog.csdn.net/johnnyz1234/article/details/43958147
