"Go" Completely parse the Android cache mechanism -- LruCache

Source: Internet
Author: User


Android's three-level cache consists mainly of a memory cache and a disk cache. Both caching mechanisms are implemented with the LRU cache algorithm. Today we will walk through the source code to thoroughly understand the caching mechanism in Android.

First, the cache policy in Android

In general, a caching strategy covers adding, fetching, and deleting cache entries. Adding and fetching are easy to understand, but why delete entries? Because both the memory cache and the disk cache have limited capacity: when the cache is full and you want to add a new entry, you must first delete some old entries to make room.

Hence the LRU (Least Recently Used) cache algorithm came into being. Its core idea is that when the cache is full, the least recently used cache objects are evicted first. Two caches use the LRU algorithm: LruCache and DiskLruCache, which implement the memory cache and the disk cache respectively; the core idea of both is the LRU algorithm.

Second, the use of LruCache

LruCache is a cache class provided since Android 3.1, so you can use LruCache directly to implement a memory cache in Android. DiskLruCache is currently not part of the Android SDK, but Android's official documentation recommends it for disk caching.

1. Introduction of LruCache

LruCache is a generic class. Its main principle is to store the most recently used objects in a LinkedHashMap as strong references (the ordinary way we hold object references). When the cache is full, the least recently used objects are removed from memory. It provides get() and put() methods to fetch and add cache entries.

2. Use of LruCache

Using LruCache is very simple; we will take an image cache as an example.

int maxMemory = (int) (Runtime.getRuntime().maxMemory() / 1024); // max heap size in KB
int cacheSize = maxMemory / 8;
mMemoryCache = new LruCache<String, Bitmap>(cacheSize) {
    @Override
    protected int sizeOf(String key, Bitmap value) {
        // Measure each bitmap in KB, the same unit as cacheSize
        return value.getRowBytes() * value.getHeight() / 1024;
    }
};

① Set the size of the LruCache, typically 1/8 of the maximum memory available to the current process.
② Override the sizeOf() method to calculate the size of each image to be cached.

Note: the unit of the total cache capacity must be consistent with the unit returned by sizeOf() for each cache object (KB in this example).

Third, the implementation principle of LruCache

The core idea of LruCache is to maintain a queue of cached objects ordered by access: objects that have not been accessed recently sit at the tail of the queue and are the first to be evicted, while the most recently accessed objects are moved to the head of the queue and are evicted last.

So who maintains this queue? As mentioned above, it is maintained by LinkedHashMap.

LinkedHashMap is implemented with an array plus a doubly linked list. The doubly linked list can preserve either insertion order or access order, so the <key, value> entries in a LinkedHashMap are kept in a well-defined order.

The following constructor specifies whether the doubly linked list in LinkedHashMap uses access order or insertion order.

public LinkedHashMap(int initialCapacity, float loadFactor, boolean accessOrder) {
    super(initialCapacity, loadFactor);
    this.accessOrder = accessOrder;
}

When accessOrder is set to true the map uses access order; when false, insertion order.

Let's illustrate with a concrete example, setting accessOrder to true:

public static void main(String[] args) {
    LinkedHashMap<Integer, Integer> map = new LinkedHashMap<>(0, 0.75f, true);
    map.put(0, 0);
    map.put(1, 1);
    map.put(2, 2);
    map.put(3, 3);
    map.put(4, 4);
    map.put(5, 5);
    map.put(6, 6);
    map.get(1);
    map.get(2);
    for (Map.Entry<Integer, Integer> entry : map.entrySet()) {
        System.out.println(entry.getKey() + ":" + entry.getValue());
    }
}

Output Result:

0:0
3:3
4:4
5:5
6:6
1:1
2:2

That is, the most recently accessed entries are output last, which is exactly the ordering the LRU cache algorithm needs. The ingenuity of LruCache is precisely that it takes advantage of this property of LinkedHashMap.
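For contrast, here is a small sketch (class name InsertionOrderDemo is illustrative) showing the same sequence of operations with accessOrder set to false: the entries then iterate in insertion order, and the get() calls have no effect on ordering.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class InsertionOrderDemo {
    public static void main(String[] args) {
        // accessOrder = false: the doubly linked list keeps insertion order
        LinkedHashMap<Integer, Integer> map = new LinkedHashMap<>(0, 0.75f, false);
        for (int i = 0; i <= 6; i++) {
            map.put(i, i);
        }
        // These lookups do NOT reorder the entries
        map.get(1);
        map.get(2);
        for (Map.Entry<Integer, Integer> entry : map.entrySet()) {
            System.out.println(entry.getKey() + ":" + entry.getValue());
        }
        // Prints 0:0 through 6:6 in insertion order
    }
}
```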

Below we look at the LruCache source code to see concretely how LinkedHashMap is used to implement adding, getting and deleting cache entries.

public LruCache(int maxSize) {
    if (maxSize <= 0) {
        throw new IllegalArgumentException("maxSize <= 0");
    }
    this.maxSize = maxSize;
    this.map = new LinkedHashMap<K, V>(0, 0.75f, true);
}

From the LruCache constructor you can see that it uses a LinkedHashMap in access order.

put() method

public final V put(K key, V value) {
    // Key and value cannot be null, otherwise throw an exception
    if (key == null || value == null) {
        throw new NullPointerException("key == null || value == null");
    }

    V previous;
    synchronized (this) {
        // Count this insertion
        putCount++;
        // Grow the recorded cache size by the size of the new entry
        size += safeSizeOf(key, value);
        // Add the cache object to the map
        previous = map.put(key, value);
        // If an entry already existed for this key, restore the size
        if (previous != null) {
            size -= safeSizeOf(key, previous);
        }
    }

    // entryRemoved() is an empty method that subclasses may override
    if (previous != null) {
        entryRemoved(false, key, previous, value);
    }

    // Adjust the cache size (the key method)
    trimToSize(maxSize);
    return previous;
}

There is nothing difficult in the put() method. The important point is that after the cache object is added, trimToSize() is called to determine whether the cache is full and, if so, to delete the least recently used entries.
trimToSize() method

public void trimToSize(int maxSize) {
    // Loop until the cache is within bounds
    while (true) {
        K key;
        V value;
        synchronized (this) {
            // Throw an exception if the bookkeeping is inconsistent:
            // a negative size, or a non-zero size with an empty map
            if (size < 0 || (map.isEmpty() && size != 0)) {
                throw new IllegalStateException(
                        getClass().getName() + ".sizeOf() is reporting inconsistent results!");
            }

            // If the cache is within the limit or the map is empty,
            // there is nothing to evict; exit the loop
            if (size <= maxSize || map.isEmpty()) {
                break;
            }

            // The iterator's first entry is the eldest, i.e. the least
            // recently accessed element at the tail of the access queue
            Map.Entry<K, V> toEvict = map.entrySet().iterator().next();
            key = toEvict.getKey();
            value = toEvict.getValue();

            // Remove the entry and update the recorded cache size
            map.remove(key);
            size -= safeSizeOf(key, value);
            evictionCount++;
        }

        entryRemoved(true, key, value, null);
    }
}

The trimToSize() method keeps removing the eldest entry of the LinkedHashMap, that is, the element at the tail of the access queue, the least recently accessed one, until the cache size is no larger than the maximum.
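The eviction loop can be demonstrated with plain JDK classes. The sketch below (class name TrimDemo and the maxSize of 2 are illustrative, not Android's source) fills an access-ordered LinkedHashMap, touches one key, then evicts eldest entries exactly as trimToSize() does:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;

public class TrimDemo {
    public static void main(String[] args) {
        // Access-ordered map standing in for LruCache's internal map
        LinkedHashMap<String, String> map = new LinkedHashMap<>(0, 0.75f, true);
        map.put("a", "1");
        map.put("b", "2");
        map.put("c", "3");
        map.put("d", "4");

        // Touch "a" so it becomes the most recently used entry
        map.get("a");

        // The trim loop: keep evicting the eldest entry (first in
        // iteration order, i.e. the least recently accessed) until
        // the map fits within maxSize
        int maxSize = 2;
        while (map.size() > maxSize) {
            String eldest = map.keySet().iterator().next();
            map.remove(eldest);
        }

        System.out.println(new ArrayList<>(map.keySet())); // [d, a]
    }
}
```

Note that "a" survives despite being inserted first, because the get() call moved it to the most-recently-used end before trimming.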

When you call LruCache's get() method to fetch a cached object, the element counts as accessed once and the queue is updated, keeping the whole queue sorted in access order. This update is done by LinkedHashMap's get() method.

get() method

First look at the get() method of LruCache:

public final V get(K key) {
    // Throw an exception if the key is null
    if (key == null) {
        throw new NullPointerException("key == null");
    }

    V mapValue;
    synchronized (this) {
        // Fetch the cached object; LinkedHashMap's get() moves the
        // accessed entry to the head of the access queue
        mapValue = map.get(key);
        if (mapValue != null) {
            hitCount++;
            return mapValue;
        }
        missCount++;
    }

    // On a miss, LruCache then tries create(key), which returns null
    // by default; that branch is omitted here
    return null;
}

LinkedHashMap's get() method is as follows:

public V get(Object key) {
    LinkedHashMapEntry<K,V> e = (LinkedHashMapEntry<K,V>) getEntry(key);
    if (e == null)
        return null;
    // The key method that implements access ordering
    e.recordAccess(this);
    return e.value;
}

It calls the recordAccess() method, shown below:

void recordAccess(HashMap<K,V> m) {
    LinkedHashMap<K,V> lm = (LinkedHashMap<K,V>) m;
    // Only reorder when access ordering is enabled
    if (lm.accessOrder) {
        lm.modCount++;
        // Unlink this entry from its current position
        remove();
        // Move this entry to the head of the queue
        addBefore(lm.header);
    }
}

To summarize: LruCache maintains a LinkedHashMap sorted in access order. When put() is called, the element is added to the map and trimToSize() is invoked to check whether the cache is full; if it is, the LinkedHashMap's iterator is used to remove the eldest entry, i.e. the least recently accessed one. When get() is called to access a cached object, LinkedHashMap's get() fetches the element and moves it to the head of the queue.
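The same mechanism can be built with LinkedHashMap's own removeEldestEntry() hook. The sketch below (MiniLruCache is an illustrative name, not Android's LruCache, which additionally provides sizeOf(), synchronization, and hit/miss counting) shows the core LRU behavior end to end:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A minimal plain-JDK LRU cache built on LinkedHashMap's
// removeEldestEntry() hook -- a sketch of the same idea
public class MiniLruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxSize;

    public MiniLruCache(int maxSize) {
        super(0, 0.75f, true); // access order, as in LruCache
        this.maxSize = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Called after each put(); returning true evicts the LRU entry
        return size() > maxSize;
    }

    public static void main(String[] args) {
        MiniLruCache<Integer, String> cache = new MiniLruCache<>(2);
        cache.put(1, "one");
        cache.put(2, "two");
        cache.get(1);          // touch 1 so 2 becomes the eldest entry
        cache.put(3, "three"); // exceeds maxSize, evicts 2
        System.out.println(cache.keySet()); // [1, 3]
    }
}
```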

That is the implementation principle of LruCache. Once you understand the LinkedHashMap data structure, the whole mechanism is clear; if anything is still unclear, look at the concrete implementation of LinkedHashMap first.

