A Look into Android's LruCache

Source: Internet
Author: User

We know that LinkedHashMap reserves a hook for building a map cache with a specific replacement policy: override the removeEldestEntry method, as follows:

	private static final int MAX_ENTRIES = 100;

	protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
		return size() > MAX_ENTRIES;
	}
However, LinkedHashMap's implementation does not consider concurrent access; that is, accessing a LinkedHashMap from multiple threads without external synchronization is unsafe.
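As a quick illustration of the hook above (the class name SimpleLruCache and the constant MAX_ENTRIES are mine, not from any library), a minimal LRU cache built directly on LinkedHashMap might look like this:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch: LinkedHashMap evicts the eldest entry automatically once
// removeEldestEntry starts returning true. Like LinkedHashMap itself, this
// is NOT thread-safe.
class SimpleLruCache<K, V> extends LinkedHashMap<K, V> {
    private static final int MAX_ENTRIES = 3;

    SimpleLruCache() {
        // accessOrder = true: iteration order tracks access recency,
        // so the eldest entry is the least recently used one
        super(16, 0.75f, true);
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > MAX_ENTRIES;
    }
}
```

Inserting a fourth entry then evicts whichever entry was least recently accessed.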

1. Android LruCache Overview

The Android developer documentation gives the following description:

A cache that holds strong references to a limited number of values. Each time a value is accessed, it is moved to the head of a queue. When a value is added to a full cache, the value at the end of that queue is evicted and may become eligible for garbage collection.

If your cached values hold resources that need to be explicitly released, override entryRemoved(boolean, K, V, V).

If a cache miss should be computed on demand for the corresponding keys, override create(K). This simplifies the calling code, allowing it to assume a value will always be returned, even when there's a cache miss.

By default, the cache size is measured in the number of entries. Override sizeOf(K, V) to size the cache in different units.

As can be seen, Android's LruCache offers several flexible extension points: entryRemoved(boolean, K, V, V) releases resources held by a particular cache entry, and create(K) produces a value for a given key on a cache miss. Both are no-ops in the default implementation.

2. Underlying Data Structure Support

I originally assumed that Android's LruCache extended LinkedHashMap and overrode removeEldestEntry, but the source shows a completely different approach. First, look at LruCache's internal fields and its constructor:
	private final LinkedHashMap<K, V> map;  // a LinkedHashMap held as a member variable

	/** Size of this cache in units. Not necessarily the number of elements. */
	private int size;
	private int maxSize;                    // upper limit on the cache size, in units

	private int putCount;
	private int createCount;
	private int evictionCount;
	private int hitCount;
	private int missCount;

	/**
	 * @param maxSize for caches that do not override {@link #sizeOf}, this is
	 *     the maximum number of entries in the cache. For all other caches,
	 *     this is the maximum sum of the sizes of the entries in this cache.
	 */
	public LruCache(int maxSize) {
		if (maxSize <= 0) {
			throw new IllegalArgumentException("maxSize <= 0");
		}
		this.maxSize = maxSize;
		// note the third argument: accessOrder = true
		this.map = new LinkedHashMap<K, V>(0, 0.75f, true);
	}
From this it is clear that operations on an LruCache are delegated to the LinkedHashMap it encapsulates internally.
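The effect of that accessOrder = true constructor argument can be demonstrated with a plain LinkedHashMap (the class and method names below are illustrative):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Demonstrates why LruCache passes accessOrder = true: with it, iteration
// runs from least- to most-recently accessed, so the first entry is always
// the LRU eviction candidate.
class AccessOrderDemo {
    static String eldestKeyAfterAccess() {
        Map<String, Integer> map = new LinkedHashMap<>(0, 0.75f, true);
        map.put("a", 1);
        map.put("b", 2);
        map.put("c", 3);
        map.get("a");   // the access moves "a" to the most-recently-used end
        // iteration order is now b, c, a
        return map.entrySet().iterator().next().getKey();   // returns "b"
    }
}
```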
3. put

Internally, the put method also calls into LinkedHashMap, but the operation is performed under a lock so that only one thread at a time can insert data.
	/**
	 * Caches {@code value} for {@code key}. The value is moved to the head of
	 * the queue.
	 *
	 * @return the previous value mapped by {@code key}.
	 */
	public final V put(K key, V value) {
		if (key == null || value == null) {
			throw new NullPointerException("key == null || value == null");
		}

		V previous;
		synchronized (this) {
			putCount++;
			size += safeSizeOf(key, value);
			previous = map.put(key, value);      // delegate to LinkedHashMap's put
			if (previous != null) {
				size -= safeSizeOf(key, previous);
			}
		}

		if (previous != null) {
			entryRemoved(false, key, previous, value);
		}

		trimToSize(maxSize);   // capacity check: the cached data must not exceed maxSize
		return previous;
	}

4. get

	/**
	 * Returns the value for {@code key} if it exists in the cache or can be
	 * created by {@code #create}. If a value was returned, it is moved to the
	 * head of the queue. This returns null if a value is not cached and cannot
	 * be created.
	 */
	public final V get(K key) {
		if (key == null) {
			throw new NullPointerException("key == null");
		}

		V mapValue;
		synchronized (this) {            // locked read
			mapValue = map.get(key);     // delegate to LinkedHashMap's get
			if (mapValue != null) {
				hitCount++;
				return mapValue;
			}
			missCount++;
		}

		/*
		 * Attempt to create a value. This may take a long time, and the map
		 * may be different when create() returns. If a conflicting value is
		 * added to the map while create() was working, we leave that value in
		 * the map and release the created value.
		 */
		V createdValue = create(key);    // handle the miss, as the comment above explains
		if (createdValue == null) {
			return null;                 // creation failed; return directly
		}

		synchronized (this) {            // locked write: the first created value wins,
			createCount++;               // keeping the cached data consistent
			mapValue = map.put(key, createdValue);
			if (mapValue != null) {
				// Conflict: another thread finished creating a value first,
				// so undo our put and keep the original cached value
				map.put(key, mapValue);
			} else {
				size += safeSizeOf(key, createdValue);
			}
		}

		if (mapValue != null) {
			// Reaching here proves a replacement occurred and this thread lost
			// the race, so release the value this thread created
			entryRemoved(false, key, createdValue, mapValue);
			return mapValue;
		} else {
			// This thread's created value was cached, so a capacity check is needed
			trimToSize(maxSize);
			return createdValue;
		}
	}
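The create-on-miss race handling above can be sketched in plain Java (this is not the Android class itself; MiniLruCache and its members are illustrative names, and capacity control is omitted to keep the sketch short):

```java
import java.util.LinkedHashMap;

// Sketch of LruCache's create-on-miss path: compute outside the lock, then
// re-check the map under the lock so a value inserted concurrently by
// another thread wins the race.
class MiniLruCache<K, V> {
    private final LinkedHashMap<K, V> map = new LinkedHashMap<>(0, 0.75f, true);

    protected V create(K key) { return null; }  // override to compute a value on a miss

    public final V get(K key) {
        synchronized (this) {
            V mapValue = map.get(key);
            if (mapValue != null) {
                return mapValue;                // cache hit
            }
        }
        // Miss: call create() outside the lock, since it may be slow
        V createdValue = create(key);
        if (createdValue == null) {
            return null;
        }
        synchronized (this) {
            V mapValue = map.put(key, createdValue);
            if (mapValue != null) {
                // Another thread inserted a value while create() ran;
                // undo our put and keep the existing value
                map.put(key, mapValue);
                return mapValue;
            }
            return createdValue;
        }
    }

    public final synchronized V put(K key, V value) {
        return map.put(key, value);
    }
}
```

Overriding create lets callers assume get always returns a value, just as the documentation quoted earlier describes.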
5. Cache Capacity Control: trimToSize
	/**
	 * @param maxSize the maximum size of the cache before returning. May be -1
	 *     to evict even 0-sized elements.
	 */
	private void trimToSize(int maxSize) {
		while (true) {
			K key;
			V value;
			synchronized (this) {  // locked eviction of cache elements
				if (size < 0 || (map.isEmpty() && size != 0)) {
					throw new IllegalStateException(getClass().getName()
							+ ".sizeOf() is reporting inconsistent results!");
				}

				if (size <= maxSize || map.isEmpty()) {
					break;
				}

				Map.Entry<K, V> toEvict = map.entrySet().iterator().next();
				key = toEvict.getKey();
				value = toEvict.getValue();
				map.remove(key);
				size -= safeSizeOf(key, value);
				evictionCount++;
			}

			entryRemoved(true, key, value, null);  // release resources held by the evicted entry
		}
	}
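The trim loop can be sketched in plain Java with sizes measured in entry counts (the class and method names are illustrative, and the locking and entryRemoved callback are omitted):

```java
import java.util.LinkedHashMap;

// Sketch of the trim loop: repeatedly remove the eldest entry until the
// map fits within maxSize, counting each eviction.
class TrimDemo {
    static <K, V> int trimToSize(LinkedHashMap<K, V> map, int maxSize) {
        int evictions = 0;
        while (map.size() > maxSize && !map.isEmpty()) {
            // With accessOrder = true, the first entry is the least recently used
            K eldest = map.entrySet().iterator().next().getKey();
            map.remove(eldest);
            evictions++;
        }
        return evictions;
    }
}
```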
6. Summary

Android's LruCache is a layer of encapsulation over LinkedHashMap that adds thread locking for multithreaded access, so LruCache supports safe concurrent use.

