Analyzing the cache mechanism of Android Universal Image Loader from its source code


In this article we look at the cache mechanism of UIL (Universal Image Loader), an image caching library that is popular with expert developers both at home and abroad. Looking at how UIL implements its cache, it turns out that it is not difficult: there is not much process scheduling, no memory read control mechanism, and no elaborate exception handling. The code is written simply and is easy to follow. Still, this library is so useful, and used by so many people, that it is well worth seeing how it is implemented. First, let's understand the schematic diagram of the caching process in UIL.

Schematic diagram

There are three principals: the UI, the cache module, and the data source (the network). The relationship between them is as follows:


① UI: requests data, using a unique key to look up the bitmap in the memory cache.

② Memory cache: if a bitmap matching the key is found, return it. Otherwise, go to step ③.

③ Disk storage: look for a file on the SD card whose name corresponds to the unique key.

④ If such a file exists, decode it into a bitmap with one of the BitmapFactory.decode*() methods, write the bitmap into the memory cache, and return it. If there is no such file, go to step ⑤.

⑤ Download the picture: start an asynchronous thread and download the data from the data source (the web).

⑥ If the download succeeds, the data is written both to disk and to the memory cache, and the bitmap is displayed in the UI.
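The six steps above can be sketched in a few lines of plain Java. Everything here is illustrative: the names (CacheFlowSketch, loadImage, downloadFromNetwork) are not UIL's API, a String stands in for android.graphics.Bitmap, and a Map stands in for the file system.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the memory -> disk -> network lookup order described above.
// All names are illustrative, not UIL's API.
public class CacheFlowSketch {
    static Map<String, String> memoryCache = new HashMap<>();
    static Map<String, String> diskCache = new HashMap<>();    // stands in for files on the SD card

    static String downloadFromNetwork(String uri) {
        return "bitmap-of-" + uri;                             // pretend download (step 5)
    }

    static String loadImage(String uri) {
        String key = String.valueOf(uri.hashCode());           // step 1: unique key per URI
        String bitmap = memoryCache.get(key);                  // step 2: memory lookup
        if (bitmap != null) return bitmap;
        bitmap = diskCache.get(key);                           // step 3: disk lookup
        if (bitmap != null) {
            memoryCache.put(key, bitmap);                      // step 4: "decode" and re-cache in memory
            return bitmap;
        }
        bitmap = downloadFromNetwork(uri);                     // step 5: download
        diskCache.put(key, bitmap);                            // step 6: write to disk and memory
        memoryCache.put(key, bitmap);
        return bitmap;
    }

    public static void main(String[] args) {
        String first = loadImage("http://example.com/a.png");  // misses everywhere, hits network
        String second = loadImage("http://example.com/a.png"); // now served from the memory cache
        System.out.println(first.equals(second));
    }
}
```

A second request for the same URI never reaches the network, which is the whole point of the two-level cache.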

Next, let's review the cache configuration in UIL (covered in detail in Part 2 of the Universal Image Loader series). Focus on the commented lines: we can plug in our own memory and disk cache implementations as needed.

File cacheDir = StorageUtils.getCacheDirectory(context, "UniversalImageLoader/Cache");
ImageLoaderConfiguration config = new ImageLoaderConfiguration.Builder(getApplicationContext())
        .maxImageWidthForMemoryCache(800)       // numeric values lost in the original; these are illustrative
        .maxImageHeightForMemoryCache(480)
        .httpConnectTimeout(5000)
        .httpReadTimeout(20000)
        .threadPoolSize(3)
        .threadPriority(Thread.MIN_PRIORITY + 2)
        .denyCacheImageMultipleSizesInMemory()
        .memoryCache(new UsingFreqLimitedMemoryCache(2 * 1024 * 1024)) // you can pass in your own memory cache
        .discCache(new UnlimitedDiscCache(cacheDir))                   // you can pass in your own disk cache
        .defaultDisplayImageOptions(DisplayImageOptions.createSimple())
        .build();

Memory caching policy in UIL

1. Caches that hold only strong references

LruMemoryCache (the framework's default memory cache class; it holds strong references to bitmaps, and it is the class I analyze from source below)

2. Caches that combine strong and weak references

UsingFreqLimitedMemoryCache (when the total size of cached pictures exceeds the limit, deletes the least frequently used bitmap)
LRULimitedMemoryCache (also uses the LRU algorithm; unlike LruMemoryCache, it caches weak references to bitmaps)
FIFOLimitedMemoryCache (first-in-first-out policy: when the limit is exceeded, deletes the bitmap that was added first)
LargestLimitedMemoryCache (when the cache limit is exceeded, deletes the largest bitmap first)
LimitedAgeMemoryCache (deletes a bitmap once the time it has spent in the cache exceeds the configured value)

3. Caches that hold only weak references

WeakMemoryCache (places no limit on the total size of cached bitmaps; its only deficiency is instability, since cached images are easily reclaimed by the garbage collector)
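To make the "instability" of the weak-reference variant concrete, here is a minimal sketch of a WeakMemoryCache-style cache in plain Java. The names are illustrative rather than UIL's actual implementation, and a byte[] stands in for a Bitmap.

```java
import java.lang.ref.WeakReference;
import java.util.HashMap;
import java.util.Map;

// A minimal sketch in the spirit of WeakMemoryCache: the cache itself holds
// no strong references, so the GC may reclaim entries at any time.
public class WeakCacheSketch {
    static Map<String, WeakReference<byte[]>> map = new HashMap<>();

    static void put(String key, byte[] bitmap) {
        map.put(key, new WeakReference<>(bitmap));   // only a weak reference is stored
    }

    static byte[] get(String key) {
        WeakReference<byte[]> ref = map.get(key);
        return ref == null ? null : ref.get();       // null if the GC already reclaimed it
    }

    public static void main(String[] args) {
        byte[] bitmap = new byte[1024];
        put("img", bitmap);
        // While a strong reference (the local variable) still exists, the entry survives:
        System.out.println(get("img") == bitmap);
        // Once the last strong reference is dropped, the GC is free to reclaim the
        // bytes at any moment, and get("img") may then return null -- that is the
        // instability mentioned above.
        bitmap = null;
        System.gc();
    }
}
```

Note that whether the entry survives the `System.gc()` call is deliberately left unasserted: weak-reference clearing is at the garbage collector's discretion.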

We will analyze the default caching policy configured by UIL.

ImageLoaderConfiguration config = ImageLoaderConfiguration.createDefault(context);

ImageLoaderConfiguration.createDefault(...) ultimately calls Builder.build() to create the default configuration. The default memory cache implementation is LruMemoryCache, and the default disk cache is UnlimitedDiscCache.

LruMemoryCache analysis

LruMemoryCache: a cache that holds strong references to a limited number of bitmaps, retaining the most recently used bitmaps when space is limited. Each time a bitmap is accessed, it is moved to the head of a queue. When a bitmap is added to a full cache, the bitmap at the end of the queue is squeezed out and becomes eligible for garbage collection.
Note: this cache holds only strong references to bitmaps.

LruMemoryCache implements the MemoryCache interface, and MemoryCache extends MemoryCacheAware.

public interface MemoryCache extends MemoryCacheAware<String, Bitmap>

The following is an inheritance diagram


LruMemoryCache.get(...)

I'm sure you will be as struck as I was by how simple this code is: apart from the null-key check that throws an exception, the only notable thing is the use of synchronized for thread safety.

/**
 * Returns the Bitmap for {@code key} if it exists in the cache. If a Bitmap was returned,
 * it is moved to the head of the queue. This returns null if a Bitmap is not cached.
 */
@Override
public final Bitmap get(String key) {
    if (key == null) {
        throw new NullPointerException("key == null");
    }
    synchronized (this) {
        return map.get(key);
    }
}

You may be wondering: isn't this simply taking a bitmap out of a map? Yet LruMemoryCache claims to retain the most recently used bitmaps in limited space. No hurry; let's look at map carefully. It is a LinkedHashMap<String, Bitmap>.

The get() method of LinkedHashMap not only returns the matching value, but also moves the matched key's entry to the end of its internal list (LinkedHashMap stores its entries in a doubly linked list) before returning. Of course, this only happens when the LinkedHashMap was created with accessOrder == true; otherwise get() does not change the position of the matched entry in the list.

@Override
public V get(Object key) {
    /*
     * This method is overridden to eliminate the need for a polymorphic
     * invocation in superclass at the expense of code duplication.
     */
    if (key == null) {
        HashMapEntry<K, V> e = entryForNullKey;
        if (e == null)
            return null;
        if (accessOrder)
            makeTail((LinkedEntry<K, V>) e);
        return e.value;
    }

    // Replace with Collections.secondaryHash when the VM is fast enough (http://b/).
    int hash = secondaryHash(key);
    HashMapEntry<K, V>[] tab = table;
    for (HashMapEntry<K, V> e = tab[hash & (tab.length - 1)]; e != null; e = e.next) {
        K eKey = e.key;
        if (eKey == key || (e.hash == hash && key.equals(eKey))) {
            if (accessOrder)
                makeTail((LinkedEntry<K, V>) e);
            return e.value;
        }
    }
    return null;
}

The makeTail() call adjusts the position of an entry in the list; in fact it is just a doubly-linked-list adjustment, and it is guarded by a check on accessOrder. It is now clear that LruMemoryCache caches its data in a LinkedHashMap, and that the order of the entries in the LinkedHashMap is adjusted every time LinkedHashMap.get() is executed. So how do we make sure the most recently used items won't be evicted? Next, let's look at LruMemoryCache.put(...).
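The accessOrder behaviour described above is easy to observe with the standard java.util.LinkedHashMap: passing true as the third constructor argument makes get() move the matched entry to the tail, so iteration runs from least to most recently used.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Demonstrates accessOrder == true: get() reorders the internal linked list.
public class AccessOrderDemo {
    static String orderAfterAccess() {
        Map<String, String> map = new LinkedHashMap<>(16, 0.75f, true); // accessOrder = true
        map.put("a", "1");
        map.put("b", "2");
        map.put("c", "3");
        map.get("a");                    // "a" is moved to the tail of the list
        return map.keySet().toString();  // iteration order: least recently used first
    }

    public static void main(String[] args) {
        System.out.println(orderAfterAccess()); // prints [b, c, a]
    }
}
```

With the default accessOrder == false, the same sequence would print [a, b, c], since insertion order would be preserved.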

LruMemoryCache.put(...)

Notice the size += sizeOf(key, value) statement: what is this size? Notice also the trimToSize(maxSize) call near the end. trimToSize(...) keeps the LruMemoryCache from growing beyond the user-defined limit, which is fixed when the LruMemoryCache is first initialized.

@Override
public final boolean put(String key, Bitmap value) {
    if (key == null || value == null) {
        throw new NullPointerException("key == null || value == null");
    }
    synchronized (this) {
        size += sizeOf(key, value);
        // If map.put() returns a non-null value, the key already had an entry and the
        // put only updated it, so subtract the size of the replaced bitmap.
        Bitmap previous = map.put(key, value);
        if (previous != null) {
            size -= sizeOf(key, previous);
        }
    }
    trimToSize(maxSize);
    return true;
}

It is not hard to guess that when the cached bitmaps grow larger than the configured maxSize, the work happens inside trimToSize(...). What this function does is also simple: it iterates over the map, removing the extra entries (toEvict in the code) until the current cache size is equal to or below the specified limit.

private void trimToSize(int maxSize) {
    while (true) {
        String key;
        Bitmap value;
        synchronized (this) {
            if (size < 0 || (map.isEmpty() && size != 0)) {
                throw new IllegalStateException(getClass().getName() + ".sizeOf() is reporting inconsistent results");
            }
            if (size <= maxSize || map.isEmpty()) {
                break;
            }
            Map.Entry<String, Bitmap> toEvict = map.entrySet().iterator().next();
            if (toEvict == null) {
                break;
            }
            key = toEvict.getKey();
            value = toEvict.getValue();
            map.remove(key);
            size -= sizeOf(key, value);
        }
    }
}

At this point a question arises: why does iterating from the front of the map evict the least used bitmaps, without accidentally deleting the most recently used ones? First, be clear that "most recently used" in LruMemoryCache means the bitmaps most recently touched by a get or put. Second, recall that LruMemoryCache's get operation is actually implemented through its internal LinkedHashMap.get(...); since the LinkedHashMap has accessOrder == true, every get or put moves the touched entry to the tail of the list. In other words, the head of the list holds the least recently used entries and the tail holds the most recently used ones: every item we access is treated as recently used, and the lowest-priority items are evicted when memory runs short. Note also that the LinkedHashMap list initially follows insertion order, with the first inserted item at the head and the last at the tail. So if removing, say, the first one or two items is enough to bring LruMemoryCache back under its size limit, walking down from the head of the list (from the first item onward) evicts exactly the least used entries.
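The whole pattern — access-ordered LinkedHashMap plus a trimToSize() eviction loop — can be condensed into a few lines. This is a sketch in the spirit of LruMemoryCache, not UIL's code: Strings stand in for bitmaps, and "size" counts entries rather than bytes, purely for illustration.

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Condensed LruMemoryCache-style cache: evicts from the head (least recently
// used end) of an access-ordered LinkedHashMap until the limit is met.
public class LruSketch {
    private final LinkedHashMap<String, String> map = new LinkedHashMap<>(0, 0.75f, true);
    private final int maxSize;

    public LruSketch(int maxSize) { this.maxSize = maxSize; }

    public synchronized String get(String key) { return map.get(key); } // also reorders the list

    public synchronized void put(String key, String value) {
        map.put(key, value);
        trimToSize(maxSize);
    }

    private void trimToSize(int maxSize) {
        while (map.size() > maxSize) {
            // The iterator starts at the head of the linked list, i.e. the
            // least recently used entry -- exactly the one to evict.
            Iterator<Map.Entry<String, String>> it = map.entrySet().iterator();
            it.next();
            it.remove();
        }
    }

    public static void main(String[] args) {
        LruSketch cache = new LruSketch(2);
        cache.put("a", "A");
        cache.put("b", "B");
        cache.get("a");        // "a" becomes the most recently used entry
        cache.put("c", "C");   // capacity exceeded: "b", now the LRU entry, is evicted
        System.out.println(cache.get("b") == null && cache.get("a") != null);
    }
}
```

Because get("a") ran before put("c"), it is "b" rather than "a" that gets evicted, which is the LRU guarantee the article describes.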

At this point, we understand how LruMemoryCache works as a whole, including its strategies for putting, getting, and evicting an element. Next, we start analyzing the default disk caching policy.

Disk caching policies in UIL

Applications such as Sina Weibo or Huaban (Petal) need to load a great many pictures. Loading the original picture is slow, and if a picture that was already downloaded has to be downloaded again the next time it is needed, users will complain loudly about the wasted traffic. For image-heavy applications, a good disk cache directly determines how long the application survives on the user's phone. Implementing a disk cache ourselves means thinking about a lot of details, but luckily UIL offers several common disk caching strategies, and if none of them suits you, you can extend them yourself.

FileCountLimitedDiscCache (you can set the number of cached pictures; when the limit is exceeded, the file added to disk first is deleted)
LimitedAgeDiscCache (sets the longest time a file may live; when that age is exceeded, the file is deleted)
TotalSizeLimitedDiscCache (sets the maximum total cache size; when that value is exceeded, the file added to disk first is deleted)
UnlimitedDiscCache (this cache class has no restrictions at all)
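As an example of these policies, the age check behind a LimitedAgeDiscCache-style cache boils down to comparing a file's lastModified() timestamp against a maximum age. This is a hedged sketch, not UIL's code: the class and method names (AgeCheckSketch, isExpired) are illustrative.

```java
import java.io.File;
import java.io.IOException;

// Sketch of a LimitedAgeDiscCache-style age check: delete a cached file once
// it has lived longer than the configured maximum age.
public class AgeCheckSketch {
    static boolean isExpired(File file, long maxAgeMillis) {
        long age = System.currentTimeMillis() - file.lastModified();
        return age > maxAgeMillis;
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("cache", ".img");
        f.deleteOnExit();
        // A freshly created file is younger than one hour:
        System.out.println(isExpired(f, 60 * 60 * 1000L));
        // Pretend the file was written two hours ago:
        if (f.setLastModified(System.currentTimeMillis() - 2 * 60 * 60 * 1000L)
                && isExpired(f, 60 * 60 * 1000L)) {
            f.delete(); // the policy removes expired files
        }
    }
}
```

A real implementation would run this check when the file is fetched from the cache, deleting and re-downloading expired entries.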

UIL thus provides a fairly complete set of storage strategies, with implementations constrained by predetermined space size, lifetime, or file count. The most basic pieces are the DiscCacheAware interface and the BaseDiscCache abstract class.

UnlimitedDiscCache analysis

UnlimitedDiscCache implements the disk cache interface and is the default in ImageLoaderConfiguration. With it, the size of the disk cache is unlimited.

Next let's look at the UnlimitedDiscCache source. It turns out to simply inherit from BaseDiscCache: it adds no methods of its own and overrides nothing, so we can look directly at the BaseDiscCache class. Before analyzing it, consider how much trouble implementing a disk cache ourselves would involve:

1. File names must not collide. You have no way of knowing the original file name of whatever the user downloads, so a useful cached picture could easily be overwritten by another file with the same name.

2. When the application stutters or the network lags, the same picture may be downloaded repeatedly.

3. Writing pictures to disk can run into latency and synchronization problems.

BaseDiscCache constructor

First, let's take a look at the BaseDiscCache constructor. Its parameters are:

cacheDir: the file cache directory.

reserveCacheDir: an alternate cache directory, which may be null; it is only used when cacheDir is unavailable.
fileNameGenerator: the file name generator, which produces a name for each cached file.

public BaseDiscCache(File cacheDir, File reserveCacheDir, FileNameGenerator fileNameGenerator) {
    if (cacheDir == null) {
        throw new IllegalArgumentException("cacheDir" + ERROR_ARG_NULL);
    }
    if (fileNameGenerator == null) {
        throw new IllegalArgumentException("fileNameGenerator" + ERROR_ARG_NULL);
    }
    this.cacheDir = cacheDir;
    this.reserveCacheDir = reserveCacheDir;
    this.fileNameGenerator = fileNameGenerator;
}

Notice the fileNameGenerator; let's find out how UIL generates non-colliding file names. There are three file naming policies in UIL, and here we only analyze the default one: DefaultConfigurationFactory.createFileNameGenerator() returns a HashCodeFileNameGenerator. It is unexpectedly simple — it just uses String.hashCode() to generate the file name.

public class HashCodeFileNameGenerator implements FileNameGenerator {
    @Override
    public String generate(String imageUri) {
        return String.valueOf(imageUri.hashCode());
    }
}
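A quick check of this naming rule (the demo class name FileNameDemo is mine, but the rule itself is exactly what HashCodeFileNameGenerator does): the file name is the decimal value of String.hashCode(), so the same URI always maps to the same file, and distinct URIs collide only if their hash codes happen to collide.

```java
// Demonstrates the default HashCodeFileNameGenerator naming rule.
public class FileNameDemo {
    static String generate(String imageUri) {
        return String.valueOf(imageUri.hashCode());  // same rule as HashCodeFileNameGenerator
    }

    public static void main(String[] args) {
        String uri = "http://example.com/pic.png";
        // Deterministic: two calls for the same URI yield the identical file name.
        System.out.println(generate(uri).equals(generate(uri)));
        // hashCode() may be negative, which still produces a legal file name
        // (a leading '-', then digits -- no path separators can appear).
        System.out.println(generate(uri));
    }
}
```

Because String.hashCode() is specified by the Java language, the mapping is stable across runs and devices, which is what makes it usable as a cache key.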

BaseDiscCache.save()

Having analyzed the naming strategy, look at the BaseDiscCache.save(...) method. Notice the getFile() call, which produces a File pointing into the cache directory; internally it invokes the fileNameGenerator we just met to generate the file name. Notice also tmpFile: the bitmap is first written into this temporary file, which is then renamed to the final file (or deleted if anything went wrong). You may wonder why save() does not first check whether the bitmap file to be written already exists on disk; let's see where UIL makes that judgment. Recall from "Analyzing the Android-Universal-Image-Loader image loading and display process from the code" that UIL's general loading flow is: check whether the corresponding bitmap is in memory, then check whether it is on disk, and load from the network only if it is not; finally, based on the configuration, decide whether to cache the bitmap to memory or disk. In other words, by the time BaseDiscCache.save(...) is called, UIL has already determined that the file is not on disk.

public boolean save(String imageUri, InputStream imageStream, IoUtils.CopyListener listener) throws IOException {
    File imageFile = getFile(imageUri);
    File tmpFile = new File(imageFile.getAbsolutePath() + TEMP_IMAGE_POSTFIX);
    boolean loaded = false;
    try {
        OutputStream os = new BufferedOutputStream(new FileOutputStream(tmpFile), bufferSize);
        try {
            loaded = IoUtils.copyStream(imageStream, os, listener, bufferSize);
        } finally {
            IoUtils.closeSilently(os);
        }
    } finally {
        IoUtils.closeSilently(imageStream);
        if (loaded && !tmpFile.renameTo(imageFile)) {
            loaded = false;
        }
        if (!loaded) {
            tmpFile.delete();
        }
    }
    return loaded;
}
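The write-to-temp-then-rename pattern used by save() can be sketched with plain java.io: stream the data into a temporary file and only rename it to the final name once the copy has completed, so readers never see a half-written cache file. This is a sketch under assumptions, not UIL's code; the ".tmp" suffix and the names (SafeSaveSketch, save) are illustrative.

```java
import java.io.BufferedOutputStream;
import java.io.ByteArrayInputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Sketch of atomic cache writes: copy into "<target>.tmp", then rename.
public class SafeSaveSketch {
    static boolean save(File imageFile, InputStream in) throws IOException {
        File tmpFile = new File(imageFile.getAbsolutePath() + ".tmp");
        boolean loaded = false;
        try (OutputStream os = new BufferedOutputStream(new FileOutputStream(tmpFile))) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                os.write(buf, 0, n);
            }
            loaded = true;
        } finally {
            in.close();
            if (loaded && !tmpFile.renameTo(imageFile)) {
                loaded = false;        // rename failed: treat as not saved
            }
            if (!loaded) {
                tmpFile.delete();      // never leave a partial file behind
            }
        }
        return loaded;
    }

    public static void main(String[] args) throws IOException {
        File target = File.createTempFile("img", ".png");
        target.delete();               // we only want the path, not the empty file
        boolean ok = save(target, new ByteArrayInputStream(new byte[]{1, 2, 3}));
        System.out.println(ok && target.length() == 3);
        target.delete();
    }
}
```

If the copy throws or the rename fails, the temporary file is removed and the final file name is never created, which is precisely the guarantee the article describes.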

BaseDiscCache.get()

The BaseDiscCache.get() method internally calls BaseDiscCache.getFile(...), the method we touched on earlier, so let's analyze it. It first uses the fileNameGenerator to generate a unique file name, then picks the cache directory; here you can clearly see the relationship between cacheDir and reserveCacheDir: when cacheDir is unavailable, reserveCacheDir is used as the cache directory instead.

Finally, it returns a File object pointing at the cache file. Be aware that when the file that this File object points to does not exist, no error is raised and null is not returned; the File simply refers to a nonexistent path, so callers must check for themselves.

protected File getFile(String imageUri) {
    String fileName = fileNameGenerator.generate(imageUri);
    File dir = cacheDir;
    if (!cacheDir.exists() && !cacheDir.mkdirs()) {
        if (reserveCacheDir != null && (reserveCacheDir.exists() || reserveCacheDir.mkdirs())) {
            dir = reserveCacheDir;
        }
    }
    return new File(dir, fileName);
}
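The fallback logic of getFile() can be isolated into a small sketch: use cacheDir unless it neither exists nor can be created, and only then fall back to reserveCacheDir (when that one is usable). The names here (CacheDirFallback, pickDir) are illustrative, not UIL's API.

```java
import java.io.File;

// Sketch of the cacheDir / reserveCacheDir selection performed by getFile().
public class CacheDirFallback {
    static File pickDir(File cacheDir, File reserveCacheDir) {
        File dir = cacheDir;
        // Fall back only if the primary directory is missing AND cannot be created:
        if (!cacheDir.exists() && !cacheDir.mkdirs()) {
            if (reserveCacheDir != null && (reserveCacheDir.exists() || reserveCacheDir.mkdirs())) {
                dir = reserveCacheDir;
            }
        }
        return dir;
    }

    public static void main(String[] args) {
        File primary = new File(System.getProperty("java.io.tmpdir"), "uil-demo-cache");
        // The primary directory is creatable here, so the reserve is never consulted,
        // which is why passing null for it is safe in the common case:
        System.out.println(pickDir(primary, null).equals(primary));
        primary.delete();
    }
}
```

Note that when both directories are unusable, the method still returns a File under cacheDir; the write will then fail later, which matches the "File may point at a nonexistent path" caveat above.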

Summary

We have now analyzed UIL's caching mechanism. Seen from the implementation, it is not very complex: although there are many cache variants, in short, the memory cache keeps objects in an implementation of the Map interface (with different retention policies), and the disk cache simply writes files to disk.
