Cache, cache algorithms, and cache frameworks on Android

1. The purpose of a cache is to hold data temporarily: retrieving the original data costs too much, while the cache can be read much faster. A cache can be thought of as a subset of the original data, copied from the source and indexed so that it can be looked up quickly.

Android development often involves fetching network data, such as large numbers of network images. Retrieving the same image from the network every time is obviously too expensive, so you can set up a local file cache and a memory cache to store data already obtained from the network. The local file cache cannot be infinitely large, and the larger it grows, the lower the read efficiency, so a compromise capacity such as 10 MB is reasonable; when the cache is full, an existing entry must be replaced to make room for a new one. The memory cache, as the first level to be read, should store frequently used objects, and its capacity is also limited. With these two levels in place, fetching one image out of N proceeds as follows (a code sketch follows the steps):
A. First look in the memory cache (assume it holds K records). If found, return the image (hit rate K/N, time tA); otherwise go to B;
B. Look in the local file cache (assume it holds M records). If found, return the image and update the memory cache (hit rate (M-K)/N, time tB); otherwise go to C;
C. Download the image over the network, then update both the local file cache and the memory cache (hit rate (N-M)/N, time tC).
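
A minimal sketch of this three-level lookup, with all three tiers behind one illustrative Tier interface (ImageLoader and Tier are names invented here, not part of any real framework):

```java
import android.graphics.Bitmap;

// Sketch of the memory -> disk -> network lookup from steps A-C above.
// Concrete tier implementations (a memory cache, a file cache, an HTTP
// downloader) are left out; only the lookup-and-promote flow is shown.
public class ImageLoader {

    public interface Tier {
        Bitmap get(String url);
        void put(String url, Bitmap bitmap);
    }

    private final Tier memoryCache;  // fastest, smallest (K entries)
    private final Tier diskCache;    // slower, larger (M entries)
    private final Tier network;      // slowest: get() downloads, put() is a no-op

    public ImageLoader(Tier memoryCache, Tier diskCache, Tier network) {
        this.memoryCache = memoryCache;
        this.diskCache = diskCache;
        this.network = network;
    }

    public Bitmap getImage(String url) {
        // A. Memory cache first (cost tA).
        Bitmap bitmap = memoryCache.get(url);
        if (bitmap != null) return bitmap;

        // B. Local file cache; promote a hit into memory (cost tB).
        bitmap = diskCache.get(url);
        if (bitmap != null) {
            memoryCache.put(url, bitmap);
            return bitmap;
        }

        // C. Network; update both caches on success (cost tC).
        bitmap = network.get(url);
        if (bitmap != null) {
            diskCache.put(url, bitmap);
            memoryCache.put(url, bitmap);
        }
        return bitmap;
    }
}
```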

The expected time to fetch an image is then W = tA * (K/N) + tB * ((M-K)/N) + tC * ((N-M)/N), with tA < tB < tC. For illustration, with tA = 1 ms, tB = 20 ms, tC = 500 ms and hit rates 0.5, 0.3 and 0.2, W = 0.5 + 6 + 100 = 106.5 ms, so the slow network term dominates. To reduce W, that is, to obtain data as quickly as possible, we should raise the hit rates of the memory cache and of the local file cache; since both have limited capacity, a suitable replacement algorithm must decide which stored objects to keep. Choosing an appropriate replacement algorithm is the hard part of caching.

2. Common cache algorithms

Least Frequently Used (LFU)
Track how often each cached object is used, and evict the least frequently used one.
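
A minimal LFU sketch: a use counter per entry, with an O(n) scan for the victim on eviction (production LFU implementations use a priority structure instead):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal LFU sketch: each entry carries a use counter; the entry with
// the smallest counter is evicted when the cache is full.
public class LfuCache<K, V> {
    private static class Entry<V> { V value; long uses; Entry(V v) { value = v; } }

    private final int capacity;
    private final Map<K, Entry<V>> map = new HashMap<>();

    public LfuCache(int capacity) { this.capacity = capacity; }

    public V get(K key) {
        Entry<V> e = map.get(key);
        if (e == null) return null;
        e.uses++;                          // count every use
        return e.value;
    }

    public void put(K key, V value) {
        Entry<V> existing = map.get(key);
        if (existing != null) { existing.value = value; return; }
        if (map.size() >= capacity) {
            K victim = null;
            long fewest = Long.MAX_VALUE;
            for (Map.Entry<K, Entry<V>> e : map.entrySet()) {
                if (e.getValue().uses < fewest) {
                    fewest = e.getValue().uses;   // least frequently used so far
                    victim = e.getKey();
                }
            }
            map.remove(victim);
        }
        map.put(key, new Entry<>(value));
    }
}
```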

Least Recently Used (LRU)
Evict the least recently used cache object. The cache must therefore keep track of when each object was used, and one can argue about whether always killing the least recently used object is the right choice, but browsers do use LRU as their cache algorithm. A new object is placed at the top of the cache pool, and when the cache reaches its capacity limit, the object at the bottom is kicked out. The trick is that whenever a cached object is accessed, it is moved back to the top of the cache pool.
Therefore, cache objects that are read frequently remain in the pool. It can be implemented with an array or a linked list. Its improved variants include LRU2 and 2Q.
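
A minimal LRU sketch built on java.util.LinkedHashMap in access-order mode, which implements exactly this move-to-top trick; Android also ships a ready-made android.util.LruCache with the same policy:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU sketch: in access-order mode every get() moves the entry
// to the tail ("top of the cache pool"), and the head (least recently
// used) is dropped once capacity is exceeded.
public class SimpleLruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public SimpleLruCache(int capacity) {
        super(16, 0.75f, true);   // true = access order, the LRU trick
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the least recently used entry
    }
}
```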

Least Recently Used 2 (LRU2)
Only objects that have been accessed twice are placed in the cache pool; when the pool is full, the object whose second-most-recent access is oldest is evicted. Because each object has to be tracked across two accesses, the bookkeeping overhead grows with the size of the cache pool, which becomes a problem for large pools. In addition, objects that have not yet been read a second time must be tracked outside the cache. Overall it adapts better than plain LRU.
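
A sketch of the admit-on-second-access idea, assuming a bounded history of keys seen once (full LRU-K also orders eviction by the K-th most recent access, which is omitted here for brevity):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Simplified LRU2 sketch: an object enters the main (LRU-ordered) pool
// only on its second access; first accesses are remembered in a
// separate bounded history, as the text above describes.
public class Lru2Cache<K, V> {
    private final LinkedHashMap<K, V> seenOnce; // keys seen once, not yet admitted
    private final LinkedHashMap<K, V> pool;     // main pool, kept in LRU order

    public Lru2Cache(final int capacity) {
        seenOnce = new LinkedHashMap<K, V>(16, 0.75f, false) {
            @Override protected boolean removeEldestEntry(Map.Entry<K, V> e) {
                return size() > capacity;       // the history is bounded too
            }
        };
        pool = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override protected boolean removeEldestEntry(Map.Entry<K, V> e) {
                return size() > capacity;       // evict the pool's LRU entry
            }
        };
    }

    public V get(K key) {
        V v = pool.get(key);
        if (v != null) return v;
        v = seenOnce.remove(key);
        if (v != null) pool.put(key, v);        // second access: admit to pool
        return v;
    }

    public void put(K key, V value) {
        if (pool.containsKey(key)) { pool.put(key, value); return; }
        if (seenOnce.remove(key) != null) { pool.put(key, value); return; } // second touch
        seenOnce.put(key, value);               // first sighting: history only
    }
}
```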

Two Queues (2Q)
Newly accessed data is placed in a first, FIFO-like cache; if an object is accessed again, it is moved to a second, larger LRU cache. Objects are evicted so as to keep the first cache at about 1/3 the size of the second. When the cache access load is fixed, replacing LRU with LRU2 is better than simply increasing the cache capacity, and this two-queue mechanism makes the algorithm better than LRU2.
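
A simplified 2Q sketch with a FIFO queue for first-time entries and an LRU queue for re-accessed ones; the queue sizes are caller-supplied tunables:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Simplified 2Q sketch: new entries land in a small FIFO queue (a1);
// a hit in a1 promotes the entry to the larger LRU queue (am).
public class TwoQueuesCache<K, V> {
    private final LinkedHashMap<K, V> a1;  // FIFO queue for first-time entries
    private final LinkedHashMap<K, V> am;  // LRU queue for re-accessed entries

    public TwoQueuesCache(final int a1Capacity, final int amCapacity) {
        a1 = new LinkedHashMap<K, V>(16, 0.75f, false) { // insertion order = FIFO
            @Override protected boolean removeEldestEntry(Map.Entry<K, V> e) {
                return size() > a1Capacity;
            }
        };
        am = new LinkedHashMap<K, V>(16, 0.75f, true) {  // access order = LRU
            @Override protected boolean removeEldestEntry(Map.Entry<K, V> e) {
                return size() > amCapacity;
            }
        };
    }

    public V get(K key) {
        V value = am.get(key);                 // hot queue first
        if (value != null) return value;
        value = a1.remove(key);
        if (value != null) am.put(key, value); // second touch: promote to am
        return value;
    }

    public void put(K key, V value) {
        if (am.containsKey(key)) am.put(key, value);
        else a1.put(key, value);               // first touch: FIFO queue
    }
}
```

For example, new TwoQueuesCache<>(10, 30) keeps the first queue at roughly 1/3 of the second, matching the ratio mentioned above.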


Adaptive Replacement Cache (ARC)
This algorithm sits between LRU and LFU and consists of two LRU lists. The first, L1, contains entries that have recently been used only once; the second, L2, contains entries that have recently been used at least twice. Thus L1 holds new objects while L2 holds frequently used ones. ARC is considered one of the best-performing cache algorithms: it is self-tuning and has low overhead. It also keeps a history of evicted objects, so it can remember what was removed and judge whether an evicted object should have been kept in place of another. It spends extra memory on this history, but it is fast and adaptive.
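
A compact sketch of ARC's bookkeeping, simplified from Megiddo and Modha's published outline: two value lists t1/t2 (seen once / seen again), two key-only "ghost" lists b1/b2 for the eviction history, and an adaptive target p for the size of t1. Readability is favored over strict fidelity here:

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;

// Simplified ARC sketch; not production code.
public class ArcCache<K, V> {
    private final int c;                                          // capacity
    private int p = 0;                                            // target |t1|
    private final LinkedHashMap<K, V> t1 = new LinkedHashMap<>(); // seen once
    private final LinkedHashMap<K, V> t2 = new LinkedHashMap<>(); // seen again
    private final LinkedHashSet<K> b1 = new LinkedHashSet<>();    // ghosts of t1
    private final LinkedHashSet<K> b2 = new LinkedHashSet<>();    // ghosts of t2

    public ArcCache(int capacity) { this.c = capacity; }

    public V get(K key) {
        V v = t1.remove(key);
        if (v == null) v = t2.remove(key);
        if (v != null) t2.put(key, v);        // any repeat hit -> MRU of t2
        return v;
    }

    public void put(K key, V value) {
        if (t1.remove(key) != null || t2.remove(key) != null) {
            t2.put(key, value);               // already cached: refresh in t2
        } else if (b1.remove(key)) {          // ghost hit: t1 was too small
            p = Math.min(c, p + Math.max(b2.size() / Math.max(b1.size(), 1), 1));
            evict(false);
            t2.put(key, value);
        } else if (b2.remove(key)) {          // ghost hit: t2 was too small
            p = Math.max(0, p - Math.max(b1.size() / Math.max(b2.size(), 1), 1));
            evict(true);
            t2.put(key, value);
        } else {                              // brand-new key
            if (t1.size() + b1.size() >= c) {
                if (t1.size() < c) dropOldestGhost(b1); else dropOldest(t1);
            } else if (t1.size() + t2.size() + b1.size() + b2.size() >= 2 * c) {
                dropOldestGhost(b2);          // bound the whole directory at 2c
            }
            evict(false);
            t1.put(key, value);
        }
    }

    // If the cache is full, move the LRU entry of t1 or t2 (chosen by the
    // adaptive target p) into the matching ghost list.
    private void evict(boolean hitInB2) {
        if (t1.size() + t2.size() < c) return;
        if (!t1.isEmpty() && (t1.size() > p || (hitInB2 && t1.size() == p))) {
            b1.add(dropOldest(t1));
        } else if (!t2.isEmpty()) {
            b2.add(dropOldest(t2));
        }
    }

    private K dropOldest(LinkedHashMap<K, V> map) {
        K oldest = map.keySet().iterator().next();
        map.remove(oldest);
        return oldest;
    }

    private void dropOldestGhost(LinkedHashSet<K> set) {
        Iterator<K> it = set.iterator();
        if (it.hasNext()) { it.next(); it.remove(); }
    }
}
```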

Most Recently Used (MRU)
This algorithm is the counterpart of LRU: it evicts the most recently used object. You will certainly ask why. The reason is that for some access patterns recency predicts nothing, and finding the least recently used object can be a time-costly operation in some cache systems; MRU is widely used in database memory caches. Each time a record is used it is placed at the top of a stack, and when the stack is full the object at the top is replaced by the new one.
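
A minimal MRU sketch: the access-ordered map keeps the most recently used entry last, and that entry is the eviction victim (the O(n) walk to find it is for brevity only):

```java
import java.util.LinkedHashMap;

// Minimal MRU sketch: evict the entry touched most recently -- the
// opposite of LRU -- which suits workloads such as sequential scans
// where a just-used item is unlikely to be needed again soon.
public class MruCache<K, V> {
    private final int capacity;
    // Access order: the most recently used entry is last.
    private final LinkedHashMap<K, V> map = new LinkedHashMap<>(16, 0.75f, true);

    public MruCache(int capacity) { this.capacity = capacity; }

    public V get(K key) { return map.get(key); }

    public void put(K key, V value) {
        if (!map.containsKey(key) && map.size() >= capacity) {
            K victim = null;
            for (K k : map.keySet()) victim = k; // last = most recently used
            map.remove(victim);
        }
        map.put(key, value);
    }
}
```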

First in First out (FIFO)
This is a low-overhead algorithm that demands little management of the cached objects. A queue tracks all cache objects: the most recently cached objects sit at the back and the earlier ones at the front. When the cache is full, the object at the front of the queue is kicked out and the new object is appended. Very fast, but rarely a good fit.
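
A minimal FIFO sketch: a queue remembers insertion order, lookups never reorder anything, and the front of the queue is evicted first:

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

// Minimal FIFO sketch: eviction order is pure insertion order.
public class FifoCache<K, V> {
    private final int capacity;
    private final Map<K, V> map = new HashMap<>();
    private final Queue<K> queue = new ArrayDeque<>(); // oldest key at the front

    public FifoCache(int capacity) { this.capacity = capacity; }

    public V get(K key) { return map.get(key); }       // no reordering on access

    public void put(K key, V value) {
        if (!map.containsKey(key)) {
            if (map.size() >= capacity) map.remove(queue.poll()); // oldest out
            queue.add(key);
        }
        map.put(key, value);
    }
}
```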


Second Chance
An improved FIFO algorithm that reduces the cost of FIFO's blind evictions. Like FIFO it inspects the front of the queue, but unlike first-in-first-out replacement it first checks whether the candidate carries a previously-used mark (a single bit). If the bit is not set, the object is evicted; otherwise, the bit is cleared and the object is re-queued at the back as if newly cached. You can picture it as a ring: when the same object reaches the head of the queue again, its bit is now clear, so it can be swapped out immediately. It stays almost as fast as FIFO while evicting more sensibly.
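
A minimal second-chance sketch: FIFO plus a one-bit referenced mark that buys a marked front entry one more trip through the queue:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Minimal second-chance sketch: a hit sets the entry's bit; on
// eviction, a marked front entry is unmarked and re-queued instead of
// being dropped.
public class SecondChanceCache<K, V> {
    private static class Entry<V> { V value; boolean referenced; Entry(V v) { value = v; } }

    private final int capacity;
    private final Map<K, Entry<V>> map = new HashMap<>();
    private final Deque<K> queue = new ArrayDeque<>(); // FIFO order

    public SecondChanceCache(int capacity) { this.capacity = capacity; }

    public V get(K key) {
        Entry<V> e = map.get(key);
        if (e == null) return null;
        e.referenced = true;                 // the 1-bit "used" mark
        return e.value;
    }

    public void put(K key, V value) {
        if (!map.containsKey(key)) {
            while (map.size() >= capacity) {
                K front = queue.poll();
                Entry<V> e = map.get(front);
                if (e.referenced) {
                    e.referenced = false;    // second chance: clear and re-queue
                    queue.add(front);
                } else {
                    map.remove(front);       // unmarked: evict for real
                }
            }
            queue.add(key);
        }
        map.put(key, new Entry<>(value));
    }
}
```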

Clock
Clock is a better FIFO and improves on second chance: it does not move a marked cache object to the end of a queue the way second chance does, yet it achieves the same effect. It keeps a circular list of cached objects, with a "hand" pointer aimed at the oldest object in the list. When a cache miss occurs and there is no free space, it decides what to do based on the flag of the object the hand points to: if the flag is 0, that object is replaced with the new one; if the flag is 1, the flag is cleared, the hand advances, and the process repeats until the new object can be placed.
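
A minimal clock sketch: a fixed circular array swept by a hand pointer, replacing in place instead of re-queuing:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal clock sketch: entries sit in a fixed circular array. The hand
// clears referenced bits as it sweeps and replaces the first entry it
// finds with bit 0, so nothing is ever moved to the back of a queue.
public class ClockCache<K, V> {
    private static class Slot<K, V> { K key; V value; boolean referenced; }

    private final Slot<K, V>[] slots;
    private final Map<K, Integer> index = new HashMap<>(); // key -> slot number
    private int hand = 0;

    @SuppressWarnings("unchecked")
    public ClockCache(int capacity) {
        slots = (Slot<K, V>[]) new Slot[capacity];
        for (int i = 0; i < capacity; i++) slots[i] = new Slot<>();
    }

    public V get(K key) {
        Integer i = index.get(key);
        if (i == null) return null;
        slots[i].referenced = true;          // mark on every hit
        return slots[i].value;
    }

    public void put(K key, V value) {
        Integer i = index.get(key);
        if (i != null) { slots[i].value = value; slots[i].referenced = true; return; }
        // Sweep until an unreferenced slot is found, clearing bits as we go.
        while (slots[hand].key != null && slots[hand].referenced) {
            slots[hand].referenced = false;
            hand = (hand + 1) % slots.length;
        }
        if (slots[hand].key != null) index.remove(slots[hand].key); // evict in place
        slots[hand].key = key;
        slots[hand].value = value;
        slots[hand].referenced = false;
        index.put(key, hand);
        hand = (hand + 1) % slots.length;
    }
}
```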

Simple time-based
Cached objects are invalidated after an absolute time period: each new object is stored with a fixed lifetime. Very fast, but not suitable for every workload.
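
A minimal absolute-TTL sketch: each entry records its expiry time at insertion, and an expired entry counts as a miss:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal absolute-TTL sketch: the deadline is fixed at insertion time.
public class TtlCache<K, V> {
    private static class Entry<V> {
        final V value; final long expiresAt;
        Entry(V v, long t) { value = v; expiresAt = t; }
    }

    private final long ttlMillis;
    private final Map<K, Entry<V>> map = new HashMap<>();

    public TtlCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

    public V get(K key) {
        Entry<V> e = map.get(key);
        if (e == null) return null;
        if (System.currentTimeMillis() > e.expiresAt) {
            map.remove(key);                 // expired: treat as a miss
            return null;
        }
        return e.value;
    }

    public void put(K key, V value) {
        map.put(key, new Entry<>(value, System.currentTimeMillis() + ttlMillis));
    }
}
```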

Extended time-based expiration
Cache objects are invalidated at relative points in time: rather than a per-object deadline, the cache is flushed on a schedule, for example every 5 minutes or at 12 o'clock every day.
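
A minimal sketch of this schedule-driven invalidation, flushing the whole cache at a fixed interval (the 5-minute figure from the example above is just a constructor parameter):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Minimal scheduled-flush sketch: the whole cache is cleared on a
// recurring schedule instead of tracking per-entry deadlines.
public class ScheduledFlushCache<K, V> {
    private final Map<K, V> map = new ConcurrentHashMap<>();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public ScheduledFlushCache(long flushIntervalMinutes) {
        scheduler.scheduleAtFixedRate(map::clear,
                flushIntervalMinutes, flushIntervalMinutes, TimeUnit.MINUTES);
    }

    public V get(K key) { return map.get(key); }
    public void put(K key, V value) { map.put(key, value); }
    public void shutdown() { scheduler.shutdown(); }
}
```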

Sliding time-based expiration
The lifetime of a cached object is counted from its last access time, so every access restarts the countdown. Very fast, but not suitable for every workload.
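
A minimal sliding-expiration sketch: the countdown restarts on every access, so an entry expires only after a period of inactivity:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sliding-TTL sketch: each hit slides the expiry window forward.
public class SlidingTtlCache<K, V> {
    private static class Entry<V> {
        V value; long lastAccess;
        Entry(V v, long now) { value = v; lastAccess = now; }
    }

    private final long ttlMillis;
    private final Map<K, Entry<V>> map = new HashMap<>();

    public SlidingTtlCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

    public V get(K key) {
        Entry<V> e = map.get(key);
        if (e == null) return null;
        long now = System.currentTimeMillis();
        if (now - e.lastAccess > ttlMillis) {
            map.remove(key);                 // idle too long: expired
            return null;
        }
        e.lastAccess = now;                  // slide the window forward
        return e.value;
    }

    public void put(K key, V value) {
        map.put(key, new Entry<>(value, System.currentTimeMillis()));
    }
}
```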

The cache algorithm mainly takes into account the following points:

Cost. If cached objects have different acquisition costs, prefer to keep the objects that are expensive to obtain.
Capacity. If cached objects have different sizes, clearing out large objects lets more small objects in.
Time. Some caches also store an expiration time for each object, and entries are evicted because they have expired.

3. For an image caching framework that improves the user experience, see the network interface and image caching framework enif on Android.
