Cache, cache algorithm, cache framework: Part 3


When Programmer One woke up, he began reading the article again.

Cache Algorithms

No one can say that any one cache algorithm is better than the others in every case. (Some of the following cache algorithms are hard to understand; if you are interested, you can Google them.)

Least Frequently Used (LFU):

Hello everyone, I am LFU. I track how often each cached object is used, and I kick out the cache objects that are used least frequently.
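
Sketched in Python, the idea looks roughly like this (an illustrative sketch only: the class name is made up, and a real LFU would keep counts in a priority structure instead of scanning for the minimum):

```python
from collections import defaultdict

class LFUCache:
    """Minimal LFU sketch: evict the key with the lowest access count."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}                      # key -> value
        self.freq = defaultdict(int)        # key -> access count

    def get(self, key):
        if key not in self.data:
            return None
        self.freq[key] += 1                 # record the use
        return self.data[key]

    def put(self, key, value):
        if key not in self.data and len(self.data) >= self.capacity:
            # kick out the least frequently used object
            victim = min(self.data, key=self.freq.__getitem__)
            del self.data[victim]
            del self.freq[victim]
        self.data[key] = value
        self.freq[key] += 1
```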

Least Recently Used (LRU):

I am the LRU cache algorithm. I kick out the cache objects that were used least recently.

I always need to keep track of when each cache object is used. And it is quite involved to guarantee that I can always find and remove exactly the least recently used object.

Browsers use LRU as their cache algorithm. New objects are placed at the top of the cache, and when the cache reaches its capacity limit, I kick out the object at the bottom. The trick is: whenever a cache object is accessed, I move it back to the top of the cache pool.

This way, cache objects that are read frequently stay in the cache pool. There are two ways to implement me: an array or a linked list.
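
Here is a minimal sketch of the linked-list flavor in Python, using OrderedDict, which is a hash map layered over a doubly linked list (the class name is illustrative):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU sketch: the most recently used key sits at the 'top'
    (the end of the OrderedDict), the least recently used at the 'bottom'."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pool = OrderedDict()

    def get(self, key):
        if key not in self.pool:
            return None
        self.pool.move_to_end(key)          # move to the top of the cache pool
        return self.pool[key]

    def put(self, key, value):
        if key in self.pool:
            self.pool.move_to_end(key)
        elif len(self.pool) >= self.capacity:
            self.pool.popitem(last=False)   # kick the bottom (least recent) object
        self.pool[key] = value
```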

I am very fast, and I also adapt to data access patterns. I have a big family whose members can improve on me and even do better than me (I do envy them sometimes, but it doesn't matter). Some members of my family are LRU-2 and 2Q; they exist to improve on LRU.

Least Recently Used 2 (LRU-2):

I am Least Recently Used 2. Some people call me "least recently used twice", and I prefer that name. I only put objects into the cache pool once they have been accessed a second time, and when the pool is full I kick out the object whose second-to-last access is the oldest. Because each object's two most recent accesses must be tracked, the bookkeeping overhead grows with the cache pool, so I run into problems in a very large pool. On top of that, I also have to track objects that are not yet cached, because they have only been read once. I am better than LRU, and I adapt well to access patterns.
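
A simplified Python sketch of that behavior (names are illustrative, and unlike a real LRU-2 this sketch never prunes the history of once-seen keys, which is exactly the overhead the algorithm complains about):

```python
import itertools

class LRU2Cache:
    """Simplified LRU-2 sketch: an object enters the pool only on its second
    access, and eviction removes the entry whose second-most-recent access
    is the oldest."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.clock = itertools.count()      # logical timestamps
        self.seen_once = {}                 # key -> time of its only access so far
        self.pool = {}                      # key -> [value, penultimate, latest]

    def access(self, key, value):
        now = next(self.clock)
        if key in self.pool:                # already cached: shift the two timestamps
            entry = self.pool[key]
            entry[0], entry[1], entry[2] = value, entry[2], now
        elif key in self.seen_once:         # second access: promote into the pool
            if len(self.pool) >= self.capacity:
                # evict the entry whose second-most-recent access is oldest
                victim = min(self.pool, key=lambda k: self.pool[k][1])
                del self.pool[victim]
            self.pool[key] = [value, self.seen_once.pop(key), now]
        else:
            self.seen_once[key] = now       # first access: remember, don't cache yet
```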

Two Queues (2Q):

I am Two Queues. I put newly accessed data into an LRU cache. If an object is accessed again, I move it into a second, larger LRU cache.

I evict cache objects so that the first cache pool stays at roughly one third the size of the second. When the cache access load is fixed, replacing LRU with LRU-2 is better than increasing the cache capacity, and this mechanism makes me better than LRU-2. I am also a member of the LRU family, and I adapt well to access patterns.
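
Roughly, in Python (an illustrative sketch assuming a capacity of at least a few entries; splitting the capacity 1:3 is one way to keep the first pool at about a third of the second):

```python
from collections import OrderedDict

class TwoQueueCache:
    """Simplified 2Q sketch: first-time accesses land in a small LRU queue,
    and a second access promotes the object into a larger main LRU queue."""

    def __init__(self, capacity):
        self.small_cap = max(1, capacity // 4)            # ~1/3 of the main pool
        self.main_cap = max(1, capacity - self.small_cap)
        self.small = OrderedDict()                        # seen once, recently
        self.main = OrderedDict()                         # seen more than once

    def access(self, key, value):
        if key in self.main:                # already hot: ordinary LRU hit
            self.main.move_to_end(key)
            self.main[key] = value
        elif key in self.small:             # second access: promote to the big pool
            del self.small[key]
            if len(self.main) >= self.main_cap:
                self.main.popitem(last=False)
            self.main[key] = value
        else:                               # first access: small pool only
            if len(self.small) >= self.small_cap:
                self.small.popitem(last=False)
            self.small[key] = value
```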

Adaptive Replacement Cache (ARC):

I am ARC. Some people say I sit between LRU and LFU. To get the best of both, I am composed of two LRUs. The first one, L1, holds entries that have recently been used only once, while the second one, L2, holds entries that have recently been used at least twice. So L1 stores the new objects, and L2 stores the frequently used objects. That is why people think I sit between LRU and LFU, but it doesn't matter, I don't mind.

I am considered one of the best-performing cache algorithms: self-tuning and low-overhead. I also keep a history of evicted objects ("ghosts"), which lets me remember what was removed and check whether an evicted object deserved to stay, in which case I kick out something else instead. My memory of those ghosts is short, but I am fast and widely applicable.
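
A heavily simplified Python sketch of that structure (illustrative only: a real ARC bounds its ghost lists and moves entries between lists with more care than this):

```python
from collections import OrderedDict

class ARCSketch:
    """Simplified ARC sketch: T1 holds objects seen once, T2 objects seen at
    least twice; B1/B2 remember recently evicted keys ('ghosts'), and a hit
    on a ghost shifts the target size p between recency and frequency."""

    def __init__(self, capacity):
        self.c = capacity
        self.p = 0                                    # target size of T1
        self.t1, self.t2 = OrderedDict(), OrderedDict()
        self.b1, self.b2 = OrderedDict(), OrderedDict()

    def _make_room(self):
        if len(self.t1) + len(self.t2) < self.c:
            return
        if self.t1 and (len(self.t1) > self.p or not self.t2):
            key, _ = self.t1.popitem(last=False)      # evict from T1 ...
            self.b1[key] = None                       # ... but remember the ghost
        else:
            key, _ = self.t2.popitem(last=False)
            self.b2[key] = None

    def access(self, key, value):
        if key in self.t1:                  # second recent use: promote to T2
            del self.t1[key]
            self.t2[key] = value
        elif key in self.t2:                # frequent object: refresh its position
            self.t2.move_to_end(key)
            self.t2[key] = value
        elif key in self.b1:                # ghost hit: recency would have helped
            self.p = min(self.c, self.p + 1)
            del self.b1[key]
            self._make_room()
            self.t2[key] = value
        elif key in self.b2:                # ghost hit: frequency would have helped
            self.p = max(0, self.p - 1)
            del self.b2[key]
            self._make_room()
            self.t2[key] = value
        else:                               # brand-new object lands in T1
            self._make_room()
            self.t1[key] = value
```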

Most Recently Used (MRU):

I am MRU, the counterpart of LRU. I remove the most recently used objects, and you will certainly ask me why. Well, let me tell you: under some access patterns, the object that was just used is precisely the one least likely to be needed again soon (think of a repeated sequential scan over data slightly larger than the cache), and hunting for the least recently used object there is expensive, wasted work. That is why I am the best choice in such systems.

How common I am in database memory caches! Whenever a cache record is used, I place it at the top of the stack. And guess what happens when the stack is full? I replace the object at the top of the stack with the new one!
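
In Python, that stack behavior might look like this (an illustrative sketch, not a reference implementation):

```python
class MRUCache:
    """Minimal MRU sketch: the top of the stack holds the most recently
    used key, and that is exactly what gets replaced when the cache is full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.stack = []                     # most recently used key is last
        self.data = {}

    def access(self, key, value):
        if key in self.data:
            self.stack.remove(key)          # will be re-placed at the top
        elif len(self.data) >= self.capacity:
            victim = self.stack.pop()       # replace the most recently used object
            del self.data[victim]
        self.stack.append(key)              # put the record at the top of the stack
        self.data[key] = value
```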

First In First Out (FIFO):

I am a low-overhead algorithm that demands little from cache-object management. I track all cache objects with a queue: the most recently cached objects go at the back, and the earliest cached objects sit at the front. When the cache is full, the object at the front of the queue is kicked out, and the new cache object joins the back. I am fast, but I don't adapt to access patterns.
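
A minimal Python sketch (names illustrative):

```python
from collections import deque

class FIFOCache:
    """Minimal FIFO sketch: a queue tracks insertion order, the oldest
    entry at the front is kicked out first, and reads reorder nothing."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()                # oldest key at the front
        self.data = {}

    def get(self, key):
        return self.data.get(key)           # no bookkeeping on reads at all

    def put(self, key, value):
        if key not in self.data:
            if len(self.queue) >= self.capacity:
                del self.data[self.queue.popleft()]   # kick the front of the queue
            self.queue.append(key)
        self.data[key] = value
```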

Second Chance:

Hello everyone, I am Second Chance, a modification of FIFO, which is why people call me the second-chance cache algorithm. What makes me better than FIFO is that I cut down on FIFO's eviction mistakes. Like FIFO, I watch the front of the queue, but unlike FIFO, which evicts immediately, I first check whether the object about to be kicked out carries a "used" flag (a single bit). If the flag is not set, I kick the object out; otherwise I clear the flag and append the object to the queue again as if it were newly cached. You can picture this as a circular queue: when I meet that object at the head of the queue again, I kick it out at once, because its flag is gone. I am faster than FIFO.
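
Sketched in Python (illustrative; the one-bit flag lives in a side dictionary here):

```python
from collections import deque

class SecondChanceCache:
    """Minimal Second Chance sketch: FIFO plus a one-bit 'used' flag.
    A flagged victim has its flag cleared and rejoins the back of the
    queue instead of being evicted."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()                # keys in FIFO order
        self.data = {}
        self.used = {}                      # key -> the one-bit flag

    def get(self, key):
        if key in self.data:
            self.used[key] = True           # mark as used since last inspection
        return self.data.get(key)

    def put(self, key, value):
        if key not in self.data:
            while len(self.queue) >= self.capacity:
                victim = self.queue.popleft()
                if self.used[victim]:       # second chance: clear flag, requeue
                    self.used[victim] = False
                    self.queue.append(victim)
                else:                       # no flag: kick it out for real
                    del self.data[victim]
                    del self.used[victim]
            self.queue.append(key)
            self.used[key] = False
        self.data[key] = value
```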

Clock:

I am Clock, a better FIFO, and better than Second Chance, because I don't move flagged cache objects to the back of the queue the way Second Chance does, yet I achieve the same effect.

I hold a circular list of cache objects, with a hand pointer pointing at the oldest cache object in the list. When a cache miss occurs and there is no free cache space, I look at the flag of the object the pointer points at to decide what to do. If the flag is 0, I replace that object with the new cache object; if the flag is 1, I clear it, advance the pointer, and repeat this process until a new cache object can be placed. I am faster than Second Chance.
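
A minimal Python sketch of the hand-and-flags mechanism (illustrative names; a fixed array of slots stands in for the circular list):

```python
class ClockCache:
    """Minimal Clock sketch: a circular array of slots plus a hand pointer.
    On a miss with no free slot, the hand clears 1-flags as it advances and
    replaces the first object whose flag is 0."""

    def __init__(self, capacity):
        self.slots = [None] * capacity      # each slot: [key, value, flag] or None
        self.index = {}                     # key -> slot position
        self.hand = 0

    def get(self, key):
        pos = self.index.get(key)
        if pos is None:
            return None
        self.slots[pos][2] = 1              # mark as recently used
        return self.slots[pos][1]

    def put(self, key, value):
        if key in self.index:               # update in place, mark used
            slot = self.slots[self.index[key]]
            slot[1], slot[2] = value, 1
            return
        while True:
            slot = self.slots[self.hand]
            if slot is None:                # free slot: take it
                break
            if slot[2] == 0:                # flag 0: replace this object
                del self.index[slot[0]]
                break
            slot[2] = 0                     # flag 1: clear it and advance the hand
            self.hand = (self.hand + 1) % len(self.slots)
        self.slots[self.hand] = [key, value, 1]
        self.index[key] = self.hand
        self.hand = (self.hand + 1) % len(self.slots)
```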

Simple time-based expiration:

I am the simple time-based cache algorithm. I invalidate cached objects after an absolute period of time: each new object is stored together with a fixed lifetime, and once that time is up, the object expires. I am fast, but I don't adapt to access patterns.
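
In Python, roughly (an illustrative sketch; time.monotonic() stands in for whatever clock a real cache would use, and expired entries are dropped lazily on read):

```python
import time

class SimpleTimedCache:
    """Minimal time-based expiration sketch: every entry gets a fixed
    lifetime when stored and is invalidated once that time is up."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.data = {}                      # key -> (value, expiry timestamp)

    def put(self, key, value):
        self.data[key] = (value, time.monotonic() + self.ttl)

    def get(self, key):
        entry = self.data.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:  # lifetime is up: invalidate
            del self.data[key]
            return None
        return value
```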

Extended time-based expiration:

I am the extended time-based expiration cache algorithm. I invalidate cache objects at relative points in time: for each new cache object I store an expiration rule, for example every five minutes, or at 12 o'clock every day.

Sliding time-based expiration:

I am sliding time-based expiration. What makes me different from my predecessor is that the lifetime of each cache object I manage is counted from its last access time. I am fast, but not particularly adaptive either.
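
The sliding variant differs only in that every read pushes the expiry forward; a matching sketch:

```python
import time

class SlidingTimedCache:
    """Minimal sliding-expiration sketch: the lifetime is counted from the
    last access, so objects that keep getting read never expire."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.data = {}                      # key -> (value, last access time)

    def put(self, key, value):
        self.data[key] = (value, time.monotonic())

    def get(self, key):
        entry = self.data.get(key)
        if entry is None:
            return None
        value, last_access = entry
        if time.monotonic() - last_access >= self.ttl:
            del self.data[key]              # idle too long: invalidate
            return None
        self.data[key] = (value, time.monotonic())   # slide the window forward
        return value
```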

Okay! After listening to so many cache algorithms introduce themselves, keep in mind that other cache algorithms also weigh the following:

  • Cost. If cache objects carry different costs to obtain, keep the objects that are expensive to fetch.
  • Capacity. If cache objects have different sizes, clearing one large cache object lets several smaller cache objects move in.
  • Time. Some caches also store an expiration time for each object and evict entries simply because they have expired.

Depending on the size of the cache, additional caching algorithms beyond these may be necessary.

Email!

After reading the article, Programmer One thought for a while and decided to send an email to the author. He felt that he had heard the author's name somewhere, but he couldn't quite place it. In any case, he sent the email and asked the author how caches work in a distributed environment.

The author of the article received the email. Ironically, the author was the very person who had interviewed Programmer One. The author replied...

Distributed cache:
  • Cached data can be stored in the memory of separate machines
  • A distributed cache can greatly increase the total cache capacity
  • The cost of reading the cache also increases accordingly, since reads may have to cross the network
  • The hit rate also increases, because the cache capacity is larger
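
A toy Python sketch of the core idea (purely illustrative: plain dicts stand in for remote cache nodes, and hashing a key to pick a node is the simplest placement scheme; real systems usually prefer consistent hashing so that adding a node doesn't reshuffle every key):

```python
import hashlib

class ShardedCacheClient:
    """Toy distributed-cache sketch: each key is hashed to one of several
    nodes, so total capacity grows with the node count, at the price of an
    extra network hop to whichever node owns the key."""

    def __init__(self, node_count):
        self.nodes = [{} for _ in range(node_count)]   # pretend remote servers

    def _node_for(self, key):
        digest = hashlib.md5(key.encode()).hexdigest()
        return self.nodes[int(digest, 16) % len(self.nodes)]

    def put(self, key, value):
        self._node_for(key)[key] = value    # a network call in a real system

    def get(self, key):
        return self._node_for(key).get(key)
```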
