How to Use Redis for LRU-Cache

LRU (Least Recently Used) is one of many cache replacement algorithms.
Redis has a maxmemory setting that limits the memory it uses to a fixed size, and the LRU algorithm Redis applies when that limit is reached is an approximate LRU.

1. Set maxmemory

As mentioned above, maxmemory limits the maximum amount of memory Redis will use. There are several ways to set it. One is to set it at runtime with CONFIG SET, as shown below:

127.0.0.1:6379> CONFIG GET maxmemory
1) "maxmemory"
2) "0"
127.0.0.1:6379> CONFIG SET maxmemory 100MB
OK
127.0.0.1:6379> CONFIG GET maxmemory
1) "maxmemory"
2) "104857600"

Another method is to set it in the configuration file redis.conf:

maxmemory 100mb

Note: on 64-bit systems, setting maxmemory to 0 means Redis memory usage is not limited; on 32-bit systems, maxmemory cannot exceed 3 GB.
When Redis memory usage reaches the configured limit, a replacement policy decides what happens next.

2. Replacement policy

When Redis memory usage reaches maxmemory, the policy configured in maxmemory-policy determines how old data is replaced.
The following policies can be selected:

noeviction: no replacement is performed even when memory reaches the limit; any command that would increase memory usage returns an error.
allkeys-lru: evict the least recently used keys, chosen from all keys, to make room for new data.
volatile-lru: evict the least recently used keys, chosen only from keys with an expiration set (expire), to make room for new data.
allkeys-random: evict randomly chosen keys, from all keys, to make room for new data.
volatile-random: evict randomly chosen keys, only from keys with an expiration set, to make room for new data.
volatile-ttl: evict the keys with the shortest remaining TTL, only from keys with an expiration set, to make room for new data.

maxmemory-policy is set the same way as maxmemory: either in redis.conf or dynamically with CONFIG SET.
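
The same change can also be made from a client. The snippet below is a minimal sketch assuming the redis-py client and a local instance on the default port:

import redis

r = redis.Redis(host="127.0.0.1", port=6379)

r.config_set("maxmemory", "100mb")               # same as: CONFIG SET maxmemory 100MB
r.config_set("maxmemory-policy", "allkeys-lru")  # pick one of the policies listed above

print(r.config_get("maxmemory"))         # {'maxmemory': '104857600'}
print(r.config_get("maxmemory-policy"))  # {'maxmemory-policy': 'allkeys-lru'}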

If no key matching the policy can be deleted, the volatile-lru, volatile-random, and volatile-ttl policies behave the same as noeviction: no key is replaced.

Selecting an appropriate replacement policy is very important, and the right choice depends on your application's access pattern. You can also modify the replacement policy dynamically and use the Redis INFO command to check the cache hit rate, then tune the policy accordingly.
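
As a quick illustration, the hit rate can be computed from the keyspace_hits and keyspace_misses counters exposed by INFO. A minimal sketch, again assuming the redis-py client:

import redis

r = redis.Redis(host="127.0.0.1", port=6379)

stats = r.info("stats")
hits, misses = stats["keyspace_hits"], stats["keyspace_misses"]
hit_rate = hits / (hits + misses) if hits + misses else 0.0
print(f"cache hit rate: {hit_rate:.2%}")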

In general, there are a few rules of thumb:

- If you expect a subset of keys to be accessed far more often than the rest, choose allkeys-lru so the least recently used keys are evicted. If you are not sure which policy to use, allkeys-lru is the recommended default.
- If all keys are accessed with roughly equal probability, the allkeys-random policy works well.
- If you understand your data well enough to give Redis a hint through the expiration time (expire/TTL), volatile-ttl lets eviction follow that hint.

volatile-lru and volatile-random are mainly useful when a single Redis instance serves both as a cache and as a persistent store (some keys expire, others do not). However, it is usually better to run two separate Redis instances for these two purposes.

Note that setting an expiration time on a key costs some additional memory, so with allkeys-lru you can skip setting expirations entirely and make more effective use of memory.

3. How the replacement policy works

It is important to understand how the replacement policy is executed. The process works roughly as follows:

A client executes a command that adds data, such as SET key value. Redis checks the memory usage; if it exceeds maxmemory, some keys are evicted according to the replacement policy, and the new command then completes successfully.

Continuously writing data causes memory usage to repeatedly reach or slightly exceed the maxmemory limit, and the replacement policy then brings it back down below the limit.

If a single operation needs a lot of memory at once (such as writing a large set in one command), Redis memory usage may exceed the maxmemory limit for a period of time.
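
To see this behavior in practice, you can fill a test instance past a small maxmemory and watch the eviction counter grow. A rough sketch assuming the redis-py client and a disposable test instance:

import redis

r = redis.Redis(host="127.0.0.1", port=6379)
r.config_set("maxmemory", "10mb")
r.config_set("maxmemory-policy", "allkeys-lru")

# Write more data than fits under maxmemory; Redis evicts keys as needed.
for i in range(100_000):
    r.set(f"key:{i}", "x" * 100)

print(r.info("stats")["evicted_keys"])  # number of keys evicted so far
print(r.dbsize())                       # fewer keys remain than were written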

4. Approximate LRU Algorithm

The LRU in Redis is not a strict LRU implementation; it is an approximate LRU, mainly to save memory and improve performance. Redis has a configuration option for this, maxmemory-samples: when a key must be evicted, Redis samples that many keys and evicts the least recently used key among them. The default value is 5, as follows:

maxmemory-samples 5

You can adjust the number of samples to trade speed against accuracy in the LRU replacement: more samples give evictions closer to true LRU at the cost of more CPU work per eviction.
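
To make the sampling idea concrete, here is a simplified pure-Python sketch. It only illustrates evicting the oldest key among a random sample; it is not how Redis is implemented internally:

import random
import time

last_access = {}  # key -> last access time (Redis keeps a per-key access clock instead)

def touch(key):
    # Record an access, as a read or write of the key would.
    last_access[key] = time.monotonic()

def evict_one(samples=5):
    # Sample a few keys at random and evict the least recently accessed one.
    candidates = random.sample(list(last_access), min(samples, len(last_access)))
    victim = min(candidates, key=lambda k: last_access[k])
    del last_access[victim]
    return victim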

Redis does not adopt a true LRU implementation because doing so would cost more memory. Although it is not a true LRU, it is almost equivalent in practice. The following describes a comparison between Redis's approximate LRU and a theoretical LRU implementation:

The test first imports a certain number of keys into Redis and then accesses them in sequence, from the first to the last, so under an ideal LRU the keys accessed first should be the first to be evicted. It then adds 50% more keys, forcing 50% of the old keys to be evicted.
In the comparison figure, you can see three kinds of points in three different regions:

Light gray points are keys that were evicted. Gray points are keys that were not evicted. Green points are the newly added keys.

The theoretical LRU implementation behaves as expected: the oldest 50% of keys are evicted. Redis's approximate LRU instead evicts only a certain percentage of the old keys.

We can see that with 5 samples, Redis 3.0 performs much better than Redis 2.8: Redis 2.8 keeps many keys that should have been evicted. With 10 samples, Redis 3.0 is very close to a true LRU implementation.

LRU is essentially a model for predicting which data is likely to be accessed in the future. If the access pattern matches this expectation, that is, it follows a power-law distribution, the approximate LRU implementation handles it well.

In the simulation tests, we find that with a power-law access pattern the gap between the theoretical LRU and Redis's approximate LRU is very small or nonexistent.

If you set maxmemory-samples to 10, Redis spends more CPU per eviction in exchange for behavior close to a true LRU. You can check the hit rate to see how much difference it makes.

Use CONFIG SET maxmemory-samples to adjust the number of samples dynamically, and run some tests to verify your assumptions.
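
For example, a minimal sketch assuming the redis-py client:

import redis

r = redis.Redis(host="127.0.0.1", port=6379)

r.config_set("maxmemory-samples", 10)    # same as: CONFIG SET maxmemory-samples 10
print(r.config_get("maxmemory-samples"))
# Run your workload, then compare keyspace_hits/keyspace_misses from INFO stats
# against the values observed with the default of 5 samples.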
