How to Do LRU Caching with Redis

Source: Internet
Author: User
Tags: allkeys, redis, volatile

LRU (Least Recently Used) is one of many cache replacement algorithms.
Redis has a maxmemory setting, used primarily to cap memory usage at a fixed size. The LRU algorithm Redis uses is an approximate LRU.

1 Setting maxmemory

As mentioned above, maxmemory limits Redis's maximum memory usage. There are several ways to set it. One is to set it at runtime with CONFIG SET:

127.0.0.1:6379> CONFIG GET maxmemory
1) "maxmemory"
2) "0"
127.0.0.1:6379> CONFIG SET maxmemory 100mb
OK
127.0.0.1:6379> CONFIG GET maxmemory
1) "maxmemory"
2) "104857600"

Another way is to set it in the configuration file redis.conf :

maxmemory 100mb

Note that on 64-bit systems, setting maxmemory to 0 means Redis's memory usage is unlimited, while on 32-bit systems there is an implicit maxmemory cap of 3GB.
When Redis's memory usage reaches the configured limit, an eviction policy must be chosen.

2 Eviction Policies

When Redis's memory usage reaches maxmemory, the maxmemory-policy setting determines how old data is evicted to make room for new data.
The available eviction policies are:

  • noeviction: never evict; once the memory limit is reached, any command that would increase memory usage returns an error
  • allkeys-lru: evict the least recently used key among all keys to make room for new data
  • volatile-lru: evict the least recently used key, considering only keys with an expire set
  • allkeys-random: evict a random key among all keys
  • volatile-random: evict a random key, considering only keys with an expire set
  • volatile-ttl: evict the key with the shortest remaining time-to-live (TTL), considering only keys with an expire set

maxmemory-policy is set the same way as maxmemory: either in redis.conf or modified dynamically at runtime.
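For example, both settings can live side by side in redis.conf (the values below are purely illustrative):

```
maxmemory 100mb
maxmemory-policy allkeys-lru
```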

If no key matches a policy's criteria (for example, no keys have an expire set), then volatile-lru, volatile-random, and volatile-ttl fall back to the noeviction behavior: no key is evicted and the write returns an error.

Choosing the right eviction policy is important and depends on your application's access pattern. You can also change the policy dynamically, and use the Redis INFO command to inspect the cache hit ratio and tune the policy accordingly.
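The stats section of INFO reports keyspace_hits and keyspace_misses counters. A minimal sketch of computing the hit ratio from them (the counter values below are made-up examples, not real server output):

```python
def hit_ratio(stats):
    """Cache hit ratio from the keyspace_hits / keyspace_misses
    counters found in the 'stats' section of Redis's INFO output."""
    hits = stats["keyspace_hits"]
    misses = stats["keyspace_misses"]
    total = hits + misses
    return hits / total if total else 0.0

# Example counters, shaped like a parsed INFO stats section:
stats = {"keyspace_hits": 900, "keyspace_misses": 100}
print(hit_ratio(stats))  # 0.9
```

A hit ratio that drops after switching policies is a signal to revert or try another policy.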

In general, there are some common rules of thumb:

  • If some keys are accessed much more recently than others, choose allkeys-lru to evict the least recently used keys; if you are unsure which policy to use, allkeys-lru is the recommended default
  • If all keys are accessed with roughly equal probability, allkeys-random works just as well
  • If you know your data well enough to give Redis hints (by setting EXPIRE/TTL on keys), volatile-ttl lets you influence which keys are evicted first

volatile-lru and volatile-random are often used when a single Redis instance serves both as a cache and as a persistent store; however, a better option is usually to run two separate Redis instances.

Note that setting an expiration time with EXPIRE consumes some extra memory per key, while allkeys-lru requires no expiration times at all, making more efficient use of memory.

3 How Eviction Works

It is important to understand how the eviction policy is applied. The sequence is:

  • A client executes a new command that requires the database to store additional data (for example SET key value )
  • Redis checks memory usage; if it exceeds maxmemory, Redis evicts some keys according to the eviction policy
  • The new command executes successfully
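The steps above can be modeled with a toy LRU cache that evicts before each write. This is a sketch of an allkeys-lru-style policy with a key-count budget standing in for maxmemory, not Redis's actual implementation:

```python
from collections import OrderedDict

class LruCache:
    """Toy model of the eviction flow: before each write, evict the
    least recently used entry if the cache is at its limit."""

    def __init__(self, maxkeys):
        self.maxkeys = maxkeys          # stand-in for maxmemory
        self.data = OrderedDict()       # least recently used first

    def get(self, key):
        value = self.data.get(key)
        if value is not None:
            self.data.move_to_end(key)  # mark as recently used
        return value

    def set(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        elif len(self.data) >= self.maxkeys:
            self.data.popitem(last=False)  # evict the LRU key
        self.data[key] = value             # then the write succeeds

cache = LruCache(maxkeys=2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")           # "a" is now the most recently used
cache.set("c", 3)        # at the limit: evicts "b", the LRU key
print(list(cache.data))  # ['a', 'c']
```

Redis enforces a byte budget rather than a key count, but the evict-then-write order is the same.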

As we keep writing data, memory usage repeatedly reaches or crosses the maxmemory limit, and the eviction policy brings it back below the limit.

If a single command needs a large amount of memory at once (such as writing a large set in one go), Redis's memory usage may exceed the limit for some time.

4 The Approximate LRU Algorithm

The LRU in Redis is not a strict LRU implementation but an approximation, chosen mainly to save memory and improve performance. Redis exposes a configuration option, maxmemory-samples: when evicting, Redis takes that many keys as a sample and evicts the least recently used key among them. The default is 5:

maxmemory-samples 5

By adjusting the number of samples you trade eviction speed against the accuracy of the LRU approximation.
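The sampling idea can be sketched in a few lines. This illustrates the spirit of maxmemory-samples, not Redis's actual code; the key names and timestamps are invented for the example:

```python
import random

def pick_eviction_victim(last_access, samples):
    """Approximate-LRU victim selection: sample `samples` keys at
    random and pick the one with the oldest access time."""
    candidates = random.sample(list(last_access), min(samples, len(last_access)))
    return min(candidates, key=last_access.get)

# Keys with their last-access timestamps (larger = more recent):
last_access = {"k1": 10, "k2": 50, "k3": 30, "k4": 70, "k5": 20}

# When the sample covers every key, this degenerates to exact LRU:
print(pick_eviction_victim(last_access, samples=5))  # k1
```

With a smaller sample, the victim is only probably among the oldest keys, which is exactly the approximation Redis accepts for speed.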

The reason Redis does not use a true LRU implementation is to save memory. Although not exact, the approximation is nearly equivalent in practice. The following compares Redis's approximate LRU against a theoretical LRU implementation:

The test first imports a number of keys into Redis, then accesses them sequentially from the first key to the last, so that under LRU the first key accessed should be the first evicted. It then adds 50% more keys, forcing 50% of the old keys to be evicted.
In the resulting plot, three kinds of points make up three different regions:

  • Light gray: keys that were evicted
  • Gray: keys that were not evicted
  • Green: newly added keys

A theoretical LRU implementation behaves as we expect: exactly the oldest 50% of the keys are evicted. Redis's LRU instead evicts only a certain proportion of the old keys.

You can see that with a sample size of 5, Redis 3.0 does much better than Redis 2.8: in Redis 2.8, many keys that should have been evicted were kept. With a sample size of 10, Redis 3.0 comes close to a true LRU implementation.
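That experiment can be re-run in miniature with the sampling sketch. The function below is a simplified, hypothetical model (sequential access times, a key-count budget, one eviction per insert), and it reports what fraction of evicted keys really were the oldest ones:

```python
import random

def simulate(num_old, num_new, samples, seed=0):
    """Insert num_old keys accessed in order, then add num_new more,
    evicting one key per insert via sampled LRU. Returns the fraction
    of evicted keys that truly were among the num_new oldest keys."""
    rng = random.Random(seed)
    last_access = {i: i for i in range(num_old)}  # key i accessed at time i
    evicted = []
    for t in range(num_old, num_old + num_new):
        cand = rng.sample(list(last_access), min(samples, len(last_access)))
        victim = min(cand, key=last_access.get)   # oldest in the sample
        evicted.append(victim)
        del last_access[victim]
        last_access[t] = t                        # the new key just written
    oldest = set(range(num_new))
    return sum(1 for k in evicted if k in oldest) / num_new

# Sampling every key on each eviction reproduces exact LRU:
print(simulate(num_old=200, num_new=100, samples=200, seed=0))  # 1.0

# A small sample gives only an approximation of LRU:
print(simulate(num_old=200, num_new=100, samples=5, seed=0))
```

Raising `samples` pushes the accuracy toward 1.0, mirroring how maxmemory-samples 10 brought Redis 3.0 close to true LRU in the figure.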

LRU is essentially a model for predicting which data we will access in the future; if accesses follow the expected pattern (a power-law distribution), the approximate LRU implementation handles it well.

In simulation tests with a power-law access pattern, the difference between theoretical LRU and Redis's approximate LRU is very small or nonexistent.

If you set maxmemory-samples to 10, Redis incurs extra CPU overhead to get close to true LRU behavior; you can judge whether the difference matters by checking the hit ratio.

You can adjust the sample size dynamically with CONFIG SET maxmemory-samples <count> and run tests to verify your assumptions.

