Redis expiration deletion policy


Recently, whenever Redis memory reached the maxmemory limit, slow data eviction caused application requests to slow down. I later took a closer look at the various Redis data eviction strategies and summarized them here.

First, Redis deletes keys at three points in time, each corresponding to a different policy:

1. When an expired key is read or written, lazy deletion is triggered and the expired key is removed immediately.

2. Because lazy deletion cannot guarantee that cold (rarely accessed) expired keys are removed in time, Redis also periodically scans for and removes a batch of expired keys.

3. When the memory in use exceeds the maxmemory limit, an active eviction policy is triggered.

The following sections describe the periodic active expiration policy and the maxmemory eviction policy in detail, along with the meanings of their corresponding configuration parameters.

• Periodic active expiration policy

First, "periodic" here refers to the cleanup triggered when Redis periodically calls the databasesCron() function. The frequency is determined by the hz parameter in the configuration file, which specifies how many times per second the background tasks are expected to run. The default value in Redis 3.0.0 is 10, i.e. 10 background task runs per second.

Raising hz increases Redis's active expiration frequency. If your Redis instance holds a lot of cold data whose expired keys occupy too much memory, you can increase this value; however, the Redis author suggests not going above 100. We actually raised it to 100: CPU usage increased by about 2%, but the release of cold-data memory sped up noticeably (observed via the keyspace key counts and the used_memory size).
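For reference, this is the knob in redis.conf (the value shown is the one from our tuning above, not the default):

```
# redis.conf: expected background task frequency per second.
# Default is 10; the Redis author suggests not exceeding 100.
hz 100
```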

In addition to the frequency of active expiration, Redis also caps the execution time of each expiration cycle, which ensures that active expiration does not block application requests for too long. The limit is computed as follows:

#define ACTIVE_EXPIRE_CYCLE_SLOW_TIME_PERC 25 /* CPU max % for keys collection */

...

timelimit = 1000000*ACTIVE_EXPIRE_CYCLE_SLOW_TIME_PERC/server.hz/100;

As you can see, timelimit is inversely proportional to server.hz: the larger the configured hz, the smaller the timelimit. In other words, the more expiration cycles expected per second, the shorter each cycle may run. The total expiration time per second is fixed at 250 ms (1000000 * ACTIVE_EXPIRE_CYCLE_SLOW_TIME_PERC / 100 microseconds); the hz parameter only controls how that budget is split between cycle frequency and per-cycle duration.
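This fixed per-second budget is easy to verify. Below is a small Python sketch of the timelimit formula above, using the same integer arithmetic as the C code:

```python
# Sketch of the timelimit calculation from activeExpireCycle().
ACTIVE_EXPIRE_CYCLE_SLOW_TIME_PERC = 25  # CPU max % for expired-key collection

def timelimit_us(hz):
    # Microseconds allowed per activeExpireCycle() invocation
    # (integer division, as in the C source).
    return 1000000 * ACTIVE_EXPIRE_CYCLE_SLOW_TIME_PERC // hz // 100

for hz in (10, 100):
    per_call = timelimit_us(hz)
    # hz invocations per second -> fixed 250,000 us (250 ms) total per second
    print(f"hz={hz}: {per_call} us per call, {per_call * hz} us per second")
```

With hz=10 each cycle may run 25,000 µs; with hz=100 only 2,500 µs; in both cases the per-second total is 250,000 µs.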

The expiration process itself works as follows: in redis.c/activeExpireCycle(), a loop visits each db and randomly samples 20 keys from the db->expires set per iteration. If more than 5 of them turn out to be expired, the loop continues. In other words, as long as more than 25% of the sampled keys are expired, expiration keeps going, until a random sample's expiration rate falls below 25% or the cycle's elapsed time exceeds the timelimit, at which point the whole process ends.

According to the above analysis, when the expired-key ratio in Redis stays below 25%, increasing hz proportionally raises the minimum number of keys scanned. With hz = 10, at least 200 keys are scanned per second (10 calls per second × at least 20 randomly sampled keys per call); with hz = 100, at least 2000. If the expiration rate exceeds 25%, there is no cap on the number of keys scanned, but CPU time is still capped at 250 ms per second.

• Proactive eviction policy of maxmemory

When mem_used exceeds the maxmemory setting, every read/write request triggers the redis.c/freeMemoryIfNeeded(void) function to free the excess memory. Note that this cleanup blocks until enough memory has been freed. Therefore, when maxmemory has been reached and clients keep writing, the proactive eviction may be triggered repeatedly, adding a certain latency to requests.

During cleanup, the user-configured maxmemory-policy determines how keys are evicted (generally an LRU or TTL policy). Note that the LRU or TTL policy does not consider all keys in Redis; instead, maxmemory-samples keys (per the configuration file) are randomly sampled as the candidate pool for eviction.

The default maxmemory-samples in redis-3.0.0 is 5. Increasing it improves the accuracy of LRU or TTL eviction; the Redis author's tests show that with this configuration set to 10, the precision is already very close to full LRU. However, increasing maxmemory-samples also costs more CPU time during proactive cleanup.
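The sampling idea can be illustrated with a short Python sketch (this is an illustration of the approach, not the actual freeMemoryIfNeeded() code; `last_access` is a hypothetical key -> last-access-time map standing in for Redis's per-key LRU clock):

```python
import random

def evict_one_sampled_lru(last_access, maxmemory_samples=5):
    """Sketch of Redis-style sampled LRU: draw `maxmemory_samples` random
    keys and evict the least recently used key among them. Small samples
    approximate true LRU at a fraction of the cost."""
    sample = random.sample(list(last_access), min(maxmemory_samples, len(last_access)))
    victim = min(sample, key=lambda k: last_access[k])  # oldest access time
    del last_access[victim]
    return victim
```

When the sample covers the whole keyspace this degenerates into exact LRU, which is why a moderate sample size already gets close to full-LRU precision in practice.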

Suggestions:

Avoid triggering maxmemory whenever possible. It is best to act once mem_used reaches a certain proportion of maxmemory: either increase hz to speed up expiration, or expand the capacity of the instance or cluster.

If memory usage is under control, there is no need to modify the maxmemory-samples configuration. If Redis itself is used as an LRU cache service (such an instance generally stays at maxmemory long-term, with Redis continuously performing LRU eviction), maxmemory-samples can be increased appropriately.

