Redis Memory Eviction Mechanism

Redis memory eviction means that Redis can actively remove some of the keys a user has stored from the instance, which produces read misses. So why does Redis offer this feature? To answer that, we need to look at the original design intent. The two most common scenarios for Redis are caching and persistent storage, and the first thing to clarify is which of these the memory eviction strategy is better suited to: persistent storage or caching?

The memory eviction mechanism was designed to make better use of memory: it trades a certain amount of cache misses for greater memory efficiency.

As a Redis user, how do we use this feature? Take a look at the configuration below:

# maxmemory <bytes>

We can enable memory eviction by configuring the value of maxmemory in redis.conf. To understand what this value means, it helps to walk through the eviction process:

1. A client runs a command (such as SET) that requires Redis to allocate more memory.

2. Redis checks memory usage and, if the memory in use exceeds maxmemory, starts evicting keys according to the eviction policy the user has configured, in order to free up some memory.

3. If the steps above complete without issue, the command executes successfully.

When maxmemory is 0, there is no limit on Redis's memory usage.
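
For example, a minimal sketch of enabling the memory limit (the 100mb figure here is only illustrative; pick a limit that suits your instance). In redis.conf:

maxmemory 100mb

Or at runtime, using the standard CONFIG commands via redis-cli:

redis-cli CONFIG SET maxmemory 100mb
redis-cli CONFIG GET maxmemory

CONFIG GET maxmemory reports the limit in bytes, so the second command should return 104857600 once the first has taken effect.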

Redis provides the following eviction policies for the user to choose from, with noeviction being the default:

· noeviction: When memory usage reaches the limit, any command that would require allocating more memory returns an error.

· allkeys-lru: In the primary key space, evict the least recently used keys first.

· volatile-lru: Among the keys that have an expiration time set, evict the least recently used keys first.

· allkeys-random: In the primary key space, evict keys at random.

· volatile-random: Among the keys that have an expiration time set, evict keys at random.

· volatile-ttl: Among the keys that have an expiration time set, evict the keys with the earliest expiration times first.
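
As a quick illustration of the default noeviction behavior (assuming the instance has already reached its maxmemory limit; the key and value below are made up), a command that needs more memory is rejected with an error along the following lines instead of triggering eviction:

redis-cli SET somekey somevalue
(error) OOM command not allowed when used memory > 'maxmemory'.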

A note on the "primary key space" and the "key space with expiration times set": suppose we store a batch of keys in Redis; a hash table holds these keys and their values. If some of those keys also have an expiration time set, they are additionally recorded in a second hash table, whose values are the expiration times assigned to the keys. The key space with expiration times set is therefore a subset of the primary key space.
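
For example (the key names are only illustrative), a key written without an expiration lives only in the primary key space, while a key written with EX is also recorded in the expires table; the TTL command makes the difference visible:

redis-cli SET user:1:name alice
redis-cli SET session:1 token EX 3600
redis-cli TTL user:1:name
redis-cli TTL session:1

TTL returns -1 for user:1:name, because no expiration was set, and a value of at most 3600 for session:1, which is tracked in the expires table. Only session:1 is considered by the volatile-* policies; both keys are candidates under the allkeys-* policies.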

Now that we have seen the eviction policies Redis offers, how do we choose one? The eviction policy is specified by the following configuration:

# maxmemory-policy noeviction

But what value should we fill in here? To answer that, we need to understand how our application accesses the data set stored in Redis and what we want to achieve. Redis also supports changing the eviction policy at runtime, which lets us adjust the memory eviction policy on the fly without restarting the Redis instance.
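
For example, a sketch of choosing the policy in redis.conf:

maxmemory-policy allkeys-lru

Or at runtime, without a restart:

redis-cli CONFIG SET maxmemory-policy allkeys-lru
redis-cli CONFIG GET maxmemory-policy

Note that a runtime change is not written back to redis.conf unless you also issue CONFIG REWRITE.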

Here is a look at the scenarios each policy suits:

· allkeys-lru: If our application's access to the cache follows a power-law distribution (that is, there is relatively hot data), or if we are not sure what the access distribution looks like, we can choose the allkeys-lru policy.

· allkeys-random: If our application accesses all cache keys with roughly equal probability, this policy can be used.

· volatile-ttl: This policy lets us hint to Redis which keys are better candidates for eviction, as illustrated after this list.
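
For instance, under volatile-ttl we can hint at eviction priority by giving less important keys shorter expiration times (the key names and TTL values below are made up):

redis-cli SET report:latest cached-report EX 3600
redis-cli SET preview:tmp cached-preview EX 60

When memory runs low, preview:tmp, with the nearer expiration time, tends to be evicted before report:latest, since Redis samples keys that have a TTL and removes those closest to expiring.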

In addition, the volatile-lru and volatile-random policies are suitable when a single Redis instance serves as both a cache and persistent storage, although the same effect can also be achieved with two separate Redis instances. It is worth mentioning that setting an expiration time on a key actually consumes extra memory, so for the most efficient use of memory the allkeys-lru policy is recommended.
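
A minimal sketch of that mixed pattern under volatile-lru (key names are illustrative): persistent data is written without an expiration time, so the volatile-* policies never touch it, while cache entries are given a TTL and become eviction candidates:

redis-cli CONFIG SET maxmemory-policy volatile-lru
redis-cli SET account:42:balance 100
redis-cli SET cache:home-page rendered-home-page EX 300

Under volatile-lru only cache:home-page can be evicted; account:42:balance stays until it is deleted explicitly. Keep in mind that if no keys carry an expiration time and memory fills up, volatile-lru has nothing to evict and writes will start failing much like under noeviction.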
