Exploring the LRU algorithms of Redis and memcached: the implementation of LRU in Redis


I have long been interested in the LRU algorithms of the two open source cache systems Redis and memcached. Today we will summarize how each implements LRU and how the two implementations differ.

The first thing to know is the LRU algorithm itself: LRU stands for "least recently used". There is plenty of related material online, for example http://en.wikipedia.org/wiki/Cache_algorithms#LRU

The six eviction policies of Redis

rewriteConfigEnumOption(state, "maxmemory-policy", server.maxmemory_policy,
    "volatile-lru", REDIS_MAXMEMORY_VOLATILE_LRU,
    "allkeys-lru", REDIS_MAXMEMORY_ALLKEYS_LRU,
    "volatile-random", REDIS_MAXMEMORY_VOLATILE_RANDOM,
    "allkeys-random", REDIS_MAXMEMORY_ALLKEYS_RANDOM,
    "volatile-ttl", REDIS_MAXMEMORY_VOLATILE_TTL,
    "noeviction", REDIS_MAXMEMORY_NO_EVICTION,
    NULL, REDIS_DEFAULT_MAXMEMORY_POLICY);

    1. volatile-lru: evict the least recently used key from the set of keys that have an expire set (server.db[i].expires)
    2. volatile-ttl: evict the key closest to expiring from the set of keys that have an expire set (server.db[i].expires)
    3. volatile-random: evict a random key from the set of keys that have an expire set (server.db[i].expires)
    4. allkeys-lru: evict the least recently used key from the whole keyspace (server.db[i].dict)
    5. allkeys-random: evict a random key from the whole keyspace (server.db[i].dict)
    6. noeviction: never evict anything; once the memory limit is reached, write commands fail with an error
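In practice these policies are chosen through the maxmemory-policy directive. A minimal redis.conf fragment might look like this (the values are illustrative, not defaults):

maxmemory 100mb
maxmemory-policy allkeys-lru
maxmemory-samples 5

maxmemory-samples controls how many keys are sampled on each eviction attempt, which is what makes the LRU described below an approximation rather than an exact one.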


LRU algorithms are commonly used in caching systems such as memcached and Redis.

Let's start with Redis. I went through the code of the recently released Redis 3.0 and will use its implementation to explain how it does LRU.

/* The actual Redis Object */
#define REDIS_LRU_BITS 24
#define REDIS_LRU_CLOCK_MAX ((1<<REDIS_LRU_BITS)-1) /* Max value of obj->lru */
#define REDIS_LRU_CLOCK_RESOLUTION 1000 /* LRU clock resolution in ms */
typedef struct redisObject {
    unsigned type:4;
    unsigned encoding:4;
    unsigned lru:REDIS_LRU_BITS; /* lru time (relative to server.lruclock) */
    int refcount;
    void *ptr;
} robj;

/* Macro used to obtain the current LRU clock.
 * If the current resolution is lower than the frequency we refresh the
 * LRU clock (as it should be in production servers) we return the
 * precomputed value, otherwise we need to resort to a function call. */
#define LRU_CLOCK() ((1000/server.hz <= REDIS_LRU_CLOCK_RESOLUTION) ? server.lruclock : getLRUClock())

unsigned int getLRUClock(void) {
    return (mstime()/REDIS_LRU_CLOCK_RESOLUTION) & REDIS_LRU_CLOCK_MAX;
}


The code above defines the constants related to the LRU clock and how the current clock value is obtained. With the default hz of 10, serverCron refreshes the cached server.lruclock every 100 ms, which is finer than the 1000 ms resolution, so LRU_CLOCK() normally returns the precomputed value instead of calling getLRUClock().

unsigned lru:REDIS_LRU_BITS; /* lru time (relative to server.lruclock) */

In other words, every Redis object carries a 24-bit lru field that records the LRU clock value at which it was last accessed.
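Because the field is only 24 bits wide and each tick of the clock is REDIS_LRU_CLOCK_RESOLUTION (1000 ms), the clock wraps around periodically. A small standalone sketch (not Redis code) shows how long one full cycle lasts:

#include <stdio.h>

#define REDIS_LRU_BITS 24
#define REDIS_LRU_CLOCK_MAX ((1 << REDIS_LRU_BITS) - 1)
#define REDIS_LRU_CLOCK_RESOLUTION 1000 /* ms per LRU clock tick */

int main(void) {
    /* 2^24 ticks of 1000 ms each before the counter wraps back to 0. */
    double seconds = (double)(REDIS_LRU_CLOCK_MAX + 1) *
                     REDIS_LRU_CLOCK_RESOLUTION / 1000.0;
    printf("the 24-bit LRU clock wraps about every %.1f days\n",
           seconds / (60.0 * 60.0 * 24.0)); /* roughly 194 days */
    return 0;
}

The estimateObjectIdleTime() function shown further below compensates for this wrap when it computes idle times.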



Looking at the eviction code, the simplest and most brute-force cases are the random policies, which just pick a key at random from the relevant dict. For the LRU policies, Redis 3.0 instead keeps a small eviction pool of 16 entries (REDIS_EVICTION_POOL_SIZE) per database: sampled keys are inserted into the pool ordered by idle time, and the key that has been idle the longest is evicted first.

/* volatile-lru and allkeys-lru policy */
else if (server.maxmemory_policy == REDIS_MAXMEMORY_ALLKEYS_LRU ||
    server.maxmemory_policy == REDIS_MAXMEMORY_VOLATILE_LRU)
{
    struct evictionPoolEntry *pool = db->eviction_pool;

    while(bestkey == NULL) {
        evictionPoolPopulate(dict, db->dict, db->eviction_pool);
        /* Go backward from best to worst element to evict. */
        for (k = REDIS_EVICTION_POOL_SIZE-1; k >= 0; k--) {
            if (pool[k].key == NULL) continue;
            de = dictFind(dict,pool[k].key);

            /* Remove the entry from the pool. */
            sdsfree(pool[k].key);
            /* Shift all elements on its right to left. */
            memmove(pool+k,pool+k+1,
                sizeof(pool[0])*(REDIS_EVICTION_POOL_SIZE-k-1));
            /* Clear the element on the right which is empty
             * since we shifted one position to the left. */
            pool[REDIS_EVICTION_POOL_SIZE-1].key = NULL;
            pool[REDIS_EVICTION_POOL_SIZE-1].idle = 0;

            /* If the key exists, it is our pick. Otherwise it is
             * a ghost and we need to try the next element. */
            if (de) {
                bestkey = dictGetKey(de);
                break;
            } else {
                /* Ghost... */
                continue;
            }
        }
    }
}
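The snippet above only shows how a victim is taken out of the pool; the part that fills it, evictionPoolPopulate(), samples a handful of random keys and inserts them into the pool ordered by idle time. The following is a simplified, self-contained sketch of that idea; the names candidate, pool_offer and pool_best are hypothetical, and the real Redis code additionally keeps the pool sorted and stores sds keys:

#include <stddef.h>
#include <string.h>

#define POOL_SIZE 16 /* mirrors REDIS_EVICTION_POOL_SIZE */

/* Hypothetical candidate: a key name and its estimated idle time in ms. */
struct candidate {
    char key[64];
    unsigned long long idle;
    int used;
};

static struct candidate pool[POOL_SIZE];

/* Offer a randomly sampled key to the pool. The real Redis pool is kept
 * sorted by idle time; here we simply replace the least idle entry when
 * the new key has been idle for longer. */
void pool_offer(const char *key, unsigned long long idle) {
    int k, min = 0;
    for (k = 0; k < POOL_SIZE; k++) {
        if (!pool[k].used) { min = k; break; }      /* free slot: use it */
        if (pool[k].idle < pool[min].idle) min = k; /* track least idle entry */
    }
    if (pool[min].used && pool[min].idle >= idle) return; /* not idle enough */
    strncpy(pool[min].key, key, sizeof(pool[min].key) - 1);
    pool[min].key[sizeof(pool[min].key) - 1] = '\0';
    pool[min].idle = idle;
    pool[min].used = 1;
}

/* Pick the best victim: the pooled key with the longest idle time. */
const char *pool_best(void) {
    int k, best = -1;
    for (k = 0; k < POOL_SIZE; k++) {
        if (!pool[k].used) continue;
        if (best == -1 || pool[k].idle > pool[best].idle) best = k;
    }
    return best == -1 ? NULL : pool[best].key;
}

The point of the pool is that good candidates from earlier sampling rounds are remembered, so the approximation gets closer to true LRU than simply taking the best key of a single sample.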

Eviction by LRU relies on each object's idle time: the object's stored lru value is compared against the current LRU clock to estimate how long it has gone unrequested.

/* Given an object returns the min number of milliseconds the object was never
 * requested, using an approximated LRU algorithm. */
unsigned long long estimateObjectIdleTime(robj *o) {
    unsigned long long lruclock = LRU_CLOCK();
    if (lruclock >= o->lru) {
        return (lruclock - o->lru) * REDIS_LRU_CLOCK_RESOLUTION;
    } else {
        return (lruclock + (REDIS_LRU_CLOCK_MAX - o->lru)) *
                    REDIS_LRU_CLOCK_RESOLUTION;
    }
}
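As a worked example of the wrap-around branch: suppose the clock has wrapped, with o->lru = 16777000 and LRU_CLOCK() = 100. The idle time is then estimated as (100 + (16777215 - 16777000)) * 1000 = 315000 ms, i.e. the object was last touched about 315 seconds ago.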


serverCron() periodically refreshes the cached LRU clock. Once used memory exceeds maxmemory, keys are sampled at random and then evicted according to their LRU values.
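The refresh itself is a single assignment in serverCron(), paraphrasing the 3.0 source:

/* serverCron(): keep the cached clock fresh so that LRU_CLOCK()
 * can usually return server.lruclock without calling mstime(). */
server.lruclock = getLRUClock();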

The random sampling itself is done at the dict layer by

unsigned int dictGetSomeKeys(dict *d, dictEntry **des, unsigned int count)

which returns up to count entries picked from more or less random positions in the hash table.
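As a rough illustration of the idea, here is a toy chained hash table and sampler with hypothetical names (entry, table, sample_some_keys), not the actual dict.c implementation: sampling starts at a random bucket and walks forward until enough entries are collected.

#include <stdlib.h>

struct entry { const char *key; struct entry *next; };
struct table { struct entry **buckets; unsigned long size; };

/* Collect up to 'count' entries, starting from a random bucket and walking
 * forward through the buckets and their chains. */
unsigned int sample_some_keys(struct table *t, struct entry **des,
                              unsigned int count) {
    unsigned int stored = 0;
    unsigned long i = (unsigned long)rand() % t->size; /* random start bucket */
    unsigned long visited = 0;

    while (stored < count && visited < t->size) {
        struct entry *he = t->buckets[i];
        while (he && stored < count) {   /* take entries along the chain */
            des[stored++] = he;
            he = he->next;
        }
        i = (i + 1) % t->size;           /* next bucket, wrapping around */
        visited++;
    }
    return stored;
}

The result is not uniformly distributed (keys in the same chain tend to be returned together), which is exactly the trade-off the real function accepts in exchange for cheap sampling.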


To summarize:

Redis just maintains a relative LRU time for each object. When it needs to evict, it randomly samples a few keys (3 or more, controlled by maxmemory-samples) and evicts the oldest among them. This saves the pointer overhead of a doubly linked list and avoids taking locks on reads. Although it is not guaranteed to evict the absolute oldest key, it tends to evict old objects; in our online tests, the loss in hit ratio compared with a standard LRU was very small, and the results were good.


For more articles, visit http://blog.csdn.net/wallwind
















