Simple implementation of LRU cache


Caches turn up everywhere: the TLB in a processor, the Linux page cache, and well-known open-source software such as memcached are all cache implementations. LRU is short for "least recently used" and is one of the most common cache-replacement algorithms. Because cache space is small relative to the backing store, once part of that store is mapped into the cache we need a policy for what happens when the cache fills up, without hurting efficiency. LRU evicts the entry that has gone longest without being accessed; the new entry then takes its place in the cache and becomes the most recently used one.


The most recently accessed entry should sit at the front, the least recently accessed at the back, and the structure must be able to grow within a bounded range, which makes a linked list a natural storage structure. With a singly linked list, even when we already hold a pointer to the node we want, we still have to traverse from the head to find its predecessor before we can move the node to the front. With a doubly linked list, a pointer to the node is enough: we can unlink it from anywhere in the list and reattach it at the head without searching for its predecessor, so the move is O(1). For efficiency we therefore use a doubly linked list; each node costs one extra pointer, but a cache by its nature trades some space for time. Nodes are accessed by key, so how do we get from a key to the corresponding node pointer quickly? With a hash table: it yields the node's address for a given key in O(1), and combining the hash table with the doubly linked list makes the overall cache access O(1) as well.


The remaining problem with a hash table is collisions: different keys can map to the same slot, i.e. the same subscript of the hash array. A common way to resolve this is separate chaining: all keys that hash to the same subscript are linked together in a list hanging off that slot. (Note that this is chaining, not open addressing; open addressing instead probes for another free slot in the array.) Chaining solves the problem, but if many keys land in the same bucket the chain grows long, and lookup degrades from O(1) toward O(n), which in some cases hurts performance noticeably. If that happens, each bucket can store its key/node-pointer pairs in a balanced binary search tree instead, so that lookup time does not worsen as the data grows. Below is a simple LRU cache I implemented using the first (chaining) method:


#include <stdlib.h>

struct kvnode
{
    int key;
    int value;
    struct kvnode* prev;
    struct kvnode* next;
};

struct listtable
{
    int capacity;
    int count;
    struct kvnode* head;   /* most recently used */
    struct kvnode* tail;   /* least recently used */
};

struct hashnode
{
    struct kvnode* linknode;
    struct hashnode* prev;
    struct hashnode* next;
};

struct hashtable
{
    int capacity;
    struct hashnode** pnode;   /* array of bucket heads */
};

struct listtable* ptable;
struct hashtable* phash;

void lrucacheinit(int capacity) {
    if (capacity <= 0) return;

    ptable = malloc(sizeof(*ptable));
    ptable->capacity = capacity;
    ptable->count = 0;
    ptable->head = NULL;
    ptable->tail = NULL;

    phash = malloc(sizeof(*phash));
    phash->capacity = capacity;
    phash->pnode = malloc(sizeof(*phash->pnode) * capacity);

    for (int i = 0; i < capacity; ++i)
    {
        phash->pnode[i] = NULL;
    }
}

void lrucachefree() {
    if (ptable)
    {
        struct kvnode* p = ptable->head;
        while (p)
        {
            struct kvnode* next = p->next;
            free(p);
            p = next;
        }
        free(ptable);
        ptable = NULL;
    }

    if (phash)
    {
        /* free each collision chain before the bucket array itself */
        for (int i = 0; i < phash->capacity; ++i)
        {
            struct hashnode* p = phash->pnode[i];
            while (p)
            {
                struct hashnode* next = p->next;
                free(p);
                p = next;
            }
        }
        free(phash->pnode);
        free(phash);
        phash = NULL;
    }
}

int lrucacheget(int key) {
    if (!phash || !ptable) return -1;

    struct hashnode* p = phash->pnode[key % phash->capacity];
    while (p && p->linknode->key != key) p = p->next;
    if (!p) return -1;

    if (p->linknode == ptable->head)
    {
        return p->linknode->value;
    }

    /* unlink the node from its current position */
    if (p->linknode->prev)
    {
        p->linknode->prev->next = p->linknode->next;
    }
    if (p->linknode->next)
    {
        p->linknode->next->prev = p->linknode->prev;
    }
    /* if it was the tail, the tail moves to its predecessor */
    if (ptable->tail == p->linknode)
    {
        ptable->tail = p->linknode->prev;
    }

    /* relink it at the head: it is now the most recently used */
    ptable->head->prev = p->linknode;
    p->linknode->next = ptable->head;
    p->linknode->prev = NULL;
    ptable->head = p->linknode;

    return p->linknode->value;
}

void lrucacheset(int key, int value) {
    /* values must be positive, because lrucacheget returns -1 on a miss */
    if (value <= 0) return;
    if (!phash || !ptable) return;

    int index = key % phash->capacity;
    struct hashnode* p = phash->pnode[index];

    while (p && p->linknode->key != key)
    {
        p = p->next;
    }

    if (p)
    {
        /* key already cached: update the value and move the node to the front */
        p->linknode->value = value;
        if (ptable->head == p->linknode) return;

        if (p->linknode->prev)
        {
            p->linknode->prev->next = p->linknode->next;
        }
        if (p->linknode->next)
        {
            p->linknode->next->prev = p->linknode->prev;
        }
        if (ptable->tail == p->linknode)
        {
            ptable->tail = p->linknode->prev;
        }

        ptable->head->prev = p->linknode;
        p->linknode->next = ptable->head;
        p->linknode->prev = NULL;
        ptable->head = p->linknode;
    }
    else
    {
        /* new key: create the list node and its hash entry */
        struct kvnode* pkvinsert = malloc(sizeof(*pkvinsert));
        pkvinsert->key = key;
        pkvinsert->value = value;
        pkvinsert->prev = NULL;
        pkvinsert->next = ptable->head;

        struct hashnode* phashnode = malloc(sizeof(*phashnode));
        phashnode->linknode = pkvinsert;
        phashnode->prev = NULL;
        phashnode->next = phash->pnode[index];

        if (ptable->head)
        {
            ptable->head->prev = pkvinsert;
        }
        else
        {
            ptable->tail = pkvinsert;
        }
        ptable->head = pkvinsert;

        if (phash->pnode[index])
        {
            phash->pnode[index]->prev = phashnode;
        }
        phash->pnode[index] = phashnode;

        if (ptable->count == ptable->capacity)
        {
            /* cache full: evict the least recently used node at the tail */
            struct kvnode* pkvdel = ptable->tail;
            ptable->tail = ptable->tail->prev;
            if (ptable->tail) ptable->tail->next = NULL;

            /* remove the evicted key's entry from its hash bucket */
            int delindex = pkvdel->key % phash->capacity;
            struct hashnode* q = phash->pnode[delindex];
            while (q && q->linknode->key != pkvdel->key) q = q->next;

            if (q)
            {
                if (q->prev) q->prev->next = q->next;
                else phash->pnode[delindex] = q->next;   /* q was the bucket head */
                if (q->next) q->next->prev = q->prev;
                free(q);
            }

            free(pkvdel);
        }
        else ptable->count++;
    }
}


Copyright notice: this is the author's original article; please do not reproduce it without the author's permission.
