Design a Cache System

Similar to our previous posts, we would like to select system design interview questions that are popular and practical, so that you can not only get ideas on how to analyze problems in an interview, but also learn something interesting at the same time.

If you have no idea about system design interviews, I'd recommend you read this tutorial first. In this post, we are addressing the problem of how to design a cache system. Topics covered by this post include:

    • LRU Cache
    • Eviction Policy
    • Cache concurrency
    • Distributed Cache System

Problem

How to design a cache system?

Caching is a widely adopted technique in almost every application today. In addition, it applies to every layer of the technology stack. For instance, at the network level, caches are used in DNS lookup, and at the web server level, caches are used for frequent requests.

In short, a cache system stores commonly used resources (maybe in memory), and the next time someone requests the same resource, the system can return it immediately. It increases system efficiency by consuming more storage space.

LRU

One of the most common cache systems is LRU (least recently used). In fact, another common interview question is to discuss the data structures and design of an LRU cache. Let's start with this approach.

The "the"-the-same-LRU cache works are quite simple. When the client requests resource A, it happens as follow:

    • If A exists in the cache, we just return it immediately.
    • If not and the cache has extra storage slots, we fetch resource A, return it to the client, and insert A into the cache.
    • If the cache is full, we kick out the resource that's least recently used and replace it with resource A.

The strategy here is to maximize the chance that the requested resource exists in the cache. So how can we implement a simple LRU?
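Before diving into the design, here is a minimal sketch of the request flow above in Python, built on the standard library's OrderedDict; the class name SimpleLRU and the fetch_from_origin helper are hypothetical and only for illustration.

    from collections import OrderedDict

    def fetch_from_origin(key):
        # Hypothetical helper: fetch the resource from its origin (disk, database, network).
        return "value-for-" + key

    class SimpleLRU:
        def __init__(self, capacity):
            self.capacity = capacity
            self.items = OrderedDict()  # ordered from least to most recently used

        def request(self, key):
            if key in self.items:
                # Case 1: cache hit -- mark as most recently used and return immediately.
                self.items.move_to_end(key)
                return self.items[key]
            value = fetch_from_origin(key)
            if len(self.items) >= self.capacity:
                # Case 3: cache full -- evict the least recently used entry.
                self.items.popitem(last=False)
            # Case 2: store the new resource and return it to the client.
            self.items[key] = value
            return value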

LRU Design

An LRU cache should support the operations lookup, insert, and delete. Apparently, in order to achieve fast lookup, we need to use a hash table. By the same token, if we want to make insert/delete fast, something like a linked list should come to mind. Since we also need to locate the least recently used item efficiently, we need something ordered like a queue, stack, or sorted array.

To combine all these analyses, we can use a queue implemented by a doubly linked list to store all the resources. Also, a hash table with the resource identifier as key and the address of the corresponding queue node as value is needed.

Here's how it works. When resource A is requested, we check the hash table to see if A exists in the cache. If it exists, we can immediately locate the corresponding queue node, move it to the end of the queue (marking it as most recently used), and return the resource. If not, we'll add A into the cache. If there is enough space, we just add A to the end of the queue and update the hash table. Otherwise, we need to delete the least recently used entry. To do that, we can easily remove the head of the queue and the corresponding entry from the hash table.
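A minimal sketch of this design in Python, with a dictionary as the hash table and an explicit doubly linked list as the queue (the class and method names are my own, not from the original post); the node right after the head sentinel is the least recently used entry, and new or freshly accessed entries go to the tail end.

    class Node:
        # One queue node holding a cached resource.
        def __init__(self, key, value):
            self.key = key
            self.value = value
            self.prev = None
            self.next = None

    class LRUCache:
        def __init__(self, capacity):
            self.capacity = capacity
            self.table = {}               # resource identifier -> queue node
            self.head = Node(None, None)  # sentinel on the least recently used side
            self.tail = Node(None, None)  # sentinel on the most recently used side
            self.head.next = self.tail
            self.tail.prev = self.head

        def _remove(self, node):
            node.prev.next = node.next
            node.next.prev = node.prev

        def _append(self, node):
            # Insert right before the tail sentinel (most recently used position).
            node.prev = self.tail.prev
            node.next = self.tail
            self.tail.prev.next = node
            self.tail.prev = node

        def get(self, key):
            if key not in self.table:
                return None
            node = self.table[key]
            # Move the node to the end of the queue to mark it as recently used.
            self._remove(node)
            self._append(node)
            return node.value

        def put(self, key, value):
            if key in self.table:
                self._remove(self.table[key])
            elif len(self.table) >= self.capacity:
                # Evict the least recently used entry: the node right after the head.
                lru = self.head.next
                self._remove(lru)
                del self.table[lru.key]
            node = Node(key, value)
            self.table[key] = node
            self._append(node)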

Eviction Policy

When the cache is full, we need to remove existing items to make room for new resources. In fact, deleting the least recently used item is just one of the most common approaches. So are there other ways to do that?

As mentioned above, the strategy is to maximize the chance that the requested resource exists in the cache. I'll briefly mention several approaches here:

    • Random Replacement (RR) – as the term suggests, we can just randomly delete an entry.
    • Least Frequently Used (LFU) – we keep a count of how frequently each item is requested and delete the one least frequently used (see the sketch after this list).
    • W-TinyLFU – I'd also like to talk about this modern eviction policy. In a nutshell, the problem of LFU is that sometimes an item was only used frequently in the past, but LFU would still keep this item for a long while. W-TinyLFU solves this problem by calculating frequency within a time window. It also has various optimizations of storage.
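As referenced in the LFU bullet above, here is a minimal sketch of that policy, assuming a linear scan over the keys is acceptable when picking the victim (a production LFU would keep frequency buckets for O(1) eviction; the class name LFUCache is hypothetical).

    from collections import Counter

    class LFUCache:
        def __init__(self, capacity):
            self.capacity = capacity
            self.values = {}
            self.counts = Counter()  # how many times each key has been requested

        def get(self, key):
            if key not in self.values:
                return None
            self.counts[key] += 1
            return self.values[key]

        def put(self, key, value):
            if key not in self.values and len(self.values) >= self.capacity:
                # Evict the key with the smallest request count.
                victim = min(self.values, key=lambda k: self.counts[k])
                del self.values[victim]
                del self.counts[victim]
            self.values[key] = value
            self.counts[key] += 1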

Concurrency

To discuss concurrency, I'd like to talk about why there is a concurrency issue with the cache and how we can address it.

It falls into the classic reader-writer problem. When multiple clients are trying to update the cache at the same time, there can be conflicts. For instance, two clients may compete for the same cache slot, and the one who updates the cache last wins.

The common solution of course is using a lock. The downside is obvious – it affects the performance a lot. How can we optimize this?

One approach is to split the cache into multiple shards and have a lock for each of them, so that clients won't wait for each other if they are updating the cache in different shards. However, given that hot entries are more likely to be visited, certain shards will be locked more often than others.
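A minimal sketch of this sharding idea, assuming a fixed shard count and Python's built-in hash to pick a shard; the class name ShardedCache is hypothetical. Each shard has its own lock, so writers touching different shards never block each other.

    import threading

    class ShardedCache:
        def __init__(self, num_shards=16):
            self.num_shards = num_shards
            self.shards = [{} for _ in range(num_shards)]
            self.locks = [threading.Lock() for _ in range(num_shards)]

        def _index(self, key):
            # Map the key to one of the shards.
            return hash(key) % self.num_shards

        def get(self, key):
            i = self._index(key)
            with self.locks[i]:
                return self.shards[i].get(key)

        def put(self, key, value):
            i = self._index(key)
            with self.locks[i]:
                self.shards[i][key] = value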

An alternative is to use commit logs. To update the cache, we can store all the mutations in logs rather than updating immediately. Then some background process executes all the logs asynchronously. This strategy is commonly adopted in database design.
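A minimal sketch of the commit-log idea, where an in-process queue stands in for the log and a background thread stands in for the background process (names are hypothetical; a real system would persist the log). Writers only append a mutation and return, so they never contend on the cache itself; readers may briefly see stale data until the log entry is applied.

    import queue
    import threading

    class LoggedCache:
        def __init__(self):
            self.data = {}
            self.log = queue.Queue()
            threading.Thread(target=self._apply_log, daemon=True).start()

        def put(self, key, value):
            # Record the mutation in the log instead of updating the cache directly.
            self.log.put((key, value))

        def get(self, key):
            return self.data.get(key)

        def _apply_log(self):
            # Background process: apply logged mutations asynchronously.
            while True:
                key, value = self.log.get()
                self.data[key] = value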

Distributed Cache

When the system gets to a certain scale, we need to distribute the cache across multiple machines.

The general strategy is to keep a hash table that maps each resource to the corresponding machine. Therefore, when resource A is requested, from this hash table we know that machine M is responsible for caching A and direct the request to M. At machine M, it works similarly to the local cache discussed above. Machine M may need to fetch and update the cache for A if it doesn't exist in memory. After that, it returns the cached result to the original server.
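A minimal sketch of the routing step, assuming a fixed list of cache machine addresses and simple modulo hashing (the class name CacheRouter and the addresses are hypothetical). Real deployments, for example Memcached clients, usually prefer consistent hashing so that adding or removing a machine remaps fewer keys.

    import hashlib

    class CacheRouter:
        def __init__(self, machines):
            self.machines = machines  # e.g. ["cache-1:11211", "cache-2:11211"]

        def machine_for(self, key):
            # Hash the resource identifier and map it to one of the machines.
            digest = hashlib.md5(key.encode()).hexdigest()
            return self.machines[int(digest, 16) % len(self.machines)]

    # Usage: the application asks the router which machine owns resource A,
    # then sends the get/set request to that machine.
    router = CacheRouter(["cache-1:11211", "cache-2:11211", "cache-3:11211"])
    print(router.machine_for("resource-A"))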

If you are interested in this topic, you can read more about Memcached.

Summary

Caching can be a really interesting and practical topic as it's used in almost every system nowadays. There are still many topics I'm not covering here, like expiration policy.

If you want to see more similar posts, check our system design interview questions collection.

The post is written by Gainlo, a platform that allows you to have mock interviews with employees from Google, Amazon, etc.
