The routines for caching updates



There are four common design patterns for updating a cache: Cache Aside, Read Through, Write Through, and Write Behind Caching.

Cache Aside Pattern

This is the most commonly used pattern, and its specific logic is as follows:

    • Miss: The application tries to read data from the cache; if it is not there, it reads the data from the database and, on success, puts it into the cache
    • Hit: The application reads the data from the cache and returns it
    • Update: The application writes the data to the database first and, after that succeeds, invalidates the cache entry
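The three cases above can be sketched in a few lines. This is a minimal illustration, not a specific library's API: the dicts `cache` and `database` stand in for a real cache (such as Redis) and a real datastore, and the function names are assumptions for the demo.

```python
cache = {}
database = {"user:1": "Alice"}

def read(key):
    # Hit: return straight from the cache.
    if key in cache:
        return cache[key]
    # Miss: load from the database, then put the value into the cache.
    value = database.get(key)
    if value is not None:
        cache[key] = value
    return value

def update(key, value):
    # Update: write the database first, then invalidate the cache entry.
    database[key] = value
    cache.pop(key, None)
```

Note that the update path invalidates rather than rewrites the cache; the next read repopulates it from the database, which avoids racing concurrent writes against stale cache fills.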
Read/Write Through Pattern

In the Cache Aside pattern above, the application code has to maintain two data stores, the cache and the database, which makes the application more verbose. In the Read/Write Through patterns, the cache itself proxies the database update operations, so the application layer becomes much simpler: the application can treat the backend as a single store, while the store maintains its own cache.

Read Through

The Read Through pattern updates the cache during a read. When the cache misses (the entry expired or was evicted by LRU), Cache Aside makes the caller responsible for loading the data into the cache, whereas Read Through has the cache service load the data itself, so the process is transparent to the application.
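A minimal sketch of the idea: the cache object loads data on a miss through a loader callback, so callers only ever talk to the cache. The class name and the `loader` callback standing in for the database are assumptions for illustration.

```python
class ReadThroughCache:
    def __init__(self, loader):
        self._store = {}
        self._loader = loader  # called on a miss to fetch from the backing store

    def get(self, key):
        if key not in self._store:
            # Miss: the cache itself loads the data; the caller never
            # touches the database directly.
            self._store[key] = self._loader(key)
        return self._store[key]

database = {"user:1": "Alice"}
cache = ReadThroughCache(lambda key: database.get(key))
```

From the application's point of view, `cache.get(...)` is the only read API; whether the value came from memory or from the database is invisible.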

Write Through

The Write Through pattern is similar to Read Through, but it occurs when data is updated. If the write does not hit the cache, the database is updated directly and the result is returned. If the write hits the cache, the cache is updated, and the cache itself then updates the database (this is a synchronous operation).
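The write path can be sketched as follows, assuming the same dict-backed stand-ins as before; a real cache service would perform the synchronous database write internally.

```python
class WriteThroughCache:
    def __init__(self, db):
        self._store = {}
        self._db = db

    def put(self, key, value):
        if key in self._store:
            # Hit: update the cache, and the cache synchronously
            # writes the database before the call returns.
            self._store[key] = value
            self._db[key] = value
        else:
            # Miss: update the database directly.
            self._db[key] = value

    def get(self, key):
        if key not in self._store:
            self._store[key] = self._db[key]
        return self._store[key]

database = {}
cache = WriteThroughCache(database)
```

Because the database write is synchronous, the cache and the database are always consistent once `put` returns, at the cost of write latency.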

Write Behind Caching Pattern

Write Behind is also called Write Back. Readers familiar with the Linux kernel should recognize Write Back: it is the page cache algorithm used by the Linux file system. In a word, the Write Back pattern only updates the cache when data is updated, not the database; the cache then asynchronously updates the database in batches. The advantage of this design is that data I/O is very fast (it operates directly on memory), and because the flush is asynchronous, Write Back can also coalesce multiple operations on the same data, so the performance improvement is considerable.
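The pattern can be sketched as below. This is a simplified illustration: a real implementation would flush asynchronously (on a timer or from a background thread) and handle crash recovery of unflushed writes; here `flush()` is called explicitly to keep the demo deterministic.

```python
class WriteBehindCache:
    def __init__(self, db):
        self._store = {}
        self._dirty = set()  # keys written to the cache but not yet persisted
        self._db = db

    def put(self, key, value):
        # Only the cache is updated on a write; repeated writes to the
        # same key coalesce into a single database write at flush time.
        self._store[key] = value
        self._dirty.add(key)

    def flush(self):
        # Batch-write all dirty entries to the database.
        for key in self._dirty:
            self._db[key] = self._store[key]
        self._dirty.clear()

database = {}
cache = WriteBehindCache(database)
```

The trade-off is durability: until `flush()` runs, the database lags behind the cache, and a crash loses the unflushed writes.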

Reprint: http://coolshell.cn/articles/17416.html
