Cache Update Patterns
There are four common design patterns for updating a cache: Cache Aside, Read Through, Write Through, and Write Behind Caching.
Cache Aside Pattern
This is the most commonly used pattern; its logic is as follows (a code sketch follows the list):
- Miss (invalidation): the application reads from the cache; if the data is not there, it reads from the database and, on success, puts the result into the cache
- Hit: the application reads the data from the cache and returns it directly
- Update: the application writes the new data to the database first and, after the write succeeds, invalidates the cache entry
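A minimal sketch of the cache-aside read and update paths, in Python. The `Cache` and `DB` classes here are hypothetical in-memory stand-ins for a real cache (such as Redis) and a real database; only the control flow matters:

```python
# Hypothetical in-memory stand-ins for a real cache and database.
class Cache:
    def __init__(self):
        self._store = {}

    def get(self, key):
        return self._store.get(key)

    def set(self, key, value):
        self._store[key] = value

    def delete(self, key):
        self._store.pop(key, None)


class DB:
    def __init__(self):
        self._rows = {}

    def query(self, key):
        return self._rows.get(key)

    def update(self, key, value):
        self._rows[key] = value


cache, db = Cache(), DB()


def read(key):
    # Hit: return straight from the cache.
    value = cache.get(key)
    if value is not None:
        return value
    # Miss: load from the database and populate the cache.
    value = db.query(key)
    if value is not None:
        cache.set(key, value)
    return value


def update(key, value):
    # Write the database first, then invalidate the cache entry.
    db.update(key, value)
    cache.delete(key)
```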
Read/Write Through Pattern
In the Cache Aside pattern above, the application code has to maintain two data stores, one for the cache and one for the database, so the application code is more verbose. The Read/Write Through pattern instead lets the cache proxy the database operations itself, which makes the application layer much simpler: the application sees the backend as a single store, and that store maintains its own cache.
Read Through
The Read Through pattern updates the cache during read (query) operations: when the cache misses (the entry has expired or been evicted by LRU), Cache Aside makes the caller responsible for loading the data into the cache, whereas Read Through has the caching service load it itself, which is transparent to the application.
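A minimal Read Through sketch, assuming a hypothetical loader callback that stands in for the real database query; the point is that the cache, not the caller, loads missing entries:

```python
class ReadThroughCache:
    def __init__(self, loader):
        self._store = {}
        self._loader = loader  # called by the cache on a miss

    def get(self, key):
        if key not in self._store:                 # miss: expired or never loaded
            self._store[key] = self._loader(key)   # the cache loads the data itself
        return self._store[key]


def load_from_db(key):
    # Placeholder for a real database query.
    return f"row-for-{key}"


cache = ReadThroughCache(load_from_db)
print(cache.get("user:42"))  # first call loads from the "database"
print(cache.get("user:42"))  # second call is served from the cache
```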
Write Through
The Write Through pattern is similar to Read Through but happens when data is updated: if the update does not hit the cache, the database is updated directly and the call returns; if the cache is hit, the cache is updated and the cache itself then updates the database (this is a synchronous operation).
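A minimal Write Through sketch along the same lines; `db_write` is a hypothetical stand-in for the real database update, and the hit/miss branches mirror the description above:

```python
class WriteThroughCache:
    def __init__(self, db_write):
        self._store = {}
        self._db_write = db_write

    def put(self, key, value):
        if key in self._store:
            # Hit: update the cache, then the cache writes the database synchronously.
            self._store[key] = value
            self._db_write(key, value)
        else:
            # Miss: update the database directly.
            self._db_write(key, value)


database = {}
cache = WriteThroughCache(lambda k, v: database.__setitem__(k, v))
cache.put("user:42", {"name": "alice"})  # miss: goes straight to the database
```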
Write Behind Caching Pattern
Write Behind is also called Write Back. Readers familiar with the Linux kernel will recognize write back: it is essentially the page cache algorithm used by the Linux file system. In a word, the Write Back pattern only updates the cache when data is updated and does not touch the database; the cache then asynchronously flushes the updates to the database in batches. The advantage of this design is that data I/O is very fast (it operates directly on memory), and because the flush is asynchronous, write back can also merge multiple operations on the same data, so the performance improvement is considerable.
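A minimal Write Behind sketch. Updates only touch the in-memory store and are recorded as dirty; `flush()` later writes them to the database in one batch, merging repeated writes to the same key. In a real system the flush would run asynchronously (a timer, a background thread, or on eviction), and `db_write` is again a hypothetical stand-in for the real database update:

```python
class WriteBehindCache:
    def __init__(self, db_write):
        self._store = {}
        self._dirty = {}          # pending writes; the last value per key wins
        self._db_write = db_write

    def put(self, key, value):
        self._store[key] = value  # fast path: memory only, no database I/O
        self._dirty[key] = value  # queue the write for the next flush

    def flush(self):
        batch, self._dirty = self._dirty, {}
        for key, value in batch.items():  # one batched write per key
            self._db_write(key, value)


database = {}
cache = WriteBehindCache(lambda k, v: database.__setitem__(k, v))
cache.put("counter", 1)
cache.put("counter", 2)  # merged: only the final value reaches the database
cache.flush()            # database is now {"counter": 2}
```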
Reprint: http://coolshell.cn/articles/17416.html