When designing a caching system, the problems that have to be considered are cache penetration, cache breakdown, and the avalanche effect of cache invalidation.
Cache penetration
Cache penetration means querying data that does not exist at all. Because the cache is only written passively on a miss, and for fault-tolerance reasons data that cannot be found in the storage layer is not written back to the cache, every request for this non-existent data has to go to the storage layer, which defeats the purpose of the cache. Under heavy traffic the DB may be brought down, and if someone deliberately attacks the application with keys that do not exist, this becomes a real vulnerability.
Solutions
There are many ways to deal with cache penetration effectively. The most common is to use a Bloom filter: hash all the data that can possibly exist into a sufficiently large bitmap, so that a query for non-existent data is intercepted by the bitmap and never puts pressure on the underlying storage system. There is also a simpler and cruder approach (which is what we do): if a query returns an empty result (whether because the data does not exist or because of a system failure), we still cache the empty result, but with a very short expiration time, at most five minutes.
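As a rough illustration of the second approach, here is a minimal sketch in Java. It assumes a Jedis client and a placeholder loadFromDb lookup; the class name, TTL values, and the empty-string sentinel are illustrative choices, not code from the original article.

import redis.clients.jedis.Jedis;

public class PenetrationSafeCache {
    private static final String EMPTY_MARKER = "";        // sentinel cached for missing rows
    private static final int EMPTY_TTL_SECONDS = 5 * 60;  // short TTL, at most five minutes
    private static final int NORMAL_TTL_SECONDS = 60 * 60;

    private final Jedis jedis = new Jedis("localhost", 6379);

    public String get(String key) {
        String value = jedis.get(key);
        if (value != null) {
            // hit; the empty marker means "known to be missing", so don't hit the DB again
            return EMPTY_MARKER.equals(value) ? null : value;
        }
        String fromDb = loadFromDb(key); // hypothetical storage-layer lookup
        if (fromDb == null) {
            // cache the empty result briefly so repeated lookups of a
            // non-existent key stop reaching the storage layer
            jedis.setex(key, EMPTY_TTL_SECONDS, EMPTY_MARKER);
            return null;
        }
        jedis.setex(key, NORMAL_TTL_SECONDS, fromDb);
        return fromDb;
    }

    private String loadFromDb(String key) {
        return null; // stand-in for the real storage-layer query
    }
}

The sentinel value lets a later hit distinguish "known to be missing" from "not cached yet", so repeated requests for the same non-existent key are absorbed by the cache for the duration of the short TTL.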
Cache avalanche
Cache avalanche refers to setting the same expiration time for many cache entries, so that the cache fails all at once at some point in time, all requests are forwarded to the DB, and the instantaneous pressure overwhelms it like an avalanche.
Solutions
The avalanche effect of cache invalidation is terrible for the underlying system. Most system designers consider locking or queuing to guarantee that the cache is written by a single thread (or process), which keeps a large number of concurrent requests from falling onto the underlying storage system when the cache fails. A simple scheme worth sharing here is to spread out the cache expiration times: for example, add a random offset to the base expiration time, say 1-5 minutes at random, so that expiration times repeat far less often and a collective failure is hard to trigger.
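A minimal sketch of the jitter idea in plain Java, assuming the 1-5 minute random range from the text; the base TTL, class, and method names are only illustrative.

import java.util.concurrent.ThreadLocalRandom;

public class CacheTtl {
    // Base expiration for this class of keys, e.g. 30 minutes (illustrative value).
    private static final int BASE_TTL_SECONDS = 30 * 60;

    // Add a random 1-5 minute offset so entries written together
    // do not all expire at the same instant.
    public static int ttlWithJitter() {
        int jitterSeconds = ThreadLocalRandom.current().nextInt(60, 5 * 60 + 1);
        return BASE_TTL_SECONDS + jitterSeconds;
    }
}

Each cache write would then use this jittered value as the expiry, e.g. redis.setex(key, CacheTtl.ttlWithJitter(), value), instead of a fixed TTL.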
Use a mutex (mutex key)
A more common practice in the industry is to use a mutex. Simply put, when the cache misses (the value is empty), instead of loading from the DB immediately, first use one of the cache tool's operations that only succeeds when the key is absent (such as Redis's SETNX or Memcache's ADD) to set a mutex key. If the operation succeeds, load from the DB and reset the cache; otherwise, retry the whole get-from-cache method.
SETNX is short for "SET if Not eXists": the key is set only when it does not already exist, which can be used to achieve a locking effect. Redis versions before 2.6.1 did not implement an expiration time for SETNX, so here are two versions of the code for reference:
public String get(String key) {
    String value = redis.get(key);
    if (value == null) {
        if (redis.setnx(key_mutex, "1")) {
            // 3 min timeout to avoid the mutex holder crashing while holding the lock
            redis.expire(key_mutex, 3 * 60);
            value = db.get(key);
            redis.set(key, value);
            redis.delete(key_mutex);
        } else {
            // other threads sleep 50 milliseconds and then retry
            Thread.sleep(50);
            get(key);
        }
    }
    return value;
}
Optimized mutual-exclusion lock logic
public String get(String key) {
    String value = redis.get(key);
    if (value == null) { // the cached value has expired
        // set a 3 min timeout so that if the del below fails,
        // the lock still expires and a later cache miss can load the DB
        if (redis.setnx(key_mutex, 1, 3 * 60) == 1) { // lock set successfully
            value = db.get(key);
            redis.set(key, value, expire_secs);
            redis.del(key_mutex);
        } else {
            // other threads are already loading the DB and refilling the cache,
            // so sleep briefly and retry reading the cached value
            sleep(50);
            return get(key); // retry
        }
    } else {
        return value;
    }
    return value;
}
The following is a distributed mutex implemented with Memcache's add method:
if (memcache.get(key) == null) {
    // 3 min timeout to avoid the mutex holder crashing while holding the lock
    if (memcache.add(key_mutex, 3 * 60 * 1000) == true) {
        value = db.get(key);
        memcache.set(key, value);
        memcache.delete(key_mutex);
    } else {
        // another client holds the mutex; sleep briefly and retry
        sleep(50);
        retry();
    }
}
Reference Link: 54135506
Implementing a distributed mutual exclusion lock based on Redis / Memcache