Cache avalanche
A cache avalanche happens when data has not been loaded into the cache, or when a large portion of cached entries expires at the same time, so that every request falls through to the database. The database's CPU and memory load spikes, and in the worst case it goes down.
Solution Ideas:
1. Use a lock or a counter, or a small number of queues, so that when a cache entry expires only a limited number of requests are allowed through to the database (see the sketch after this list). This relieves the pressure on the database, but it also lowers the system's throughput.
2. Analyze user behavior and spread expiration times as evenly as possible, for example by adding random jitter to each key's TTL, so that entries do not all expire at the same moment and an avalanche is avoided.
3. If the avalanche is caused by a cache server outage, consider a primary/standby deployment, e.g. a Redis master with a standby. A dual cache, however, introduces transaction problems on update: a reader may see dirty data, and this has to be handled.
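A minimal sketch of ideas 1 and 2, assuming an in-memory map standing in for the real cache and a loadFromDB callback standing in for the real database query (both are illustrative, not part of the original article): a per-key lock lets only one goroutine rebuild an expired entry, and a randomized TTL spreads expirations out.

```go
package cache

import (
	"math/rand"
	"sync"
	"time"
)

type entry struct {
	value     string
	expiresAt time.Time
}

// Cache is an in-memory stand-in for memcached/Redis, with one mutex per key so
// that only one caller at a time rebuilds an expired entry (idea 1).
type Cache struct {
	mu    sync.Mutex
	data  map[string]entry
	locks map[string]*sync.Mutex
}

func NewCache() *Cache {
	return &Cache{data: map[string]entry{}, locks: map[string]*sync.Mutex{}}
}

// jitteredTTL adds up to 20% random jitter so keys do not all expire at once (idea 2).
func jitteredTTL(base time.Duration) time.Duration {
	return base + time.Duration(rand.Int63n(int64(base/5)))
}

// Get returns the cached value, rebuilding it under a per-key lock on a miss.
func (c *Cache) Get(key string, loadFromDB func(string) string) string {
	c.mu.Lock()
	if e, ok := c.data[key]; ok && time.Now().Before(e.expiresAt) {
		c.mu.Unlock()
		return e.value
	}
	l, ok := c.locks[key]
	if !ok {
		l = &sync.Mutex{}
		c.locks[key] = l
	}
	c.mu.Unlock()

	l.Lock() // at most one goroutine per key reaches the database
	defer l.Unlock()

	c.mu.Lock()
	if e, ok := c.data[key]; ok && time.Now().Before(e.expiresAt) {
		c.mu.Unlock() // another goroutine refreshed the entry while we waited
		return e.value
	}
	c.mu.Unlock()

	v := loadFromDB(key) // the expensive database query
	c.mu.Lock()
	c.data[key] = entry{value: v, expiresAt: time.Now().Add(jitteredTTL(10 * time.Minute))}
	c.mu.Unlock()
	return v
}
```

With this pattern, a burst of requests for the same expired key produces exactly one database query; the other goroutines wait on the per-key lock and then read the refreshed entry.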
Cache penetration
Cache penetration happens when a user queries data that does not exist in the database, and therefore does not exist in the cache either. Every such query misses the cache and goes straight to the database.
Solution Ideas:
1. If the database query also comes back empty, write a default (empty) value into the cache for that key, so the next request is answered from the cache instead of hitting the database again. This is the simplest and crudest approach (see the first sketch below).
2. Exploit the rules your cache keys follow. For example, our company makes set-top boxes and caches data keyed by MAC address; a MAC address has a fixed format, so any query whose key does not match the format is filtered out. If the keys in your cache design follow clear rules, this approach filters out part of the invalid traffic, but it only relieves some of the pressure and is not a complete fix.
3. Use a bitset-based filter: hash every key that can possibly exist into a sufficiently large bitset, so that queries for non-existent keys are intercepted before they reach the underlying storage system (see the second sketch below). For more on Bloom filters, see: Bitset-based Filters (Bloom filter)
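A minimal sketch of idea 1, assuming the bradfitz/gomemcache client and a queryDB callback in place of the real database layer (the server address, the sentinel value, and the TTLs are illustrative): a key that is missing from the database is cached with a short-lived sentinel so repeated lookups stop reaching the database.

```go
package cache

import (
	"errors"

	"github.com/bradfitz/gomemcache/memcache"
)

var mc = memcache.New("127.0.0.1:11211") // address is an assumption

const emptyMarker = "__EMPTY__" // sentinel stored for keys that do not exist in the database

// Lookup returns the cached value, caching a short-lived default when the
// database has no row for the key, so repeated queries stop hitting the database.
func Lookup(key string, queryDB func(string) (string, bool)) (string, error) {
	if it, err := mc.Get(key); err == nil {
		if string(it.Value) == emptyMarker {
			return "", errors.New("not found")
		}
		return string(it.Value), nil
	}
	v, ok := queryDB(key)
	if !ok {
		// idea 1: cache the "missing" result with a short TTL
		mc.Set(&memcache.Item{Key: key, Value: []byte(emptyMarker), Expiration: 60})
		return "", errors.New("not found")
	}
	mc.Set(&memcache.Item{Key: key, Value: []byte(v), Expiration: 600})
	return v, nil
}
```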
Cache penetration under heavy concurrency can itself cause a cache avalanche.
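A minimal sketch of idea 3, a hand-rolled bitset filter (the sizes, the FNV hashes, and the double-hashing scheme are assumptions for illustration; a production filter would be sized from the expected number of keys and the acceptable false-positive rate): all keys that can legitimately exist are added up front, and a lookup that returns false is rejected without touching the cache or the database.

```go
package cache

import "hash/fnv"

// BloomFilter is a minimal bitset-backed Bloom filter: keys that may exist are
// added up front; a query whose key is definitely not in the set is rejected
// before it can reach the cache or the underlying storage.
type BloomFilter struct {
	bits   []uint64
	m      uint64 // number of bits
	hashes uint64 // number of hash functions
}

func NewBloomFilter(m, hashes uint64) *BloomFilter {
	return &BloomFilter{bits: make([]uint64, (m+63)/64), m: m, hashes: hashes}
}

// positions derives k bit positions from two FNV hashes (double hashing).
func (b *BloomFilter) positions(key string) []uint64 {
	h1 := fnv.New64a()
	h1.Write([]byte(key))
	x := h1.Sum64()
	h2 := fnv.New64()
	h2.Write([]byte(key))
	y := h2.Sum64() | 1 // keep the step non-zero and odd
	pos := make([]uint64, b.hashes)
	for i := uint64(0); i < b.hashes; i++ {
		pos[i] = (x + i*y) % b.m
	}
	return pos
}

func (b *BloomFilter) Add(key string) {
	for _, p := range b.positions(key) {
		b.bits[p/64] |= 1 << (p % 64)
	}
}

// MightContain returns false only when the key is definitely not in the set.
func (b *BloomFilter) MightContain(key string) bool {
	for _, p := range b.positions(key) {
		if b.bits[p/64]&(1<<(p%64)) == 0 {
			return false
		}
	}
	return true
}
```

The filter can return false positives but never false negatives: a query it lets through may still miss, while a query it rejects is guaranteed not to exist.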
Cache warming up
In a single-machine web system, warming up the cache is relatively simple.
Solution Ideas:
1. Write a cache-refresh page and trigger it manually once the system is live.
2. If the amount of data is small, load it when the web system starts.
3. Set up a timer that refreshes the cache periodically, or refresh it when triggered by user activity.
For a distributed cache system such as Memcached or Redis, where the cache spans more than ten or even dozens of machines, warming up is more complex.
Solution Ideas:
1. Write a dedicated program that pushes the data into the cache.
2. Build a standalone cache-warming framework.
The goal of cache warming is to load the data into the cache before the system goes live.
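A minimal sketch of the "write a program to run" idea, assuming the bradfitz/gomemcache client and a loadHotData helper standing in for the real query that selects the rows worth preloading (server address and TTL are illustrative):

```go
package main

import (
	"log"

	"github.com/bradfitz/gomemcache/memcache"
)

// loadHotData stands in for the real query that selects the rows worth preloading.
func loadHotData() map[string]string {
	return map[string]string{"user:1": "alice", "user:2": "bob"}
}

func main() {
	mc := memcache.New("127.0.0.1:11211") // list every node here for a distributed setup

	for key, value := range loadHotData() {
		err := mc.Set(&memcache.Item{Key: key, Value: []byte(value), Expiration: 3600})
		if err != nil {
			log.Printf("warm-up failed for %s: %v", key, err)
		}
	}
	log.Println("cache warm-up finished")
}
```

For a distributed deployment, list every cache node in memcache.New so the client distributes the keys across the cluster; a fuller warming framework would add batching, retries, and progress reporting.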
Caching algorithms
FIFO algorithm: First In, First Out. Principle: the data that entered the cache first is evicted first; in other words, when the cache is full, the item that has been in the cache the longest is discarded.
LFU algorithm: Least Frequently Used. The item that has been used the fewest times is evicted first.
LRU algorithm: Least Recently Used. The item that has gone the longest without being used is evicted first. See: Memcached: Do you really understand LRU (4)
The difference between LRU and LFU: LFU evicts based on how many times an item has been used, i.e. it compares usage counts, while LRU evicts based on when an item was last used, i.e. it compares usage times.
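A minimal LRU sketch built on Go's container/list (an illustration of the eviction policy, not memcached's actual slab-based LRU): every Get moves the item to the front of a doubly linked list, and when the capacity is exceeded the item at the back, the least recently used one, is evicted.

```go
package cache

import "container/list"

type lruEntry struct {
	key   string
	value string
}

// LRU keeps the most recently used items at the front of a doubly linked list
// and evicts from the back when capacity is exceeded.
type LRU struct {
	capacity int
	ll       *list.List
	items    map[string]*list.Element
}

func NewLRU(capacity int) *LRU {
	return &LRU{capacity: capacity, ll: list.New(), items: map[string]*list.Element{}}
}

func (c *LRU) Get(key string) (string, bool) {
	el, ok := c.items[key]
	if !ok {
		return "", false
	}
	c.ll.MoveToFront(el) // mark as most recently used
	return el.Value.(*lruEntry).value, true
}

func (c *LRU) Put(key, value string) {
	if el, ok := c.items[key]; ok {
		el.Value.(*lruEntry).value = value
		c.ll.MoveToFront(el)
		return
	}
	c.items[key] = c.ll.PushFront(&lruEntry{key: key, value: value})
	if c.ll.Len() > c.capacity {
		oldest := c.ll.Back() // least recently used
		c.ll.Remove(oldest)
		delete(c.items, oldest.Value.(*lruEntry).key)
	}
}
```

An LFU cache would instead keep a use counter per item and evict the item with the smallest count.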
"Go" memcached cache avalanche, cache penetration, cache warming, cache algorithm