First, preface
At present, the memcached + MySQL storage combination is widely used in read-heavy, write-light application scenarios. So what should we pay attention to when using memcached (hereinafter referred to as MC)?
Let's walk through the issues you should be aware of, and the problems you may run into:
- When to introduce MC
- What content update strategy to use
- How to handle bulk reads (multiget)
- How to scale MC
- How to evaluate and monitor MC online
Second, when to introduce MC
This is a fairly simple question: when our system's performance drops, the first thing to do is find out where the bottleneck is.
If we find that a cache can help remove that bottleneck, then we can consider introducing MC.
In general, MC's role is to reduce response time and raise the service's concurrent throughput.
From the response-time point of view, an MC access takes roughly 1-2 ms, while MySQL's access time is often bound to the access time of the hard disk.
MySQL also has its own caches; if a query does not involve disk access, its response time is comparable to MC's.
However, on a MySQL cache miss, the response time depends largely on disk access time, and the random access time of a typical mechanical hard disk is about 8-10 ms.
From the concurrency point of view, a single MC instance can process roughly 400,000 requests per second, while a single MySQL instance handles only about 2,000 per second.
Note: actual throughput also depends on the system resources available.
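The numbers above can be turned into a back-of-envelope estimate: with a cache in front of MySQL, the expected response time is a hit-rate-weighted average of the two access times. This is a minimal sketch; the default latency figures are the rough ones quoted above, not measurements:

```python
def expected_latency_ms(hit_rate, cache_ms=1.5, db_ms=9.0):
    """Hit-rate-weighted average response time for a cache-fronted store."""
    return hit_rate * cache_ms + (1.0 - hit_rate) * db_ms

# With a 95% hit rate, most requests see cache-like latency:
print(round(expected_latency_ms(0.95), 3))  # → 1.875
```

Even a modest hit rate moves the average sharply toward the cache's latency, which is why finding the bottleneck first matters: the cache only helps if the slow path is actually the one being cached.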
Third, what content update strategy to use
Generally speaking, MC's cache update strategies come in two kinds: write-back and write-through.
(The detailed flow of each is not described here.)
With write-back, data is written to the cache only when it is actually read, so this strategy saves memory and keeps update operations fast.
However, it inevitably hurts the cache hit rate, especially under high concurrency.
With write-through, data is written to the cache as soon as it is created or updated, so updates take a little longer and more memory is used.
However, this strategy is a good choice when concurrency is high and freshly updated data is read soon afterwards.
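A minimal sketch of the two strategies, using in-memory dicts to stand in for MC and MySQL (the `Store` class and its method names are illustrative, not a real client API):

```python
class Store:
    """Toy cache-fronted store; `db` stands in for MySQL, `cache` for MC."""
    def __init__(self, write_through=False):
        self.db = {}
        self.cache = {}
        self.write_through = write_through

    def update(self, key, value):
        self.db[key] = value
        if self.write_through:
            # write-through: populate the cache on every update
            self.cache[key] = value
        else:
            # write-back (lazy): invalidate; cache fills on the next read
            self.cache.pop(key, None)

    def read(self, key):
        if key in self.cache:          # cache hit
            return self.cache[key]
        value = self.db[key]           # cache miss: fall back to the DB...
        self.cache[key] = value        # ...and populate the cache
        return value

wt = Store(write_through=True)
wt.update("user:1", "alice")
print("user:1" in wt.cache)  # True: cached immediately on write

wb = Store(write_through=False)
wb.update("user:1", "alice")
print("user:1" in wb.cache)  # False: cached only after the first read
wb.read("user:1")
print("user:1" in wb.cache)  # True
```

The trade-off described above falls out directly: the write-back store pays a miss on the first read after every update, while the write-through store holds every updated value in memory whether or not it is ever read.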
Fourth, how to deal with bulk reads
In real-world use of MC, we may need to read many cached items to assemble one result; this is the multiget problem.
Suppose each cache read takes 1 ms: fetching 100 items one at a time, synchronously, takes about 100 ms, so to improve read performance we can fetch them asynchronously instead (or batch them into a single multiget request).
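To make the 1 ms vs. 100 ms point concrete, here is a sketch that fetches 100 keys either one at a time or concurrently with a thread pool. The simulated 1 ms per-get latency and the `get` helper are assumptions for illustration; a real client would issue one batched multiget request rather than 100 threads:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def get(key):
    """Stand-in for a single cache get with a ~1 ms network round trip."""
    time.sleep(0.001)
    return f"value-of-{key}"

keys = [f"item:{i}" for i in range(100)]

start = time.perf_counter()
serial = [get(k) for k in keys]              # 100 round trips in a row
serial_ms = (time.perf_counter() - start) * 1000

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=20) as pool:
    concurrent = list(pool.map(get, keys))   # round trips overlap
concurrent_ms = (time.perf_counter() - start) * 1000

print(f"serial: ~{serial_ms:.0f} ms, concurrent: ~{concurrent_ms:.0f} ms")
print(serial == concurrent)  # True: same results, far less wall time
```

`pool.map` preserves input order, so the results are identical; only the wall-clock time changes, because the round trips overlap instead of queueing.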
Fifth, how to scale MC
After our system has been running for a while, we will inevitably need to add or remove MC servers, so how should we design the scaling of the MC service?
A common strategy is consistent hashing: an ordinary (modulo) hash strategy leads to uneven key distribution across the memcached servers, the memcached "black hole" problem, and data inconsistency caused by transient server outages.
So we need to plan for scaling from the moment we start using memcached.
Sixth, how to evaluate and monitor MC online
After MC goes online, we need to evaluate whether the cache has actually improved overall system performance: what MC's hit rate is, whether it matches our expectations, and how much memory is being used.
At the same time, as MC's performance metrics change, we also need to adjust the MC service accordingly.
For example:
When the hit rate drops while memory usage stays high, how should we handle it?
What do we need to do when the hit rate drops and memory utilization is declining as well?
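As a sketch, these figures can be derived from memcached's `stats` counters. `get_hits`, `get_misses`, `bytes`, and `limit_maxbytes` are real stats fields, but the sample numbers below are made up for illustration:

```python
def summarize(stats):
    """Compute hit rate and memory utilization from memcached stats counters."""
    gets = stats["get_hits"] + stats["get_misses"]
    hit_rate = stats["get_hits"] / gets if gets else 0.0
    mem_used = stats["bytes"] / stats["limit_maxbytes"]
    return hit_rate, mem_used

# Hypothetical sample from `stats` on one MC instance:
sample = {"get_hits": 9_000, "get_misses": 1_000,
          "bytes": 48 * 1024**2, "limit_maxbytes": 64 * 1024**2}
hit_rate, mem_used = summarize(sample)
print(f"hit rate: {hit_rate:.0%}, memory used: {mem_used:.0%}")  # 90%, 75%
```

Watching these two ratios together is what distinguishes the two example situations above: a falling hit rate with high memory use suggests eviction pressure or a working set that no longer fits, while a falling hit rate with falling memory use suggests the traffic or key pattern itself has changed.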