1. Monitoring
To view the status of the memcached server, we usually use two commands:
1) telnet 127.0.0.1 11211
   View global statistics: stats
   View slab statistics: stats slabs
   View item statistics: stats items
2) vmstat -S M 1 (report memory statistics in megabytes once per second)
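For scripted monitoring, the same commands can be sent over the memcached text protocol. Below is a minimal Python sketch, assuming a local memcached listening on 127.0.0.1:11211; it is an illustration, not part of the original article.

    import socket

    def memcached_stats(command="stats", host="127.0.0.1", port=11211):
        """Send a stats command over the memcached text protocol and return the reply lines."""
        with socket.create_connection((host, port), timeout=2) as sock:
            sock.sendall((command + "\r\n").encode("ascii"))
            data = b""
            while not data.endswith(b"END\r\n"):   # stats replies are terminated by END
                chunk = sock.recv(4096)
                if not chunk:
                    break
                data += chunk
        return data.decode("ascii").splitlines()

    for line in memcached_stats("stats"):          # global statistics
        print(line)
    # memcached_stats("stats slabs")  for per-slab statistics
    # memcached_stats("stats items")  for per-item statistics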
2. Tuning
In the author's personal experience, memcached tuning requires attention to the following points.
1) Node overheating
If an individual memcached node has nearly exhausted its capacity and is handling heavy concurrency, the server distribution in the consistent hash needs to be rebalanced, or the number of virtual nodes increased (a sketch of virtual nodes in a consistent hash follows below).
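A minimal sketch of a consistent hash ring with virtual nodes; the replica count and node addresses are assumptions chosen only for illustration:

    import bisect
    import hashlib

    class ConsistentHashRing:
        """Consistent hash ring; more virtual nodes per server smooths the key distribution."""

        def __init__(self, nodes, vnodes=100):
            self.vnodes = vnodes
            self.ring = []                          # sorted list of (hash, node)
            for node in nodes:
                self.add_node(node)

        def _hash(self, key):
            return int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16)

        def add_node(self, node):
            for i in range(self.vnodes):            # one ring entry per virtual node
                self.ring.append((self._hash(f"{node}#{i}"), node))
            self.ring.sort()

        def get_node(self, key):
            h = self._hash(key)
            idx = bisect.bisect(self.ring, (h,))    # first virtual node clockwise from the key
            return self.ring[idx % len(self.ring)][1]

    ring = ConsistentHashRing(["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"])
    print(ring.get_node("user:42"))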
2) Node preheating
If online pressure is very high when the memcache cluster is expanded, the new memcached server needs to be warmed up: all clients double-write data to this new node (a sketch of such a double-write wrapper follows below). The general threshold for cache capacity is 70-80%.
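A minimal sketch of such a double-write wrapper; old_client and new_client with a set()/get() interface are hypothetical stand-ins for whatever memcached client library is actually in use:

    class WarmupDoubleWriter:
        """Wrap the existing cache client so every write also goes to the new, cold node."""

        def __init__(self, old_client, new_client):
            self.old = old_client
            self.new = new_client

        def set(self, key, value, expire=0):
            self.old.set(key, value, expire)
            try:
                # Warm the new node; a failure here must not break the normal write path.
                self.new.set(key, value, expire)
            except Exception:
                pass

        def get(self, key):
            # Keep reading from the old node until the new one is warm enough
            # (e.g. the 70-80% capacity threshold mentioned above), then switch over.
            return self.old.get(key)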
3) Decentralized deployment
Because a memcache server consumes very little CPU (even at around 20w (200,000) requests per second the CPU load stays below 1%), the CPU and disk would otherwise sit idle, so spread the deployment across as many servers as is practical.
For example, if 100 GB of cache is required, instead of initially allocating 5 machines with 20 GB of memory each, use 10 machines with 10 GB each. This improves the absolute concurrency capacity of the cache, reduces the loss when an individual node goes down, and increases aggregate network throughput. The spare disk and CPU can also be used by other applications (a back-of-the-envelope comparison of the two layouts follows below).
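A back-of-the-envelope comparison of the two layouts; the per-node throughput is simply the ~20w/sec figure cited above, used for illustration rather than as a measurement:

    per_node_qps = 200_000    # assumption: the ~20w/sec per-node figure mentioned above

    for name, nodes in [("5 x 20GB", 5), ("10 x 10GB", 10)]:
        aggregate_qps = nodes * per_node_qps    # each node brings its own CPU and NIC
        blast_radius = 100 / nodes              # % of the cache lost if one node goes down
        print(f"{name}: ~{aggregate_qps:,} req/s aggregate, "
              f"one node failure loses ~{blast_radius:.0f}% of the cache")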
4) Slab, chunk, page, growth factor
Within each chunk, 48 B is taken up by the chunk's own data structure, so if the chunk size is set too small this overhead takes up too large a share and a lot of space is wasted; if it is set too large, memory fragmentation appears instead. So the setting must be judged case by case.
Page size determines the maximum size of a cached object, and the default is 1 MB. But sometimes we need to store strings larger than 1 MB; the xmemcached client can compress such oversized strings.
The growth factor is critical: set too large or too small it causes waste, so it should be chosen to match the size distribution of the cached objects (the sketch below shows how the slab classes are derived from it). Memcached does not occupy memory at startup; it requests a page of space from the operating system only when the cache is actually used. If only a few chunks in that page's slab are used, the remaining space becomes memory fragmentation that cannot be allocated further.
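A small sketch of how the chunk-size classes grow with the factor; the 96-byte base chunk, 1 MB page, and 1.25 factor are common memcached defaults and are used here only as assumptions:

    def slab_classes(base=96, factor=1.25, page=1024 * 1024):
        """Chunk sizes of the slab classes, roughly as memcached derives them."""
        sizes = []
        size = base
        while size <= page:
            sizes.append(int(size))
            size *= factor
        return sizes

    classes = slab_classes()
    print(f"{len(classes)} slab classes: {classes[:5]} ... {classes[-2:]}")
    # A larger growth factor gives fewer classes but more internal waste per item,
    # because each item is stored in the smallest chunk that can hold it.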
5) Avalanche effect
Suppose there are 10 memcache nodes online, each at 80% memory load. If 3 of them go down at the same time, data equal to 80% x 30% = 24% of the total cache capacity is lost. The pressure on the persistence layer rises instantly, all requests are handled slowly, and like an avalanche the whole cluster stops working. So we should reserve as much headroom as possible; a memory threshold of around 70% is more reasonable (a rough calculation follows below).
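A rough illustration of the numbers above; the node count, memory load, and failure count are the example values from the text:

    def avalanche_impact(nodes=10, load=0.80, failed=3):
        """Estimate the immediate impact when `failed` of `nodes` cache servers go down."""
        lost_data = load * failed / nodes   # share of total cache capacity lost: 0.8 * 0.3 = 0.24
        miss_surge = failed / nodes         # share of cache traffic now falling through to the database
        return lost_data, miss_surge

    lost, surge = avalanche_impact()
    print(f"cached data lost: {lost:.0%}, extra traffic hitting the persistence layer: {surge:.0%}")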
6) Server squeeze
Case 1: one memcached instance with 48 GB of memory on a server with 64 GB of memory
Case 2: four memcached instances with 12 GB of memory each on a server with 64 GB of memory
Testing shows that case 2 performs about 50% better than case 1, and the server can handle around 71.5w (715,000) requests per second.
Author Introduction
Nickname: Australian Bird, Cat head
Name: Park Heiling
See more highlights of this column: http://www.bianceng.cn/webkf/tools/