I. Phenomenon:

One shard of the redis-cluster showed memory usage far higher than the other shards, and it kept growing. In addition, memory usage on the master and its slave was inconsistent.

II. Possible causes:

1. A redis-cluster bug (this should not be the case).
2. A problem with the client's key hashing, causing an uneven key distribution (redis-cluster uses CRC16 on the key, so such a severe skew should not happen).
3. Individual very large key-values, for example a set structure containing millions of members (this is possible).
4. A problem with master-slave replication.
5. Other causes.

III. Investigating the cause:

1. After checking, none of causes 1-4 applied.
2. Looking at the INFO output, one item stood out: client_longest_output_list was abnormal.
3. Recall how the server interacts with its clients: each client connection has an input buffer and an output buffer, and if either grows very large it also consumes the Redis server's memory. The abnormal client_longest_output_list suggested that an output buffer was holding a lot of memory, that is, the server was pushing a large amount of data to some client. Using the CLIENT LIST command (similar to MySQL's SHOW PROCESSLIST) to find connections whose output buffer is not empty:

redis-cli -h host -p port client list | grep -v "omem=0"

the culprit turned out to be MONITOR, and suddenly everything made sense. MONITOR makes the server stream every command it executes to the monitoring client; since the QPS of a Redis server is usually very high, the output buffer of that client builds up a huge backlog and occupies a lot of Redis memory.

IV. Emergency handling and resolution:

First a master-slave switch was performed (because master and slave memory usage was inconsistent), that is, a redis-cluster failover, and the new master was observed for anomalies; none appeared. Then, after the real cause, MONITOR, was found and the process running the MONITOR command was shut down, memory came down quickly.

V. Preventive measures:

1. Why did a MONITOR command appear at all? I think there are two possible reasons:
(1) An engineer wanted to see which commands were being executed and used MONITOR.
(2) An engineer was experimenting in order to learn Redis: because Redis is provided as a managed service, engineers only need to use it, but as technical staff they are naturally curious and eager to learn.
2. How to prevent it:
(1) Engineer training: explain the pitfalls and taboos of using Redis.
(2) Introduce the internal Redis cloud platform and invite interested colleagues to participate.
(3) Put a limit on the client output buffer, although officially this is not recommended; the official default configuration places no limit on the output buffer of normal clients (see the configuration sketch below):

client-output-buffer-limit normal 0 0 0

(4) Set a password: Redis only has a weak password mechanism (AUTH), and authentication costs one extra round trip.
(5) Modify the client library source code to forbid dangerous commands (SHUTDOWN, FLUSHALL, MONITOR, KEYS *); they can of course still be run through redis-cli when really needed.
(6) Add rename-command configuration to rename dangerous commands (FLUSHALL, MONITOR, KEYS, FLUSHDB); if they are really needed, ask the Redis operations staff to run them:

rename-command flushall "random string"
rename-command flushdb "random string"
rename-command keys "random string"
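To make measures (3), (4) and (6) concrete, here is a minimal redis.conf sketch; the limit values, the password, and the renamed command strings are placeholder examples of my own, not values from the incident above:

# Disconnect a normal client whose output buffer exceeds 256 MB,
# or stays above 64 MB for 60 seconds (placeholder values)
client-output-buffer-limit normal 256mb 64mb 60

# Simple shared-secret authentication (AUTH); choose your own value
requirepass some-long-random-password

# Rename dangerous commands to hard-to-guess strings known only to operations staff
rename-command FLUSHALL "op-flushall-x7Qp"
rename-command FLUSHDB  "op-flushdb-x7Qp"
rename-command KEYS     "op-keys-x7Qp"
rename-command MONITOR  "op-monitor-x7Qp"

Renaming MONITOR as well would have prevented this particular incident, at the cost of making legitimate debugging slightly less convenient.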
VI. Simulation experiment:

1. Start an empty Redis instance (minimal setup, just run redis-server directly):

redis-server

Initial memory usage:

# Memory
used_memory:815072
used_memory_human:795.97K
used_memory_rss:7946240
used_memory_peak:815912
used_memory_peak_human:796.79K
used_memory_lua:36864
mem_fragmentation_ratio:9.75
mem_allocator:jemalloc-3.6.0

Client buffers:

# Clients
connected_clients:1
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0

2. Open a MONITOR:

redis-cli -h 127.0.0.1 -p 6379 monitor

3. Run redis-benchmark:

redis-benchmark -h 127.0.0.1 -p 6379 -c 500 -n 200000

4. Observations:

(1) INFO memory: memory kept rising until the benchmark ended and the monitor output caught up; even then used_memory_peak_human (the historical peak) remained high (see the attached log).
(2) INFO clients: client_longest_output_list kept rising and only dropped back to 0 after the benchmark ended and the monitor output finished (see the attached log).
(3) redis-cli -h host -p port client list | grep "monitor": omem stayed very high and only dropped back to 0 after the benchmark ended and the monitor output finished (see the attached log).

Monitoring script:

while [ 1 == 1 ]
do
    now=$(date "+%Y-%m-%d_%H:%M:%S")
    echo "=========================${now}==============================="
    echo "#Client-monitor"
    redis-cli -h 127.0.0.1 -p 6379 client list | grep monitor
    redis-cli -h 127.0.0.1 -p 6379 info clients
    redis-cli -h 127.0.0.1 -p 6379 info memory
    # sleep for 100 milliseconds
    usleep 100000
done

Complete log file: http://dl.iteye.com/topics/download/096F5DA0-4318-332E-914F-6F7C7298DDC9

Part of the log:

=========================2015-11-06_10:07:16===============================
#Client-monitor
id=7 addr=127.0.0.1:56358 fd=6 name= age=91 idle=0 flags=O db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=0 obl=0 oll=4869 omem=133081288 events=rw cmd=monitor
# Clients
connected_clients:502
client_longest_output_list:4869
client_biggest_input_buf:0
blocked_clients:0
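If you hit this in production and cannot simply stop the process that issued MONITOR (as was done in section IV), the offending connection can also be terminated from the server side with CLIENT KILL. A minimal sketch, assuming Redis 2.8.12 or later; the address below is taken from the example log above, yours will differ:

# List clients currently running MONITOR and check their output-buffer size (omem)
redis-cli -h 127.0.0.1 -p 6379 client list | grep "cmd=monitor"

# Kill the offending connection by its addr field; the memory held in its output buffer is released
redis-cli -h 127.0.0.1 -p 6379 client kill addr 127.0.0.1:56358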