Contents:
- Performance-related data metrics
- Memory usage: used_memory
- Total commands processed: total_commands_processed
- Latency
- Memory fragmentation ratio
- Evicted keys
- Summary
Performance-related data metrics
Connect to the Redis server through the redis-cli command-line interface, then run the info command to get all of the information the Redis service exposes about itself. That information is the basis for the performance metrics analyzed later in this article.
The output of the info command is divided into 10 sections:
- Server
- Clients
- Memory
- Persistence
- Stats
- Replication
- Cpu
- Commandstats
- Cluster
- Keyspace
This article focuses on the two sections that matter most for performance analysis: memory and stats.
Note that the output of the info command does not include latency data; how to obtain latency-related metrics is shown in detail later.
If the full info output feels long and cluttered, you can pass a section name as an argument to get only the data for a single category. For example, info memory returns only the memory-related data.
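The same information is available from any Redis client library for scripted checks. Below is a minimal sketch using the redis-py Python client (an assumption; any client works), connecting to a local instance on 127.0.0.1:6379 and fetching both the full info output and a single section:

```python
# Minimal sketch with redis-py (pip install redis); host/port are assumptions.
import redis

r = redis.Redis(host="127.0.0.1", port=6379)

# Full info output, parsed into a dict by redis-py.
info = r.info()
print(info["redis_version"], info["connected_clients"])

# Only the memory section, equivalent to `info memory` in redis-cli.
mem = r.info("memory")
print(mem["used_memory"], mem["used_memory_human"])
```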
To help you locate and resolve performance problems quickly, the rest of this article walks through 5 key metrics that cover the issues most people run into when using Redis.
Memory usage: used_memory
The used_memory field reports the total amount of memory allocated by the Redis allocator, in bytes. used_memory_human is the same value formatted in a human-readable unit (megabytes) purely for convenience.
used_memory is the total memory used by Redis: it includes both the memory consumed by the actual cached data and the memory Redis itself needs to run (metadata, the Lua engine, and so on). Because it is the memory obtained through Redis's own allocator, it does not count memory wasted by fragmentation.
The related fields mean the following (values in bytes):
- used_memory_rss: the total amount of memory allocated to the Redis process by the operating system.
- mem_fragmentation_ratio: the memory fragmentation ratio.
- used_memory_lua: the amount of memory used by the Lua scripting engine.
- mem_allocator: the memory allocator Redis was compiled against; can be libc, jemalloc, or tcmalloc.
Performance problems caused by memory swapping
Memory usage is the most critical aspect of running Redis. If a Redis instance's memory usage exceeds the maximum available memory (used_memory > available maximum memory), the operating system begins swapping: old or inactive content in memory is written out to the hard disk (this region of the disk is called the swap partition) to free up physical memory for new or active pages.
Reads and writes against the hard disk are nearly 5 orders of magnitude slower than against memory: memory access takes on the order of 0.1 µs, while disk access takes on the order of 10 ms. If the Redis process starts swapping, both Redis and every application that depends on its data will suffer severe performance degradation. Watching the used_memory metric tells you how much memory Redis is using; if used_memory exceeds the available maximum memory, the instance is swapping or about to swap, and the administrator should take the appropriate countermeasures.
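As a rough illustration of that check, here is a sketch that warns when used_memory approaches a memory budget. The 95% threshold mirrors the guideline in the next section; the 4 GB fallback budget is purely an assumption for the example:

```python
# Hypothetical swap-risk check; threshold and fallback budget are assumptions.
import redis

r = redis.Redis(host="127.0.0.1", port=6379)
mem = r.info("memory")

used = mem["used_memory"]
# `maxmemory` appears in the memory section on recent Redis versions;
# fall back to an assumed 4 GB budget when it is unset or absent.
limit = int(mem.get("maxmemory", 0)) or 4 * 1024**3

if used > 0.95 * limit:
    print(f"WARNING: used_memory={used} exceeds 95% of limit {limit}; swap risk")
```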
Tracking memory usage
If neither RDB snapshots nor AOF persistence is enabled, cached data is at risk of being lost when Redis crashes. Moreover, once Redis memory usage exceeds 95% of available memory, some data starts moving back and forth between memory and swap space, and the risk of data loss appears.
When snapshotting is enabled and triggered, Redis forks a child process that writes a full copy of the current in-memory data to disk. So if a snapshot is triggered while memory usage is already above 45% of available memory, swapping at that point becomes very dangerous (data may be lost). If the instance is also handling many frequent updates at the same time, the problem gets even worse.
You can avoid such problems by reducing Redis's memory footprint, or by using the following techniques to prevent swapping:
1. Use a 32-bit Redis instance if the cached data is smaller than 4 GB. Pointers on a 32-bit build are half the size of 64-bit pointers, so the same data occupies less memory. The drawback is that even if physical memory exceeds 4 GB, a 32-bit instance is still limited to less than 4 GB; if the instance must also be shared with other applications that need more, a more capable 64-bit instance is required and switching to 32-bit is not an option. Either way, Redis dump files are compatible between 32-bit and 64-bit builds, so if you need to reduce the memory footprint you can try 32-bit first and switch to 64-bit later.
2. Use the hash data structure whenever possible (a sketch follows below). Redis stores hashes with fewer than 100 fields very efficiently, so when you do not need set operations or list push/pop operations, use a hash. For example, in a web application that stores an object representing user information, use a single key for the user and store each attribute in a hash field; this is far more efficient than creating a separate key-value pair for each attribute. More generally, data stored as strings across multiple keys should be converted into a single-key, multi-field hash. As in the example above, the hash should hold the attributes of a single object or the various data of a single user. The relevant commands are hset (key, field, value) and hget (key, field), which store and retrieve the specified field of a hash.
3. Set expiration times on keys. A simple way to reduce memory usage is to make sure an expiration time is set whenever an object is stored. If a key is only needed for a definite period, or an old key is unlikely to be used again, use the Redis expiration commands (expire, expireat, pexpire, pexpireat) to set an expiration time, so that Redis deletes the key automatically once it expires. If you know how many new key-value pairs are created per second, you can tune key lifetimes and specify thresholds to limit the maximum memory Redis uses.
4. Evict keys. Setting the maxmemory property in the Redis configuration file (commonly redis.conf) limits the maximum amount of memory Redis uses; changes there take effect after the instance restarts. You can also change the value at runtime with the config set maxmemory command, which takes effect immediately but is lost on restart; run config rewrite to flush the running configuration into the configuration file. If snapshotting is enabled, set maxmemory to 45% of system memory, because a snapshot needs a point-in-time copy of the entire data set: with 45% in use, usage can reach 95% during a snapshot (45% + 45% + 5%), the remaining 5% being reserved for other overhead. If snapshotting is disabled, maxmemory can be set to 95% of the system's available memory.
When memory usage reaches the configured maximum, you need to choose a key eviction policy by setting the maxmemory-policy property in redis.conf. If the keys in the data set have expiration times set, volatile-ttl is the better choice. But if keys are not expiring fast enough when the limit is reached, or no expiration times are set at all, allkeys-lru is more appropriate: it lets Redis evict the least recently used keys from the entire data set (the LRU eviction algorithm). Redis also offers several other eviction policies:
- volatile-lru: evict keys using the LRU algorithm, considering only keys that have an expiration time set.
- volatile-ttl: evict the keys closest to expiring, considering only keys that have an expiration time set.
- volatile-random: evict random keys, considering only keys that have an expiration time set.
- allkeys-lru: evict keys using the LRU algorithm, considering all keys.
- allkeys-random: evict random keys, considering all keys.
- noeviction: never evict; writes fail once the memory limit is reached.
By setting maxmemory to 45% or 95% of the system's available memory (depending on the persistence policy) and setting maxmemory-policy to volatile-ttl or allkeys-lru (depending on whether expirations are set), you can bound Redis's maximum memory usage fairly precisely; in most scenarios these two settings are enough to ensure Redis never swaps. If you are more worried about losing data to eviction than about hitting the memory limit, set noeviction to forbid evicting keys.
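Applying that guidance at runtime might look like the sketch below. The 450 MB value assumes snapshotting on a hypothetical 1 GB host; config_rewrite persists the running configuration, as described above:

```python
# Sketch: set maxmemory and an eviction policy at runtime, then persist.
import redis

r = redis.Redis(host="127.0.0.1", port=6379)

r.config_set("maxmemory", "450mb")              # e.g. 45% of an assumed 1 GB host
r.config_set("maxmemory-policy", "allkeys-lru")  # or volatile-ttl if keys expire
r.config_rewrite()                               # flush into redis.conf so it survives restart
print(r.config_get("maxmemory"), r.config_get("maxmemory-policy"))
```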
Total commands processed: total_commands_processed
The total_commands_processed field in the info output shows the total number of commands the Redis service has processed, where the commands come from one or more Redis clients. Redis is constantly handling client commands, which can be any of the roughly 140 command types Redis provides. The value only ever increases: if the service processes 2 commands from client_x and 3 commands from client_y, the total (total_commands_processed) grows by 5.
Analyzing the total number of commands processed to diagnose response latency
Within a Redis instance, tracking the total number of commands processed is the key to diagnosing response latency, because Redis uses a single-threaded model and executes client commands sequentially. The most common source of latency is the network: over a gigabit NIC, latency is roughly 200 µs. If command response times are noticeably slower than that, a large number of commands are likely queued up waiting to be processed. As mentioned above, one or more slow commands can drive response times up, in which case you will see the number of commands processed per second drop sharply, or subsequent commands block entirely, degrading Redis performance. To analyze this class of problem, you need to track both the number of commands processed and the latency.
For example, you can write a script (sketched below) that periodically records the value of total_commands_processed. When clients notice that response times are clearly too slow, you can use the recorded history to determine whether the total is trending up or down, and troubleshoot from there.
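A minimal sketch of such a recorder, assuming a local instance; the log file name and 5-second interval are arbitrary choices:

```python
# Sketch: sample total_commands_processed periodically and log it with a timestamp.
import time
import redis

r = redis.Redis(host="127.0.0.1", port=6379)

with open("redis_commands.log", "a") as log:
    while True:
        total = r.info("stats")["total_commands_processed"]
        log.write(f"{int(time.time())} {total}\n")
        log.flush()
        time.sleep(5)  # sampling interval; tune as needed
```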
Using the total number of commands processed to resolve rising latency
If, compared against the recorded history, the total number of commands processed really is rising or falling, there are usually 2 possible causes:
- There are too many commands in the command queue, and subsequent commands are waiting.
- Several slow commands block Redis.
There are 3 ways to resolve the response latency caused by the 2 reasons above.
1. Use multi-argument commands: if a client sends a large number of commands in a short period, response times slow down noticeably because later commands sit in the queue waiting for the earlier ones to finish. One way to improve this is to replace many single-argument commands with a single multi-argument command. For example, calling lset repeatedly to add 1,000 elements to a list performs poorly; a better approach is to build the list of 1,000 elements on the client and send them all at once with a single lpush or rpush call, as in the sketch below. Many Redis commands have multi-argument forms that minimize the number of round trips: mset replaces repeated set calls, mget replaces repeated get calls, and lpush, rpush, and sadd all accept multiple values in one call.
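A sketch of the 1,000-element example, with a hypothetical key name:

```python
# Sketch: one multi-argument RPUSH instead of 1,000 single-element calls.
import redis

r = redis.Redis(host="127.0.0.1", port=6379)
elements = [f"item-{i}" for i in range(1000)]  # built client-side first

# Poor: 1,000 round trips.
# for e in elements:
#     r.rpush("mylist", e)

# Better: a single command carrying all 1,000 elements.
r.rpush("mylist", *elements)
print(r.llen("mylist"))
```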
2. Pipeline commands: another way to cut down round trips is to use a pipeline, which batches several commands together and reduces the latency caused by network overhead. Sending 10 commands one by one incurs 10 network round trips; a pipeline sends them together and returns all the results in one reply, for a single round trip's worth of latency. Redis itself supports pipelining and most clients expose it; when an instance's latency is noticeable, pipelining is a very effective way to reduce it.
3. Avoid slow commands on large collections: if command throughput drops and latency rises, the cause may be commands with high time complexity, where the time it takes to fetch data grows with the size of the collection being operated on. Cutting back on high-complexity commands can significantly improve Redis performance. Commands such as sort, lrem, sunion, and keys operate over entire collections and are O(N) or worse; understanding their cost helps you use them efficiently, rationally, and only where you must.
Latency
Redis latency data is not available from the info output. To measure latency, run the redis-cli tool with the --latency option, for example:
redis-cli --latency -h 127.0.0.1 -p 6379
where the host and port are the IP and port of the Redis instance. Because server conditions vary, the measured latency may fluctuate; typically the latency over a gigabit NIC is around 200 µs.
redis-cli reports Redis response latency in milliseconds; on the author's local machine the measured latency was about 300 µs.
Tracking Redis latency performance
One of the main reasons Redis is so popular is the high performance that comes with its low latency, so addressing latency is the most direct way to improve Redis performance. For gigabit bandwidth, latency much higher than 200 µs indicates a real performance problem. Although some slow I/O operations can occur on the server, Redis accepts requests from all clients on a single core and serves them strictly in order, so if one client sends a slow command, every other request must wait for it to finish.
Improving performance with latency-related commands
Once latency is confirmed to be the performance problem, there are several ways to analyze it.
1. Use slowlog to isolate the slow commands causing latency: the slowlog feature in Redis lets you quickly find commands that exceed a specified execution time; by default, any command taking longer than 10 ms is logged. slowlog records only the command's own execution time, excluding I/O round trips, so it will not log slow responses caused by network latency. Typically, network latency on a gigabit link is around 200 µs, so a command that takes more than 10 ms to execute is nearly 50 times slower than the network. To view all commands with slow execution times, run slowlog get in the redis-cli tool; the third field of each returned entry shows the command's execution time in microseconds. To see only the last 10 slow commands, enter slowlog get 10. See the total_commands_processed section above for how to track down latency caused by slow commands.
The fields of each slowlog entry are:
- 1: a unique identifier for the log entry
- 2: the time the command was executed, as a UNIX timestamp
- 3: the query execution time, in microseconds (about 54 ms for the example command)
- 4: the command that was executed, as an array of arguments; the complete example command is config get *
To customize what counts as a slow command, adjust the threshold that triggers slowlog recording. If few or no commands exceed 10 ms and you want a lower threshold, say 5 ms, enter the following in the redis-cli tool:
config set slowlog-log-slower-than 5000
You can also set it in the redis.conf configuration file; the value is in microseconds.
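A sketch of the same workflow from redis-py: lower the threshold to 5 ms and read back the 10 most recent slow commands. The dict keys below are how redis-py parses slowlog entries (an assumption worth verifying against your client version):

```python
# Sketch: lower the slowlog threshold and fetch recent slow commands.
import redis

r = redis.Redis(host="127.0.0.1", port=6379)

r.config_set("slowlog-log-slower-than", 5000)  # threshold in microseconds

for entry in r.slowlog_get(10):
    # redis-py returns dicts with id, start_time, duration (µs), and command.
    print(entry["id"], entry["start_time"], entry["duration"], entry["command"])
```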
2. Monitor client connections: because Redis is single-threaded (it uses only one core) to serve all client requests, as the number of client connections grows, the share of processing time available to each individual connection shrinks, and every client spends more time waiting for its share of the Redis service. This makes monitoring the connection count important: clients may create more connections than expected, or fail to release idle connections effectively. Enter info clients in the redis-cli tool to see all client connection information for the current instance; the first field (connected_clients) shows the total number of client connections.
Redis allows 10000 client connections by default. If you see more than 5000 connections, Redis performance may suffer; if some or most of those clients are also sending large numbers of commands, the tolerable number is much lower.
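A trivial sketch of that check, using the 5000-connection guideline from above:

```python
# Sketch: alert when connected_clients crosses the 5000 mark mentioned above.
import redis

r = redis.Redis(host="127.0.0.1", port=6379)

clients = r.info("clients")["connected_clients"]
if clients > 5000:
    print(f"WARNING: {clients} client connections; Redis performance may degrade")
```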
3. Limit client connections: since Redis 2.6, the maximum number of client connections can be changed via the maxclients property in the configuration file (redis.conf), or at runtime by entering config set maxclients in the redis-cli tool. This should be set to between 110% and 150% of the expected peak connection count; any connection beyond that number is rejected and closed immediately by Redis. Setting a maximum is important for capping unexpected growth in connections. In addition, a failed connection attempt returns an error message, which tells the client that Redis is at an unexpected connection count so it can react accordingly. These two practices are important for controlling the connection count and keeping Redis at optimal performance.
4. Improve memory management: insufficient memory increases Redis latency. If Redis uses more memory than the system has available, the operating system swaps part of the Redis process's data from physical memory to the hard disk, and swapping drives latency up significantly. See the used_memory section above for how to monitor and reduce memory usage.
5. Correlate performance metrics:
Analyzing Redis performance problems usually requires correlating changes in latency with changes in other metrics. A drop in total commands processed may mean slow commands are blocking the whole system; but if the total rises while memory usage also rises, the problem may be memory swapping instead. For this kind of correlated analysis, watch for significant changes across historical data, comparing each metric against all the others. You can collect this data by periodically running an info script against Redis, parsing the output, and appending it to a log file. When latency changes, line the log file up with the other metrics to pinpoint the problem.
Memory fragmentation ratio
The mem_fragmentation_ratio field in the info output gives the memory fragmentation ratio, calculated by dividing the memory the operating system has allocated to Redis by the memory Redis itself has allocated:
mem_fragmentation_ratio = used_memory_rss / used_memory
Both used_memory and used_memory_rss include these memory allocations:
- User-defined data: the memory used to store key-value data.
- Internal overhead: the memory Redis uses internally to represent its different data types.
The RSS in used_memory_rss stands for Resident Set Size: the physical memory the process occupies, i.e., the amount of memory the operating system has allocated to the Redis instance. Besides user-defined data and internal overhead, used_memory_rss also includes the overhead of memory fragmentation, which comes from the operating system's inefficient allocation and reclamation of physical memory.
The operating system is responsible for allocating physical memory to each application process; the mapping from the memory Redis uses to physical memory is handled by the operating system's virtual memory manager.
For example, suppose Redis needs a contiguous block of memory to store a 1 GB data set. Contiguous memory would be ideal, but the system may not have a contiguous 1 GB block of physical memory available, so the operating system has to stitch together multiple discontiguous chunks to store the 1 GB of data, producing memory fragmentation.
Another complication is that the memory allocator often reserves blocks of memory in advance for future allocations, which makes applications run faster.
Understanding Resource Performance
Tracking the fragmentation ratio is important for understanding the resource efficiency of a Redis instance. A ratio slightly greater than 1 is healthy: fragmentation is low and Redis is not swapping. A ratio above 1.5 means Redis is consuming 150% of the physical memory it actually needs, the extra 50% being fragmentation. A ratio below 1 means Redis has allocated more memory than the physical memory available and the operating system is swapping; swapping causes very noticeable response delays, as described in the used_memory section above.
(A ratio of 0.99, for example, means 99%.)
Predicting performance problems from the memory fragmentation ratio
If the fragmentation ratio exceeds 1.5, it may indicate poor memory management in the operating system or the Redis instance. There are 3 ways to address the problem and improve Redis performance:
1. Restart the Redis server: if the fragmentation ratio is above 1.5, restarting the server lets the operating system reclaim the fragmented memory as fresh memory and restores efficient memory management. The excess fragmentation arises because Redis frees memory blocks but the memory allocator, which is fixed at compile time (libc, jemalloc, or tcmalloc), does not necessarily return that memory to the operating system. You can check for excess fragmentation by comparing the used_memory_peak, used_memory_rss, and used_memory metrics. As the name suggests, used_memory_peak is the historical peak of Redis memory usage, not the current value. If used_memory_peak and used_memory_rss are roughly equal, and both significantly exceed used_memory, excess fragmentation has accumulated. Enter info memory in the redis-cli tool to view all three metrics.
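The comparison just described might be scripted like this; the 10% tolerance for "roughly equal" is an assumption:

```python
# Sketch: compare used_memory_peak, used_memory_rss, and used_memory
# to spot excess fragmentation, per the heuristic above.
import redis

r = redis.Redis(host="127.0.0.1", port=6379)
mem = r.info("memory")

ratio = mem["mem_fragmentation_ratio"]
peak, rss, used = mem["used_memory_peak"], mem["used_memory_rss"], mem["used_memory"]

if ratio > 1.5 and abs(peak - rss) / rss < 0.1 and rss > used:
    print("Likely excess fragmentation: peak ≈ rss and both exceed used_memory")
elif ratio < 1:
    print("Fragmentation ratio below 1: the instance may be swapping")
```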
Before restarting the server, enter the shutdown save command in the redis-cli tool: it forces Redis to save the database and then shuts the service down, guaranteeing no data is lost on shutdown. After the restart, Redis loads the persisted file from disk, so the data set remains continuously available.
2. Limit memory swapping: if the fragmentation ratio is below 1, the Redis instance may be swapping some data to disk. Swapping severely hurts Redis performance, so you should either add physical memory or reduce Redis's actual memory footprint. See the used_memory section for optimization suggestions.
3. Change the memory allocator:
Redis supports several different memory allocators, including glibc's malloc, jemalloc, and tcmalloc, each with its own allocation and fragmentation behavior. Ordinary administrators are not advised to change the default allocator, since doing so requires a thorough understanding of the differences between allocators and a recompile of Redis. Still, the allocator is the component doing the work, so this remains one way to attack a memory fragmentation problem.
Evicted keys
The evicted_keys field in the info output shows the number of keys deleted because the maxmemory limit forced eviction (see the earlier section for an introduction to maxmemory). Key eviction happens only when maxmemory is set; without it, Redis simply swaps when memory runs out. When Redis must reclaim a key under memory pressure, it does not go straight for the oldest data; instead, it randomly samples keys and evicts from among the least recently used keys or the keys closest to expiring.
Whether Redis uses the LRU policy or the expiration-time policy is determined by setting the maxmemory-policy value in the configuration file to volatile-lru or volatile-ttl. If every key has a definite expiration time, the expiration-based policy is the better fit. If keys have no expiration time set, or there are not enough expiring keys, the LRU policy makes sense, since it can evict keys regardless of their expiration state.
Locating performance problems from key evictions
Tracking key evictions matters because evicting keys is how Redis keeps its limited memory resources sensibly allocated. If evicted_keys is frequently above 0, expect client command latency to rise, because Redis is not only serving client requests but also spending time repeatedly evicting qualifying keys. A quick check is sketched below.
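```python
# Sketch: track evicted_keys between two samples; a steadily rising count
# means Redis is spending time evicting rather than serving commands.
import time
import redis

r = redis.Redis(host="127.0.0.1", port=6379)

before = r.info("stats")["evicted_keys"]
time.sleep(60)
after = r.info("stats")["evicted_keys"]
print(f"{after - before} keys evicted in the last minute")
```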
Note that the performance impact of evicting keys is far less severe than that of memory swapping. If forced to choose between allowing swapping and configuring an eviction policy, choose the eviction policy, because paging data between memory and the hard disk is far more damaging to performance (see the earlier section).
Reducing key evictions to improve performance
Reducing the number of evicted keys is a direct way to improve Redis performance; there are 2 ways to do it:
1. Increase the memory limit: if snapshotting is enabled, maxmemory should be set to 45% of physical memory, which leaves little risk of triggering swapping; if snapshotting is disabled, 95% of the system's available memory is reasonable (see the earlier snapshot and maxmemory sections). If maxmemory is currently set below the 45% or 95% threshold (depending on the persistence policy), raising it lets Redis keep more keys in memory and can markedly reduce evictions. But if maxmemory is already at the recommended threshold, raising it further will not improve performance; it can instead trigger swapping, increasing latency and degrading performance. The value can be set by entering the config set maxmemory command in the redis-cli tool.
Note that this setting takes effect immediately but is lost on restart; to persist it, enter the config rewrite command, which flushes the in-memory configuration into the configuration file.
2. Shard the instance: sharding partitions the data into suitably sized pieces stored on different Redis instances, each holding part of the whole data set. Combining many servers this way effectively increases the total physical memory, letting Redis store more keys without swapping or evicting. If the data set is very large, maxmemory is already set, and actual memory usage exceeds the recommended threshold, data sharding can significantly reduce evictions and so improve Redis performance. There are many ways to implement sharding; here are some common approaches:
- A. Hash sharding: a relatively simple implementation; a hash function computes the hash of the key, and ranges of hash values map to specific Redis instances (see the sketch after this list).
- B. Proxy sharding: the client sends requests to a proxy, which picks the corresponding Redis instance from a shard configuration table. Examples include Twitter's twemproxy and Wandoujia's codis.
- C. Consistent-hash sharding: see the earlier post "Consistency hash shard detailed".
- D. Virtual-bucket sharding: see the earlier post "virtual bucket detailed".
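A minimal sketch of hash sharding (method A above): route each key to one of several instances by hashing the key. The hosts and ports are hypothetical, and a real deployment would also need to handle resharding:

```python
# Sketch: hash sharding across two assumed local instances.
import zlib
import redis

shards = [
    redis.Redis(host="127.0.0.1", port=6379),
    redis.Redis(host="127.0.0.1", port=6380),
]

def shard_for(key: str) -> redis.Redis:
    # crc32 gives a stable hash; modulo maps it onto a shard index.
    return shards[zlib.crc32(key.encode()) % len(shards)]

shard_for("user:1001").set("user:1001", "alice")
print(shard_for("user:1001").get("user:1001"))
```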
Summary
For developers, Redis is a very fast key-value in-memory database with a convenient API. To get the most out of Redis, you need to understand which factors affect its performance and which data metrics help you avoid performance traps. This article has covered the important Redis performance metrics, how to view them, and, more importantly, how to use the data to troubleshoot Redis performance problems.
This post is a translation of roughly 15 pages from an ebook, available at https://www.datadoghq.com/wp-content/uploads/2013/09/Understanding-the-top-5-redis-performance-metrics.pdf.
The author's translation skills are limited; please forgive any mistakes, and feel free to point them out. I hope this article is helpful to you.