Redis has a handy slowlog facility that, as the name suggests, lets you inspect slow-running queries. The slowlog records the last X queries that took longer than Y microseconds to run. X and Y can be set through the CONFIG command, either in redis.conf or at runtime:
CONFIG SET slowlog-log-slower-than 5000
CONFIG SET slowlog-max-len 25
slowlog-log-slower-than is specified in microseconds, so the setting above logs queries that take longer than 5 milliseconds to run. To read the logged entries, use the SLOWLOG GET X command, where X is the number of entries you want to retrieve:
Each entry shows a unique id, the timestamp when the query ran, how long it took to execute, and the actual command with its arguments. You can clear the log with SLOWLOG RESET.
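As an illustration, a SLOWLOG GET reply rendered by redis-cli looks roughly like this (the ids, timestamps and durations below are made up):

```
127.0.0.1:6379> SLOWLOG GET 1
1) 1) (integer) 12          # unique id of the slowlog entry
   2) (integer) 1571257850  # unix timestamp when the command ran
   3) (integer) 22840       # execution time in microseconds
   4) 1) "del"              # the command and its arguments
      2) "big:set"
```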
The last time I looked at our slowlog, I was shocked to see a DEL command that had taken more than 20 milliseconds to execute. Remember, Redis is single-threaded, so this blocks (and seriously harms) our system's concurrency. Also, since it is a write operation, it blocks replication to all of our slave Redis servers while it runs. So what on earth was going on?
Maybe everyone already knew this but me, but it turns out that Redis's DEL command is O(1) for strings and hashes, yet O(N) for lists, sets and sorted sets, where N is the number of items in the collection. Deleting a set that contains millions of items? Get ready to block.
Our solution was simple: instead of deleting these items, we rename them and have a background job delete them in small, interruptible chunks. First, our delayed_delete Lua function:
local key = KEYS[1]
local data_type = redis.call('type', key).ok

if data_type == 'set' or data_type == 'zset' then
  -- move the collection out of the way under a unique gc: key
  local temp = 'gc:tmp:' .. redis.call('incr', 'gc:ids') .. ':' .. key
  redis.call('rename', key, temp)
  -- remember it in gc:set or gc:zset so the background job can drain it
  return redis.call('sadd', 'gc:' .. data_type, temp)
end

return redis.call('del', key)
This renames the collection and adds the new name to the gc:set or gc:zset set (we don't use lists, but if you do, you should add support for them as well).
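The script could be invoked like any other Redis server-side script; for example, from redis-cli (the file name delayed_delete.lua and the key big:set are hypothetical):

```
redis-cli --eval delayed_delete.lua big:set
```

With --eval, the names listed before a comma are passed to the script as KEYS, so big:set arrives as KEYS[1].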
Next, we have a Ruby script scheduled to run every minute:
require 'redis'
r = Redis.new(driver: :hiredis)

# drain up to 10000 of the renamed sets, 5000 members at a time
r.srandmember('gc:set', 10000).each do |set|
  items = r.srandmember(set, 5000)
  if items.nil? || items.length == 0
    # the set is fully drained; forget about it
    r.srem('gc:set', set)
    next
  end
  r.srem(set, items)
end

# sorted sets are drained by rank instead
r.srandmember('gc:zset', 10000).each do |zset|
  if r.zremrangebyrank(zset, 0, 5000) < 5000
    r.srem('gc:zset', zset)
  end
end
You can adjust the numbers to your own needs: how large do your collections get, and how often are they deleted? Since these kinds of deletes don't happen too frequently for us, we can get away with removing only a small chunk at a time.
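To get a feel for the numbers, here is a quick back-of-the-envelope calculation (the set size and rates below are made-up figures for illustration, not measurements from our system):

```ruby
# Hypothetical sizing: each cron run removes at most 5000 members
# from a given set, and the job runs once per minute.
members_per_run = 5000
runs_per_minute = 1

# An assumed 1,000,000-member set would then drain in about:
set_size = 1_000_000
minutes_to_drain = (set_size.to_f / (members_per_run * runs_per_minute)).ceil
puts minutes_to_drain # 200 minutes, i.e. just over three hours
```

If that is too slow for your workload, raise the per-run batch sizes; if it causes too much load, lower them.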
All in all, this approach is slower than deleting directly, but it behaves far better in a concurrent environment.