Redis Configuration in Detail


Original author's blog: http://blog.csdn.net/thinkercode

A professional DBA typically starts an instance with many parameters tuned so the system runs very stably. To support this, Redis accepts the path to a configuration file as an argument at startup, much like MySQL reads its configuration file at boot. After compiling from source there is a redis.conf file in the Redis directory; this is the Redis configuration file. You can start Redis with a configuration file using the following command:

[root@localhost ~]# ./redis-server /path/to/redis.conf

Units in the Redis configuration are case-insensitive: 1GB, 1Gb, and 1gB all mean the same thing. Note also that Redis supports only bytes as a unit; bits are not supported.

# 1k  => 1000 bytes
# 1kb => 1024 bytes
# 1m  => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g  => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes
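As a sketch of these rules (an illustration, not the Redis source), the unit suffixes could be parsed like this: bare numbers are bytes, "k"/"m"/"g" are powers of 1000, "kb"/"mb"/"gb" are powers of 1024, and case is ignored.

```python
# Memory-unit table matching the redis.conf conventions listed above.
UNITS = {
    "": 1,
    "k": 1000, "m": 1000**2, "g": 1000**3,
    "kb": 1024, "mb": 1024**2, "gb": 1024**3,
}

def parse_memory(value: str) -> int:
    """Parse a redis.conf-style memory value such as '1GB' or '100mb'."""
    value = value.strip().lower()          # units are case-insensitive
    i = 0
    while i < len(value) and value[i].isdigit():
        i += 1
    number, suffix = value[:i], value[i:]
    if not number or suffix not in UNITS:
        raise ValueError(f"invalid memory value: {value!r}")
    return int(number) * UNITS[suffix]
```

With this, `parse_memory("1GB")`, `parse_memory("1Gb")`, and `parse_memory("1gB")` all give the same result, while `parse_memory("1g")` gives the decimal billion.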

Redis can pull in external configuration files, much like the include directive in C/C++. When multiple files set the same option, Redis always uses the last value loaded, so if you do not want the included settings to be overridden, include them at the end of the main configuration file:
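A tiny sketch of that "last setting wins" behavior (assumed semantics, not the Redis parser): directives are applied in file order, with included files spliced in at the point of the include, so later lines simply overwrite earlier ones.

```python
def apply_directives(*files):
    """Apply directive lines from files already flattened in include order."""
    config = {}
    for lines in files:
        for line in lines:
            line = line.strip()
            if not line or line.startswith("#"):   # skip blanks and comments
                continue
            key, _, value = line.partition(" ")
            config[key] = value                    # later occurrences win
    return config
```

For example, if the main file sets `maxmemory 1gb` and a file included at the end sets `maxmemory 2gb`, the effective value is `2gb`.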

include /path/to/other.conf
Redis Configuration-General
# By default Redis does not run as a daemon; use "yes" to enable daemon mode.
# Note that when daemonized, Redis writes its process ID to /var/run/redis.pid.
daemonize yes

# When running as a daemon, Redis writes its PID to /var/run/redis.pid by
# default; the location can be changed with pidfile.
pidfile /var/run/redis.pid

# The port Redis listens on; the default is 6379. Why 6379? Because 6379
# spells "merz" on a phone keypad, after the name of the Italian showgirl
# Alessia Merz.
port 6379

# Under high concurrency you need a large backlog to avoid slow-client
# connection issues. Note that the Linux kernel silently caps this value at
# /proc/sys/net/core/somaxconn, so make sure to raise both somaxconn and
# tcp_max_syn_backlog to get the desired effect.
tcp-backlog 511

# By default Redis listens for connections on all available network
# interfaces. Use the "bind" directive with one or more IP addresses to
# listen on specific interfaces only.
# bind 192.168.1.100 10.0.0.1
bind 127.0.0.1

# If Redis does not listen on a TCP port, how can it talk to the outside
# world? Redis also supports accepting requests over a Unix domain socket.
# Specify the socket path with unixsocket and the file's permissions with
# unixsocketperm. There is no default: Redis does not listen on a Unix
# socket unless one is specified.
# unixsocket /tmp/redis.sock
# unixsocketperm 755

# If a client sends no requests for this many seconds, the server may close
# the connection. Set the client connection timeout in seconds; when the
# client issues no command during that time, the connection is closed.
# Default: 0, meaning the feature is disabled and connections are never
# closed for being idle.
timeout 0

# TCP keepalive.
# If nonzero, the SO_KEEPALIVE option is used to send ACKs to clients on
# idle connections. This is useful for two reasons:
# 1) it detects unresponsive peers;
# 2) it tells the network equipment in the middle that the connection is
#    still alive.
# On Linux, this value (in seconds) is the interval between ACKs; note that
# it takes twice this time to actually close a dead connection. On other
# kernels the interval depends on the kernel configuration.
# For example, with tcp-keepalive set to 60, the server sends an ACK to each
# idle client every 60 seconds to check whether it has gone away, and closes
# the connection if there is no response, so closing a dead connection takes
# at most 120 seconds. 0 disables keepalive probing.
# A reasonable value for this option is 60 seconds.
tcp-keepalive 0

# Specify the logging level. Redis supports four levels:
# debug    logs a lot of information; useful for development and testing
# verbose  many rarely useful details, but not as noisy as debug
# notice   moderately verbose; what you want in production
# warning  only very important or critical messages are logged
loglevel notice

# The log file name. An empty string makes Redis log to standard output.
# Note that if Redis runs as a daemon and is configured to log to standard
# output, the log goes to /dev/null.
logfile ""

# To log to the system logger, just set "syslog-enabled" to "yes" and tune
# the other syslog parameters as needed.
# syslog-enabled no

# The syslog identity; this option has no effect if syslog-enabled is no.
# syslog-ident redis

# The syslog facility; must be USER or one of LOCAL0 through LOCAL7.
# syslog-facility local0

# Number of databases. The default is 16, and the default database is DB 0.
# Select a database with SELECT <dbid>, where dbid is between 0 and
# databases-1. Unless you have a special requirement, we recommend using
# only one database.
databases 16
Redis Configuration-Snapshots
## Saving the database to disk:
##
##   save <seconds> <changes>
##
## Writes the dataset to disk if the given number of seconds has elapsed and
## at least the given number of keys changed.
##
## The examples below will save the dataset:
##   after 900 seconds (15 minutes) if at least 1 key changed
##   after 300 seconds (5 minutes) if at least 10 keys changed
##   after 60 seconds if at least 10000 keys changed
##
## Note: comment out all "save" lines if you do not need to write to disk at
## all, i.e. to run a purely in-memory server. To disable the RDB
## persistence policy you can either omit every save directive, or pass an
## empty string argument to save for the same effect.
save 900 1
save 300 10
save 60 10000

# If the user has enabled RDB snapshots and Redis fails while persisting
# data to disk, Redis stops accepting all write requests by default.
# The advantage is that the user becomes clearly aware that the data in
# memory and the data on disk are no longer consistent; if Redis kept
# accepting writes despite the inconsistency, the consequences could be
# disastrous. Once the next RDB save succeeds, Redis automatically resumes
# accepting write requests.
# Of course, if you do not care about this inconsistency, or you have other
# means to detect and control it, you can turn this feature off so that
# Redis continues to accept new writes even when a snapshot save fails.
stop-writes-on-bgsave-error yes

# Whether to compress string objects with LZF when dumping to the .rdb file.
# The default is "yes". Set it to "no" to save CPU, but if there are
# compressible keys that go uncompressed, the dump file will be larger.
rdbcompression yes

# Since version 5 of the RDB format, a CRC64 checksum is placed at the end
# of the file. This makes the format more reliable, but saving and loading
# RDB files incurs a performance cost (about 10%), so you can disable it for
# maximum performance. RDB files created with the checksum disabled carry a
# checksum of zero, which tells the loading code to skip the check.
rdbchecksum yes

# The file name of the database dump.
dbfilename dump.rdb

# The working directory.
# The database is written inside this directory, with the file name given by
# "dbfilename" above. Append-only files are also created here.
# Note that you must specify a directory, not a file name.
dir ./
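The save rules above can be sketched as follows (a simplification, not the Redis source): a snapshot is due as soon as any configured rule has both enough elapsed time and enough changed keys.

```python
# Save points from the configuration discussed above: (seconds, changes).
SAVE_POINTS = [(900, 1), (300, 10), (60, 10000)]

def snapshot_due(seconds_since_last_save: int, dirty_keys: int) -> bool:
    """True when any save rule's time AND change thresholds are both met."""
    return any(
        seconds_since_last_save >= seconds and dirty_keys >= changes
        for seconds, changes in SAVE_POINTS
    )
```

So one changed key is enough after 15 minutes, but within the first minute nothing triggers a snapshot no matter how many keys changed.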
Redis Configuration-Sync
# Master-slave replication. Use slaveof to make a Redis instance a backup of
# another. Note that the data is copied locally from the remote master; in
# other words, the slave can have a different database file, bind a
# different IP, and listen on a different port.
# When this machine is a slave, set the IP and port of the master; Redis
# automatically synchronizes from the master at startup.
# slaveof <masterip> <masterport>

# If the master is password protected (via the "requirepass" option below),
# the slave must authenticate with the master's password before starting
# replication, otherwise its synchronization requests will be refused.
# When this machine is a slave, set the master's connection password:
# masterauth <master-password>

# When a slave loses its connection to the master, or while synchronization
# is still in progress, the slave can behave in two ways:
# 1) with slave-serve-stale-data set to "yes" (the default), the slave keeps
#    responding to client requests, possibly with stale data or empty values
#    not yet received;
# 2) with it set to "no", the slave replies "SYNC with master in progress"
#    to every request except the INFO and SLAVEOF commands.
slave-serve-stale-data yes

# You can configure whether a slave instance accepts writes. A writable
# slave may be useful for storing ephemeral data (anything written to the
# slave is deleted after the next resync with the master), but clients may
# cause problems by writing to it due to misconfiguration.
# Since Redis 2.6 all slaves are read-only by default.
# Note: a read-only slave is not designed to be exposed to untrusted clients
# on the Internet. It is just a layer of protection against instance misuse.
# A read-only slave still accepts all administrative commands such as CONFIG
# and DEBUG. To improve the security of a read-only slave, use
# "rename-command" to hide administrative and dangerous commands.
slave-read-only yes

# Replication sync strategy: disk or socket.
# When a new slave connects, or an old slave reconnects and cannot simply
# receive the differences, a full synchronization is needed: the master must
# dump a fresh RDB file and transfer it to the slave. There are two ways:
# 1) disk-backed: the master forks a child process that dumps the RDB to
#    disk, and the parent process then sends the file to the slaves
#    incrementally;
# 2) diskless (socket-based): the master forks a child process that writes
#    the RDB directly to the slave sockets, without touching the disk or
#    going through the main process.
# With disk-backed replication, once the RDB file is created it can serve
# several slaves at the same time. With diskless replication, slaves that
# arrive after the transfer has started have to queue for the next one
# (unless they arrive within repl-diskless-sync-delay).
# With diskless replication the master waits repl-diskless-sync-delay
# seconds; if no further slave arrives, it starts directly, and later slaves
# queue for the next transfer. Otherwise several slaves can be served in one
# pass.
# With slow disks and fast networks, diskless can be the better choice.
# (The default is disk-backed.)
repl-diskless-sync no

# Set the delay to 0 to start the transfer as soon as possible.
repl-diskless-sync-delay 5

# Slaves send PINGs to the server at the specified interval, which can be
# set with repl-ping-slave-period. The default is 10 seconds.
repl-ping-slave-period 10

# The following option sets the replication timeout for:
# 1) bulk data transfer during SYNC, from the slave's point of view;
# 2) the master timing out (data, pings), from the slave's point of view;
# 3) the slave timing out (REPLCONF ACK pings), from the master's point of
#    view.
# Make sure this value is greater than the configured
# repl-ping-slave-period, otherwise a timeout will be detected every time
# traffic between master and slave is low.
repl-timeout 60

# Disable TCP_NODELAY on the slave socket after SYNC?
# If you select "yes", Redis uses fewer TCP packets and less bandwidth to
# send data to slaves, but this adds a delay to the data appearing on the
# slave side, up to 40 milliseconds with the default Linux kernel
# configuration.
# If you select "no", the delay of data transfer to the slave decreases, at
# the cost of more bandwidth. By default we optimize for low latency, but
# under very high traffic, or when there are many hops between master and
# slave, "yes" may be a good choice.
repl-disable-tcp-nodelay no

# Set the replication backlog size.
# The backlog is a buffer that accumulates slave data while slaves are
# disconnected for some time, so that a reconnecting slave does not need a
# full synchronization; a partial resync of the data missed during the
# disconnection is enough.
# The larger the backlog, the longer a slave can stay disconnected and still
# be able to perform a partial resynchronization.
# The backlog is allocated only once, and only when at least one slave has
# connected.
repl-backlog-size 1mb

# After the master has had no connected slaves for some time, the backlog is
# released. The following option sets how many seconds after the last slave
# disconnects the backlog buffer is freed.
# 0 means the backlog is never released.
repl-backlog-ttl 3600

# The slave priority is an integer published by Redis in its INFO output. If
# the master stops working, Sentinel uses it to select a slave to promote to
# master: slaves with lower priority numbers are preferred. For example,
# with three slaves of priority 10, 100, and 25, Sentinel picks the one with
# the lowest number, 10.
# Priority 0 is special: such a slave can never be promoted to master by
# Sentinel. The default priority is 100.
slave-priority 100

# The master can stop accepting writes when fewer than N connected slaves
# have a lag of at most M seconds.
# The N slaves need to be in the "online" state. The lag, in seconds, must
# be less than or equal to the specified value, and is measured from the
# last ping received from the slave (normally sent every second).
# For example, to require at least 3 slaves with a lag of at most 10
# seconds, use the directives below.
# Setting either value to 0 disables the feature. The defaults are
# min-slaves-to-write 0 (feature disabled) and min-slaves-max-lag 10.
# min-slaves-to-write 3
# min-slaves-max-lag 10
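The min-slaves-to-write gate just described can be sketched like this (assumed semantics, not the Redis source): count the replicas whose last-reported lag is within the limit, and refuse writes when there are too few.

```python
def writes_allowed(slave_lags, min_slaves_to_write=3, min_slaves_max_lag=10):
    """Accept writes only while enough sufficiently fresh slaves are online.

    slave_lags: seconds since each online slave's last ping was received.
    Either threshold set to 0 disables the feature entirely.
    """
    if min_slaves_to_write == 0 or min_slaves_max_lag == 0:
        return True
    healthy = sum(1 for lag in slave_lags if lag <= min_slaves_max_lag)
    return healthy >= min_slaves_to_write
```

With the example values from the config (3 slaves, 10 seconds), three fresh slaves allow writes, while one of them lagging 30 seconds blocks them.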
Redis Configuration-Security
# Require clients to authenticate before processing any command. This is
# useful in environments where other clients you do not trust can reach the
# Redis server.
# For backward compatibility this should remain commented out, and most
# people do not need authentication (e.g. they run Redis on their own
# servers).
# Warning: since Redis is so fast, an outside attacker can try up to 150k
# passwords per second, so you need a very strong password, otherwise it is
# far too easy to crack.
# requirepass foobared

# Command renaming.
# In a shared environment, dangerous commands can be given different names.
# For example, you can rename CONFIG to something hard to guess, so that it
# remains available to internal tools but not to ordinary clients.
# Example:
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
# Note that renaming commands that get recorded in the AOF file or
# transmitted to slaves may cause problems.
# A command can also be disabled entirely by renaming it to an empty string:
# rename-command CONFIG ""
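The renaming rule can be sketched as a simple mapping (an illustration, not how Redis stores its command table): each rename maps the public name to an obscure alias, and renaming to the empty string removes the command altogether.

```python
def build_command_table(commands, renames):
    """Apply rename-command style renames to a set of command names."""
    table = {}
    for name in commands:
        new_name = renames.get(name, name)
        if new_name == "":            # rename-command CONFIG "" disables it
            continue
        table[new_name] = name        # obscure alias -> real command
    return table
```

After `{"config": ""}` the CONFIG command is simply gone; after `{"config": "b840fc02"}` it is only reachable under the obscure alias.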
Redis Configuration-Limitations
# Set the maximum number of simultaneously connected clients. The default
# limit is 10000 clients; however, if the Redis server cannot raise its
# file-descriptor limit to the requested value, the maximum number of client
# connections is set to the current file limit minus 32 (some file
# descriptors are reserved for the Redis server's internal use).
# Once this limit is reached, Redis closes every new connection with the
# error "max number of clients reached".
maxclients 10000

# Don't use more memory than the configured limit. Once memory usage hits
# the cap, Redis removes keys according to the selected eviction policy (see
# maxmemory-policy). If Redis cannot remove keys under that policy, or the
# policy is set to "noeviction", Redis replies with an error to commands
# that would use more memory, such as SET and LPUSH, while continuing to
# answer read-only commands such as GET.
# This option is usually useful when using Redis as an LRU cache, or to set
# a hard memory limit for an instance (with the "noeviction" policy).
# Warning: when there are multiple slaves attached to an instance at the
# memory limit, the output buffers the master needs to feed the slaves are
# not counted against the used memory. Otherwise, evicted keys could trigger
# a loop in which a network problem or resync event fills the slave output
# buffers with DEL commands for the evicted keys, which in turn triggers the
# eviction of more keys, until the database is completely emptied.
# In any case, if you attach multiple slaves, it is recommended to set a
# slightly lower maxmemory limit so the system has free memory for the slave
# output buffers (not necessary if the policy is "noeviction").
# maxmemory <bytes>

# Maximum memory policy: how Redis chooses keys to remove when the memory
# limit is reached. You can choose among five behaviors:
# volatile-lru    remove keys with an expire set, using the LRU algorithm
# allkeys-lru     remove any key according to the LRU algorithm
# volatile-random remove a random key among those with an expire set
# allkeys-random  remove a random key, no distinction
# volatile-ttl    remove the key with the nearest expiration time (TTL)
# noeviction      don't remove anything; return an error on write operations
# Note: with any of these policies, if Redis cannot find a suitable key to
# remove, it returns an error on write operations.
# The commands involved so far:
#   set setnx setex append incr decr rpush lpush rpushx lpushx linsert lset
#   rpoplpush sadd sinter sinterstore sunion sunionstore sdiff sdiffstore
#   zadd zincrby zunionstore zinterstore hset hsetnx hmset hincrby incrby
#   decrby getset mset msetnx exec sort
# The default is as follows:
maxmemory-policy volatile-lru

# The LRU and minimal-TTL algorithms are not exact but closely approximated
# (to save memory), so you can tune the sample size used for the
# approximation. For example, by default Redis checks 3 keys and evicts the
# one used least recently; you can change the sample size with the following
# directive.
maxmemory-samples 3
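The approximated LRU just described can be sketched as follows (a simplification under the assumption that each key carries a last-access timestamp): instead of scanning the whole keyspace, sample a few keys and evict the least recently used one among the sample.

```python
import random

def pick_eviction_victim(last_access: dict, samples: int = 3, rng=random):
    """Approximate LRU: evict the oldest key among a small random sample.

    last_access maps key -> last-access time (larger = more recent).
    """
    candidates = rng.sample(list(last_access), min(samples, len(last_access)))
    return min(candidates, key=last_access.__getitem__)  # oldest of the sample
```

With a sample size at least as large as the keyspace this degenerates to exact LRU; with a small sample it is only probably the globally oldest key, which is the precision/memory trade-off the config describes.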
Redis Configuration-Append mode
# By default Redis dumps its data to disk asynchronously. This mode is good
# enough in many applications, but a problem with the Redis process or a
# power outage can lose the writes made since the last snapshot (depending
# on the configured save points).
# AOF is an alternative, more reliable persistence mode. For example, with
# the default fsync policy (see the configuration below), Redis loses at
# most one second of writes in an unexpected event such as a server power
# outage, or a single write if the Redis process itself runs into trouble
# while the operating system keeps working normally.
# AOF and RDB persistence can be enabled at the same time without problems.
# If AOF is enabled, Redis loads the AOF file at startup, as it is the more
# reliable source of data.
# See http://redis.io/topics/persistence for more information.
appendonly no

# The name of the append-only file (default: "appendonly.aof").
appendfilename "appendonly.aof"

# The fsync() system call tells the operating system to actually write data
# to disk instead of waiting for more data in the output buffer. Some
# operating systems really flush the data to disk; others just try to do so
# as soon as possible.
# Redis supports three different modes:
# no       don't fsync immediately; flush only when the operating system
#          wants to. Relatively fast.
# always   fsync after every write to the append-only file. Slow, but
#          safest.
# everysec fsync once per second. A compromise.
# The default "everysec" usually gives a good balance between speed and data
# safety. It is up to you to decide whether you can relax this to "no" for
# better performance (but if you can tolerate some data loss, consider the
# default snapshot persistence instead), or, conversely, use "always", which
# is slower than everysec but safer.
appendfsync everysec

# If the AOF fsync policy is set to "always" or "everysec", and a background
# saving process (a background save or an AOF rewrite) is generating a lot
# of disk I/O, on some Linux configurations Redis may block for a long time
# on the fsync() system call. Note that there is currently no fix for this;
# even an fsync() on a different thread will block our synchronous write(2)
# call.
# To mitigate this problem, the following option prevents fsync() from
# being called in the main process while a BGSAVE or BGREWRITEAOF is in
# progress.
# This means that while a child process is saving, Redis is effectively in
# an "unsynchronized" state; in practical terms, in the worst case you may
# lose up to 30 seconds of log data (with default Linux settings).
# If you have latency problems, set this to "yes"; otherwise leave it as
# "no", which is the safest choice for durability.
no-appendfsync-on-rewrite no

# Automatic rewrite of the append-only file.
# Redis can automatically rewrite the log with BGREWRITEAOF when the AOF
# file grows by the specified percentage.
# How it works: Redis remembers the size of the AOF file after the latest
# rewrite (or, if no rewrite has happened since restart, the size of the AOF
# at startup).
# That base size is compared to the current size; if the current size
# exceeds the base by the specified percentage, a rewrite is triggered. You
# also need to specify a minimum size for the AOF to be rewritten, which
# avoids rewriting the file while it is still small even if the percentage
# is exceeded.
# Specifying a percentage of 0 disables the automatic AOF rewrite feature.
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb

# An AOF file may be incomplete at the end (e.g. after an operating system
# crash, especially on an ext4 filesystem mounted without the data=ordered
# option). This only happens when the OS dies; it does not happen when Redis
# itself crashes while the OS keeps working.
# That is a problem when Redis restarts and loads the file back into memory.
# When it happens, you can choose whether Redis should exit with an error or
# load as much data as possible.
# If aof-load-truncated is yes, Redis emits a log to notify the user and
# loads the truncated file (the default). If it is no, the user must first
# repair the AOF file manually with redis-check-aof before the server will
# start.
aof-load-truncated yes
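The auto-rewrite trigger described above can be sketched like this (assumed logic, not the Redis source): rewrite when the AOF has grown past the configured percentage of its post-rewrite base size, but never while it is still below the minimum size.

```python
def should_rewrite_aof(current_size, base_size,
                       percentage=100, min_size=64 * 1024 * 1024):
    """Trigger BGREWRITEAOF when growth over the base exceeds percentage."""
    if percentage == 0 or current_size < min_size:
        return False                       # feature disabled, or file too small
    growth = (current_size - base_size) * 100 // max(base_size, 1)
    return growth >= percentage
```

With the defaults, a 128 MB file whose base was 64 MB (100% growth) triggers a rewrite, while a 10 MB file never does regardless of growth, thanks to the 64 MB floor.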
Redis Configuration-Lua scripting
# If the maximum execution time (in milliseconds) is reached, Redis logs
# that a script is still running after the limit and starts returning an
# error.
# When a script exceeds the time limit, only SCRIPT KILL and SHUTDOWN NOSAVE
# can be used. The first can kill a script that has not yet called any write
# command; once the script has performed a write, only the second command
# can stop it.
# Set this to 0 or a negative value for an unlimited execution time.
lua-time-limit 5000
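A small sketch of the rule just described (assumed semantics): once a script has exceeded the limit, SHUTDOWN NOSAVE is always accepted, and SCRIPT KILL only if the script has not yet written.

```python
def command_allowed(command, script_over_limit, script_has_written):
    """Which commands the server accepts while a Lua script overruns."""
    if not script_over_limit:
        return True                       # normal operation: everything allowed
    if command == "SHUTDOWN NOSAVE":
        return True                       # always available as a last resort
    return command == "SCRIPT KILL" and not script_has_written
```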
Redis Configuration-Cluster

Warning: Redis Cluster is not considered a stable feature before the 3.0.x releases.

# Enable cluster mode.
cluster-enabled yes

# Every cluster node has its own cluster configuration file.
cluster-config-file nodes-6379.conf

# Cluster node timeout, in milliseconds.
cluster-node-timeout 15000

# Controls failover behavior on the slave side.
# Set to 0, a slave will always try to start a failover.
# Set to a positive value, a slave that has been disconnected from its
# master for longer than factor * node-timeout no longer attempts a
# failover.
cluster-slave-validity-factor 10

# Minimum number of connected slaves a master must keep for replica
# migration.
cluster-migration-barrier 1

# With the default of yes, the cluster stops accepting writes once a certain
# share of the key space is lost (e.g. because nodes are unreachable or have
# crashed). Set to no, the cluster keeps serving queries for the keys it
# still covers.
cluster-require-full-coverage yes
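The cluster-slave-validity-factor rule can be sketched like this (assumed semantics): with a factor of 0 a slave always tries to fail over; otherwise it gives up once its disconnection from the master has lasted longer than factor times the node timeout.

```python
def may_failover(disconnect_ms, node_timeout_ms=15000, validity_factor=10):
    """Whether a disconnected slave is still allowed to attempt a failover."""
    if validity_factor == 0:
        return True                       # factor 0: always try to fail over
    return disconnect_ms <= validity_factor * node_timeout_ms
```

With the values above (factor 10, 15000 ms timeout), a slave disconnected for up to 150 seconds may still fail over; beyond that it stands down.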
Redis Configuration-Slow log
# The Redis slow log records queries that exceed a specified execution time.
# The execution time does not include I/O such as talking to the client or
# sending the reply; it only counts the time actually spent executing the
# command (the only stage during which the thread is blocked and cannot
# serve other requests at the same time).
#
# You can configure the slow log with two parameters: one tells Redis the
# execution time, in microseconds, above which a command gets logged, and
# the other is the length of the slow log. When a new command is written to
# the log, the oldest entry is removed from the queue.
#
# The time below is expressed in microseconds, so 1000000 is one second.
# Note that a negative value disables the slow log, while a value of zero
# forces the logging of every command.
slowlog-log-slower-than 10000

# There is no limit to this length, just be aware that it consumes memory.
# You can reclaim the memory used by the slow log with SLOWLOG RESET.
slowlog-max-len 128
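The slow-log semantics above can be sketched as a bounded in-memory queue (an illustration, not the Redis implementation): commands at or above the threshold are kept, the oldest entries fall off the end, and a reset frees everything.

```python
from collections import deque

class SlowLog:
    """Bounded log of commands that exceeded a microsecond threshold."""

    def __init__(self, slower_than_us=10000, max_len=128):
        self.slower_than_us = slower_than_us
        self.entries = deque(maxlen=max_len)   # oldest entries drop off

    def record(self, command, duration_us):
        if self.slower_than_us < 0:            # negative: logging disabled
            return
        if duration_us >= self.slower_than_us: # zero: logs every command
            self.entries.appendleft((command, duration_us))

    def reset(self):                           # analogous to SLOWLOG RESET
        self.entries.clear()
```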
Redis Configuration-Latency monitoring
# Latency monitoring is disabled by default, since it is mostly not needed.
# The threshold is in milliseconds.
latency-monitor-threshold 0
Redis Configuration-Event notification
# Redis can notify Pub/Sub clients about events happening in the key space.
# This feature is documented at http://redis.io/topics/keyspace-events
#
# For instance, if keyspace event notifications are enabled and a client
# performs a DEL on key "foo" in database 0, two messages are published via
# Pub/Sub:
# PUBLISH __keyspace@0__:foo del
# PUBLISH __keyevent@0__:del foo
#
# You can select the event classes Redis will notify from the table below.
# Each class is identified by a single character:
#  K  keyspace notifications, published with the __keyspace@<db>__ prefix
#  E  keyevent notifications, published with the __keyevent@<db>__ prefix
#  g  generic, non type-specific commands such as DEL, EXPIRE, RENAME, ...
#  $  String commands
#  l  List commands
#  s  Set commands
#  h  Hash commands
#  z  Sorted set commands
#  x  expired events (generated every time a key expires)
#  e  evicted events (generated when a key is evicted because memory is full)
#  A  alias for "g$lshzxe", so "AKE" means all events
#
# notify-keyspace-events takes as argument a string of zero or more of these
# characters. The empty string means notifications are disabled.
#
# Example: to enable List and generic events:
# notify-keyspace-events Elg
#
# Example 2: to get notifications of expired keys by subscribing to the
# channel named __keyevent@0__:expired, use:
# notify-keyspace-events Ex
#
# By default all notifications are disabled because most users do not need
# the feature and it has a performance cost. Note that if you do not specify
# at least one of K or E, no events will be delivered.
notify-keyspace-events ""
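The two messages in the DEL example can be sketched as follows (channel names per the documented __keyspace@<db>__ and __keyevent@<db>__ prefixes; the sketch just builds the channel/payload pairs, it does not publish anything).

```python
def notification_messages(db, event, key):
    """(channel, payload) pairs published for one keyspace event."""
    return [
        (f"__keyspace@{db}__:{key}", event),   # sent only if the K flag is set
        (f"__keyevent@{db}__:{event}", key),   # sent only if the E flag is set
    ]
```

A DEL of key "foo" in db 0 thus yields the pair of PUBLISH calls shown in the example above.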
Redis Configuration-Advanced Configuration
# When a hash holds a lot of data it is stored as a real hash table (which
# requires more memory); otherwise a special compact encoding is used, as
# long as the number of elements does not exceed the given limits.
# A Redis hash is internally a hashmap of values; when the map has few
# members it is stored in a compact, roughly linear format instead, which
# saves the memory overhead of a large number of pointers. If either of the
# following two limits is exceeded, the value is converted to a real
# hashmap.
# A hash with at most this many members is stored in the linear compact
# format; above this value it is automatically converted to a true hashmap.
hash-max-zipmap-entries 512

# The linear compact format is used, to save space, only while no member
# value within the map exceeds this many bytes.
hash-max-zipmap-value 64

# Similarly to the hash-max-zipmap-entries option, small lists are encoded
# in a special way to save a lot of space.
# A list with at most this many nodes uses the compact storage format:
list-max-ziplist-entries 512

# A list uses the compact format only while every node's value is smaller
# than this many bytes:
list-max-ziplist-value 64

# There is one more case of special encoding: sets composed entirely of
# strings that are 64-bit unsigned integer numbers.
# The following setting limits the maximum size of the set for this encoding
# to be used.
set-max-intset-entries 512

# Similarly to the first and second cases, sorted sets can also be specially
# encoded to save a lot of space.
# This encoding is only used when the length and the elements of the sorted
# set are within the following limits:
zset-max-ziplist-entries 128
zset-max-ziplist-value 64

# About HyperLogLog: http://www.redis.io/topics/data-types-intro#hyperloglogs
# This sets the limit for the HyperLogLog sparse representation: above this
# many bytes the dense representation is used instead, since at that point
# dense is the more memory-efficient of the two. The recommended value is
# 3000.
hll-sparse-max-bytes 3000

# Active rehashing: for every 100 milliseconds of CPU time, 1 millisecond is
# spent rehashing the main Redis hash table (the top-level key-value map).
# The hash table implementation Redis uses (see dict.c) performs lazy
# rehashing: the more operations you run on a rehashing table, the more
# rehashing steps are performed; so if the server is very inactive, the
# rehash never completes and the hash table keeps using a bit more memory.
# The default is to run 10 rehashing passes per second in order to refresh
# the dictionary and free memory as soon as possible.
# Recommendation:
# If you are concerned about latency, use "activerehashing no"; a request
# occasionally delayed by 2 milliseconds is not great.
# Use "activerehashing yes" if you do not care much about latency and want
# to free memory as soon as possible.
activerehashing yes

# Client output buffer limits can be used to force-disconnect clients that,
# for some reason, are not reading data from the server fast enough (a
# common reason is that a publish/subscribe client cannot consume messages
# as fast as the publisher produces them).
# The limit can be set differently for three classes of clients:
#  normal  normal clients
#  slave   slave and MONITOR clients
#  pubsub  clients subscribed to at least one Pub/Sub channel or pattern
# The syntax of each client-output-buffer-limit directive is:
# client-output-buffer-limit <class>
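The encoding-threshold rule at the top of this section can be sketched like this (a simplification: real Redis tracks entries and value lengths separately in its object encoding code): a hash stays compact only while both the entry count and every value length are within the configured limits.

```python
def hash_encoding(entries, max_entries=512, max_value=64):
    """Pick the compact encoding only while both limits hold."""
    small = len(entries) <= max_entries and all(
        len(str(v)) <= max_value for v in entries.values()
    )
    return "compact" if small else "hashtable"
```

A single short field keeps the compact form; one 65-byte value, or a 513th entry, converts the whole hash to a real hash table (and, per the config above, the conversion is one-way).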
