Redis configuration details



Web programmer blog: http://blog.csdn.net/thinkercode

A professional DBA will usually add a number of parameters when starting an instance so that the system runs stably. One such parameter can be appended after the redis-server command at startup: the path of a configuration file, which makes the server read its startup settings from that file, much like MySQL does. After the source code is compiled, there is a redis.conf file in the redis directory; this is the Redis configuration file. You can start Redis with a specific configuration file using the following command.

[root@localhost ~]# ./redis-server /path/to/redis.conf

Memory-size values in the Redis configuration are case-insensitive: 1GB, 1Gb, and 1gB all mean the same thing. Note that Redis supports only byte units, not bits.

# 1k  => 1000 bytes
# 1kb => 1024 bytes
# 1m  => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g  => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes
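To make the two unit families concrete, here is a hypothetical helper (not part of Redis) that parses size strings the way the table above describes: case-insensitive, with bare "k/m/g" meaning powers of 1000 and "kb/mb/gb" meaning powers of 1024.

```python
# Hypothetical helper: interpret redis.conf-style memory sizes.
def parse_size(s: str) -> int:
    s = s.strip().lower()
    units = {
        "k": 1000, "kb": 1024,
        "m": 1000 ** 2, "mb": 1024 ** 2,
        "g": 1000 ** 3, "gb": 1024 ** 3,
    }
    # Try the longer suffixes ("kb") before the shorter ones ("k").
    for suffix in sorted(units, key=len, reverse=True):
        if s.endswith(suffix):
            return int(s[: -len(suffix)]) * units[suffix]
    return int(s)  # plain byte count

print(parse_size("1GB"))  # 1073741824
print(parse_size("1g"))   # 1000000000
```

So `maxmemory 1gb` and `maxmemory 1g` differ by about 7%, which matters when sizing close to physical RAM.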

Like the include directive in C/C++, Redis can pull in external configuration files. When several files set the same option, the value loaded last wins. So if you want the included settings not to be overwritten by the main file, put the include at the end of the main configuration file.

include /path/to/other.conf
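The "last loaded value wins" rule can be illustrated with a toy parser (this is not Redis code, just a sketch of the merge semantics): directives are read top to bottom, so a directive from an include placed at the end of the main file overrides earlier values.

```python
# Toy sketch of "last setting wins" when configs are concatenated in order.
def load_config(lines):
    conf = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        key, _, value = line.partition(" ")
        conf[key] = value  # later occurrences overwrite earlier ones
    return conf

main = ["maxmemory 100mb", "loglevel notice"]
included_at_end = ["maxmemory 2gb"]  # as if pulled in by a trailing include
print(load_config(main + included_at_end))
# {'maxmemory': '2gb', 'loglevel': 'notice'}
```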
Redis configuration-General
# Redis does not run as a daemon by default. Set this to yes to enable daemon
# mode. Note that when daemonized, Redis writes its process ID to
# /var/run/redis.pid.
daemonize yes

# When running as a daemon, Redis writes the pid to /var/run/redis.pid by
# default. You can specify a different pid file here.
pidfile /var/run/redis.pid

# The port Redis listens on; the default is 6379. Why 6379? Because 6379
# spells MERZ on a phone keypad, taken from the name of Alessia Merz.
port 6379

# In a high-concurrency environment you need a large backlog to avoid
# slow-client connection problems. Note that the Linux kernel silently caps
# this value at /proc/sys/net/core/somaxconn, so raise both somaxconn and
# tcp_max_syn_backlog to get the desired effect.
tcp-backlog 511

# By default Redis listens for connections on all available network
# interfaces of the server. Use "bind" to listen on one or more specific
# interfaces/IP addresses:
# bind 192.168.1.100 10.0.0.1
bind 127.0.0.1

# Besides TCP ports, Redis can also accept requests through a Unix socket.
# Use unixsocket to specify the socket file path and unixsocketperm to set its
# permissions. There is no default; if unspecified, Redis does not listen on a
# Unix socket.
unixsocket /tmp/redis.sock
unixsocketperm 755

# When a client sends no requests for a while, the server may close the
# connection. This sets the idle timeout for client connections, in seconds.
# 0 disables the timeout, so idle connections are never closed.
timeout 0

# TCP keepalive. If nonzero, the SO_KEEPALIVE option is used to send ACKs to
# clients with idle connections. This is useful for two reasons:
# 1) it detects unresponsive peers;
# 2) it lets network equipment in the middle of the connection know that the
#    connection is still alive.
# On Linux the value is the interval, in seconds, at which ACKs are sent.
# Note that closing a dead connection takes up to twice this value; on other
# kernels the interval depends on the kernel configuration. For example, with
# a value of 60 the server probes each idle client every 60 seconds, and a
# client that never responds is disconnected after at most 120 seconds. 0
# disables keepalive probing. A reasonable value for this option is 60.
tcp-keepalive 0

# Log level. Redis supports four levels:
# debug   - very verbose; useful for development and testing
# verbose - many rarely useful details, but not as noisy as debug
# notice  - moderately verbose; commonly used in production
# warning - only very important or critical messages are logged
loglevel notice

# Log file name. You can also use "stdout" (or an empty string, the default)
# to force Redis to write log messages to standard output. Note that if Redis
# is daemonized and configured to log to standard output, logs are sent to
# /dev/null.
logfile ""

# To use the system logger, just set syslog-enabled to yes and adjust the
# other syslog parameters as needed.
syslog-enabled no

# The syslog identity of the Redis logs. Has no effect if syslog-enabled is
# set to no.
syslog-ident redis

# The syslog facility; must be USER or one of LOCAL0 through LOCAL7.
syslog-facility local0

# Number of available databases; the default is 16, and the default database
# is DB 0. Select a database with SELECT <dbid>, where dbid is between 0 and
# databases-1. If you have no special requirement, a single database is
# usually enough.
databases 16
Redis configuration-Snapshot
# Save the database to disk:
#   save <seconds> <changes>
# The dataset is saved if both the given number of seconds elapsed and the
# given number of keys changed. The examples below will save the dataset:
#   after 900 sec (15 min) if at least 1 key changed
#   after 300 sec (5 min) if at least 10 keys changed
#   after 60 sec if at least 10000 keys changed
# Note: comment out all "save" lines to run a pure in-memory server with no
# persistence to disk. You can also disable the RDB persistence policy by
# passing save an empty string argument: save ""
save 900 1
save 300 10
save 60 10000

# By default, if RDB snapshots are enabled and the latest background save
# failed, Redis stops accepting writes. The advantage is that you find out
# immediately that the data in memory and on disk are inconsistent; if Redis
# kept accepting writes despite the failure, the inconsistency could have
# catastrophic consequences. Once the next RDB save succeeds, Redis
# automatically resumes accepting writes. If you do not care about this kind
# of inconsistency, or you detect and control it by other means, you can
# disable this so Redis keeps accepting writes even when snapshots fail.
stop-writes-on-bgsave-error yes

# Compress string objects with LZF when dumping the .rdb file. The default is
# yes. Set it to no to save CPU, at the cost of a larger data file when
# compressible keys are left uncompressed.
rdbcompression yes

# Since version 5 of the RDB format, a CRC64 checksum is placed at the end of
# the file. This makes the format more resistant to corruption, but costs
# about 10% in performance when producing and loading RDB files, so you can
# disable it for maximum performance. RDB files created with checksumming
# disabled have a checksum of zero, which tells the loading code to skip the
# check.
rdbchecksum yes

# The database file name and its storage path. The dump is written into the
# working directory below, with the file name given by dbfilename. Note that
# you must specify a directory here, not a file name.
dbfilename dump.rdb
dir ./
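The save points above combine with OR semantics: a background snapshot fires as soon as any single `save <seconds> <changes>` rule is satisfied. An illustrative sketch (not Redis source code) of that decision:

```python
# Sketch of how "save <seconds> <changes>" points trigger a BGSAVE, given
# the seconds elapsed since the last save and the changes made since then.
SAVE_POINTS = [(900, 1), (300, 10), (60, 10000)]

def should_snapshot(elapsed: int, changes: int, points=SAVE_POINTS) -> bool:
    # The points are OR-ed together: any one rule being satisfied triggers
    # a background snapshot.
    return any(elapsed >= secs and changes >= chg for secs, chg in points)

print(should_snapshot(70, 5))    # False: too few changes for every rule
print(should_snapshot(301, 10))  # True: the "save 300 10" rule fires
```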
Redis configuration-synchronization
# Master-slave replication. Use slaveof to make a Redis instance a copy of
# another Redis server. Note that the data is copied locally from the remote
# side: the slave can have different database files, bind a different IP
# address, and listen on a different port.
# slaveof <masterip> <masterport>

# If the master is password protected (configured with the "requirepass"
# option below), the slave must authenticate before replication starts,
# otherwise the master rejects its synchronization requests. Set the master
# connection password here when this machine is a slave.
# masterauth <master-password>

# When a slave loses its connection with the master, or while replication is
# still in progress, the slave can behave in two ways:
# 1) If slave-serve-stale-data is yes (the default), the slave keeps
#    answering client requests, possibly with out-of-date data, or with empty
#    data if this is the first synchronization.
# 2) If slave-serve-stale-data is no, the slave replies
#    "SYNC with master in progress" to every request except INFO and SLAVEOF.
slave-serve-stale-data yes

# You can configure whether a slave instance accepts writes. A writable slave
# can be useful for storing ephemeral data (anything written to a slave is
# discarded after resynchronization with the master), but it may cause
# problems when clients write to it due to a misconfiguration. Since Redis
# 2.6 all slaves are read-only by default.
# Note: a read-only slave is not designed to be exposed to untrusted clients
# on the Internet; it is just a protection layer against misuse. A read-only
# slave still accepts all administrative commands, such as CONFIG and DEBUG.
# Use 'rename-command' to hide administrative and dangerous commands and
# improve the security of read-only slaves.
slave-read-only yes

# Replication sync strategy: disk or socket. When a new slave connects, or an
# old slave reconnects and cannot continue with a partial resynchronization,
# a full synchronization is required: a new RDB file is dumped and then
# transferred from the master to the slave. There are two ways to do this:
# 1) Disk-backed: the master forks a child process that dumps the RDB to
#    disk, and the parent (main) process then transfers the file to the
#    slaves incrementally.
# 2) Diskless: the master forks a child process that writes the RDB directly
#    to the slave sockets, touching neither the main process nor the disk.
# With disk-backed replication, the RDB file can serve additional slaves as
# soon as it has been produced. With diskless replication, slaves that arrive
# while a transfer is in progress must queue for the next one (once
# repl-diskless-sync-delay has passed). When diskless replication is used,
# the master waits repl-diskless-sync-delay seconds before starting, in the
# hope that several slaves arrive and the transfer can serve them together.
# Diskless works best when disks are slow and the network is fast.
# (Disk-backed is the default.)
repl-diskless-sync no

# With a delay of 0, transfer starts ASAP.
repl-diskless-sync-delay 5

# Slaves send PING requests to the master at the specified interval, which
# can be set with repl-ping-slave-period. The default is 10 seconds.
repl-ping-slave-period 10

# The following option sets the replication timeout for:
# 1) bulk transfer I/O during SYNC, from the point of view of the slave;
# 2) master timeout from the point of view of the slave (data, pings);
# 3) slave timeout from the point of view of the master (REPLCONF ACK pings).
# Make sure this value is greater than repl-ping-slave-period, otherwise a
# timeout is detected every time traffic between master and slave is low.
repl-timeout 60

# Disable TCP_NODELAY on the slave socket after SYNC?
# With "yes", Redis uses fewer TCP packets and less bandwidth to send data to
# slaves, but this can delay the data appearing on the slave by up to 40 ms
# with the default Linux kernel configuration. With "no", the latency of data
# transfer to the slave is reduced at the cost of more bandwidth. By default
# we optimize for low latency, but "yes" may be a good choice under very high
# traffic, or when master and slaves are many network hops away.
repl-disable-tcp-nodelay no

# Set the replication backlog size. The backlog is a buffer that accumulates
# slave data while slaves are disconnected for some time, so that when a
# slave reconnects, a full resync is often not needed: a partial resync is
# enough, transferring just the portion of data the slave missed while
# disconnected. The bigger the backlog, the longer a slave can be
# disconnected and still perform a partial resynchronization. The backlog is
# allocated only once, as soon as at least one slave connects.
repl-backlog-size 1mb

# After the master has had no connected slaves for some time, the backlog is
# freed. The following option sets the number of seconds, counted from the
# moment the last slave disconnected, after which the backlog buffer is
# released. 0 means never release the backlog.
repl-backlog-ttl 3600

# The slave priority is an integer published by Redis in its INFO output.
# Sentinel uses it to pick a slave to promote to master when the master no
# longer works correctly. A slave with a lower priority number is promoted
# first: given slaves with priorities 10, 100 and 25, Sentinel selects the
# one with priority 10. A priority of 0 is special: it marks the slave as
# unable to become master, so a slave with priority 0 is never selected by
# Sentinel for promotion. The default priority is 100.
slave-priority 100

# The master can stop accepting writes when fewer than N slaves, each with a
# lag of at most M seconds, are connected. The N slaves must be in "online"
# state, and their lag, measured in seconds from the last ping received from
# the slave (normally sent every second), must be less than or equal to the
# specified value. For example, to require at least 3 slaves with a lag of at
# most 10 seconds, use the two directives below. Setting either value to 0
# disables the feature. By default min-slaves-to-write is 0 (feature
# disabled) and min-slaves-max-lag is 10.
min-slaves-to-write 3
min-slaves-max-lag 10
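The min-slaves-to-write / min-slaves-max-lag check can be sketched as follows (an illustration of the rule as described above, not Redis internals): the master accepts writes only while at least N slaves report a lag of at most M seconds.

```python
# Sketch of the min-slaves-to-write / min-slaves-max-lag acceptance rule.
def accepts_writes(slave_lags, min_to_write=3, max_lag=10) -> bool:
    if min_to_write == 0 or max_lag == 0:
        return True  # either value at 0 disables the feature
    fresh = sum(1 for lag in slave_lags if lag <= max_lag)
    return fresh >= min_to_write

print(accepts_writes([1, 2, 3]))   # True: three slaves within 10s of lag
print(accepts_writes([1, 2, 30]))  # False: only two slaves are fresh enough
```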
Redis configuration-Security
# Require clients to authenticate with a password before processing any other
# command. This can be useful in environments where you do not trust other
# parties with access to the redis server. For backward compatibility this is
# commented out, and most people do not need authentication (e.g. when Redis
# runs on their own server).
# Warning: since Redis is very fast, an outside attacker can try a very large
# number of passwords per second against it. This means you should use a very
# strong password, otherwise it will be easy to break.
requirepass foobared

# Command renaming. In a shared environment it is possible to change the name
# of dangerous commands. For instance, CONFIG can be renamed to something
# hard to guess, so that it remains available for internal tools but not for
# normal clients.
# Example:
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
# Note: renamed commands that get recorded in the AOF file or transmitted to
# slaves may cause problems. You can also disable a command entirely by
# renaming it to an empty string:
# rename-command CONFIG ""
Redis configuration-Restrictions
# Set the maximum number of simultaneously connected clients. The default is
# 10000. If the Redis server cannot raise its file-descriptor limit to the
# requested value, the effective maximum number of clients becomes the
# current file limit minus 32 (some descriptors are reserved for internal
# use). Once the limit is reached, Redis closes all new connections with the
# error 'max number of clients reached'.
maxclients 10000

# Don't use more memory than the specified amount. When the limit is reached,
# Redis removes keys according to the selected eviction policy (see
# maxmemory-policy). If Redis cannot remove keys under the chosen policy, or
# the policy is "noeviction", Redis replies with an error to commands that
# would use more memory, such as SET and LPUSH, while continuing to serve
# read-only commands such as GET. This option is usually useful when Redis is
# used as an LRU cache, or when a hard memory limit must be set for the
# instance (using the "noeviction" policy).
# Warning: when slaves are attached to an instance at its memory limit, the
# output buffers the master needs to feed the slaves are not counted against
# the used memory. Otherwise evicted keys could trigger a loop: network
# problems or resync events fill the slave output buffers with DEL commands
# for the evicted keys, which raises memory use and triggers the eviction of
# more keys, and so on until the database is completely emptied.
# In short: if you attach multiple slaves, set maxmemory somewhat below the
# real limit so there is free memory for the slave output buffers (this is
# unnecessary if the policy is "noeviction").
maxmemory <bytes>

# Max memory policy: how Redis selects what to remove when maxmemory is
# reached. Choose among the following five behaviors:
# volatile-lru    -> remove keys with an expire set, using an LRU algorithm
# allkeys-lru     -> remove any key, using an LRU algorithm
# volatile-random -> remove a random key among those with an expire set
# allkeys-random  -> remove a random key, any key
# volatile-ttl    -> remove the key with the nearest expire time (minor TTL)
# noeviction      -> don't remove anything; return an error on writes
# Note: with any of these policies, Redis returns an error on write
# operations when there is no suitable key for eviction. The commands
# involved include SET, SETNX, SETEX, APPEND, INCR, DECR, RPUSH, LPUSH,
# RPUSHX, LPUSHX, LINSERT, LSET, SADD, SINTERSTORE, SUNIONSTORE,
# SDIFFSTORE, ZADD, ZINTERSTORE, HSET, HMSET, HINCRBY, GETSET, MSET, EXEC,
# SORT, and similar write commands.
# The default is:
maxmemory-policy volatile-lru

# The LRU and minimal-TTL algorithms are not exact, but approximated (to
# save memory), so you can tune them via the sample size. By default Redis
# checks three keys and picks the one that was used least recently; you can
# change the sample size with the following directive.
maxmemory-samples 3
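The approximated LRU described above can be sketched as sampling: pick `maxmemory-samples` random keys and evict the least recently used among them. This is an illustration of the idea (the real implementation tracks per-key idle time internally; here we stand in last-access timestamps for it).

```python
# Sketch of sampled (approximated) LRU eviction, as with maxmemory-samples.
import random

def evict_one(last_access: dict, samples: int = 3) -> str:
    # Sample `samples` random keys and evict the least recently used of them.
    candidates = random.sample(list(last_access), min(samples, len(last_access)))
    victim = min(candidates, key=lambda k: last_access[k])
    del last_access[victim]
    return victim

keys = {"a": 100, "b": 5, "c": 50, "d": 70}  # key -> last access time
print(evict_one(keys, samples=4))  # "b": sampling all keys gives exact LRU
```

With a small sample the choice is only probably the globally oldest key; raising `maxmemory-samples` trades CPU for accuracy.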
Redis configuration-append Mode
# By default Redis asynchronously dumps the dataset to disk. This mode is
# good enough in many applications, but an issue with the Redis process or a
# power outage may result in the loss of a window of write operations
# (depending on the configured save points).
# The Append Only File is an alternative, more reliable persistence mode.
# Using the default fsync policy (see below), Redis can lose just one second
# of writes in a dramatic event like a server power outage, or a single write
# if something is wrong with the Redis process itself but the operating
# system is still running correctly.
# AOF and RDB persistence can be enabled at the same time without problems.
# If AOF is enabled, Redis loads the AOF file at startup, which gives the
# better durability guarantee.
# See http://redis.io/topics/persistence for more information.
appendonly no

# The name of the append-only file (default: "appendonly.aof").
appendfilename "appendonly.aof"

# The fsync() system call tells the operating system to actually write data
# to disk instead of waiting for more data in the output buffer. Some
# operating systems really flush data to disk immediately; others just try to
# do so as soon as possible. Redis supports three modes:
# no       : don't fsync; let the OS flush when it wants. Faster.
# always   : fsync after every write to the append-only log. Slow, but
#            safest.
# everysec : fsync once per second. A compromise.
# The default "everysec" usually strikes the right balance between speed and
# data safety. It is up to you to decide whether to relax this to "no" for
# better performance (if you can tolerate some data loss, consider the
# default snapshot persistence instead), or on the contrary to use "always",
# which is slower but safer than everysec.
# See the following article for more details:
# http://antirez.com/post/redis-persistence-demystified.html
# If unsure, use "everysec".
appendfsync everysec

# When the fsync policy is "always" or "everysec", and a background saving
# process (a background save or an AOF log rewrite) is performing a lot of
# disk I/O, on some Linux configurations Redis may block too long on the
# fsync() call. Note that there is no fix for this at present: even an
# fsync() from a different thread will block our synchronous write(2) call.
# To mitigate this problem, the following option prevents fsync() from being
# called in the main process while a BGSAVE or BGREWRITEAOF is in progress.
# This means that while a child is saving, Redis is effectively in
# "appendfsync no" mode: in the worst case (with default Linux settings), up
# to 30 seconds of log may be lost. If you have latency problems, set this to
# "yes"; otherwise leave it as "no", which is the safest choice from the
# point of view of durability.
no-appendfsync-on-rewrite no

# Automatic rewrite of the append-only file. Redis can automatically rewrite
# the AOF log with BGREWRITEAOF when it grows by the specified percentage.
# How it works: Redis remembers the size of the AOF file after the latest
# rewrite (or, if no rewrite has happened since restart, the size of the AOF
# at startup) and compares it with the current size. If the current size
# exceeds the base size by the specified percentage, a rewrite is triggered.
# You must also specify a minimum size for the rewritten log, to avoid
# rewriting the file when the percentage is reached but the file is still
# small. A percentage of zero disables the automatic AOF rewrite feature.
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb

# The AOF file may be truncated at the end (for example after a system crash,
# especially when an ext4 filesystem is mounted without the data=ordered
# option; this does not happen when Redis itself crashes but the OS stays
# up). In that case Redis can either refuse to start with an error, or load
# as much data as possible (the default). If aof-load-truncated is yes, a
# truncated AOF is loaded and Redis emits a log to notify the user. If no,
# Redis aborts with an error and you must fix the AOF file with
# redis-check-aof before restarting.
aof-load-truncated yes
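The two rewrite directives combine as described above: a rewrite fires only when the file has both grown past the percentage relative to the base size and exceeds the minimum size. An illustrative check (not Redis source):

```python
# Sketch of the automatic AOF rewrite trigger:
# rewrite when growth over the base size >= percentage AND size >= min_size.
def should_rewrite_aof(current: int, base: int,
                       percentage: int = 100,
                       min_size: int = 64 * 1024 * 1024) -> bool:
    if percentage == 0:
        return False  # automatic rewrite disabled
    if current < min_size:
        return False  # too small to be worth rewriting yet
    growth = (current - base) * 100 // base if base else 100
    return growth >= percentage

mb = 1024 * 1024
print(should_rewrite_aof(current=130 * mb, base=64 * mb))  # True: grew >100%
print(should_rewrite_aof(current=32 * mb, base=10 * mb))   # False: under 64mb
```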
Redis configuration-LUA script
# Maximum execution time of a Lua script, in milliseconds. When the limit is
# reached, Redis logs the event and starts replying to queries with an error.
# While a script runs past the limit, only SCRIPT KILL and SHUTDOWN NOSAVE
# are available: the first can stop a script that has not yet called any
# write command; once the script has performed writes, SHUTDOWN NOSAVE is the
# only way to stop it.
# Set this to 0 or a negative value for an unlimited execution time.
lua-time-limit 5000
Redis configuration-Cluster

WARNING: Redis Cluster is not considered stable in the 3.0.x releases.

# Enable cluster mode.
cluster-enabled yes

# Every cluster node has its own cluster configuration file.
cluster-config-file nodes-6379.conf

# Cluster node timeout, in milliseconds.
cluster-node-timeout 15000

# Controls slave failover. Set to 0, slaves always try to fail over the
# master. Set to a positive number, a slave refuses to fail over if its link
# with the master has been down for longer than (factor * node timeout).
cluster-slave-validity-factor 10

# Minimum number of slaves a master must keep connected.
cluster-migration-barrier 1

# Yes by default: when part of the key space is not covered (some nodes are
# unreachable or failed), the cluster stops accepting write operations. Set
# it to no to keep serving the part of the key space that is still covered.
cluster-require-full-coverage yes
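The cluster-slave-validity-factor rule above can be sketched as a simple inequality (an illustration of the rule, not the Redis Cluster implementation): a slave only attempts a failover while its disconnection time stays within factor times the node timeout.

```python
# Sketch of the cluster-slave-validity-factor eligibility check.
def may_failover(down_ms: int, node_timeout_ms: int = 15000,
                 factor: int = 10) -> bool:
    if factor == 0:
        return True  # factor 0: always attempt failover
    return down_ms <= node_timeout_ms * factor

print(may_failover(60_000))   # True: 60s is inside the 150s validity window
print(may_failover(200_000))  # False: data considered too old to promote
```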
Redis configuration-slow log
# The Redis slow log records queries that exceeded a specified execution
# time. The execution time does not include I/O such as talking with the
# client or sending the reply, but only the time actually needed to execute
# the command (the only stage where the thread is blocked and cannot serve
# other requests in the meantime).
# The slow log has two parameters: one tells Redis the execution time, in
# microseconds, beyond which a command is logged; the other is the maximum
# length of the slow log. When a new command is logged, the oldest one is
# removed from the queue of logged commands.
# The time below is expressed in microseconds, so 1000000 is one second. Note
# that a negative value disables the slow log, while a value of zero forces
# the logging of every command.
slowlog-log-slower-than 10000

# There is no limit to this length; it just consumes memory. You can reclaim
# the memory used by the slow log with SLOWLOG RESET.
slowlog-max-len 128
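The two parameters interact as a threshold plus a bounded queue. A sketch of that behavior (an illustration, not the Redis implementation):

```python
# Sketch of the slow log: keep commands slower than the threshold
# (microseconds), bounded to max_len entries, dropping the oldest when full.
from collections import deque

class SlowLog:
    def __init__(self, slower_than_us=10000, max_len=128):
        self.slower_than_us = slower_than_us
        self.entries = deque(maxlen=max_len)  # oldest entry dropped when full

    def record(self, command: str, duration_us: int):
        if self.slower_than_us < 0:
            return  # negative threshold disables the slow log
        if self.slower_than_us == 0 or duration_us > self.slower_than_us:
            self.entries.append((command, duration_us))

log = SlowLog()
log.record("GET foo", 120)     # fast: not logged
log.record("KEYS *", 250000)   # slow: logged
print(list(log.entries))       # [('KEYS *', 250000)]
```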
Redis configuration-latency monitoring
# Latency monitoring is disabled by default, since it is mostly not needed.
# The threshold is in milliseconds; only operations slower than it are
# tracked.
latency-monitor-threshold 0
Redis configuration-Event Notification
# Redis can notify Pub/Sub clients about events happening in the key space.
# This feature is documented at http://redis.io/topics/keyspace-events
# For instance, if keyspace event notification is enabled and a client
# performs a DEL on key "foo" of database 0, two messages are published via
# Pub/Sub:
# PUBLISH __keyspace@0__:foo del
# PUBLISH __keyevent@0__:del foo
# You can select the classes of events Redis will notify among the following
# table; each class is identified by a single character:
# K  keyspace events, published with a __keyspace@<db>__ prefix
# E  keyevent events, published with a __keyevent@<db>__ prefix
# g  generic, non type-specific commands such as DEL, EXPIRE, RENAME, ...
# $  string commands
# l  list commands
# s  set commands
# h  hash commands
# z  sorted set commands
# x  expired events (generated every time a key expires)
# e  evicted events (generated when a key is evicted for maxmemory)
# A  alias for "g$lshzxe", so "AKE" means all events
# The notify-keyspace-events directive takes as its argument a string
# composed of zero or more of the characters above. The empty string disables
# notifications entirely.
# Example: to enable list and generic event notifications:
# notify-keyspace-events Elg
# Example 2: to get the stream of expired keys by subscribing to the
# __keyevent@0__:expired channel, use:
# notify-keyspace-events Ex
# Notifications are disabled by default, because most users do not need the
# feature and it has some performance overhead. Note that if you do not
# specify at least one of K or E, no event will ever be delivered.
notify-keyspace-events ""
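The two channel names from the DEL example above follow a fixed pattern, which a small helper can make explicit (a sketch reproducing the naming scheme, not code from Redis):

```python
# Build the two Pub/Sub (channel, message) pairs published for one keyspace
# event, matching the DEL example above (db 0, key "foo").
def notification_channels(db: int, key: str, event: str):
    keyspace = (f"__keyspace@{db}__:{key}", event)   # channel, message
    keyevent = (f"__keyevent@{db}__:{event}", key)   # channel, message
    return keyspace, keyevent

ks, ke = notification_channels(0, "foo", "del")
print(ks)  # ('__keyspace@0__:foo', 'del')
print(ke)  # ('__keyevent@0__:del', 'foo')
```

Keyspace channels answer "what happened to this key?"; keyevent channels answer "which keys did this event happen to?".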
Redis configuration-Advanced Configuration
# Hashes are encoded using a memory-efficient data structure when they have a
# small number of entries and the biggest entry does not exceed a given
# threshold. Internally a Redis hash is a HashMap of the value; when the map
# has few members it is stored in a compact, almost linear format that saves
# a lot of pointer overhead. If either of the following two limits is
# exceeded, it is converted into a real HashMap. By default, a hash with at
# most 512 entries, each value at most 64 bytes long, uses the compact
# linear encoding; beyond that it is automatically converted.
hash-max-ziplist-entries 512
hash-max-ziplist-value 64

# Similarly to hashes, small lists are encoded in a special, pointer-free
# compact format that saves a lot of space. The limits below give the maximum
# number of nodes and the maximum node value size, in bytes, for which the
# compact format is used.
list-max-ziplist-entries 512
list-max-ziplist-value 64

# Sets have a special encoding in just one case: when a set consists entirely
# of strings that happen to be 64-bit unsigned integers. The following
# directive limits the size of the set for which this special, memory-saving
# encoding is used.
set-max-intset-entries 512

# Similarly to hashes and lists, sorted sets are also specially encoded to
# save a lot of space. This encoding is only used when the length and the
# elements of the sorted set are below the following limits:
zset-max-ziplist-entries 128
zset-max-ziplist-value 64

# HyperLogLog introduction:
# http://www.redis.io/topics/data-types-intro#hyperloglogs
# HyperLogLog sparse representation byte limit. Above 16000 the limit is
# pointless, since at that point the dense representation is more memory
# efficient. The suggested value is about 3000.
hll-sparse-max-bytes 3000

# Active rehashing uses 1 millisecond of every 100 CPU milliseconds to rehash
# the main Redis hash table (the top-level key-value mapping table). The hash
# table implementation Redis uses (see dict.c) performs lazy rehashing: the
# more operations you run on a rehashing table, the more rehashing steps are
# performed; conversely, if the server is idle, the rehashing never completes
# and the hash table keeps using extra memory. By default, ten active
# rehashing steps are performed every second to free memory as soon as
# possible.
# Suggestion: if you have hard latency requirements, use "activerehashing
# no", since occasional requests may otherwise be delayed by 2 ms. If you do
# not care much about latency and want to free memory as soon as possible,
# use "activerehashing yes".
activerehashing yes

# The client output buffer limits can be used to force-disconnect clients
# that fail to read data from the server fast enough for some reason (a
# common one is that a Pub/Sub client cannot consume messages as fast as the
# publisher produces them). Different limits can be set for three classes of
# client:
# normal -> normal clients
# slave  -> slave and MONITOR clients
# pubsub -> clients subscribed to at least one pubsub channel or pattern
# The syntax of each client-output-buffer-limit directive is:
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
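The small-hash encoding decision above boils down to two thresholds checked together. A sketch of that rule (an illustration of the decision as described, not Redis internals, which also count bytes rather than characters):

```python
# Illustrative rule: a hash stays in the compact "ziplist" encoding only
# while BOTH limits hold; otherwise it becomes a real hash table.
def hash_encoding(fields: dict,
                  max_entries: int = 512, max_value: int = 64) -> str:
    small = (len(fields) <= max_entries and
             all(len(str(v)) <= max_value for v in fields.values()))
    return "ziplist" if small else "hashtable"

print(hash_encoding({"a": "x", "b": "y"}))  # ziplist
print(hash_encoding({"a": "x" * 100}))      # hashtable: value too long
```

Note the conversion is one-way: once a hash grows past a limit and converts, it does not go back to the compact encoding when it shrinks.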
