Redis 2.8 configuration file explained in detail _redis

Source: Internet
Author: User

Added by Zhj: The original source of this article could not be found. The separate Chinese translation of the Redis configuration file is also good and can be read together with this article. Both articles cover Redis 2.8.

When redis-server is started directly with no arguments, Redis runs with its default configuration. Use redis-server xxx.conf to run the Redis service with a specified configuration file. The following is an explanation of the Redis 2.8.9 configuration file.
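For example (a quick sketch; the path /etc/redis/6379.conf and the port are only illustrative), start a server with a specific configuration file and then check or change a setting from redis-cli:

  redis-server /etc/redis/6379.conf
  redis-cli -p 6379 config get maxmemory
  redis-cli -p 6379 config set loglevel notice

CONFIG GET / CONFIG SET let you inspect and adjust most of the parameters described below at runtime, without restarting the server.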


# By default Redis does not run in the background (daemonize no). If you need it to run as a daemon, change this value to yes.
daemonize yes

# When Redis runs as a daemon, it writes its PID to /var/run/redis.pid by default; you can configure another location.
# When running multiple Redis services on the same machine, each instance needs its own PID file and port.
pidfile /var/run/redis_6379.pid

# The port Redis listens on; the default is 6379.
port 6379

# In high-concurrency environments, set a large TCP backlog to avoid problems with slow client connections.
tcp-backlog 511

# Restrict Redis to accept requests only from the listed IP addresses; if not set, requests from all addresses are accepted.
# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1

# Client connection timeout, in seconds. If a client issues no command within this time, the connection is closed.
# 0 disables the timeout.
timeout 0

# TCP keepalive.
# On Linux, this value (in seconds) is the interval at which keepalive ACKs are sent to the client. Note that it takes
# about twice this time to actually close a dead connection.
# The default is 0 (disabled).
tcp-keepalive 0

# Logging level; notice is recommended for production.
# Redis supports four levels: debug, verbose, notice and warning; the default is verbose.
# debug   logs a lot of information, useful for development and testing
# verbose logs useful information, but not as much as debug
# notice  moderately verbose, suitable for production
# warning only very important or critical messages are logged
loglevel notice

# Log file location.
# The default is stdout (standard output); when running as a daemon with no logfile configured, output goes to /dev/null.
logfile /var/log/redis/redis.log

# Number of available databases. The default is 16; the default database is 0, and databases are numbered from 0 to (databases-1).
databases 16
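As noted above, each additional instance on the same machine needs its own port, PID file and log file. A minimal sketch of a second instance's settings (port 6380 and the paths are only examples) might be:

  daemonize yes
  port 6380
  pidfile /var/run/redis_6380.pid
  logfile /var/log/redis/redis_6380.log

It would then be started with redis-server /etc/redis/redis_6380.conf (path assumed).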
################################# SNAPSHOT #################################
# Save the dataset to disk in the following format:
# save <seconds> <changes>
# meaning: if at least <changes> keys changed within <seconds> seconds, synchronize the data to the RDB data file,
# i.e. trigger a snapshot. Several conditions can be combined; the default configuration file sets three:
# save 900 1      after 900 seconds if at least 1 key changed
# save 300 10     after 300 seconds if at least 10 keys changed
# save 60 10000   after 60 seconds if at least 10000 keys changed
save 900 1
save 300 10
save 60 10000

# Stop accepting writes if a background save fails.
stop-writes-on-bgsave-error yes

# Whether to compress the data when persisting it to the local database (the RDB file); the default is yes.
rdbcompression yes

# Whether to checksum the RDB file.
rdbchecksum yes

# Name of the local persistent database file; the default is dump.rdb.
dbfilename dump.rdb

# Working directory.
# The path where the database snapshot (RDB backup) file is placed.
# The path and the file name are configured separately because, while a backup is in progress, Redis writes the
# current database state to a temporary file and, when the backup completes, replaces it with the file named above.
# Both the temporary file and the backup file configured here are placed in this directory.
# AOF files are also stored in this directory.
# Note that this must be a directory, not a file.
dir /var/lib/redis-server/
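As an illustration (a sketch, not part of the original configuration): the save points can also be changed at runtime, and a snapshot can be forced by hand with redis-cli:

  redis-cli config set save "900 1 300 10 60 10000"
  redis-cli bgsave
  redis-cli lastsave

LASTSAVE returns the Unix timestamp of the last successful save, which is a simple way to confirm the background dump completed.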
################################# REPLICATION #################################
# Master-slave replication. Use slaveof to make this Redis instance a slave (copy) of another Redis server.
# Set the IP address and port of the master service here; when this machine is a slave, it automatically synchronizes
# data from the master when Redis starts.
# slaveof <masterip> <masterport>

# If the master service is password protected (with requirepass), the slave must supply that password to connect.
# masterauth <master-password>

# When the slave loses its connection to the master, or while replication is still in progress, the slave can behave in two ways:
# 1) if slave-serve-stale-data is set to yes (the default), the slave keeps responding to client requests, possibly with stale data;
# 2) if slave-serve-stale-data is set to no, any request other than INFO and SLAVEOF returns the error
#    "SYNC with master in progress".
slave-serve-stale-data yes

# Configure whether a slave instance accepts writes.
# Writing to a slave can be useful for storing ephemeral data (which is easily removed after resyncing with the
# master), but may cause problems if clients write to it by mistake.
# Since Redis 2.6, slaves are read-only by default.
slave-read-only yes

# Slaves send PINGs to the master at a fixed interval. The interval can be set with repl-ping-slave-period; the default is 10 seconds.
# repl-ping-slave-period 10

# repl-timeout sets the timeout for bulk data transfers from the master and for ping replies; the default is 60 seconds.
# Make sure repl-timeout is greater than repl-ping-slave-period.
# repl-timeout 60

# Disable TCP_NODELAY on the slave socket after SYNC?
# If you choose yes, Redis uses fewer, smaller TCP packets and less bandwidth to send data to slaves, but this can
# delay the data appearing on the slave side, up to 40 milliseconds with the default Linux kernel configuration.
# If you choose no, the delay of data appearing on the slave side is reduced, but more bandwidth is used for replication.
repl-disable-tcp-nodelay no

# Set the replication backlog size.
# The larger the replication backlog, the longer a slave can be disconnected and still be able to perform a partial
# resynchronization later.
# The backlog is allocated only once, when there is at least one connected slave.
# repl-backlog-size 1mb

# After the master has had no connected slaves for some time, the backlog is freed.
# The following option sets the time (in seconds) that must pass, after the last slave disconnects, before the backlog is released.
# 0 means never release the backlog.
# repl-backlog-ttl 3600

# If the master stops working, then among multiple slaves the one with the lowest priority value is selected for
# promotion to master; a priority of 0 means the slave will never be promoted.
slave-priority 100

# The master can be configured to stop accepting writes if fewer than N slaves are connected, or if their lag is more than M seconds.
# For example, to require at least 3 connected slaves with a lag of <= 10 seconds:
# min-slaves-to-write 3
# min-slaves-max-lag 10
# Setting either value to 0 disables the feature.
# By default min-slaves-to-write is 0 (disabled) and min-slaves-max-lag is 10.
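For illustration (a sketch; the addresses are made up), a running instance can be turned into a slave and inspected with redis-cli, instead of editing the configuration file:

  redis-cli -h 192.168.1.101 slaveof 192.168.1.100 6379
  redis-cli -h 192.168.1.101 info replication
  redis-cli -h 192.168.1.101 slaveof no one

INFO replication shows role, master_link_status and slave_read_only; SLAVEOF NO ONE turns the instance back into a standalone master.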
################################## SECURITY ###################################
# Require clients to authenticate with a password before issuing any other command.
# Warning: since Redis is very fast, on a good server an outside attacker can try up to 150k passwords per second,
# which means you should set a very, very strong password to prevent brute-force attacks.
# requirepass foobared

# Command renaming.
# In a shared environment you can rename relatively dangerous commands.
# For example, rename CONFIG to a string that is not easily guessed:
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
# If you want to remove a command entirely, simply rename it to the empty string "", as follows:
# rename-command CONFIG ""
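A quick sketch of how the password is used in practice (the password shown is only an example, not a recommendation):

  requirepass My_Very_Long_Passw0rd

Then, from a client:

  redis-cli -a My_Very_Long_Passw0rd ping

or, inside an already open redis-cli session:

  AUTH My_Very_Long_Passw0rd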
################################### LIMITS ####################################
# Maximum number of client connections at the same time; unlimited by default.
# The number of connections Redis can actually open is bounded by the maximum number of file descriptors the Redis
# process may open. Setting maxclients to 0 means no limit.
# When the number of client connections reaches the limit, Redis closes new connections and returns the error
# "max number of clients reached".
# maxclients 10000

# Specify the maximum amount of memory Redis may use. Redis loads data into memory at startup and, once the maximum
# memory is reached, first tries to remove expired keys. If Redis still cannot free enough space after applying the
# eviction policy, or if the policy is set to "noeviction", commands that need more memory (SET, LPUSH and so on)
# return an error, but reads keep working.
# Note: Redis' VM mechanism keeps keys in memory while values can be stored in the swap area. This option is useful
# together with the LRU policies.
# The maxmemory setting is better suited to using Redis as a memcached-like cache rather than as a real database.
# When Redis is used as a real database, memory usage is a significant overhead.
# maxmemory <bytes>

# When the memory limit is reached, which data should Redis evict? There are six policies to choose from:
# volatile-lru    -> evict keys that have an expire set, using an LRU algorithm (LRU: Least Recently Used)
# allkeys-lru     -> evict any key, using the LRU algorithm
# volatile-random -> evict a random key among those that have an expire set
# allkeys-random  -> evict a random key, any key
# volatile-ttl    -> evict the key with the nearest expire time (smallest TTL)
# noeviction      -> evict nothing; just return an error on write operations
# Note: with any of the above policies, if there is no suitable key to evict, Redis returns an error on writes.
# The default is:
# maxmemory-policy volatile-lru

# The LRU and minimal-TTL algorithms are not exact but approximated algorithms (to save memory); you can choose the
# sample size used for the check.
# By default Redis checks 3 samples; this can be changed with maxmemory-samples.
# maxmemory-samples 3
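A hedged sketch of using Redis as an LRU cache with these options (the 100mb figure is arbitrary):

  maxmemory 100mb
  maxmemory-policy allkeys-lru
  maxmemory-samples 3

or at runtime:

  redis-cli config set maxmemory 100mb
  redis-cli config set maxmemory-policy allkeys-lru
  redis-cli info memory

INFO memory shows used_memory, which can be compared against the configured limit.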
############################## APPEND ONLY MODE ###############################
# By default Redis asynchronously dumps a snapshot of the dataset to disk in the background, but this backup is
# time-consuming and cannot run very frequently, so an incident such as a power cut or pulling the plug can cause a
# relatively large window of data loss.
# Redis therefore provides another, more reliable way of persistence and disaster recovery.
# With append only mode enabled, Redis appends every write request it receives to the appendonly.aof file; when Redis
# restarts, it replays this file to restore the previous state.
# This can make appendonly.aof grow very large, so Redis also supports the BGREWRITEAOF command, which rewrites and
# compacts appendonly.aof.
# Asynchronous RDB dumps and AOF can be enabled at the same time.
appendonly no

# AOF file name (default: "appendonly.aof")
# appendfilename appendonly.aof

# Redis supports three fsync policies for the AOF file:
# no:       do not fsync; let the operating system decide. Fastest.
# always:   fsync after every write. Slow, but safest.
# everysec: accumulate writes and fsync once per second. A compromise.
# The default is "everysec", which is the best compromise between speed and safety.
# If you want Redis to run as fast as possible you can set it to "no" and let the operating system decide when to
# flush; or, to make the data safer, set it to "always". If you are unsure, use "everysec".
# appendfsync always
appendfsync everysec
# appendfsync no

# When the fsync policy is set to always or everysec and a background process (a background save or an AOF log
# rewrite) is performing a lot of I/O, on some Linux configurations Redis may block too long on the fsync() call.
# Note that there is no fix for this at the moment, even though the fsync is performed on a different thread.
# To mitigate the problem, the following parameter can be set.
no-appendfsync-on-rewrite no

# Automatic AOF rewrite.
# When the AOF file grows beyond a certain size, Redis can call BGREWRITEAOF automatically to rewrite the log file.
# It works like this: Redis remembers the size of the AOF file after the last rewrite (if no rewrite has happened
# since startup, the size at startup is used).
# That base size is compared with the current size; if the current size is larger than the base size by the given
# percentage, a rewrite is triggered. You also need to specify a minimum size for the AOF rewrite, which avoids
# rewriting the file while it is still small even if it has grown by a large percentage.
# Set the percentage to 0 to disable this feature.
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
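As a sketch of turning AOF on for a running instance (default file locations assumed):

  redis-cli config set appendonly yes
  redis-cli bgrewriteaof
  redis-cli info persistence

INFO persistence reports aof_enabled, aof_rewrite_in_progress and aof_last_bgrewrite_status, which is enough to confirm the switch and the rewrite succeeded.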
################################ LUA SCRIPTING #################################
# Maximum execution time of a Lua script, in milliseconds: 5000 ms (5 seconds). A value of 0 or a negative number
# means unlimited execution time.
lua-time-limit 5000
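A minimal sketch of a script that would be subject to this limit (the key and value names are arbitrary):

  redis-cli eval "return redis.call('set', KEYS[1], ARGV[1])" 1 greeting hello
  redis-cli get greeting

The second command should return "hello"; a script running longer than lua-time-limit starts answering other clients with a BUSY error.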
################################## SLOW LOG ###################################
# The Redis slow log records commands that exceed a given execution time. The execution time does not include I/O,
# such as talking with the client or sending the reply, but only the time actually needed to execute the command.
# The slow log is configured with two parameters: slowlog-log-slower-than tells Redis the execution time, in
# microseconds, above which a command is logged; the other parameter is the length of the slow log. When a new
# command is logged, the oldest one is removed from the queue.
# The time below is in microseconds, so 1000000 represents one second.
# Note that a negative value disables the slow log, while a value of 0 forces every command to be logged.
slowlog-log-slower-than 10000

# There is no limit to this length; just be aware that it consumes memory.
# The memory used by the slow log can be reclaimed with SLOWLOG RESET.
# The recommended default value is 128; when the slow log exceeds 128 entries, the oldest entry is pushed out of the queue.
slowlog-max-len 128
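A sketch of inspecting the slow log from redis-cli (DEBUG SLEEP is used here only to produce an artificially slow command):

  redis-cli config set slowlog-log-slower-than 10000
  redis-cli debug sleep 0.5
  redis-cli slowlog get 10
  redis-cli slowlog len
  redis-cli slowlog reset

SLOWLOG GET shows, for each entry, its id, timestamp, execution time in microseconds and the command with its arguments.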
############################# EVENT NOTIFICATION ##############################
# Redis can notify Pub/Sub clients when events happen in the key space.
# The class of events Redis notifies about is selected with the characters in the following table; each event type is
# identified by a single character:
# K   keyspace events, published with a __keyspace@<db>__ prefix
# E   keyevent events, published with a __keyevent@<db>__ prefix
# g   generic commands (not type-specific) such as DEL, EXPIRE, RENAME, ...
# $   string commands
# l   list commands
# s   set commands
# h   hash commands
# z   sorted set commands
# x   expired events (generated every time a key expires)
# e   evicted events (generated when a key is evicted because of maxmemory)
# A   alias for "g$lshzxe", so "AKE" means all events
# notify-keyspace-events takes as its argument a string composed of zero or more of the characters above.
# The empty string means notifications are disabled.
# Example: to enable list and generic events:
# notify-keyspace-events Elg
# Notifications are disabled by default, because most users do not need this feature and it has some performance cost.
# Note that if you do not specify at least one of K or E, no events will ever be delivered.
notify-keyspace-events ""
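A sketch of watching expired-key events on database 0 (the key name is arbitrary). In one terminal:

  redis-cli config set notify-keyspace-events Ex
  redis-cli psubscribe '__keyevent@0__:expired'

In another terminal:

  redis-cli set session:123 abc ex 5

Roughly five seconds later the subscriber receives a message on __keyevent@0__:expired with payload "session:123" (expired events are fired when Redis actually notices the expiration, so the message can arrive slightly later).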
############################ ADVANCED CONFIGURATION ###########################
# When a hash contains no more than the specified number of elements and its largest element does not exceed the
# given threshold, it is stored in a special encoding that greatly reduces memory usage. Both thresholds can be set here.
# The value of a Redis hash is internally a HashMap with two different implementations: when the hash has relatively
# few members, Redis saves memory by storing it in a compact, one-dimensional-array-like structure instead of a real
# HashMap, and the value's redisObject encoding is zipmap; when the number of members grows, it is converted
# automatically into a real HashMap, and the encoding becomes ht.
hash-max-zipmap-entries 512
hash-max-zipmap-value 64

# Like hashes, small lists are encoded in a special way to save space.
# A list node value smaller than the given number of bytes keeps the compact storage format.
list-max-ziplist-entries 128
list-max-ziplist-value 64

# Sets whose elements are all numbers and that contain no more than the given number of entries are stored in the
# compact (intset) format.
set-max-intset-entries 512

# As with hashes and lists, sorted sets below a specified length are stored in a special encoding to save space.
# A sorted set node value smaller than the given number of bytes keeps the compact storage format.
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
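A sketch of checking which encoding is actually in use (key names arbitrary; the exact encoding name reported for small hashes depends on the Redis version):

  redis-cli hset small:hash f1 v1
  redis-cli object encoding small:hash
  redis-cli sadd small:set 1 2 3
  redis-cli object encoding small:set

The small set should report "intset"; once the thresholds above are exceeded, the encodings switch to the full hashtable/linkedlist/skiplist structures.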
# Redis uses 1 millisecond of CPU time every 100 milliseconds to incrementally rehash its main hash table, in order
# to reduce memory usage.
# If your use case has very strict real-time requirements and an occasional extra 2 millisecond delay on a request is
# not acceptable, set this to no.
# If you do not have such strict real-time requirements, set it to yes so that memory can be freed as quickly as possible.
activerehashing yes

# Client output buffer limits can be used to force the disconnection of clients that, for some reason, are not reading
# data from the server fast enough (a common cause is that a Pub/Sub client cannot consume messages as fast as the
# publisher produces them).
# The limit can be set separately for three classes of clients:
# normal -> normal clients
# slave  -> slave and MONITOR clients
# pubsub -> clients subscribed to at least one Pub/Sub channel or pattern
# The syntax of each client-output-buffer-limit line is:
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
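For reference, a sketch of these limits as they appear in a stock 2.8 configuration (a client is dropped when it hits the hard limit, or when it stays above the soft limit for soft-seconds):

  client-output-buffer-limit normal 0 0 0
  client-output-buffer-limit slave 256mb 64mb 60
  client-output-buffer-limit pubsub 32mb 8mb 60

redis-cli client list can be used to watch each client's output buffer usage via the obl, oll and omem fields.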
