# By default Redis does not run as a daemon. Set this to yes if you want it to
# run in the background. When daemonized, Redis writes a pid file
# (/var/run/redis.pid by default); you can point it somewhere else.
daemonize yes

# When running several Redis instances you must give each one its own pid file
# and port.
pidfile /var/run/redis_6379.pid

# The port Redis listens on; 6379 by default.
port 6379

# In high-concurrency environments a large TCP backlog helps avoid slow-client
# connection problems.
tcp-backlog 511

# Accept requests only on these addresses. If no bind is given, Redis listens
# on all interfaces.
# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1

# Close a client connection after it has been idle for this many seconds
# (i.e. it has issued no commands). 0 disables the timeout.
timeout 0

# TCP keepalive. On Linux this value (in seconds) is the interval used to send
# ACKs. Note that it takes twice this time to actually close a dead connection.
# The default is 0 (disabled).
tcp-keepalive 0

# Logging level; notice is recommended for production. Redis supports four
# levels: debug, verbose, notice, warning (default: verbose).
#   debug    a lot of information, useful for development and testing
#   verbose  useful information, but not as noisy as debug
#   notice   moderately verbose, suitable for production
#   warning  only very important or critical messages are logged
loglevel notice

# Log file location. The default is stdout; if Redis is daemonized and no log
# file is set, output goes to /dev/null.
logfile /var/log/redis/redis.log

# Number of available databases. The default is 16, databases are numbered
# 0..(databases-1), and the default database is 0.
databases 16

################################ SNAPSHOTS ##################################
# Save the dataset to disk using the following format:
#   save <seconds> <changes>
# i.e. sync the data to the RDB file after <changes> write operations within
# <seconds> seconds. This is a conditional snapshot trigger; several conditions
# can be combined. The default configuration file sets three conditions:
#   save 900 1      at least 1 key changed within 900 seconds
#   save 300 10     at least 10 keys changed within 300 seconds
#   save 60 10000   at least 10000 keys changed within 60 seconds
save 900 1
save 300 10
save 60 10000

# Stop accepting writes if a background save fails.
stop-writes-on-bgsave-error yes

# Compress the data when persisting it to the RDB file (default: yes).
rdbcompression yes

# Add a checksum to the RDB file so it can be verified.
rdbchecksum yes

# File name of the local dump; the default is dump.rdb.
dbfilename dump.rdb
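For illustration, here is a minimal sketch that reads the effective snapshot rules at runtime and tightens them. It assumes the redis-py client is installed and a local instance listens on the default port 6379; the new values are examples only, not recommendations.

    # Minimal sketch (assumptions: redis-py installed, local Redis on port 6379).
    # Inspects and adjusts the snapshot rules, mirroring "save <seconds> <changes>".
    import redis

    r = redis.Redis(host="127.0.0.1", port=6379)

    # The current rules come back as a single string, e.g. "900 1 300 10 60 10000".
    print(r.config_get("save"))

    # Example values only: snapshot after 300 s if at least 5 keys changed,
    # or after 60 s if at least 1000 keys changed.
    r.config_set("save", "300 5 60 1000")

    # Kick off a background snapshot right away (equivalent to the BGSAVE command).
    r.bgsave()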
# Working directory, where the database image is written.
# The path is configured separately from the file name because during a backup
# Redis writes the current state of the database to a temporary file and only
# replaces the file named above with it once the backup is complete, so both
# the temporary file and the final dump live in this directory.
# AOF files are also stored in this directory. Note that this must be a
# directory, not a file name.
dir /var/lib/redis-server/

################################ REPLICATION ################################
# Master-slave replication: use this instance as a slave of another Redis
# server. When this machine is a slave, set the IP address and port of the
# master here; the slave synchronizes its data from the master automatically
# when Redis starts.
# slaveof <masterip> <masterport>

# If the master is password protected (via its requirepass setting), this is
# the password the slave uses to authenticate when it connects.
# masterauth <master-password>

# When a slave loses its connection to the master, or while replication is
# still in progress, it can behave in two ways:
# 1) if slave-serve-stale-data is set to yes (the default), the slave keeps
#    answering client requests with possibly stale data;
# 2) if it is set to no, every request other than INFO and SLAVEOF returns the
#    error "SYNC with master in progress".
slave-serve-stale-data yes

# Whether a slave instance accepts writes. Writing to a slave can be useful for
# storing ephemeral data (which is discarded the next time it resyncs with the
# master), but may cause problems if clients write to it by mistake.
# Since Redis 2.6 slaves are read-only by default.
slave-read-only yes

# Slaves send PINGs to the master at this interval (in seconds); the default
# is 10 seconds.
# repl-ping-slave-period 10

# repl-timeout sets the timeout for bulk data transfers from the master and
# for ping replies; the default is 60 seconds. Make sure repl-timeout is
# larger than repl-ping-slave-period.
# repl-timeout 60

# Disable TCP_NODELAY on the slave socket after SYNC.
# If you choose yes, Redis uses fewer TCP packets and less bandwidth to send
# data to slaves, but the data reaches the slave with a delay, up to 40
# milliseconds with the default Linux kernel configuration.
# If you choose no, the replication delay is smaller but more bandwidth is
# used.
repl-disable-tcp-nodelay no

# Size of the replication backlog. The larger the backlog, the longer a slave
# can be disconnected and still perform a partial resynchronization afterwards.
# The backlog is allocated only once, when at least one slave connects.
# repl-backlog-size 1mb

# Once the master has no connected slaves, the backlog is released after the
# following number of seconds, counted from the moment the last slave
# disconnected. 0 means the backlog is never released.
# repl-backlog-ttl 3600

# If the master stops working, the slave with the lowest priority value among
# the available slaves is promoted to master. A priority of 0 means the slave
# can never be promoted.
slave-priority 100

# The master can be configured to stop accepting writes if fewer than N slaves
# are connected with a lag of at most M seconds. For example, to require at
# least 3 connected slaves with a lag of no more than 10 seconds:
# min-slaves-to-write 3
# min-slaves-max-lag 10
# Setting either value to 0 disables the feature. The defaults are
# min-slaves-to-write 0 (disabled) and min-slaves-max-lag 10.
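As a sketch of how these replication settings show up at runtime, the snippet below points a replica at a master (the same effect as the slaveof directive) and reads the replication state that min-slaves-max-lag is checked against. It assumes redis-py; the host names are hypothetical placeholders.

    # Sketch only (assumptions: redis-py installed; host names are hypothetical).
    import redis

    # On the replica: start replicating from the master.
    replica = redis.Redis(host="replica.example.internal", port=6379)
    replica.slaveof("master.example.internal", 6379)

    # On the master: INFO replication lists connected_slaves and, for each
    # slave, its replication offset and lag in seconds.
    master = redis.Redis(host="master.example.internal", port=6379)
    print(master.info("replication"))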
################################## SECURITY #################################
# Require clients to issue AUTH with this password before any other command.
# Warning: since Redis is quite fast, on a decent server an outside attacker
# can try up to 150k passwords per second, so you need a very strong password
# to prevent brute-force attacks.
# requirepass foobared

# Command renaming.
# In a shared environment you can rename dangerous commands, for example give
# CONFIG a name that is hard to guess:
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
# To disable a command entirely, rename it to the empty string "":
# rename-command CONFIG ""

################################### LIMITS ##################################
# Maximum number of simultaneous client connections; unlimited by default.
# The number of connections Redis can accept is bounded by the maximum number
# of file descriptors the Redis process may open. Setting maxclients to 0
# means no limit. When the limit is reached, Redis closes new connections and
# returns a "max number of clients reached" error to the client.
# maxclients 10000

# Maximum amount of memory Redis may use. Redis loads its data into memory at
# startup; once the limit is reached it first tries to remove expired keys.
# If the eviction policy cannot free enough space, or the policy is set to
# noeviction, commands that would use more memory (SET, LPUSH, and so on)
# return an error, while read-only commands keep working.
# Note: with the vm mechanism Redis keeps keys in memory while values may be
# stored in the swap area.
# This option is useful together with the LRU policies. maxmemory is best
# suited to using Redis as a memcached-style cache rather than as a real
# database; when Redis is used as a real database, memory becomes a major
# cost.
# maxmemory <bytes>

# Which data Redis removes when the memory limit is reached. The policies are:
#   volatile-lru     remove keys with an expire set, using an LRU
#                    (Least Recently Used) algorithm
#   allkeys-lru      remove any key using the LRU algorithm
#   volatile-random  remove a random key with an expire set
#   allkeys-random   remove a random key, any key
#   volatile-ttl     remove the key with the nearest expire time (minor TTL)
#   noeviction       do not remove anything, just return an error on writes
# Note: with any of the above policies, if there is no suitable key to remove,
# Redis returns an error on write operations. The default is volatile-lru.
# maxmemory-policy volatile-lru

# The LRU and minimal-TTL algorithms are not exact but approximated algorithms
# (to save memory), and you can choose the sample size used for the check.
# Redis checks 3 samples by default; change it with maxmemory-samples.
# maxmemory-samples 3
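To make the memory settings concrete, here is a small sketch, assuming redis-py and a placeholder password, that authenticates against requirepass and applies a memory cap with an LRU policy at runtime.

    # Sketch only (assumptions: redis-py installed; the password is a
    # placeholder for whatever requirepass is set to on the server).
    import redis

    r = redis.Redis(host="127.0.0.1", port=6379,
                    password="replace-with-a-long-random-string")

    # Cap memory at 256 MB and evict least-recently-used keys that have a TTL.
    r.config_set("maxmemory", "256mb")
    r.config_set("maxmemory-policy", "volatile-lru")

    print(r.config_get("maxmemory"))         # reported back in bytes
    print(r.config_get("maxmemory-policy"))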
############################## APPEND ONLY MODE #############################
# By default Redis dumps an image of the dataset to disk asynchronously in the
# background, but this dump is expensive and therefore infrequent, so an event
# such as a power cut or an unplugged machine can lose a large amount of data.
# Redis therefore offers a second, more robust persistence and disaster
# recovery mechanism: with append only mode enabled, every write request is
# appended to the appendonly.aof file, and on restart Redis replays that file
# to rebuild the previous state. Because appendonly.aof keeps growing, Redis
# also provides the BGREWRITEAOF command to rewrite and compact it.

# You can enable asynchronous RDB dumps and AOF at the same time.
appendonly no

# Name of the AOF file (default: "appendonly.aof").
# appendfilename appendonly.aof

# Redis supports three fsync strategies for the AOF file:
#   no        never fsync, let the operating system decide. Fastest.
#   always    fsync after every write. Slow, but safest.
#   everysec  accumulate writes and fsync once per second. A compromise.
# The default is everysec, which is usually the best trade-off between speed
# and safety. If you want Redis to run faster you can set it to no and let the
# operating system flush when it sees fit; if you want maximum safety, set it
# to always. When in doubt, use everysec.
# appendfsync always
appendfsync everysec
# appendfsync no

# When the AOF fsync policy is always or everysec and a background process
# (a background save or an AOF log rewrite) is performing a lot of I/O, an
# fsync() call can block for too long on some Linux configurations. There is
# currently no fix for this, even when fsync runs in a different thread. To
# mitigate the problem you can set the following parameter.
no-appendfsync-on-rewrite no

# Automatic AOF rewrite.
# Redis can call BGREWRITEAOF automatically when the AOF file grows by a given
# amount. It works like this: Redis remembers the size of the AOF file after
# the last rewrite (if no rewrite has happened since startup, the size at
# startup is used) and compares this base size with the current size. If the
# current size exceeds the base size by the configured percentage, a rewrite
# is triggered. You must also specify a minimum size for the AOF file, to
# avoid rewriting a file that is still small even though it has grown by a
# large percentage. Set the percentage to 0 to disable the feature.
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb

############################### LUA SCRIPTING ###############################
# Maximum execution time of a Lua script, in milliseconds (5000 = 5 seconds).
# 0 or a negative value means unlimited execution time.
lua-time-limit 5000

################################## SLOW LOG #################################
# The Redis slow log records commands that exceed a given execution time. The
# execution time does not include I/O such as talking to the client or sending
# the reply, only the time needed to actually execute the command.
# The slow log is configured with two parameters: slowlog-log-slower-than
# tells Redis the threshold (in microseconds), and slowlog-max-len is the
# length of the log. When a new command is logged and the log is full, the
# oldest entry is removed from the queue.
# The time is expressed in microseconds, so 1000000 is one second. A negative
# value disables the slow log, while 0 forces every command to be logged.
slowlog-log-slower-than 10000

# There is no hard limit on the log length, just be aware that it consumes
# memory; SLOWLOG RESET reclaims the memory used by the slow log.
# The recommended default is 128: once the slow log exceeds 128 entries, the
# entry that entered the queue first is dropped.
slowlog-max-len 128
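The sketch below, assuming redis-py against a local instance, sets the slow-log threshold described above, reads back the most recent entries, and manually triggers an AOF rewrite.

    # Sketch only (assumption: redis-py installed, local instance).
    import redis

    r = redis.Redis(host="127.0.0.1", port=6379)

    # slowlog-log-slower-than is in microseconds: 10000 = 10 ms.
    r.config_set("slowlog-log-slower-than", 10000)
    r.config_set("slowlog-max-len", 128)

    # Each entry carries an id, a unix timestamp, the duration in microseconds
    # and the command that was executed.
    for entry in r.slowlog_get(10):
        print(entry)

    # Compact the append-only file in the background (same as BGREWRITEAOF).
    r.bgrewriteaof()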
############################## EVENT NOTIFICATION ###########################
# Redis can notify pub/sub clients when events happen in the key space.
# The type of events to be notified is selected from the table below.
# Event types are identified by a single character:
#   K    keyspace events, published with a __keyspace@<db>__ prefix
#   E    keyevent events, published with a __keyevent@<db>__ prefix
#   g    generic commands (not type specific) such as DEL, EXPIRE, RENAME, ...
#   $    string commands
#   l    list commands
#   s    set commands
#   h    hash commands
#   z    sorted set commands
#   x    expired events (generated every time a key expires)
#   e    evicted events (generated when a key is evicted to free memory)
#   A    alias for "g$lshzxe", so "AKE" means all events
# notify-keyspace-events takes as its argument a string made of zero or more
# of these characters; the empty string disables notifications.
# Example: to enable list and generic events:
# notify-keyspace-events Elg
# Notifications are disabled by default, because most users do not need the
# feature and it has some performance cost. Note that if you do not specify at
# least one of K or E, no events will be delivered.
notify-keyspace-events ""

############################# ADVANCED CONFIGURATION ########################
# When a hash contains no more than the specified number of entries and the
# largest entry does not exceed the given threshold, it is stored using a
# special memory-efficient encoding; the two thresholds are set here.
# Internally the value of a Redis hash is a hashmap with two different
# implementations: while the hash has few members, Redis stores it compactly,
# similar to a one-dimensional array, instead of using a real hashmap, and the
# encoding of the value's redisObject is zipmap; when the number of members
# grows, it is automatically converted to a real hashmap and the encoding
# becomes ht.
hash-max-zipmap-entries 512
hash-max-zipmap-value 64

# Like hashes, small lists are encoded in a special way to save space: list
# nodes whose values are smaller than the given number of bytes use the
# compact storage format.
list-max-ziplist-entries 512
list-max-ziplist-value 64

# Sets whose members are all integers and which contain no more than the given
# number of entries are stored in a compact format.
set-max-intset-entries 512

# Like hashes and lists, sorted sets below the given length and value size are
# stored with a space-saving encoding.
zset-max-ziplist-entries 128
zset-max-ziplist-value 64

# Redis spends 1 millisecond of CPU time every 100 milliseconds rehashing the
# main hash table, which helps reclaim memory. If your use case has very
# strict latency requirements and an occasional 2 millisecond delay is not
# acceptable, set this to no; otherwise leave it at yes so memory is freed as
# soon as possible.
activerehashing yes

# Limits on the client output buffer. They can be used to force the
# disconnection of clients that, for some reason, are not reading data from
# the server fast enough (a common reason is a pub/sub client that cannot
# consume messages as fast as the publisher produces them).
# The limit can be configured separately for three classes of clients:
#   normal  normal clients
#   slave   slave and MONITOR clients
#   pubsub  clients subscribed to at least one pubsub channel or pattern
# The syntax of each client-output-buffer-limit directive is:
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
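As a sketch of the compact encodings just described, the snippet below shows a small hash and a small integer set reporting a compact encoding, and a larger hash switching to the full hashtable representation. It assumes redis-py against a local instance; the key and field names are arbitrary.

    # Sketch only (assumptions: redis-py installed, local instance; key names
    # are arbitrary). OBJECT ENCODING reports the internal representation.
    import redis

    r = redis.Redis(host="127.0.0.1", port=6379)

    r.delete("h")
    for i in range(10):
        r.hset("h", "field%d" % i, i)
    print(r.object("encoding", "h"))   # small hash: a compact encoding
                                       # (ziplist, or zipmap on older versions)

    for i in range(1000):
        r.hset("h", "field%d" % i, i)
    print(r.object("encoding", "h"))   # past hash-max-zipmap-entries: hashtable

    r.delete("s")
    r.sadd("s", 1, 2, 3)
    print(r.object("encoding", "s"))   # all integers and small: intset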
Redis (version 2.8) configuration file parameters, annotated (translated from Chinese).