The Redis configuration file in detail

Tags: allkeys, compact, redis, cluster

./redis-server /opt/redis/redis_6379.conf
# Note: to start Redis with a configuration file, pass the file path as the first argument.

include /opt/redis/redis-common.conf
# Pulls in a shared configuration file to extend this one. If the included settings
# should take precedence over the ones above, place this directive on the last line.

daemonize yes  # whether to run as a daemon (background process); default is no
pidfile /opt/redis/run/redis_6379.pid  # when daemonized, the PID file and its path must be specified
port 6379  # listening port; default is 6379
tcp-backlog 511
# TCP listen backlog. In high-concurrency environments, raise this to avoid slow-client
# connection problems; the Linux kernel default is small, so the corresponding value in
# /proc/sys/net/core/somaxconn also needs to be raised.
bind 0.0.0.0  # host IP to bind; default is 127.0.0.1 (written here as four zeros so applications on other hosts can connect)
timeout 0
# Idle timeout for client connections, in seconds. When a client issues no command for
# this long, the connection is closed; 0 disables the timeout.
tcp-keepalive 0
# On Linux, the interval (in seconds) for sending TCP ACKs to detect dead peers.
# Note that it takes twice this long to actually close the connection. Default is 0.
loglevel notice  # logging level; use notice in production
# Redis supports four levels: debug, verbose, notice, warning; the default is verbose.
# debug:   records a lot of information, for development and testing
# verbose: useful information, but not as much as debug
# notice:  moderately verbose, the usual choice in production
# warning: only very important or critical messages are logged
logfile "/var/log/redis/redis_6379.log"  # log file path
# Default is stdout (standard output); when daemonized with stdout, output goes to /dev/null.
databases 16  # number of available databases; default is 16, and the default database is 0.
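To make the directive-plus-comment layout above concrete, here is a minimal sketch of how such configuration lines can be read. This is an illustration only, not Redis's actual parser (which also handles quoted values and recursive `include`):

```python
def parse_config(text):
    """Parse redis.conf-style text into (directive, args) pairs.

    Comments start with '#'; the first token on a line is the directive name.
    Illustrative sketch only: real redis.conf parsing also supports quoting,
    so this naive '#'-split would break values containing '#'.
    """
    directives = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue  # skip blank and comment-only lines
        name, *args = line.split()
        directives.append((name.lower(), args))
    return directives

conf = """
daemonize yes        # run as a daemon
port 6379
bind 0.0.0.0
"""
print(parse_config(conf))
```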
save 900 1
save 300 10
save 60 10000
# Snapshot to disk: after <seconds> have passed with at least <changes> update operations,
# the data is synchronized to the RDB file. These act as OR-ed trigger conditions, and
# several can be combined. The default settings above mean:
# save 900 1    — at least 1 key changed within 900 seconds
# save 300 10   — at least 10 keys changed within 300 seconds
# save 60 10000 — at least 10000 keys changed within 60 seconds
stop-writes-on-bgsave-error yes  # stop accepting writes if a background save fails
rdbcompression yes  # compress the data when persisting to the dump.rdb file; default is yes
rdbchecksum yes  # checksum the RDB file
dbfilename dump_6379.rdb  # local persistence file name; default is dump.rdb
dir /opt/redis/data  # directory where the database dump is placed
# The path and the file name are configured separately because during a backup Redis
# writes the current database state to a temporary file, and once the backup completes it
# replaces the temporary file with the file configured above; both live in this directory.
# AOF files are also stored under this directory. Note this must be a directory, not a file.
slaveof <masterip> <masterport>  # enable this to make the instance a replica of another database.
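The `save <seconds> <changes>` trigger logic can be sketched as follows. This is an illustrative model of the rule described above, not Redis's actual C implementation:

```python
# The default save points from the configuration above.
SAVE_POINTS = [(900, 1), (300, 10), (60, 10000)]

def should_snapshot(elapsed_seconds, dirty_keys, save_points=SAVE_POINTS):
    """Return True if any save point is satisfied: at least `changes`
    keys were modified within `seconds` since the last snapshot."""
    return any(elapsed_seconds >= secs and dirty_keys >= changes
               for secs, changes in save_points)

print(should_snapshot(120, 5))    # no window satisfied yet
print(should_snapshot(301, 10))   # "save 300 10" fires
```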
# When this machine runs as a slave, set the master's IP address and port; Redis then
# starts synchronizing data from the master automatically at startup.
masterauth <master-password>  # password the slave uses to connect to the master
slave-serve-stale-data yes
# When the link to the master is lost, or replication is still in progress, a slave can
# behave in one of two ways:
# 1) yes (the default): the slave continues to respond to client requests, possibly with stale data
# 2) no: every command except INFO and SLAVEOF returns the error "SYNC with master in progress"
slave-read-only yes  # whether the slave instance accepts writes.
# Writing to a slave can be useful for storing ephemeral data (which is easily wiped when
# resyncing with the master), but clients writing to it by mistake can cause problems.
repl-ping-slave-period 10
# Interval (in seconds) at which the slave sends PINGs to the master,
# configurable via repl-ping-slave-period; default is 10 seconds.
repl-timeout 60
# Timeout for bulk data transfer from the master or for ping replies; default is 60 seconds.
# Make sure repl-timeout is greater than repl-ping-slave-period.
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no  # disable TCP_NODELAY on the slave socket after SYNC
# yes: Redis sends data to slaves in fewer TCP packets using less bandwidth, but this adds
# latency on the slave side — up to about 40 ms with the default Linux kernel configuration.
# no: lower latency when sending data to the slave, at the cost of more replication bandwidth.
repl-backlog-size 1mb  # size of the replication backlog.
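The constraint that `repl-timeout` must exceed `repl-ping-slave-period` can be expressed as a simple sanity check. A sketch, not part of Redis itself:

```python
def check_repl_timings(repl_timeout=60, ping_period=10):
    """Sanity-check the replication timing rule described above:
    repl-timeout must be greater than repl-ping-slave-period, otherwise
    routine ping intervals could be misread as replication timeouts."""
    if repl_timeout <= ping_period:
        raise ValueError(
            "repl-timeout (%d) must be greater than "
            "repl-ping-slave-period (%d)" % (repl_timeout, ping_period))
    return True

print(check_repl_timings(60, 10))  # the defaults are valid
```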
# The larger the replication backlog, the longer a slave can be disconnected and still
# perform a partial resynchronization later. The backlog is allocated only once, when at
# least one slave connects.
repl-backlog-ttl 3600
# After the master no longer has any connected slaves, the backlog is freed. This setting
# defines the time (in seconds) after the last slave disconnects before it is released;
# 0 means the backlog is never freed.
slave-priority 100
# If the master stops working, then among multiple slaves the one with the lowest priority
# value is promoted to master; a priority of 0 means the slave can never be promoted.
min-slaves-to-write 0
min-slaves-max-lag 10
# The master can be configured to stop accepting writes when fewer than N slaves are
# connected with a lag of <= M seconds. For example, to require at least 3 connected
# slaves with a lag of <= 10 seconds, set min-slaves-to-write 3 and min-slaves-max-lag 10.
# Setting min-slaves-to-write to 0 disables the check; the defaults are
# min-slaves-to-write 0 (disabled) and min-slaves-max-lag 10.
requirepass foobared
# Password clients must supply before any other command is accepted.
# Warning: because Redis is so fast, on a good server an external user can attempt about
# 150k passwords per second, so you need a very strong password to resist brute force.
rename-command CONFIG ""
# Command renaming. In a shared environment you can rename relatively dangerous commands,
# e.g. give CONFIG a name that is hard to guess:
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
# To disable a command entirely, rename it to the empty string "" as above.
maxclients 10000
# Maximum number of simultaneous client connections; by default it is bounded only by the
# maximum number of file descriptors the Redis process may open. Setting maxclients 0
# means no limit.
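The `min-slaves-to-write` / `min-slaves-max-lag` rule described above can be modeled as a small decision function. An illustrative sketch, not Redis's implementation:

```python
def master_accepts_writes(slave_lags, min_slaves=0, max_lag=10):
    """Model of the min-slaves-to-write rule: the master accepts writes
    only if at least `min_slaves` replicas report a lag <= `max_lag`
    seconds. min_slaves=0 disables the check (the default).
    `slave_lags` is a hypothetical list of per-replica lags in seconds."""
    if min_slaves == 0:
        return True
    good = sum(1 for lag in slave_lags if lag <= max_lag)
    return good >= min_slaves

# Require at least 3 replicas with lag <= 10 s:
print(master_accepts_writes([2, 4, 7], min_slaves=3, max_lag=10))   # accepted
print(master_accepts_writes([2, 40, 7], min_slaves=3, max_lag=10))  # refused
```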
# When the connection limit is reached, Redis closes new connections and returns the
# error "max number of clients reached" to the client.
maxmemory 100m  # Redis maximum memory limit.
# Redis loads data into memory at startup; when the maximum memory is reached it purges
# expired keys according to the eviction policy below. If that still does not free enough
# space, or if the policy is set to "noeviction", commands that need more memory (such as
# SET and LPUSH) return an error, while read operations still work.
# Note: the old VM mechanism kept keys in memory and values in the swap area; the LRU
# policy is useful for that. maxmemory is better suited to using Redis as a memcached-like
# cache rather than as a real database — when Redis is used as a real database, memory
# usage becomes a large overhead.
# When memory reaches the maximum, which data does Redis delete? There are several
# policies to choose from:
# volatile-lru    -> evict keys with an expire set, using an LRU (Least Recently Used) algorithm
# allkeys-lru     -> evict any key, using the LRU algorithm
# volatile-random -> evict a random key among those with an expire set
# allkeys-random  -> evict a random key, any key
# volatile-ttl    -> evict the key with the nearest expire time (minor TTL)
# noeviction      -> don't evict at all; just return an error on write operations.
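Sizes such as `maxmemory 100m` use redis.conf's unit convention, where `1k` means 1000 bytes but `1kb` means 1024 bytes (and similarly for `m`/`mb`, `g`/`gb`). A small sketch of that conversion:

```python
def parse_memory(value):
    """Convert a redis.conf memory size like '100m' or '64mb' to bytes.
    Follows the redis.conf convention: 1k = 1000, 1kb = 1024, etc.
    Illustrative sketch, not Redis's own parser."""
    units = {"k": 1000, "kb": 1024,
             "m": 1000**2, "mb": 1024**2,
             "g": 1000**3, "gb": 1024**3}
    value = value.strip().lower()
    # Check longer suffixes first so 'mb' is not mistaken for 'b' after 'm'.
    for suffix in sorted(units, key=len, reverse=True):
        if value.endswith(suffix):
            return int(value[:-len(suffix)]) * units[suffix]
    return int(value)  # plain byte count

print(parse_memory("100m"))  # 100000000
print(parse_memory("64mb"))  # 67108864
```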
maxmemory-policy noeviction
# Note: for the volatile-* policies above, if there is no suitable key that can be
# removed, writes to Redis return an error. The original default was volatile-lru.
maxmemory-samples 3
# The LRU and minimal-TTL algorithms are not exact but approximated algorithms (in order
# to save memory); optionally, you can choose the sample size used for the check.
# Redis checks 3 samples by default, configurable via maxmemory-samples.
appendonly yes
# By default, Redis backs up the database to disk asynchronously in the background, but
# the backup is time-consuming and cannot run very often, so conditions such as a power
# cut or pulled plug can lose a wide range of data. Redis therefore provides another, more
# robust way of persistence and disaster recovery: with append-only mode enabled, Redis
# appends every write request it receives to the appendonly.aof file, and when Redis
# restarts, the previous state is recovered from that file.
# This causes the appendonly.aof file to grow large, so Redis also supports the
# BGREWRITEAOF command to reorganize (compact) appendonly.aof.
# Asynchronous dumps and AOF can be enabled at the same time.
appendfilename "appendonly.aof"  # AOF file name (default: "appendonly.aof")
appendfsync everysec
# Redis supports three policies for syncing the AOF file:
# no:       never fsync; the operating system decides when to flush (the fast way).
# always:   fsync after every write operation (the safest way).
# everysec: accumulate writes and fsync once per second (a moderate way).
# The default is "everysec", the best compromise between speed and safety.
# If you want Redis to run more efficiently, you can set "no" and let the operating
# system decide when to flush.
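The interaction between the eviction policy and `maxmemory-samples` can be illustrated with a toy model of approximated LRU: rather than scanning every key, a few keys are sampled and the one idle longest is evicted. This is a sketch under assumed bookkeeping (`keys_idle_time` is a hypothetical map of key to idle seconds), not Redis's actual eviction code:

```python
import random

def pick_eviction_victim(keys_idle_time, samples=3, rng=random):
    """Approximated-LRU sketch: sample `samples` keys (as with
    maxmemory-samples) and evict the one that has been idle longest."""
    candidates = rng.sample(list(keys_idle_time),
                            min(samples, len(keys_idle_time)))
    return max(candidates, key=lambda k: keys_idle_time[k])

idle = {"a": 5, "b": 120, "c": 30, "d": 300}
rng = random.Random(0)           # seeded for reproducibility
victim = pick_eviction_victim(idle, samples=3, rng=rng)
print(victim in idle)            # the victim is always one of the sampled keys
```

Sampling all keys would make the choice exact; a small sample trades accuracy for memory and CPU, which is exactly the trade-off `maxmemory-samples` controls.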
# Or, conversely, if you want your data safer you can set "always"; if unsure, use "everysec".
no-appendfsync-on-rewrite yes
# When the AOF policy is set to always or everysec, and a background process (a background
# save or an AOF log rewrite) is performing a lot of I/O, fsync() can block for too long
# on some Linux configurations. Note there is no fix for this yet, even with fsync running
# on another thread; to mitigate the problem, you can set this parameter,
# no-appendfsync-on-rewrite.
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
# Automatic AOF rewrite: when the AOF file grows to a certain size, Redis can call
# BGREWRITEAOF to rewrite the log file. It works like this: Redis remembers the size of
# the file after the last rewrite (if there has been no rewrite since startup, the size at
# startup is used as the base). The base size is compared with the current size; if the
# current size exceeds the base by the given percentage, the rewrite starts. You also need
# to specify a minimum size for the AOF rewrite, which avoids rewriting files that are
# still small even though their relative growth is large. Setting the percentage to 0
# turns this feature off.
aof-load-truncated yes
lua-time-limit 5000
# Maximum execution time of a Lua script, in milliseconds (5000 ms = 5 seconds);
# 0 or a negative number means unlimited execution time.
cluster-enabled yes  # enable Redis cluster mode.
cluster-config-file nodes-6379.conf
cluster-node-timeout 15000
cluster-migration-barrier 1
slowlog-log-slower-than 10000
# The Redis slow log records commands that exceed a given execution time.
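The auto-rewrite rule above (grow by `auto-aof-rewrite-percentage` over the remembered base size, but never below `auto-aof-rewrite-min-size`) can be sketched like this. An illustration of the described logic, not Redis's code:

```python
def should_rewrite_aof(current_size, base_size,
                       percentage=100, min_size=64 * 1024 * 1024):
    """Rewrite when the AOF grew by `percentage`% over the size recorded
    after the last rewrite, but never for files under `min_size` bytes.
    percentage=0 disables the feature entirely."""
    if percentage == 0 or current_size < min_size:
        return False
    growth = (current_size - base_size) * 100 // max(base_size, 1)
    return growth >= percentage

mb = 1024 * 1024
print(should_rewrite_aof(130 * mb, 64 * mb))  # grew >100% and above 64mb: rewrite
print(should_rewrite_aof(32 * mb, 16 * mb))   # grew >100% but under 64mb: skip
```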
# The execution time excludes I/O such as accepting the client connection and returning
# the result; only the command's own execution time counts. The slow log is controlled by
# two parameters: slowlog-log-slower-than tells Redis the execution time, in microseconds,
# beyond which a command is recorded, and slowlog-max-len is the length of the slow log —
# when a new command is recorded, the oldest one is removed from the queue. The times here
# are in microseconds, so 1000000 stands for one second. Note that specifying a negative
# number disables the slow log, while setting it to 0 forces every command to be logged.
slowlog-max-len 128
# There is no hard limit on the log length; just be aware that it consumes memory, which
# can be reclaimed with SLOWLOG RESET. The recommended default is 128; once the slow log
# exceeds 128 entries, the first record to enter the queue is kicked out.
latency-monitor-threshold 0
notify-keyspace-events ""
# Keyspace notifications are disabled by default, because most users do not need this
# feature and it has a performance cost. Note that unless you specify at least the K or E
# flag, no events will be delivered.
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
# When a hash contains no more than the specified number of elements and its largest
# element does not exceed the value threshold, the hash is stored in a special compact
# encoding that significantly reduces memory use; these two thresholds can be set here.
# The value of a Redis hash is internally a hashmap with two different implementations:
# when the hash has relatively few members, Redis packs it into something like a
# one-dimensional array to save memory rather than using a real HashMap structure (the
# corresponding value's redisObject encoding is zipmap); when the member count grows, it
# is automatically converted to a real HashMap, at which point the encoding is ht.
list-max-ziplist-entries 512
list-max-ziplist-value 64
# As with hashes, multiple small lists are encoded in a special compact way to save space.
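The slow-log behavior above (microsecond threshold, bounded queue that drops the oldest entry) can be modeled in a few lines. A sketch of the described rules, not Redis's implementation:

```python
from collections import deque

class SlowLog:
    """Toy model of the Redis slow log: commands whose execution time
    (microseconds, excluding I/O) exceeds the threshold are kept in a
    bounded queue; when the queue is full, the oldest entry is dropped."""

    def __init__(self, slower_than_us=10000, max_len=128):
        self.slower_than_us = slower_than_us
        self.entries = deque(maxlen=max_len)  # oldest evicted automatically

    def record(self, command, duration_us):
        if self.slower_than_us < 0:
            return  # negative threshold disables the slow log entirely
        if duration_us > self.slower_than_us:  # a threshold of 0 logs (almost) every command
            self.entries.append((command, duration_us))

log = SlowLog(slower_than_us=10000, max_len=128)
log.record("GET foo", 150)         # fast command: not logged
log.record("KEYS *", 2_000_000)    # 2 seconds: logged
print(len(log.entries))  # 1
```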
# List nodes whose values are below this many bytes use the compact storage format.
set-max-intset-entries 512
# If the members of a set are all integers and the set contains no more entries than this,
# it is stored in a compact format.
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
# As with hashes and lists, sorted sets below the specified length and value-size
# thresholds are stored in a compact encoding to save space.
hll-sparse-max-bytes 3000
activerehashing yes
# Redis spends 1 millisecond of CPU time out of every 100 ms to incrementally rehash its
# main hash tables, which reduces memory use. If your use case has very strict latency
# requirements and cannot accept Redis occasionally delaying a request by about 2 ms, set
# this to no; if the real-time requirements are not that strict, set it to yes so memory
# can be freed as quickly as possible.
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
# Client output buffers are limited so that clients which, for some reason, cannot read
# data from the server fast enough can be forcibly disconnected (a common cause is a
# publish/subscribe setup where subscribers cannot consume messages as fast as they are
# produced). Limits can be set for three different client classes:
# normal -> normal clients
# slave  -> slave and MONITOR clients
# pubsub -> clients subscribed to at least one pubsub channel or pattern
# The syntax for each limit is:
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
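The `<hard limit> <soft limit> <soft seconds>` semantics can be sketched as a decision function: disconnect immediately past the hard limit, or when the buffer stays over the soft limit for the configured number of seconds. An illustration of the rule, not Redis's code:

```python
def should_disconnect(buffer_bytes, over_soft_seconds,
                      hard_limit, soft_limit, soft_seconds):
    """Model of client-output-buffer-limit: disconnect a client at once
    if its output buffer exceeds the hard limit, or if it has stayed
    above the soft limit for `soft_seconds` seconds. A limit of 0 means
    unlimited, as in 'client-output-buffer-limit normal 0 0 0'."""
    if hard_limit and buffer_bytes > hard_limit:
        return True
    if soft_limit and buffer_bytes > soft_limit \
            and over_soft_seconds >= soft_seconds:
        return True
    return False

mb = 1024 * 1024
# pubsub class from above: 32mb hard limit, 8mb soft limit for 60 seconds
print(should_disconnect(40 * mb, 0, 32 * mb, 8 * mb, 60))   # hard limit exceeded
print(should_disconnect(10 * mb, 30, 32 * mb, 8 * mb, 60))  # soft window not yet expired
```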

I'm still a beginner; if anything here is inaccurate, corrections are welcome and much appreciated.

This article comes from "Xiao Zhang's blog"; please be sure to keep this source: http://xiaozhangit.blog.51cto.com/3432391/1734714

