# By default Redis does not run as a daemon. If you need to run it in the background, change this to yes; the default is no.
daemonize: whether to run in daemon (background) mode

# When daemonized, Redis writes its pid to /run/redis.pid by default. You can point it at another file path here.
# When running several Redis instances, each one must use its own pid file and its own port.
pidfile: location of the pid file

# TCP port Redis listens on. The default is 6379.
# If the port is set to 0, Redis will not listen on a TCP socket.
port: port to listen on

# Make Redis accept requests only on this IP address. If this is not set, requests from any address are accepted.
# It is best to set this in a production environment.
bind 127.0.0.1

# Client connection timeout, in seconds: when a client sends no command for this long, its connection is closed.
# Default: 0, which disables the timeout so connections are never closed.
timeout: connection timeout

# Path of the Unix socket to listen on. There is no default, so Redis does not listen on a Unix socket unless this is specified.
# unixsocket /tmp/redis.sock
# unixsocketperm 755

# Logging level.
# Redis supports four levels: debug, verbose, notice and warning. The default is verbose.
# debug: logs a lot of information, useful for development and testing
# verbose: logs plenty of concise, useful information, but not as much as debug
# notice: moderately verbose, usually what you want in production
# warning: only very important or critical messages are logged
loglevel: log level

# Log file name and full path.
# The default is stdout, i.e. standard output; note that in the default daemon mode standard output goes to /dev/null.
logfile: location of the log file

# Number of available databases. The default is 16, and the default database is DB 0. If one database is enough, you can set "databases 1".
# Use SELECT <dbid> to switch databases.
# dbid is a number between 0 and 'databases'-1.
databases: number of databases

save * *: how often snapshots are saved. The first "*" is a time window in seconds and the second "*" is a number of write operations: a snapshot is saved automatically when that many writes happen within that window. Several conditions can be set (see the example after this overview).
rdbcompression: whether to compress the snapshot
dbfilename: snapshot file name (file name only, no directory)
dir: directory where snapshots are stored (this must be a directory)
appendonly: whether to enable the append-only log (AOF). When enabled, every write operation is logged, which improves resistance to data loss but costs some performance.
appendfsync: how the append-only log is flushed to disk (three options: call fsync on every write, fsync once per second, or never call fsync and let the operating system flush on its own)
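For reference, a typical set of snapshot conditions as commonly shipped in the stock redis.conf of this era looks like the lines below; the numbers are illustrative, and any one condition that matches triggers a save:

save 900 1           # after 900 s (15 min) if at least 1 key changed
save 300 10          # after 300 s (5 min) if at least 10 keys changed
save 60 10000        # after 60 s if at least 10000 keys changed
rdbcompression yes   # compress the dump (the usual default)
dbfilename dump.rdb  # snapshot file name
dir ./               # directory where the snapshot is written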
########## REPLICATION (master-slave synchronization) ##########

# Master-slave replication. Use slaveof to make this Redis instance a backup copy of another.
# Note: the data is copied locally from the remote end. In other words, the slave can have a different database file, bind to a different IP address, and listen on a different port.
# When this machine is a slave, set the IP address and port of the master here; Redis will automatically synchronize data from the master on startup.
# slaveof <masterip> <masterport>
# If the master is password-protected (with the "requirepass" option below), the slave must authenticate before synchronization starts, otherwise its synchronization requests are rejected.
# When this machine is a slave, set the password used to connect to the master here.
# masterauth <master-password>
# When a slave loses its connection to the master, or while synchronization is still in progress, the slave can behave in one of two ways:
# 1) If slave-serve-stale-data is set to "yes" (the default), the slave keeps answering client requests, possibly with stale data, or with empty data if the first synchronization has not completed yet.
# 2) If slave-serve-stale-data is set to "no", the slave replies "synchronizing with master in progress" to every request except the INFO and SLAVEOF commands.
slave-serve-stale-data yes

# The slave sends PING requests to the master at a fixed interval.
# The interval can be set with repl_ping_slave_period.
# The default is 10 seconds.
# repl-ping-slave-period 10

# The option below sets the timeout for bulk data I/O, data requests to the master, and PING responses.
# The default is 60 seconds.
# Make sure this value is greater than repl-ping-slave-period, otherwise transfers between master and slave will time out sooner than expected.
# repl-timeout 60

########## SECURITY ##########

# Require clients to authenticate before any other command is processed.
# This is useful when you do not trust whoever can reach the server.
# For backward compatibility this is commented out, and most people do not need authentication (for example, when Redis runs on their own servers).
# Warning: an outside user can try a huge number of passwords per second, so choose a very strong password or it will be easy to crack.
# Set the connection password (an example appears at the end of this section):
# requirepass foobared

# Command renaming. Several commands can be renamed.
# In a shared environment you can rename dangerous commands. For example, give CONFIG a hard-to-guess name so that you can still use it while others cannot find it.
# Examples:
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
# rename-command info info_biran
# rename-command set set_biran
# You can even rename a command to an empty string to disable it completely:
# rename-command CONFIG ""
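As an illustration of requirepass, the lines below set a placeholder password ("s3cret-example" is not a recommendation) and then authenticate from redis-cli, either on the command line or with an explicit AUTH:

# in redis.conf:
requirepass s3cret-example

# from a client:
$ redis-cli -a s3cret-example PING
$ redis-cli
> AUTH s3cret-example
> PING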
########## LIMITS ##########

# Maximum number of clients connected at the same time.
# There is no limit by default; the practical limit is the number of file descriptors the Redis process is allowed to open.
# The special value "0" means no limit.
# Once the limit is reached, Redis closes every new connection with the error "max number of clients reached".
# maxclients 128

# Do not use more memory than the configured limit. Once the limit is reached, Redis removes keys according to the selected eviction policy (see maxmemory-policy below).
# If Redis cannot remove keys, or if the policy is set to "noeviction", it replies with an out-of-memory error to commands that would use more memory, such as SET and LPUSH, while continuing to serve read-only commands such as GET normally.
# This option is useful when Redis is used as an LRU cache, or when the "noeviction" policy is set on the instance.
# Warning: when a group of slaves is attached to a master that has reached its memory limit, the output buffers needed to feed the slaves are not counted in the memory usage.
# That way, requesting an evicted key does not trigger a network/resynchronization loop in which the slaves receive a stream of delete commands until the database is empty.
# In short, if slaves are attached to a master, we recommend setting the master's memory limit somewhat lower, so that enough system memory is left for the slave output buffers.
# (This does not matter if the policy is "noeviction".)
# Set the maximum memory. When it is reached, Redis first tries to remove keys that have expired or are about to expire; if the limit is still exceeded after that, no further write operations are accepted.
# e.g. maxmemory 256000000 allocates about 256 MB
# maxmemory <bytes>

# Memory policy: how Redis removes keys once the memory limit is reached. You can choose among the following policies:
#
# volatile-lru -> remove keys that have an expire set, using the LRU algorithm
# allkeys-lru -> remove any key, using the LRU algorithm
# volatile-random -> remove a random key among those that have an expire set
# allkeys-random -> remove a random key, with no distinction
# volatile-ttl -> remove the key with the nearest expire time (smallest TTL)
# noeviction -> remove nothing, and return an error on write operations
#
# Note: with all of these policies, if Redis cannot find a suitable key to remove, it returns an error on write operations.
#
# The write commands involved are: set setnx setex append
# incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
# sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
# zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
# getset mset msetnx exec sort
#
# The default is:
# maxmemory-policy volatile-lru

# The LRU and minimal-TTL algorithms are not exact, but close approximations (to save memory), so you can tune the sampling.
# For example, by default Redis checks three keys and removes the least recently used one. You can change the sample size with the setting below.
# maxmemory-samples 3
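For illustration, the lines below cap memory at roughly 256 MB and evict least-recently-used keys; the exact numbers are placeholders, and both settings can also be changed at runtime with CONFIG SET:

# in redis.conf:
maxmemory 268435456            # 256 MB expressed in bytes
maxmemory-policy allkeys-lru   # evict any key, least recently used first

# at runtime:
$ redis-cli CONFIG SET maxmemory 268435456
$ redis-cli CONFIG SET maxmemory-policy allkeys-lru
$ redis-cli CONFIG GET maxmemory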
########## APPEND ONLY MODE ##########

# By default Redis dumps data to disk asynchronously, according to the save conditions above, so some data exists only in memory for a while. If Redis goes down at that moment, the most recent writes are lost.
# If you cannot afford to lose any data, use append-only mode: once it is enabled, Redis appends every write it receives to the appendonly.aof file.
# Redis reads this file back into memory every time it starts.
#
# Note: the asynchronously exported database file and the append-only file can coexist (you can also comment out all the "save" settings above to disable the export mechanism).
# If append-only mode is enabled, Redis loads the log file at startup and ignores the exported dump.rdb file.
#
# Important: see BGREWRITEAOF for how to rewrite the log file in the background when it grows too large.

# Setting: yes enables append-only mode.
appendonly no

# Name and path of the append-only file. Default: "appendonly.aof"
# appendfilename appendonly.aof

# fsync() asks the operating system to write the data to disk right now instead of waiting.
# Some operating systems really flush the data to disk immediately; others only promise to do it as soon as possible.
# Redis supports three different modes:
#
# no: never fsync; flush only when the operating system decides to. Fast.
# always: fsync after every write to the append-only file. Slow, but safest.
# everysec: fsync once per second. A compromise.
#
# The default, "everysec", is usually a good balance between speed and data safety.
# If you really understand what this implies, you can choose "no" for better performance (if data is lost, you are left with a snapshot that may be somewhat old);
# or, on the contrary, choose "always" and trade speed for data safety and integrity.
#
# If you are unsure about these modes, we recommend "everysec".
#
# appendfsync always
appendfsync everysec
# appendfsync no

# When the AOF fsync policy is "always" or "everysec", a background saving process (a background save or an AOF rewrite) performs a lot of disk I/O.
# On some Linux configurations Redis may block for a long time on the fsync() call.
# Note: this has not been properly fixed yet; even an fsync() issued from a different thread will block our synchronous write(2) calls.
#
# The option below mitigates the problem: it prevents fsync() from being called while a BGSAVE or BGREWRITEAOF is in progress.
#
# This means that while a child process is saving, Redis is effectively in a "non-synchronized" state.
# In practical terms, in the worst case up to 30 seconds of log data may be lost (with the default Linux settings).
#
# If you have latency problems, set this to "yes"; otherwise leave it as "no", which is the safest choice for durable persistence.
no-appendfsync-on-rewrite no

# Automatic rewrite of the append-only file.
# Redis can automatically rewrite the AOF log with BGREWRITEAOF when the file grows by more than the specified percentage.
#
# How it works: Redis remembers the size of the AOF file after the last rewrite (or, if no rewrite has happened since restart, the size of the AOF at startup),
# and compares that reference size with the current size. If the current size exceeds it by the given percentage, a rewrite is triggered.
#
# You also specify a minimum size for the log to be rewritten, to avoid rewriting it when the percentage is reached but the file is still small.
#
# A percentage of 0 disables automatic AOF rewriting.
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
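A minimal AOF setup consistent with the defaults described above might look like this; BGREWRITEAOF can also be issued by hand to compact the file:

appendonly yes                     # turn on the append-only log
appendfsync everysec               # fsync once per second (the usual compromise)
auto-aof-rewrite-percentage 100    # rewrite when the file has doubled in size...
auto-aof-rewrite-min-size 64mb     # ...but only once it is at least 64 MB

# trigger a background rewrite manually:
$ redis-cli BGREWRITEAOF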
########## SLOW LOG ##########

# The Redis slow log records queries whose execution time exceeds a specified threshold. The execution time does not include I/O such as talking to the client or sending the reply; it only counts the time the command actually runs (the only phase during which the command thread is blocked and cannot serve other requests).
#
# The slow log has two parameters: one is the threshold, in microseconds, above which a command is logged; the other is the length of the slow log.
# When a new command is written to the log, the oldest entry is removed.
#
# The unit below is microseconds, so 1000000 is 1 second. Note that a negative value disables the slow log, while 0 forces every command to be logged.
slowlog-log-slower-than 10000

# There is no limit on the length other than available memory. You can reclaim the memory used by the slow log with SLOWLOG RESET.
slowlog-max-len 128
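The slow log is inspected at runtime with the SLOWLOG command, for example:

$ redis-cli SLOWLOG GET 10     # show the ten most recent slow entries
$ redis-cli SLOWLOG LEN        # number of entries currently stored
$ redis-cli SLOWLOG RESET      # discard all entries and free their memory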
########## VIRTUAL MEMORY ##########

### Warning! The virtual memory mechanism is deprecated in Redis 2.4: because of performance problems, the VM feature of version 2.4 has been completely deprecated, and we strongly recommend against using this configuration!
# Virtual memory allows Redis to keep the whole data set addressable even when it does not fit in memory.
# To do this, frequently used keys are kept in memory while rarely used ones are moved to a swap file, much like the memory pages used by the operating system.
# To use virtual memory, set "vm-enabled" to "yes" and tune the three virtual-memory parameters below as needed.
vm-enabled no

# This is the path of the swap file. As you probably guessed, the swap file cannot be shared by several Redis instances, so make sure every instance uses its own swap file.
# The best medium for the swap file is a solid-state drive (random access).
# *** Warning *** if you use a shared host, keeping the default swap file under /tmp is not safe.
# Create a directory writable by the Redis user and configure Redis to create the swap file there.
vm-swap-file /tmp/redis.swap

# "vm-max-memory" configures how much memory Redis may use before values start going to virtual memory; everything above this limit is placed in the swap file, as long as the swap file has room.
# If "vm-max-memory" is set to 0, the system swaps as much as it can; we recommend setting it to 60%-80% of your free memory.
# All data exceeding vm-max-memory is stored in virtual memory. However small vm-max-memory is set, all index data stays in memory (Redis's index data is its keys); in other words, with vm-max-memory set to 0, all values live on disk. The default is 0.
vm-max-memory 0

# Redis swap files are split into data pages.
# A stored object may span several consecutive pages, but one data page cannot be shared by several objects.
# So if your data pages are too large, small objects waste a lot of space.
# If the data pages are too small, the swap file holds less data (assuming the same number of data pages).
# If you store mostly small objects, 32- or 64-byte pages are recommended.
# If you store mostly large objects, use a larger size.
# If you are not sure, use the default :)
vm-page-size 32

# Total number of data pages in the swap file.
# The in-memory page table (the map of used/free data pages) costs one byte of memory for every 8 data pages on disk.
# Swap capacity = vm-page-size * vm-pages
# With the default 32-byte pages and 134217728 pages, the swap file on disk takes about 4 GB, while the page table in memory consumes about 16 MB.
# Choose the smallest value that is still big enough for your application; the default below is too large in most cases.
vm-pages 134217728

# Number of virtual-memory I/O threads that can run at the same time, i.e. the number of threads accessing the swap file.
# These threads read and write the swap file and handle the exchange and encoding/decoding of data between memory and disk.
# More threads improve throughput to a certain extent, but since I/O ultimately depends on the physical device, more threads do not make a single read or write faster.
# The special value 0 disables threaded I/O and enables the blocking virtual-memory implementation.
# A value no higher than the number of server cores is recommended. With 0, every operation on the swap file is serialized; this may cause long delays, but it guarantees data integrity.
vm-max-threads 4

########## ADVANCED CONFIG ##########

# Only with a large amount of data is it worth switching to the real hash encoding (which needs more memory); the limits below control when that conversion happens.
# A Redis hash is a HashMap inside the value. If the map has only a few members, it is stored in a compact, almost one-dimensional linear format that saves a lot of pointer overhead. If either of the two limits below is exceeded, it is converted into a real HashMap.
# When the map inside the value contains no more than the configured number of members (64 in the original annotation's default, 512 in the line below), it is stored in the linear compact format; above that, it is automatically converted into a real HashMap.
hash-max-zipmap-entries 512

# When every member value in the map is no longer than the configured number of bytes, the linear compact encoding is used to save space.
hash-max-zipmap-value 64

# Similar to hash-max-zipmap-entries: lists with few elements can use a different encoding that saves a lot of space.
# Number of entries below which the list type uses the pointer-free compact storage format:
list-max-ziplist-entries 512

# Entry size in bytes below which the list type uses the compact storage format:
list-max-ziplist-value 64

# There is one more special encoding: sets whose members are all strings that look like 64-bit unsigned integers.
# The setting below limits the size of sets stored with this special encoding:
set-max-intset-entries 512

# As in the first and second cases, sorted sets can also be stored in a special encoding that saves a lot of space.
# This encoding is only used for sorted sets whose length and element size stay within the limits below:
zset-max-ziplist-entries 128
zset-max-ziplist-value 64

# Active rehashing: Redis spends 1 millisecond out of every 100 milliseconds of CPU time rehashing the main Redis hash table (the top-level key-to-value mapping).
# The hash table implementation Redis uses (see dict.c) rehashes lazily: the more operations you run on a hash table that is being rehashed, the more rehashing steps are performed;
# if the server is very idle, the rehashing never completes and the hash table keeps holding on to some extra memory.
# By default, ten rehashing steps are performed every second to refresh the dictionaries and free memory as soon as possible.
# Suggestion:
# If you care about latency, use "activerehashing no"; otherwise an occasional request may see roughly 2 milliseconds of extra delay.
# If you do not care much about latency and want to free memory as soon as possible, use "activerehashing yes".
activerehashing yes
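To check which encoding a small value is actually using, you can ask Redis directly; "myhash" and "mynums" are just example key names, and the encoding names returned (zipmap, ziplist, intset, hashtable, ...) depend on the Redis version:

$ redis-cli HSET myhash field1 value1
$ redis-cli OBJECT ENCODING myhash     # small hash: a compact encoding such as zipmap/ziplist
$ redis-cli SADD mynums 1 2 3
$ redis-cli OBJECT ENCODING mynums     # small set of integers: intset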
########## INCLUDES ##########

# Include one or more other configuration files.
# This is useful when you have a standard configuration template but each Redis server also needs a few personalized settings.
# The include feature lets you pull in other configuration files, so make good use of it.
# include /path/to/local.conf
# include /path/to/other.conf
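Once the configuration (including any files it pulls in) is in place, it is passed to the server on the command line; the path below is a placeholder:

$ redis-server /path/to/redis.conf   # start Redis with this configuration file
$ redis-cli -p 6379 PING             # quick check that the server is answering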