# By default Redis does not run in the background. If it needs to run as a background process, change this item to yes; the default is no.
daemonize: whether to run in daemon mode

# While the Redis service is running, Redis writes its PID to /var/run/redis.pid by default; you can point this at another file path.
# When running multiple Redis instances, each one needs its own pid file and port.
pidfile: pid file location

# The port Redis listens on, default 6379.
# If the port is set to 0, Redis will not listen on a TCP socket.
port: listening port

# Accept requests only on the given IP address; if this is not set, requests on all interfaces are processed.
# It is best to set this in a production environment.
bind 127.0.0.1

# Client connection timeout in seconds. If a client issues no command within this time, the connection is closed.
# Default: 0, meaning the feature is disabled and connections are never closed.
timeout: request timeout

# Path of the unix socket used to listen for connections. There is no default value, so if it is not specified Redis will not listen on a unix socket.
# unixsocket /tmp/redis.sock
# unixsocketperm 755

# Logging level.
# Redis supports four levels: debug, verbose, notice, warning; the default is verbose.
# debug    logs a lot of information, useful for development and testing
# verbose  logs plenty of condensed useful information, not as much as debug
# notice   moderately verbose, commonly used in production
# warning  only very important or critical messages are logged
loglevel: log level

# Log file name and full path.
# The default value is stdout ("standard output"); in the default daemon mode that output goes to /dev/null.
logfile: log file location

# Number of available databases, default 16; the default database is DB 0. Unless there is a special requirement, it is recommended to use only one database:
databases 1
# Select a database with SELECT <dbid>, where dbid is between 0 and 'databases'-1.
databases: number of databases

save <seconds> <changes>: how often a snapshot is saved; the first value is a time window in seconds and the second is a number of write operations. A snapshot is saved automatically when at least that many writes happen within that window. Multiple conditions may be set.

rdbcompression: whether to compress the snapshot

dbfilename: snapshot file name (file name only, no directory)

dir: directory in which snapshots are saved (this one is the directory)

appendonly: if the append-only log is enabled, every write is recorded in the log, which improves data durability but affects efficiency.

appendfsync: how the append-only log is synced to disk (three options: force fsync on every write, fsync once per second, or never call fsync and wait for the system to sync on its own)

########## REPLICATION ##########
# Master-slave replication. A backup of another Redis instance is configured with slaveof. Note that the data is copied from the remote instance to the local one.
# In other words, the local instance can have a different database file, bind a different IP, and listen on a different port.
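# As a concrete sketch of the slaveof/masterauth directives described next, a replica of a hypothetical master at 192.168.1.10:6379 protected by a password could be configured like this (address and password are placeholders only):
# slaveof 192.168.1.10 6379
# masterauth s3cretpassword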
# When this machine is a slave, set the IP and port of the master here; when Redis starts it will automatically synchronize data from the master.
# slaveof <masterip> <masterport>

# If the master has a password (configured with the "requirepass" option below), this is the password the slave uses to connect to the master; the slave must authenticate before starting the synchronization, otherwise its sync request will be rejected.
# When this machine is a slave, set the password for connecting to the master:
# masterauth <master-password>

# When a slave loses its connection to the master, or while synchronization is still in progress, the slave can behave in two ways:
# 1) If slave-serve-stale-data is set to "yes" (the default), the slave keeps answering client requests, possibly with stale data, or with empty values if no data has been obtained yet.
# 2) If slave-serve-stale-data is set to "no", the slave replies "SYNC with master in progress" to every request except INFO and SLAVEOF.
slave-serve-stale-data yes

# The slave sends PING requests to the master at the specified interval.
# The interval is set with repl-ping-slave-period.
# Default: 10 seconds
# repl-ping-slave-period 10

# The following option sets the expiration time for bulk data I/O, for data requests to the master, and for PING responses.
# The default value is 60 seconds.
# It is important to make sure this value is larger than repl-ping-slave-period, otherwise transfers between master and slave will time out sooner than expected.
# repl-timeout 60

########## SECURITY ##########
# Require clients to authenticate with a password before any other command is processed.
# This feature is useful when you do not trust whoever can connect.
# For backwards compatibility this line should stay commented out, and most people do not need authentication (for example when Redis runs on their own servers).
# Warning: an outside attacker can try about 150,000 passwords per second, so you need a very strong password, otherwise it is far too easy to break.
# Set the connection password:
# requirepass foobared

# Command renaming; multiple renames can be set.
# In a shared environment, dangerous commands can be given a different name. For example, CONFIG can be renamed to something hard to guess, so that you can still use it while others cannot, e.g.:
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
# rename-command INFO info_biran
# rename-command SET set_biran
# A command can even be disabled entirely by renaming it to an empty string:
# rename-command CONFIG ""

########## LIMITS ##########
# Maximum number of clients connected at the same time.
# There is no limit by default; it is bounded by the number of file descriptors the Redis process can open.
# The special value "0" means no limit.
# Once the limit is reached, Redis closes any new connection and replies with the error "max number of clients reached".
# maxclients 128

# Do not use more memory than the configured limit. Once the limit is reached, Redis removes keys according to the chosen eviction policy (see maxmemory-policy below).
# If Redis cannot remove keys because of the policy, or if the policy is set to "noeviction", Redis replies with an out-of-memory error to commands that would use more memory, such as SET and LPUSH, but keeps answering read-only commands such as GET normally.
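# A common cache-style setup built on this option combines a memory cap with an eviction policy (the policy names are explained below); the 100mb figure is an arbitrary placeholder:
# maxmemory 100mb
# maxmemory-policy allkeys-lru
# maxmemory-samples 3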
# The maxmemory option is useful when using Redis as an LRU cache, or when setting a hard memory limit for an instance (with the "noeviction" policy).
# Warning: when slaves are attached to an instance with a memory limit, the memory needed for the output buffers that feed the slaves is not counted as used memory.
# That way, evicting keys will not trigger a loop in which a network problem or resync makes the slaves receive a stream of delete commands for the evicted keys, filling the output buffers and causing even more evictions until the database is empty.
# In short, if you have slaves attached to a master, it is recommended to set the master's memory limit somewhat lower, so that there is enough free system memory for the output buffers (this does not matter if the policy is "noeviction").

# Set the maximum memory. When the limit is reached, Redis first tries to remove keys that are expired or about to expire; if the limit is still reached after that, no further write operations are accepted.
# maxmemory 256000000  (allocates 256M of memory)
# maxmemory <bytes>

# Memory policy: how Redis removes keys when the memory limit is reached. One of the following six policies can be chosen:
#
# volatile-lru     remove keys with an expire set, using the LRU algorithm
# allkeys-lru      remove any key according to the LRU algorithm
# volatile-random  randomly remove keys with an expire set
# allkeys-random   randomly remove any key
# volatile-ttl     remove the keys with the nearest expiration time (only keys with a TTL set)
# noeviction       do not remove anything; return an error on write operations
#
# Note: with all of these policies, if Redis cannot find a suitable key to remove, it returns an error on write operations.
#
# The commands affected are: set setnx setex append
# incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
# sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
# zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
# getset mset msetnx exec sort
#
# The default is:
# maxmemory-policy volatile-lru

# The LRU and minimum-TTL algorithms are not exact, but close to it (in order to save memory), so the number of samples can be tuned.
# For example, by default Redis checks three keys and evicts the least recently used of the three; the sample size is set with the following option.
# maxmemory-samples 3

########## APPEND ONLY MODE ##########
# By default Redis dumps data to disk asynchronously, according to the "save" conditions above. In that mode some data lives only in memory for a while, so the most recent writes are lost if Redis goes down.
# If you cannot afford to lose any data, use append-only mode: once it is enabled, Redis appends every write to the appendonly.aof file as it happens.
# Redis reads this file back into memory on every startup.
# Note that the asynchronously dumped database file and the append-only file can coexist (to turn the dump mechanism off, comment out all of the "save" lines above).
# If append-only mode is enabled, Redis loads the log file at startup and ignores the dump.rdb file.
#
# Important: see BGREWRITEAOF for how the log file is rewritten in the background once it has grown too large.
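# A durability-oriented sketch using the directives detailed below (the values shown are the usual suggestions, not requirements):
# appendonly yes
# appendfsync everysec
# no-appendfsync-on-rewrite no
# auto-aof-rewrite-percentage 100
# auto-aof-rewrite-min-size 64mb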
# Setting this to yes enables append-only mode:
appendonly no

# Name and path of the append-only file, default: "appendonly.aof"
# appendfilename appendonly.aof

# fsync() asks the operating system to write the data to disk right away instead of waiting.
# Some operating systems really flush the data to disk; others just try to do it as soon as they can.
# Redis supports three different modes:
#
# no:       do not fsync; only flush when the operating system wants to. Faster.
# always:   fsync the AOF file after every write. Slow, but safest.
# everysec: fsync once per second. A compromise.
#
# The default, "everysec", usually gives a good balance between speed and data safety.
# If you really understand what this implies, "no" gives better performance (if data is lost, you only fall back to a snapshot that is not very fresh);
# conversely, choose "always" to sacrifice speed for data safety and integrity.
#
# If you are unsure which mode to use, "everysec" is recommended.
#
# appendfsync always
appendfsync everysec
# appendfsync no

# When the AOF fsync policy is set to "always" or "everysec", a background saving process (a background save, or an AOF log rewrite) generates a lot of disk I/O,
# and on some Linux configurations Redis can block for too long on fsync(). Note that there is currently no real fix for this: even an fsync() issued from a different thread will block our write(2) call.
#
# To mitigate the problem, the following option prevents fsync() from being called while a BGSAVE or BGREWRITEAOF is in progress.
#
# This means that while a child process is saving, Redis is effectively in an "appendfsync no" state. In practical terms, in the worst case you can lose up to 30 seconds of log data (with default Linux settings).
#
# If you have latency problems, set this to "yes"; otherwise leave it at "no", which is the safest choice for data durability.
no-appendfsync-on-rewrite no

# Automatic AOF rewrite.
# Redis can automatically rewrite the AOF log with BGREWRITEAOF when it grows beyond the specified percentage.
#
# How it works: Redis remembers the size of the AOF file after the last rewrite (if there has been no rewrite since the restart, the size of the AOF file at startup is used),
# and compares that reference size with the current size. If the current size exceeds it by the specified percentage, a rewrite is triggered.
#
# You also specify a minimum size for the rewritten log, to avoid rewriting a file that has grown by the agreed percentage but is still small.
#
# Specifying a percentage of 0 disables automatic AOF rewriting.
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb

########## SLOW LOG ##########
# The Redis slow log records queries that exceed a specified execution time. The execution time does not include I/O such as talking to the client or sending the reply;
# only the time actually spent running the command is counted (this is the only phase in which the command is executing on the thread, which is blocked and cannot serve other requests).
#
# The slow log takes two parameters: one is the threshold time in microseconds, so commands that take longer than it are recorded;
# the other is the length of the slow query log.
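# Before the two parameters are set below, note that the slow log itself is inspected at runtime rather than in this file. From redis-cli (these subcommands exist in Redis; the exact output format varies by version):
# redis-cli SLOWLOG GET 10    read the 10 most recent slow entries
# redis-cli SLOWLOG LEN       number of entries currently stored
# redis-cli SLOWLOG RESET     empty the slow log and reclaim its memory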
# When a new command is written into a full log, the oldest entry is removed.
#
# The time below is expressed in microseconds, so 1000000 is one second. Note that a negative value disables the slow log, while 0 forces every command to be logged.
slowlog-log-slower-than 10000

# There is no hard limit on this length; it only consumes memory. The memory used by the slow log can be reclaimed with SLOWLOG RESET.
slowlog-max-len 128

########## VIRTUAL MEMORY ##########
#
# WARNING! Virtual memory is deprecated in Redis 2.4: because of performance problems the 2.4 VM mechanism has been deprecated entirely, and using these settings is not recommended!
#
# Virtual memory allows Redis to keep a dataset larger than the available memory: frequently used keys stay in memory, while rarely used values are moved to a swap file, much as the operating system does with memory pages.
# To use virtual memory, just set "vm-enabled" to "yes" and tune the three virtual memory parameters below as needed.
vm-enabled no

# This is the path of the swap file. As you can probably guess, swap files cannot be shared between Redis instances, so make sure each instance uses its own swap file.
# The best medium for saving a swap file (which is accessed randomly) is a solid state drive (SSD).
# *** WARNING *** If you use a shared host, putting the default swap file under /tmp is not safe.
# Create a directory writable by the Redis user and configure Redis to create the swap file there.
vm-swap-file /tmp/redis.swap

# "vm-max-memory" configures the maximum amount of memory the virtual memory system is allowed to use.
# If there is room in the swap file, everything above the limit is placed in the swap file.
# Setting "vm-max-memory" to 0 means all available memory will be used; it is recommended to set it to roughly 60%-80% of the remaining free memory.
# Everything larger than vm-max-memory is stored in virtual memory; regardless of the vm-max-memory setting, all index data (the Redis keys) is kept in memory. In other words, with vm-max-memory set to 0, all values effectively live on disk. The default value is 0.
vm-max-memory 0

# The Redis swap file is divided into data pages.
# A stored object can span multiple contiguous pages, but a data page cannot be shared by multiple objects.
# So if your data pages are too big, small objects waste a lot of space.
# If the data pages are too small, there is less swap space available for storage (assuming the same number of data pages).
# If you store many small objects, a page size of 32 or 64 bytes is recommended.
# If you store many large objects, use a larger size.
# If you are not sure, use the default :)
vm-page-size 32

# Total number of data pages in the swap file.
# Because of the in-memory page table (the map of used/free data pages), every 8 data pages on disk cost 1 byte of memory.
# Swap space = vm-page-size * vm-pages
# With the default 32-byte page size and 134217728 data pages, the swap file will take about 4GB on disk and the page table will consume 16MB of memory.
# Set this to the smallest value that is sufficient for your application; the default below is large in most cases (a second worked sizing appears just below).
vm-pages 134217728

# The number of virtual memory I/O threads that may run concurrently, i.e. the number of threads accessing the swap file.
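# To make the sizing arithmetic above concrete with smaller, purely illustrative numbers (and bearing in mind that the whole VM mechanism is deprecated):
# vm-page-size 64 together with vm-pages 16777216 gives 64 * 16777216 = 1GB of swap space, and a page table of 16777216 / 8 = 2MB of memory.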
# The VM I/O threads read and write the swap file and also handle the transfer and encoding/decoding of data between memory and disk.
# More threads can improve throughput somewhat, although the I/O itself is limited by the physical device, and more threads will not make a single read or write any faster.
# The special value 0 turns off threaded I/O and enables the blocking virtual memory implementation.
# It is best not to exceed the machine's number of cores; if set to 0, all swap file operations are serialized. This can cause long delays, but gives the strongest guarantee of data integrity.
vm-max-threads 4

########## ADVANCED CONFIG ##########
# When there is a lot of data, it is appropriate to use a real hash encoding (which needs more memory); the compact encoding is kept only while the limits given here are not exceeded.
# A Redis hash is a HashMap stored inside a value. When the map has few members, it is stored in a compact, one-dimensional linear format, which avoids the memory overhead of a large number of pointers. If either of the two conditions below is exceeded, it is converted into a real HashMap:
# while a value has no more than the configured number of members (64 by default), the linear compact format is used; above that it is automatically converted into a real HashMap.
hash-max-zipmap-entries 512
# While no member inside the map exceeds the configured number of bytes, the linear compact storage is used to save space.
hash-max-zipmap-value 64

# Similar to hash-max-zipmap-entries, when a list has few elements it can be encoded in a different way that saves a lot of space.
# A list is stored in the compact, pointer-free format while it has no more than this many nodes:
list-max-ziplist-entries 512
# A list uses the compact storage format while the value of every node is smaller than this many bytes:
list-max-ziplist-value 64

# There is one more case with a special encoding: sets whose elements are all strings that represent 64-bit unsigned integers.
# The following setting limits the maximum size of a set for which this encoding is used.
set-max-intset-entries 512

# Similar to the first and second cases, sorted sets can also be specially encoded to save a lot of space.
# This encoding is only used for sorted sets whose length and elements stay within the following limits (a way to check which encoding a key currently uses is sketched after the INCLUDES note below):
zset-max-ziplist-entries 128
zset-max-ziplist-value 64

# Active rehashing: for every 100 milliseconds of CPU time, 1 millisecond is spent rehashing the main Redis hash table (the top-level key-value map).
# The hash table implementation Redis uses (see dict.c) performs lazy rehashing: the more operations you run on a table that is being rehashed, the more rehashing steps are performed; on the other hand, if the server is idle the rehashing never completes and the hash table keeps using a bit more memory.
# The default is to perform 10 rehashing passes per second in order to actively rehash the main dictionary and free memory as soon as possible.
# Recommendation:
# If you are concerned about latency, use "activerehashing no": an occasional extra 2 milliseconds on a request would be a bad thing for you.
# Use "activerehashing yes" if you do not have such strict latency requirements and want to free memory as soon as possible.
activerehashing yes

########## INCLUDES ##########
# Include one or more other configuration files.
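# As mentioned above, you can check which of the compact encodings a given key is currently using. From redis-cli, the OBJECT ENCODING command reports it (the key name is a placeholder, and the reported names vary by Redis version, e.g. "zipmap" or "ziplist" for a small hash versus "hashtable" once the limits are exceeded):
# redis-cli OBJECT ENCODING myhash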
# This is useful when you have a standard configuration template but each Redis server needs some personalized settings.
# Included files can themselves include other files, so use this feature wisely.
# include /path/to/local.conf
# include /path/to/other.conf
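# For instance (the paths are hypothetical), a fleet of servers might share one common template and keep only the per-server overrides in a small local file:
# include /etc/redis/common.conf
# include /etc/redis/this-server.conf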