Redis configuration file redis.conf Chinese version


Reposted from: http://www.jb51.net/article/50605.htm

# Redis Sample configuration file

# Note on units: when a memory size is needed, it can be specified in the
# usual forms 1k, 5GB, 4M and so on:
#
# 1k  = 1000 bytes
# 1kb = 1024 bytes
# 1m  = 1000000 bytes
# 1mb = 1024*1024 bytes
# 1g  = 1000000000 bytes
# 1gb = 1024*1024*1024 bytes
#
# Units are case insensitive, so 1GB, 1Gb and 1gB all mean the same thing.
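The 1000-vs-1024 distinction above (suffix without "b" is decimal, with "b" is binary) can be illustrated with a small sketch. This is not the Redis parser, just a minimal model of the convention described above; the function name is mine:

```python
def parse_memory_size(value: str) -> int:
    """Parse a redis.conf-style memory size string into bytes.

    Suffixes without a trailing 'b' are powers of 1000; with 'b' they
    are powers of 1024. Matching is case insensitive.
    """
    units = {
        "": 1,
        "k": 1000, "kb": 1024,
        "m": 1000 ** 2, "mb": 1024 ** 2,
        "g": 1000 ** 3, "gb": 1024 ** 3,
    }
    value = value.strip().lower()
    number = value.rstrip("kmgb")   # digits left after removing the suffix
    suffix = value[len(number):]
    return int(number) * units[suffix]

print(parse_memory_size("1k"))   # 1000
print(parse_memory_size("1GB"))  # 1073741824
```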

# Redis does not run as a daemon by default. Set this to "yes" to run it as
# a daemon.
# Note that when daemonized, Redis will write the process ID to
# /var/run/redis.pid.
daemonize no

# When running as a daemon, Redis writes the process ID to /var/run/redis.pid
# by default. You can change the path here.
pidfile /var/run/redis.pid

# Accept connections on the specified port; the default is 6379.
# If port is set to 0, Redis will not listen on a TCP socket.
port 6379

# If you want, you can bind to a single interface; if this is not set,
# connections on all interfaces will be accepted.
#
# bind 127.0.0.1

# Specify the path of the Unix socket used to listen for connections.
# There is no default, so Redis will not listen on a Unix socket unless
# this is set.
#
# unixsocket /tmp/redis.sock
# unixsocketperm 755

# Close the connection after a client has been idle for this many seconds
# (0 disables the timeout; connections are never closed).
timeout 0

# Set the server verbosity level.
# Possible values:
# debug   (a lot of information, useful for development/testing)
# verbose (many rarely useful details, but not as noisy as debug)
# notice  (moderately verbose, probably what you want in production)
# warning (only very important/critical messages are logged)
loglevel verbose

# Specify the log file name. You can also use "stdout" to force Redis to
# log to standard output.
# Note: if Redis runs as a daemon and logs to stdout, the log goes to
# /dev/null.
logfile stdout

# To use the system logger, just set "syslog-enabled" to "yes",
# and optionally tune the other syslog parameters as needed.
# syslog-enabled no

# Specify the syslog identity.
# syslog-ident redis

# Specify the syslog facility. Must be USER or one of LOCAL0-LOCAL7.
# syslog-facility local0

# Set the number of databases. The default database is DB 0; each connection
# can select a different one with SELECT <dbid>, where 0 <= dbid <=
# 'databases'-1.
databases 16

################################ Snapshot #################################

#
# Save the database to disk:
#
#   save <seconds> <changes>
#
# Write the database to disk if both the given number of seconds has elapsed
# and the given number of write operations has occurred.
#
# The examples below will save the dataset:
#   after 900 seconds (15 minutes) if at least 1 key changed
#   after 300 seconds (5 minutes) if at least 10 keys changed
#   after 60 seconds if at least 10000 keys changed
#
# Note: comment out all "save" lines if you do not want to write to disk
# at all.

save 900 1
save 300 10
save 60 10000
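The three save lines act as independent triggers: a snapshot happens as soon as any one of them is satisfied. A minimal sketch of that logic, assuming the semantics described above (the function name and signature are mine, not Redis internals):

```python
def should_snapshot(seconds_elapsed: int, changes: int,
                    rules=((900, 1), (300, 10), (60, 10000))) -> bool:
    """Return True if any save rule is satisfied: at least `min_changes`
    writes have occurred and `max_seconds` have passed since the last
    snapshot."""
    return any(seconds_elapsed >= max_seconds and changes >= min_changes
               for max_seconds, min_changes in rules)

print(should_snapshot(901, 1))    # True: "save 900 1" matched
print(should_snapshot(61, 500))   # False: no rule matched yet
```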

# Compress string objects with LZF when dumping the .rdb database?
# The default is "yes", as it is almost always a win.
# Set it to "no" if you want to save CPU, but the data file will be bigger
# if you have compressible keys or values.
rdbcompression yes

# The filename of the database dump.
dbfilename dump.rdb

# The working directory.
#
# The database will be written inside this directory, with the filename
# specified above by "dbfilename".
#
# The append-only file is also created here.
#
# Note that you must specify a directory here, not a file name.
dir ./

################################# Replication #################################

#
#
# Master-slave replication. Use slaveof to make a Redis instance a copy of
# another Redis server.
# Note that replication copies data from the remote master to the local
# instance. In other words, the slave can use a different database file,
# bind a different IP, and listen on a different port locally.
#
# slaveof <masterip> <masterport>

# If the master is password protected (using the "requirepass" option
# below), the slave must authenticate before starting replication,
# otherwise its synchronization requests will be refused.
#
# masterauth <master-password>

# When a slave loses its connection with the master, or while replication
# is still in progress, the slave can behave in two ways:
#
# 1) If slave-serve-stale-data is set to "yes" (the default), the slave will
#    keep answering client requests, possibly with out-of-date data, or with
#    empty data if this is the first synchronization.
# 2) If slave-serve-stale-data is set to "no", the slave will reply
#    "SYNC with master in progress" to every command except INFO and
#    SLAVEOF.
#
slave-serve-stale-data yes

# Slaves send PINGs to the master at the specified interval.
# The interval can be changed with repl-ping-slave-period.
# The default is 10 seconds.
#
# repl-ping-slave-period 10

# The following option sets the timeout for bulk data transfer I/O, for
# data requests to the master, and for PING responses.
# The default value is 60 seconds.
#
# It is important that this value is greater than repl-ping-slave-period,
# otherwise a transfer timeout between master and slave will be detected
# sooner than expected.
#
# repl-timeout 60

################################## Security ###################################

# Require clients to authenticate with a password before processing any
# other command. This is useful when you cannot trust others with access to
# the host running redis-server.
#
# This should stay commented out for backward compatibility, and because
# most people do not need auth (e.g. they run their own server).
#
# Warning: since Redis is pretty fast, an outside attacker can try up to
# 150k passwords per second against a good box.
# This means you should use a very strong password, otherwise it will be
# very easy to break.
#
# requirepass foobared

# Command renaming.
#
# In a shared environment it is possible to change the name of dangerous
# commands. For instance, CONFIG can be renamed to something hard to guess,
# so that it remains usable internally while others cannot do harm with it.
#
# Example:
#
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
#
# It is also possible to disable a command entirely by renaming it to an
# empty string:
#
# rename-command CONFIG ""

################################### Limits ####################################

#
# Set the maximum number of simultaneously connected clients.
# By default there is no limit; it is bounded only by the number of file
# descriptors the Redis process is able to open.
# The special value "0" means no limit.
# Once the limit is reached, Redis will close all new connections, sending
# the error "max number of clients reached".
#
# maxclients 128

# Do not use more memory than the specified limit. Once memory usage
# reaches the limit, Redis removes keys according to the selected eviction
# policy (see maxmemory-policy).
#
# If Redis cannot remove keys under the chosen policy, or if the policy is
# set to "noeviction", it will reply with an out-of-memory error to
# commands that would use more memory, such as SET and LPUSH, while still
# answering read-only commands such as GET.
#
# This option is useful when using Redis as an LRU cache, or to set a hard
# memory limit for an instance (using the "noeviction" policy).
#
# Warning: when a number of slaves are connected to an instance at the
# memory limit, the output buffers needed to feed the slaves are not
# counted in the used memory. So evicting keys can fill the slave output
# buffers with DEL commands, which in turn pushes memory usage up and
# triggers more evictions, and so on until the database is empty.
#
# In short: if you have slaves attached to a master, it is suggested that
# you set a lower maxmemory so that there is free system memory for the
# slave output buffers.
# (This is not needed if the policy is "noeviction".)
#
# maxmemory <bytes>

# maxmemory policy: how Redis selects which keys to remove when maxmemory
# is reached. You can pick among five behaviors:
#
# volatile-lru    -> remove keys with an expire set, using an LRU algorithm
# allkeys-lru     -> remove any key according to the LRU algorithm
# volatile-random -> remove a random key among those with an expire set
# allkeys-random  -> remove any key, chosen at random
# volatile-ttl    -> remove the key with the nearest expire time (smallest TTL)
# noeviction      -> do not evict at all, just return an error on writes
#
# Note: with any of these policies, Redis returns an error on write
# operations when there is no suitable key for eviction.
#
# The write commands involved are: set setnx setex append
# incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
# sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
# zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
# getset mset msetnx exec sort
#
# The default is:
#
# maxmemory-policy volatile-lru

# The LRU and minimal-TTL algorithms are not exact, but approximated (to
# save memory), so their accuracy can be tuned via the sample size.
# By default Redis checks three keys and picks the one used least recently;
# you can change the sample size with the configuration directive below.
#
# maxmemory-samples 3
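The sampled approximation above can be illustrated with a small sketch: pick a few random keys and evict the one with the oldest access time. This is a hypothetical model of the idea, not the Redis implementation; the function name and the access-time dictionary are my own:

```python
import random

def pick_eviction_victim(last_access: dict, samples: int = 3) -> str:
    """Approximate LRU in the style of maxmemory-samples: sample a few
    random keys and return the one with the oldest last-access time."""
    candidates = random.sample(list(last_access),
                               min(samples, len(last_access)))
    return min(candidates, key=last_access.get)

# Hypothetical access times (higher = more recently used).
access_times = {"a": 100, "b": 5, "c": 50, "d": 80}
victim = pick_eviction_victim(access_times, samples=3)
print(victim in access_times)  # True
```

With a small sample the chosen victim is only probably the least-recently-used key; increasing `maxmemory-samples` trades CPU for accuracy, which is exactly the knob this directive exposes.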

############################## Append Only Mode ###############################

# By default Redis dumps the dataset to disk asynchronously. If Redis
# crashes, the most recent writes can be lost.
# If you cannot afford to lose any data, you should enable append-only
# mode: once enabled, Redis appends every write it receives to the
# appendonly.aof file, and reads that file back into memory at startup.
#
# Note that the asynchronously dumped database file and the append-only
# file can coexist (if you want, you can disable snapshotting entirely by
# commenting out all the "save" lines above).
# If append-only mode is enabled, Redis will load the AOF at startup and
# ignore the dump.rdb file.
#
# Important: see BGREWRITEAOF for how the log file is rewritten in the
# background once it grows too big.

appendonly no

# The name of the append-only file (default: "appendonly.aof")
# appendfilename appendonly.aof

# fsync() asks the operating system to actually write the data to disk
# right away, instead of waiting for more data in the output buffer.
# Some operating systems will really flush data to disk immediately;
# others will just try to do it as soon as possible.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush when it wants to. Faster.
# always: fsync after every write to the append-only log. Slow, but safest.
# everysec: fsync only once per second. A compromise.
#
# The default "everysec" usually gives a good balance between speed and
# data safety.
# If you really understand the implications, "no" will give better
# performance (if disaster strikes, you lose whatever was written since the
# last OS flush); or, on the contrary, choose "always", sacrificing speed
# for data safety and integrity.
#
# If unsure, use "everysec".

# appendfsync always
appendfsync everysec
# appendfsync no

# When the AOF fsync policy is "always" or "everysec", and a background
# saving process (a background save or AOF log rewrite) is performing a lot
# of disk I/O, some Linux configurations can make Redis block too long on
# the fsync() call.
# Note that there is no fix for this currently, as even performing fsync()
# in a different thread will block our synchronous write(2) call.
#
# To mitigate this problem, the following option prevents fsync() from
# being called in the main process while a BGSAVE or BGREWRITEAOF is in
# progress.
#
# This means that while a child process is saving, Redis is effectively in
# an "unsynchronized" state. In practical terms, in the worst case you can
# lose up to 30 seconds of log data (with default Linux settings).
#
# If you have latency problems, set this to "yes"; otherwise leave it as
# "no", which is the safest choice for durable persistence.
no-appendfsync-on-rewrite no

# Automatic rewrite of the append-only file.
#
# Redis can automatically rewrite the AOF log file (via BGREWRITEAOF) when
# it grows by the specified percentage.
#
# How it works: Redis remembers the size of the AOF after the last rewrite
# (or, if no rewrite has happened since restart, the size of the AOF at
# startup) and compares that base size with the current size. If the
# current size exceeds the base size by the given percentage, a rewrite is
# triggered.
#
# You should also specify a minimal size for the AOF to be rewritten; this
# avoids rewriting a file that has reached the percentage but is still
# quite small.
#
# Specify a percentage of zero to disable the automatic AOF rewrite
# feature.

auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
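The interaction of the percentage threshold and the minimum-size floor can be sketched as follows. This is a simplified model of the trigger condition described above, not Redis source; the function name is mine:

```python
def should_rewrite_aof(current_size: int, base_size: int,
                       percentage: int = 100,
                       min_size: int = 64 * 1024 * 1024) -> bool:
    """Trigger a rewrite when the AOF has grown by at least `percentage`%
    over its size after the last rewrite, but never below `min_size`."""
    if percentage == 0 or current_size < min_size:
        return False  # feature disabled, or file still too small
    growth_pct = (current_size - base_size) * 100 // base_size
    return growth_pct >= percentage

print(should_rewrite_aof(130 * 1024 * 1024, 64 * 1024 * 1024))  # True: grew >100%
print(should_rewrite_aof(32 * 1024 * 1024, 16 * 1024 * 1024))   # False: under 64mb floor
```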

################################## Slow Query Log ###################################

# The Redis slow log records queries that exceed a specified execution
# time. The execution time does not include I/O operations such as talking
# with the client or sending the reply; it is just the time actually needed
# to execute the command (this is the only stage during which the command
# blocks the thread and cannot serve other requests).
#
# The slow log has two parameters: one is the threshold execution time, in
# microseconds, above which a command is logged; the other is the length of
# the slow log. When a new command is logged, the oldest entry is removed.
#
# The time below is expressed in microseconds, so 1000000 is one second.
# Note that a negative number disables the slow log, while zero forces the
# logging of every command.
slowlog-log-slower-than 10000

# There is no limit to this length, as long as there is enough memory.
# You can reclaim the memory used by the slow log with SLOWLOG RESET.
# (Translator's note: yes, the slow log lives in memory.)
slowlog-max-len 128

################################ Virtual Memory ###############################

### WARNING! Virtual memory is deprecated as of Redis 2.4.
### Using virtual memory is strongly discouraged!!

# Virtual memory allows Redis to work with datasets bigger than memory by
# keeping only the frequently accessed keys in RAM, while rarely used keys
# are moved to a swap file, much as the operating system does with memory
# pages.
#
# To use virtual memory, just set "vm-enabled" to "yes" and set the
# following three virtual memory parameters as needed.

vm-enabled no
# vm-enabled yes

# This is the path of the swap file. As you can guess, swap files cannot be
# shared among different Redis instances, so make sure each instance uses
# its own swap file.
#
# The best kind of storage for the swap file (which is accessed at random)
# is a solid state disk (SSD).
#
# *** WARNING *** If you are using a shared hosting environment, putting
# the swap file in /tmp by default is not secure. Create a directory
# writable by the Redis user and configure Redis to create the swap file
# there.
vm-swap-file /tmp/redis.swap

# "vm-max-memory" configures the maximum amount of memory Redis will use
# before it starts swapping values to disk; everything beyond this limit
# goes into the swap file (as long as there is room in it).
#
# Setting "vm-max-memory" to 0 means: use all the memory available. This is
# not a good default; it is better to leave a little margin, for instance
# by setting it to 60%-80% of your free memory.
vm-max-memory 0

# The Redis swap file is divided into pages.
# A stored object can span multiple contiguous pages, but a page cannot be
# shared by multiple objects.
# So if your page size is too big, small objects will waste a lot of space;
# if it is too small, there is less swap space available for storage
# (assuming the same number of pages).
#
# If you use many small objects, a page size of 64 or 32 bytes is
# recommended.
# If you use many big objects, use a larger size.
# If unsure, use the default :)
vm-page-size 32

# The total number of pages in the swap file.
# Because the page table (a bitmap of used/free pages) is kept in memory,
# every 8 pages on disk consume 1 byte of RAM.
#
# Total swap size = vm-page-size * vm-pages
#
# With the default 32-byte page size and 134217728 pages, the Redis swap
# file will take 4 GB on disk and the page table will consume 16 MB of RAM.
#
# It is best to set the smallest value sufficient for your application; the
# default below is large enough for most cases.
vm-pages 134217728
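The 4 GB / 16 MB figures in the comment above follow directly from the two directives; the arithmetic can be checked in a few lines:

```python
# Reproduce the arithmetic from the comment above: with 32-byte pages and
# 134217728 pages, the swap file is 4 GB and the in-memory page table
# (1 bit per page, i.e. 1 byte per 8 pages) costs 16 MB.
vm_page_size = 32
vm_pages = 134217728

swap_bytes = vm_page_size * vm_pages      # total swap file size
page_table_bytes = vm_pages // 8          # bitmap: 1 bit per page

print(swap_bytes // (1024 ** 3), "GB swap file")       # 4 GB swap file
print(page_table_bytes // (1024 ** 2), "MB page table")  # 16 MB page table
```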

# The maximum number of virtual memory I/O threads running at the same
# time.
# These threads read and write data from/to the swap file, and also handle
# the encoding/decoding of objects moving between memory and disk.
# More threads can improve throughput somewhat, although I/O is ultimately
# limited by the physical device, and extra threads cannot make a single
# read or write operation faster.
#
# The special value of 0 turns off threaded I/O and enables the blocking
# virtual memory implementation.
vm-max-threads 4

############################### Advanced Configuration ###############################

# Hashes are encoded in a special memory-saving way when they have a small
# number of entries and the biggest entry does not exceed a given
# threshold. You can configure these limits with the following directives:
hash-max-zipmap-entries 512
hash-max-zipmap-value 64

# Similarly to hashes, small lists are also encoded in a special way to
# save a lot of space. This encoding is only used when the list stays
# within the following limits:
list-max-ziplist-entries 512
list-max-ziplist-value 64

# Sets have a special encoding in one case: when they are composed entirely
# of strings that are 64-bit unsigned integer numbers.
# The following directive sets the size limit for using this special
# encoding.
set-max-intset-entries 512

# Similarly to hashes and lists, sorted sets are also specially encoded to
# save a lot of space. This encoding is only used when the length and the
# elements of a sorted set stay within the following limits:
zset-max-ziplist-entries 128
zset-max-ziplist-value 64

# Active rehashing uses 1 millisecond of CPU time out of every 100 to help
# rehash the main Redis hash table (the top-level key-value map).
# The hash table implementation Redis uses (see dict.c) performs lazy
# rehashing: the more operations you run on a rehashing table, the more
# rehashing steps are performed. On the other hand, if the server is very
# idle, the rehashing never completes and the hash table holds on to a bit
# more memory.
#
# The default is to use this millisecond 10 times per second to actively
# rehash the main dictionary, freeing memory as soon as possible.
#
# Recommendation:
# If you have hard latency requirements and an occasional 2-millisecond
# delay on a reply is a problem for you, use "activerehashing no".
# Use "activerehashing yes" if you do not care too much about latency and
# want to free memory as soon as possible.
activerehashing yes

################################## Includes ###################################

# Include one or more other configuration files.
# This is useful when you have a standard configuration template that all
# Redis servers share, but each server also needs a few custom settings.
# Include files can include other files, so use this wisely.
#
# include /path/to/local.conf
# include /path/to/other.conf
