Annotated redis.conf configuration file (based on Redis 2.4), translated from the Chinese version


The annotated configuration file follows:

# Redis Sample configuration file

# A note on units: when a memory size is needed, it can be specified in the usual forms such as 1k, 5GB, 4M:
#
# 1k => 1000 bytes
# 1kb => 1024 bytes
# 1m => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g => 1000000000 bytes
# 1GB => 1024*1024*1024 bytes
#
# Units are case insensitive, so 1GB, 1Gb and 1gB all mean exactly the same thing.
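#
# As an illustration (the setting below is a hypothetical example, not part
# of the stock file), the two commented lines request different amounts of
# memory, because the "b" suffix switches from powers of 1000 to powers of 1024:
#
# maxmemory 1g     => 1000000000 bytes
# maxmemory 1gb    => 1073741824 bytes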

# Redis does not run as a daemon by default. Set this to "yes" to run it as a daemon.
# Note that when daemonized, Redis writes its process ID to /var/run/redis.pid.
daemonize no

# When running as a daemon, Redis writes the process ID to /var/run/redis.pid by default. You can change the path here.
pidfile /var/run/redis.pid

# Accept connections on the specified port; the default is 6379.
# If port is set to 0, Redis will not listen on a TCP socket.
port 6379

# If you want, you can bind to a single interface; if this is not set, connections on all interfaces will be accepted.
#
# bind 127.0.0.1

# Specify the path of the Unix socket used to listen for connections. There is no default, so if this is not specified, Redis will not listen on a Unix socket.
#
# unixsocket /tmp/redis.sock
# unixsocketperm 755

# Close a connection after a client has been idle for this many seconds (0 disables the timeout, so connections are never closed).
timeout 0

# Set the server verbosity level.
# Possible values:
# debug (a lot of information, useful for development/testing)
# verbose (a lot of useful information, but not as much as the debug level)
# notice (a moderate amount of information, probably what you want in production)
# warning (only important/critical messages are logged)
loglevel verbose

# Specify the log file name. You can also use "stdout" to force Redis to write log messages to the standard output.
# Note: if Redis runs as a daemon and logging is set to standard output, the log is sent to /dev/null.
logfile stdout

# Enabling the system logger is simple: just set "syslog-enabled" to "yes",
# then adjust the other syslog parameters as needed.
# syslog-enabled no

# Specify the syslog identity.
# syslog-ident redis

# Specify the syslog facility. Must be USER or one of LOCAL0 to LOCAL7.
# syslog-facility local0

# Set the number of databases. The default database is DB 0; you can select a different one on a per-connection basis using SELECT <dbid>, where dbid is between 0 and 'databases'-1.
databases 16
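
# A quick illustration with redis-cli (a sketch assuming a server on the
# default port; the key name is made up and the prompt is simplified):
#
#   $ redis-cli
#   redis> set greeting hello
#   OK
#   redis> select 1
#   OK
#   redis> get greeting
#   (nil)
#
# Each database is an independent keyspace, so a key written in DB 0 is
# not visible from DB 1.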

################################ Snapshot #################################

#
# Save the database to disk:
#
# save <seconds> <changes>
#
# Write the database to disk if both the given number of seconds has elapsed and at least the given number of write operations against the DB has occurred.
#
# The following examples will write the data to disk:
# after 900 seconds (15 minutes), if at least 1 key changed
# after 300 seconds (5 minutes), if at least 10 keys changed
# after 60 seconds, if at least 10,000 keys changed
#
# Note: if you don't want to write to disk at all, comment out all the "save" settings.

save 900 1
save 300 10
save 60 10000
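
# Snapshots can also be taken on demand. A minimal redis-cli sketch
# (output abbreviated; the timestamp shown is illustrative):
#
#   $ redis-cli bgsave
#   Background saving started
#   $ redis-cli lastsave
#   (integer) 1325372362
#
# LASTSAVE returns the Unix time of the last successful save, so it can
# be polled to confirm that the BGSAVE completed.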

# Compress string objects with LZF when exporting to the .rdb database.
# The default is "yes", since it is almost always a win.
# You can set this to "no" to save CPU, but the data file will be larger if you have compressible values or keys.
rdbcompression yes

# The file name of the database
dbfilename dump.rdb

# Working directory.
#
# The database will be written inside this directory, with the filename given by "dbfilename" above.
#
# The append-only file will also be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
dir ./

################################# Replication #################################

#
# Master-slave replication. Use slaveof to make a Redis instance a copy of another Redis server.
# Note that the configuration is local to the slave: the slave can, for example, use a different database file, bind a different IP, and listen on a different port.
#
# slaveof <masterip> <masterport>
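#
# A hypothetical example (the address below is made up):
#
# slaveof 192.168.1.100 6379
#
# The same effect can be achieved at runtime with the SLAVEOF command:
#
#   $ redis-cli slaveof 192.168.1.100 6379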

# If the master is password protected (via the "requirepass" option below), the slave must authenticate before starting synchronization, otherwise its requests will be refused.
#
# masterauth <master-password>

# When a slave loses its connection to the master, or while synchronization is still in progress, the slave can behave in two ways:
#
# 1) if slave-serve-stale-data is set to "yes" (the default), the slave keeps answering client requests, possibly with out-of-date data, or with empty data if this is the first synchronization.
# 2) if slave-serve-stale-data is set to "no", the slave replies with the error "SYNC with master in progress" to every request except INFO and SLAVEOF.
#
slave-serve-stale-data yes

# Slaves send PING requests to the master at the specified interval.
# The interval can be changed with repl-ping-slave-period.
# The default is 10 seconds.
#
# repl-ping-slave-period 10

# The following option sets the timeout for bulk transfer I/O, for requests for data from the master, and for PING responses.
# The default value is 60 seconds.
#
# It is important to make sure this value is greater than repl-ping-slave-period, otherwise a timeout will be detected between master and slave whenever traffic is low.
#
# repl-timeout 60

################################## Security ###################################

# Require clients to authenticate with a password (AUTH) before processing any other command.
# This is useful in environments where you cannot trust everyone who can reach the server.
#
# For backward compatibility this should stay commented out, since most people do not need authentication (e.g. they run their own servers).
#
# Warning: since Redis is quite fast, an attacker can try up to 150k passwords per second.
# This means you should use a very strong password, otherwise it will be too easy to break.
#
# requirepass foobared

# command renaming
#
# In a shared environment it is possible to change the name of dangerous commands. For instance, CONFIG can be renamed to something hard to guess, so it remains available for internal use but cannot be called by ordinary clients.
#
# Example:
#
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
#
# It is also possible to completely disable a command by renaming it to the empty string:
#
# rename-command CONFIG ""
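#
# After such a rename, the command must be invoked under its new name.
# An illustrative redis-cli session (error text approximate):
#
#   redis> config get maxmemory
#   (error) ERR unknown command 'config'
#   redis> b840fc02d524045429941cc15f59e41cb7be6c52 get maxmemory
#   1) "maxmemory"
#   2) "0"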

################################### Limits ####################################

#
# Set the maximum number of clients connected at the same time.
# By default there is no limit; it is bounded by the number of file descriptors the Redis process can open.
# The special value "0" means no limit.
# Once the limit is reached, Redis closes all new connections with the error "max number of clients reached".
#
# maxclients 128

# Don't use more memory than the specified limit. Once the limit is reached, Redis removes keys according to the selected eviction policy (see maxmemory-policy).
#
# If Redis cannot remove keys under the chosen policy, or if the policy is set to "noeviction", Redis replies with an error to commands that would use more memory,
# such as SET, LPUSH and so on, but keeps replying normally to read-only commands such as GET.
#
# This option is useful when using Redis as an LRU cache, or to set a hard memory limit for an instance (using the "noeviction" policy).
#
# Warning: when slaves are attached to an instance with maxmemory on, the memory needed for the slaves' output buffers is not counted as used memory.
# This way a network problem or resync will not trigger a loop where evicted keys fill the slave output buffers with delete operations, in turn using more memory and triggering more evictions, until the database is completely emptied.
#
# In short, if you have slaves attached to a master, it is suggested to set maxmemory a bit lower so there is free system memory for the slave output buffers.
# (This is not needed if the policy is "noeviction".)
#
# maxmemory <bytes>

# Memory policy: how Redis selects what to remove when the memory limit is reached. You can choose among the following six behaviours:
#
# volatile-lru -> remove keys with an expire set, using an LRU algorithm
# allkeys-lru -> remove any key according to the LRU algorithm
# volatile-random -> remove a random key among those with an expire set
# allkeys-random -> remove a random key, any key
# volatile-ttl -> remove the key with the nearest expire time (minor TTL)
# noeviction -> don't remove anything, just return an error on write operations
#
# Note: with any of the above policies, Redis returns an error on write operations when there is no suitable key to remove.
#
# The write commands affected are: set setnx setex append
# incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
# sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
# zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
# getset mset msetnx exec sort
#
# The default is:
#
# maxmemory-policy volatile-lru

# The LRU and minimum-TTL algorithms are not precise but approximated (to save memory), so you can tune them by choosing the sample size.
# For instance, by default Redis checks three keys and picks the one that was used least recently; you can change the sample size with the configuration directive below.
#
# maxmemory-samples 3
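#
# Putting the pieces together, a minimal sketch of an LRU-cache style
# setup (the 100mb limit is an arbitrary example value, not a
# recommendation):
#
# maxmemory 100mb
# maxmemory-policy allkeys-lru
# maxmemory-samples 3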

############################## Append Only Mode ###############################

# By default Redis dumps data to disk asynchronously. In that mode, if Redis crashes, the most recent writes may be lost.
# If you cannot afford to lose a single write, use append-only mode: once enabled, Redis appends every received write operation to the appendonly.aof file.
# At every startup Redis replays this file to rebuild the dataset in memory.
#
# Note that the asynchronously dumped database file and the append-only file can coexist (to disable snapshotting you have to comment out all the "save" settings above).
# If append-only mode is enabled, Redis will load the AOF at startup and ignore the dump.rdb file.
#
# Important: see BGREWRITEAOF to learn how to rewrite the append log file in the background when it gets too big.

appendonly no

# The name of the append-only file (default: "appendonly.aof")
# appendfilename appendonly.aof

# fsync() asks the operating system to actually write the data to disk right away, instead of waiting for more data in the output buffer.
# Some operating systems really flush the data to disk immediately; others merely try to do it as soon as possible.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the operating system flush the data when it wants. Faster.
# always: fsync the AOF after every write operation. Slow, but safest.
# everysec: fsync once per second. A compromise.
#
# The default "everysec" is usually a good balance between speed and data safety.
# If you really understand the implications, "no" gives better performance (if you lose data, you are left with a snapshot that may not be very fresh);
# or, on the contrary, choose "always" to trade speed for data safety and integrity.
#
# If unsure, use "everysec".

# appendfsync always
appendfsync everysec
# appendfsync no

# When the AOF fsync policy is "always" or "everysec" and a background saving process (a background save or AOF log rewrite) is performing a lot of disk I/O,
# some Linux configurations may make Redis block on the fsync() call for too long.
# Note that there is no perfect fix for this; even calling fsync() from a different thread will block our synchronous write(2) call.
#
# To mitigate the problem you can use the following option, which prevents fsync() from being called in the main process while a BGSAVE or BGREWRITEAOF is in progress.
#
# This means that while a child process is saving, Redis is effectively in the "appendfsync no" state.
# In practical terms, in the worst case (with default Linux settings) up to 30 seconds of log data can be lost.
#
# If you have latency problems set this to "yes"; otherwise leave it as "no", which is the safest choice for durable data.
no-appendfsync-on-rewrite no

# Automatic rewriting of the AOF file.
#
# Redis can automatically rewrite the AOF log file (via BGREWRITEAOF) when its size grows by the specified percentage.
#
# How it works: Redis remembers the size of the AOF file after the last rewrite (if no rewrite has happened since restart, the size of the AOF at startup is used).
# This base size is compared to the current size; if the current size exceeds the specified percentage, a rewrite is triggered.
#
# You also need to specify a minimum size for the rewritten log; this avoids rewriting the AOF even when the percentage increase is reached but the file is still quite small.
#
# Specifying a percentage of zero disables the automatic AOF rewrite feature.

auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
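
# A worked example with the values above (assuming a 64mb base size):
#
#   trigger size = base size * (1 + auto-aof-rewrite-percentage/100)
#                = 64mb * (1 + 100/100) = 128mb
#
# So the next automatic rewrite fires when the AOF reaches about 128mb;
# files below the 64mb minimum are never rewritten automatically,
# whatever the percentage.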

################################## Slow Query Log ###################################

# The Redis slow log records queries that exceed a specified execution time. The execution time does not include I/O,
# such as talking to the client or sending the reply; it measures only the time actually needed to execute the command (the only stage during which the thread is blocked and cannot serve other requests in the meantime).
#
# Two parameters configure the slow log: one is the threshold in microseconds, above which a command is logged;
# the other is the slow log's length. When a new command is logged, the oldest entry is removed.
#
# The time unit below is microseconds, so 1000000 is one second. Note that a negative value disables the slow log, while a value of zero forces the logging of every command.
slowlog-log-slower-than 10000

# There is no hard limit on this length, apart from available memory. You can reclaim the memory used by the slow log with SLOWLOG RESET. (Translator's note: yes, the log lives in memory, alas.)
slowlog-max-len 128
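
# Inspecting the slow log from redis-cli (an illustrative sketch; the
# output depends on what has actually been logged):
#
#   $ redis-cli slowlog get 2     # fetch the two most recent entries
#   $ redis-cli slowlog len       # number of entries currently stored
#   $ redis-cli slowlog reset     # empty the log and reclaim its memory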

################################ Virtual Memory ###############################

### WARNING! Virtual memory is deprecated in Redis 2.4.
### Using virtual memory is strongly discouraged!!

# Virtual memory allows Redis to work with datasets bigger than the available RAM by keeping only the hot part of the data in memory.
# To do so, frequently accessed keys are kept in memory while rarely used keys are swapped to a file on disk, much like the operating system does with memory pages.
#
# To use virtual memory, just set "vm-enabled" to "yes" and set the following three virtual memory parameters as needed.

vm-enabled no
# vm-enabled yes

# This is the path of the swap file. As you can probably guess, swap files cannot be shared among multiple Redis instances, so make sure each instance uses its own swap file.
#
# The best kind of storage for the swap file (which is accessed randomly) is a solid-state drive (SSD).
#
# *** WARNING *** if you are using a shared hosting environment, putting the default swap file under /tmp is not secure.
# Create a directory writable by the Redis user and configure Redis to create the swap file there.
vm-swap-file /tmp/redis.swap

# "Vm-max-memory" configures the maximum amount of memory available for virtual memory.
# If there's room for the swap file, all the excess parts will be placed in the swap file.
#
# "Vm-max-memory" set to 0 indicates that all available memory is used by the system.
# This default value is not good, but you can use all the memory, leaving a little margin will be better.
# For example, set to 60%-80% of the remaining memory.
Vm-max-memory 0

# The Redis swap file is split into data pages.
# A swappable object can be saved over multiple contiguous pages, but one page can belong to only a single object.
# So if your pages are too large, small objects waste a lot of space;
# if the pages are too small, there is less swap space available for storage (given the same number of pages).
#
# If you store many small objects, a page size of 64 or 32 bytes is recommended.
# If you store many large objects, use a larger size.
# If unsure, use the default :)
vm-page-size 32

# The total number of data pages in the swap file.
# Every 8 pages on disk cost 1 byte of RAM for the in-memory page table (the bitmap of used/free pages).
#
# Swap area capacity = vm-page-size * vm-pages
#
# With the default 32-byte page size and 134217728 pages, the Redis swap file will take 4GB on disk and the page table will use 16MB of RAM.
#
# Set the smallest value sufficient for your application; the default below is large for most cases.
vm-pages 134217728

# The number of virtual memory I/O threads that can run at the same time.
# These threads read data from and write data to the swap file, and also handle the encoding/decoding of objects moved between memory and disk.
# More threads can improve throughput to some extent, although I/O is ultimately bounded by the physical device, so more threads will not speed up a single read or write operation.
#
# The special value 0 turns off threaded I/O and enables the blocking virtual memory implementation.
vm-max-threads 4

############################### Advanced Configuration ###############################

# Hashes are encoded in a memory-efficient special way when they have at most a given number of entries and the largest entry does not exceed a given length.
# You can configure these limits with the following options:
hash-max-zipmap-entries 512
hash-max-zipmap-value 64

# Similarly to hashes, lists with few elements can be encoded in a special way to save a lot of space.
# This encoding is only used while the following limits are respected:
list-max-ziplist-entries 512
list-max-ziplist-value 64

# Sets also have a special encoding, used when a set consists entirely of strings that are 64-bit unsigned integers in base 10.
# The following option sets the size limit for using this encoding:
set-max-intset-entries 512

# Similarly to hashes and lists, sorted sets are specially encoded to save a lot of space.
# This encoding is only used when the length and element sizes of the sorted set are within the following limits:
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
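
# The encoding in use for a key can be checked with OBJECT ENCODING.
# An illustrative redis-cli sketch (the key name is made up):
#
#   redis> sadd smallset 1 2 3
#   (integer) 3
#   redis> object encoding smallset
#   "intset"
#   redis> sadd smallset foo
#   (integer) 1
#   redis> object encoding smallset
#   "hashtable"
#
# Adding a non-integer member pushes the set past the special encoding,
# so it falls back to the generic hashtable representation.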

# Active rehashing uses 1 millisecond out of every 100 milliseconds of CPU time to help rehash the main Redis hash table (the top-level key-to-value map).
# The hash table implementation Redis uses (see dict.c) performs lazy rehashing: the more operations you run against a table that is rehashing, the more rehashing steps are performed;
# conversely, if the server is idle the rehashing never completes and the hash table keeps using some extra memory.
#
# The default is to use this millisecond 10 times per second to actively rehash the main dictionaries, freeing memory as soon as possible.
#
# Recommendations:
# If you have hard latency requirements and an occasional 2-millisecond delay on a request is not acceptable, use "activerehashing no".
# If you don't care much about latency and want to free memory as soon as possible, use "activerehashing yes".
activerehashing yes

################################## Includes ###################################

# Include one or more other configuration files here.
# This is useful if you have a standard template shared by all Redis servers but each one also needs a few custom settings.
# Include files can include other files, so use this wisely.
#
# include /path/to/local.conf
# include /path/to/other.conf
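#
# A minimal sketch of how this might be laid out (paths and the port
# value are hypothetical): keep the shared settings in one template and
# put per-server overrides after the include, since for most directives
# the last occurrence wins.
#
# include /etc/redis/common.conf
# port 6380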
