redis.conf Chinese version (based on Redis 2.4)


# Redis sample configuration file

# Note on units: when a memory size needs to be set, it can be specified in the usual forms, e.g. 1k, 5GB, 4M:
#
# 1k => 1000 bytes
# 1kb => 1024 bytes
# 1m => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes
#
# Units are case insensitive, so 1GB, 1Gb, and 1gB are all the same.
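
For example, under these rules the two settings below are equivalent ways of writing the same limit (the maxmemory directive itself is described later in this file):

    maxmemory 100mb        # 100 * 1024 * 1024 bytes
    maxmemory 104857600    # the same amount, written out in bytes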

# Redis does not run as a daemon by default. Set this to "yes" if you need it to run as one.
# Note: when daemonized, Redis writes the process ID to /var/run/redis.pid.
daemonize no

# When running as a daemon, Redis writes the process ID to /var/run/redis.pid by default. You can change the path here.
pidfile /var/run/redis.pid

# The port to accept connections on. The default is 6379.
# If the port is set to 0, Redis will not listen on a TCP socket.
port 6379

# You can bind Redis to a single interface; if this is not set, it listens on all interfaces.
#
# bind 127.0.0.1

# Specify the path of the unix socket to listen on. There is no default, so Redis will not listen on a unix socket unless you specify one.
#
# unixsocket /tmp/redis.sock
# unixsocketperm 755

# Close a connection after a client has been idle for this many seconds (0 disables the timeout, so idle connections are never closed).
timeout 0

# Set the server verbosity level.
# Possible values:
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful messages, but not a mess like the debug level)
# notice (moderately verbose, probably what you want in production)
# warning (only very important/critical messages are logged)
loglevel verbose

# Specify the log file name. You can also use "stdout" to force Redis to log to the standard output.
# Note: if Redis runs as a daemon and you log to standard output, the logs will be sent to /dev/null.
logfile stdout

# To log to the system logger, just set "syslog-enabled" to "yes",
# and optionally set the other syslog parameters as needed.
# syslog-enabled no

# Specify the syslog identity.
# syslog-ident redis

# Specify the syslog facility. It must be USER or one of LOCAL0-LOCAL7.
# syslog-facility local0

# Set the number of databases. The default database is DB 0. You can select a different database on a per-connection basis using SELECT <dbid>, where dbid is a number between 0 and 'databases'-1.
databases 16
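
A minimal redis-cli session illustrating per-connection database selection (the key name is an arbitrary example):

    $ redis-cli
    redis 127.0.0.1:6379> SET greeting hello
    OK
    redis 127.0.0.1:6379> SELECT 1
    OK
    redis 127.0.0.1:6379[1]> GET greeting
    (nil)

Each numbered database is a separate key space, so a key written to DB 0 is not visible from DB 1.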

################################# Snapshots ##################################

#
# Save the database to disk:
#
# save <seconds> <changes>
#
# The database will be written to disk if both the given number of seconds has elapsed and at least the given number of changes has occurred.
#
# The examples below will write data to disk:
# after 900 seconds (15 minutes), if at least 1 key changed
# after 300 seconds (5 minutes), if at least 10 keys changed
# after 60 seconds, if at least 10000 keys changed
#
# Note: if you do not want to write to disk at all, just comment out all the "save" lines.

save 900 1
save 300 10
save 60 10000
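
Besides these automatic save points, a snapshot can be triggered by hand from redis-cli; a quick sketch (the timestamp shown is illustrative):

    $ redis-cli BGSAVE        # fork a child that writes dump.rdb in the background
    Background saving started
    $ redis-cli LASTSAVE      # UNIX time of the last successful save
    (integer) 1325372362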

# Whether to compress string objects with LZF when dumping to the .rdb database.
# The default is "yes", as it is almost always a win.
# If you want to save CPU, you can set this to "no", but the data file will be larger if you have compressible values or keys.
rdbcompression yes

# Database file name
dbfilename dump.rdb

# Working directory
#
# The database will be written to this directory, with the file name given by "dbfilename" above.
#
# The append-only file will also be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
dir ./

############################### Synchronization ##############################

#
# Master-slave synchronization. Use slaveof to make a Redis instance a copy of another Redis server.
# Note: the data is copied from the remote master to the local slave, so the slave can use a different database file name, bind a different IP address, and listen on a different port.
#
# slaveof <masterip> <masterport>

# If the master has a password set (using the "requirepass" option below), the slave must authenticate before starting synchronization; otherwise its synchronization requests will be refused.
#
# masterauth <master-password>
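
Putting the two directives together, a minimal slave-side configuration might look like this (the address and password are hypothetical placeholders):

    slaveof 192.168.1.100 6379
    masterauth mysecretpassword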

# When a slave loses its connection to the master, or while synchronization is still in progress, the slave can act in two ways:
#
# 1) If slave-serve-stale-data is set to "yes" (the default), the slave keeps answering client requests, possibly with stale data, or with empty data if this is the first synchronization.
# 2) If slave-serve-stale-data is set to "no", the slave replies with the error "SYNC with master in progress" to every request except the INFO and SLAVEOF commands.
#
slave-serve-stale-data yes

# Slaves send PINGs to the master at the specified interval.
# The interval can be changed with repl_ping_slave_period.
# The default is 10 seconds.
#
# repl-ping-slave-period 10

# The option below sets the timeout for bulk data I/O, data requests to the master, and PING responses.
# The default is 60 seconds.
#
# Make sure this value is greater than repl-ping-slave-period, otherwise a timeout will be detected between master and slave even when traffic is low.
#
# repl-timeout 60

################################## Security ##################################

# Require clients to authenticate with a password before processing any command.
# This is useful in environments where you cannot trust others with access to the host running Redis.
#
# For backward compatibility this should stay commented out, and most people do not need authentication (e.g. they run their own servers).
#
# Warning: since Redis is so fast, an outside attacker can try a huge number of passwords per second, so this needs to be a strong password, otherwise it will be very easy to crack.
#
# requirepass foobared
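
With requirepass enabled, a connection must authenticate before issuing other commands; a sketch using the example password above:

    $ redis-cli
    redis 127.0.0.1:6379> PING
    (error) ERR operation not permitted
    redis 127.0.0.1:6379> AUTH foobared
    OK
    redis 127.0.0.1:6379> PING
    PONG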

# Command renaming
#
# In a shared environment you can change the names of dangerous commands. For example, you can rename CONFIG to something hard to guess, so that you can still use it yourself while others cannot do harmful things.
#
# Example:
#
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
#
# You can even disable a command completely by renaming it to an empty string:
#
# rename-command CONFIG ""
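
After the rename in the example above, the original name stops working and the long name must be used instead; a sketch of the resulting behavior:

    $ redis-cli CONFIG GET maxmemory
    (error) ERR unknown command 'CONFIG'
    $ redis-cli b840fc02d524045429941cc15f59e41cb7be6c52 GET maxmemory
    1) "maxmemory"
    2) "0"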

################################# Restrictions ###############################

#
# Set the maximum number of clients connected at the same time.
# There is no limit by default; it is bound by the number of file descriptors the Redis process can open.
# The special value "0" means no limit.
# Once the limit is reached, Redis closes all new connections, sending the error "max number of clients reached".
#
# maxclients 128

# Do not use more memory than the configured limit. Once the limit is reached, Redis removes keys according to the selected eviction policy (see maxmemory-policy).
#
# If Redis cannot remove keys under the chosen policy, or if the policy is set to "noeviction", Redis replies with errors to commands that would use more memory, such as SET and LPUSH, but keeps answering read-only commands such as GET normally.
#
# This option is useful when using Redis as an LRU cache, or when setting a hard memory limit for an instance (with the "noeviction" policy).
#
# Warning: when slaves are attached to an instance that has reached its memory limit, the memory needed for the output buffers that feed the slaves is not counted in the memory usage.
# That way, evicting a key does not trigger a loop where network problems/re-synchronization events fill the slaves' output buffers with a stream of delete commands, causing further evictions until the database is completely empty.
#
# In short, if you have slaves attached to a master, it is recommended that you set the master's memory limit a bit lower, so that some system memory is left free for the slave output buffers.
# (This is not needed if the policy is "noeviction".)
#
# maxmemory <bytes>

# Memory policy: how Redis removes keys when the memory limit is reached. You can choose among the following five policies:
#
# volatile-lru -> remove a key among those with an expire set, using the LRU algorithm
# allkeys-lru -> remove any key according to the LRU algorithm
# volatile-random -> remove a random key among those with an expire set
# allkeys-random -> remove a random key, any key at all
# volatile-ttl -> remove the key with the nearest expire time (minor TTL)
# noeviction -> do not remove anything at all, just return an error on write operations
#
# Note: with all the policies, if Redis cannot find a suitable key to remove, it returns an error on write operations.
#
# The write commands involved are: set setnx setex append
# incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
# sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
# zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
# getset mset msetnx exec sort
#
# The default is:
#
# maxmemory-policy volatile-lru
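
Both the memory limit and the eviction policy can also be changed at runtime through CONFIG SET, without restarting the server (the values here are illustrative; the limit is given in plain bytes):

    $ redis-cli CONFIG SET maxmemory 104857600
    OK
    $ redis-cli CONFIG SET maxmemory-policy allkeys-lru
    OK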

# The implementations of the LRU and minimal-TTL algorithms are not exact, but close approximations (to save memory), so you can tune them for speed or accuracy.
# For example, by default Redis checks three keys as samples and removes the one used least recently; you can change the sample size with the following directive.
#
# maxmemory-samples 3

############################### Append-only mode #############################

# By default Redis dumps data to disk asynchronously, so the latest data may be lost if Redis crashes.
# If you cannot afford to lose a single write, use append-only mode: once enabled, Redis appends every write it receives to the appendonly.aof file.
# Redis reads this file back into memory every time it starts.
#
# Note: the asynchronously dumped database file and the append-only file can coexist (though you can comment out all the "save" lines above if you wish to disable the snapshotting mechanism).
# If append-only mode is enabled, Redis loads the log file at startup and ignores the dumped dump.rdb file.
#
# Important: see BGREWRITEAOF to learn how to rewrite the log file in the background when it gets too big.
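
The rewrite mentioned above can also be triggered by hand at any moment:

    $ redis-cli BGREWRITEAOF
    Background append only file rewriting started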

appendonly no

# Append-only file name (default: "appendonly.aof")
# appendfilename appendonly.aof

# fsync() asks the operating system to actually write the data to disk right away, without waiting.
# Some operating systems really flush the data to disk immediately; some others just try to do so as soon as they can.
#
# Redis supports three different modes:
#
# no: do not fsync, just let the operating system flush the data when it wants. Fastest.
# always: fsync after every write to the append-only log. Slow, but safest.
# everysec: fsync only once per second. A compromise.
#
# The default "everysec" is usually a good balance between speed and data safety.
# If you really understand what this implies, you can set it to "no" for better performance (if data is lost, you are left with a possibly stale snapshot);
# or, on the contrary, choose "always" to sacrifice speed for data safety and integrity.
#
# If unsure, use "everysec".

# appendfsync always
appendfsync everysec
# appendfsync no
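
The fsync policy can normally be switched at runtime as well; a sketch, assuming your build accepts this parameter through CONFIG SET:

    $ redis-cli CONFIG SET appendfsync no
    OK
    $ redis-cli CONFIG GET appendfsync
    1) "appendfsync"
    2) "no"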

# When the AOF fsync policy is set to "always" or "everysec", a background saving process (a background save, or an AOF log background rewrite) performing a lot of disk I/O may, under some Linux configurations, cause Redis to block too long on the fsync() call.
# Note that this has not been properly fixed yet: even an fsync() performed by a different thread will block our synchronous write(2) call.
#
# The following option alleviates this problem: it prevents fsync() from being called in the main process while a BGSAVE or BGREWRITEAOF is in progress.
#
# This means that while another child is saving, Redis is effectively in a "non-synchronized" state.
# In practical terms, up to 30 seconds of log data may be lost in the worst case (with default Linux settings).
#
# If you have latency problems, set this to "yes"; otherwise keep it at "no", which is the safest choice for persisting data.
no-appendfsync-on-rewrite no

# Automatic rewriting of the AOF file
#
# Redis can automatically rewrite the AOF log file, implicitly calling BGREWRITEAOF, when its size grows by the specified percentage.
#
# How it works: Redis remembers the size of the AOF file after the last rewrite (or, if no rewrite has happened since restart, the size of the AOF at startup),
# and compares this base size with the current size. If the current size exceeds the base by the given percentage, the rewrite is triggered.
#
# You also need to specify a minimum size for the log to be rewritten, which avoids rewriting it when the percentage is reached but the file is still quite small.
#
# A percentage of 0 disables the automatic AOF rewrite feature.

auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
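
A worked example under the defaults above: if the AOF measured 80 MB after the last rewrite, 100% growth means the next automatic rewrite fires once the file reaches 160 MB. With a base of 20 MB, 100% growth would be reached at 40 MB, but since that is below the 64 MB minimum, no rewrite would be triggered.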

################################ Slow query log ##############################

# The Redis slow query log records queries that exceeded a specified execution time. The execution time does not include the I/O operations,
# such as talking with the client or sending the reply, but only the time actually needed to execute the command (this is the only stage during which the command-executing thread is blocked and cannot serve other requests at the same time).
#
# You can configure the slow query log with two parameters: one is the threshold, in microseconds; commands that take longer than this are logged.
# The other is the maximum length of the slow query log; when a new command is logged and the log is full, the oldest record is removed.
#
# The time unit below is microseconds, so 1000000 is one second. Note that a negative value disables the slow query log, while 0 forces the logging of every command.
slowlog-log-slower-than 10000

# There is no limit on this length, just be aware that it consumes memory. You can reclaim the memory used by the slow query log with SLOWLOG RESET. (Note: the log is kept in memory.)
slowlog-max-len 128
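
The log is inspected and cleared with the SLOWLOG command; a short redis-cli sketch (entry contents are illustrative):

    $ redis-cli SLOWLOG LEN          # number of entries currently in the log
    (integer) 1
    $ redis-cli SLOWLOG GET 1        # fetch the most recent entry
    1) 1) (integer) 12               # unique entry id
       2) (integer) 1325372362      # UNIX timestamp of the query
       3) (integer) 21543           # execution time in microseconds
       4) 1) "SORT"                 # the command, with its arguments
          2) "mylist"
    $ redis-cli SLOWLOG RESET        # empty the log and reclaim its memory
    OK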

################################ Virtual memory ##############################

### Warning! Virtual memory is deprecated in Redis 2.4.
### The use of virtual memory is strongly discouraged!!

# Virtual memory lets Redis work with datasets larger than what fits in memory.
# To do this, frequently used keys are kept in memory, while rarely used keys are swapped out to a swap file, much like what the operating system does with memory pages.
#
# To use virtual memory, set "vm-enabled" to "yes" and set the virtual memory parameters below as needed.

vm-enabled no
# vm-enabled yes

# This is the path of the swap file. As you might guess, the swap file cannot be shared by multiple Redis instances, so make sure each Redis instance uses its own swap file.
#
# The best medium for the swap file is a solid-state disk (fast random access).
#
# *** Warning *** if you are using a shared host, putting the default swap file under /tmp is not safe.
# Create a directory writable by the Redis user and configure Redis to create the swap file there.
vm-swap-file /tmp/redis.swap

# "Vm-max-memory": configure the maximum available memory capacity of the virtual memory.
# If the swap file still has space, all excess parts will be placed in the swap file.
#
# If "vm-max-memory" is set to 0, the system will use all available memory.
# The default value is slightly different. It only means that you can use all the memory that you can use, and the remaining margin will be better.
# For example, set it to 60%-80% of the remaining memory.
Vm-max-memory 0

# Redis swap files are split into data pages.
# A stored object can span several contiguous pages, but a data page cannot be shared by several objects.
# So if your data pages are too big, small objects waste a lot of space.
# If the data pages are too small, there is less swap space available for storage (assuming the same number of data pages).
#
# If you store many small objects, a page size of 64 or 32 bytes is recommended.
# If you store many large objects, use a larger size.
# If unsure, use the default :)
vm-page-size 32

# Total number of data pages in the swap file.
# For the in-memory page table (the map of used/free data pages), every 8 data pages on disk consume 1 byte of memory.
#
# Swap area capacity = vm-page-size * vm-pages
#
# With the default 32-byte data page size and 134217728 data pages, the Redis swap file takes up 4 GB on disk, and the page table consumes 16 MB of memory.
#
# Set this to the smallest value that is sufficient for your application. The default value below is large in most cases.
vm-pages 134217728

# Number of virtual memory I/O threads that can run at the same time.
# These threads read and write data in the swap file and handle the data exchange and encoding/decoding between memory and disk.
# More threads can improve processing efficiency to some extent, but since I/O is bound by the physical device, more threads will not make a single read or write operation any faster.
#
# The special value 0 disables thread-level I/O and enables the blocking virtual memory mechanism.
vm-max-threads 4

############################ Advanced configuration ##########################

# Hashes are encoded in a special memory-saving way when they have a small number of entries and the biggest entry does not exceed a given threshold (above those limits, a real hash table, which uses more memory, is used).
# You can set these limits with the following options:
hash-max-zipmap-entries 512
hash-max-zipmap-value 64

# Similarly to hashes, lists with few elements can be encoded in a special way to save a lot of space.
# This special encoding is only used when the list stays within the following limits:
list-max-ziplist-entries 512
list-max-ziplist-value 64

# Sets also have a special encoding, used in just one case: when a set consists entirely of strings that are base-10 64-bit unsigned integers.
# The following option sets the maximum set size for using this special memory-saving encoding.
set-max-intset-entries 512

# Similarly to the cases above, sorted sets can also be specially encoded to save a lot of space.
# This encoding is only used for sorted sets whose length and elements stay within the following limits:
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
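
Whether a given key is currently using one of these compact encodings can be checked with OBJECT ENCODING; a sketch (key names are arbitrary):

    $ redis-cli RPUSH smalllist a b c
    (integer) 3
    $ redis-cli OBJECT ENCODING smalllist      # within the ziplist limits above
    "ziplist"
    $ redis-cli SADD smallset 1 2 3
    (integer) 3
    $ redis-cli OBJECT ENCODING smallset       # all elements are integers
    "intset"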

# Active rehashing uses 1 millisecond out of every 100 milliseconds of CPU time to rehash the main Redis hash table (the top-level key-value mapping).
# The hash table implementation Redis uses (see dict.c) performs lazy rehashing: the more operations you run on a rehashing hash table, the more rehashing steps are performed;
# so if the server is very inactive, the rehashing never completes and the hash table keeps using some extra memory.
#
# By default, active rehashing runs 10 times per second, rehashing the main dictionaries and freeing memory as soon as possible.
#
# Suggestion:
# If you have hard latency requirements, use "activerehashing no"; otherwise Redis may occasionally answer requests with a 2-millisecond delay.
# If you do not have hard latency requirements and want to free memory as soon as possible, use "activerehashing yes".
activerehashing yes

################################### Includes #################################

# Include one or more other configuration files here.
# This is useful if you have a standard configuration template but need personalized settings for each Redis server.
# The include feature lets you pull in other configuration files, so make good use of it.
#
# include /path/to/local.conf
# include /path/to/other.conf
