Redis installation configuration and persistence detailed




1. Introduction to Redis

2. Redis installation

3. The Redis configuration file in detail

4. Redis persistence in detail






1. Introduction to Redis



Redis is an open-source (BSD-licensed), in-memory data structure store that can be used as a database, cache, and message broker. It supports data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, HyperLogLogs, and geospatial indexes with radius queries. Redis has built-in replication, Lua scripting, LRU eviction, transactions, and different levels of on-disk persistence, and it provides high availability via Redis Sentinel and automatic partitioning with Redis Cluster.



In general, Redis belongs to the NoSQL family. Because Redis keeps its data in memory, it runs much faster than traditional databases; according to industry benchmarks, a single well-provisioned instance can sustain hundreds of thousands of operations per second.






2. Redis installation



Download the latest stable version from the official website: http://redis.io/



Latest stable version: redis-3.0.7.tar.gz

tar -xf redis-3.0.7.tar.gz
cd redis-3.0.7
make


Create directories and copy required files

mkdir -p /usr/local/redis/{conf,bin}
cp *.conf /usr/local/redis/conf/
cp runtest* /usr/local/redis/
cp mkreleasehdr.sh redis-benchmark redis-check-aof redis-check-dump redis-cli redis-sentinel redis-server redis-trib.rb /usr/local/redis/bin/
Create data file directory

mkdir -pv /data/redis/db


Create log path

mkdir -pv /data/log/redis
3. The Redis configuration file in detail

vim /usr/local/redis/conf/redis.conf

daemonize yes
#Run the redis process as a daemon

pidfile /var/run/redis.pid
#Specify the pid file path

port 6379
#Specify the port on which redis is running

tcp-backlog 511
#Maximum number of pending TCP connections (listen backlog). In a high-concurrency environment you need to increase this value to avoid slow-client connection problems. The Linux kernel silently caps it at the value of /proc/sys/net/core/somaxconn, so raise both values to get the backlog you expect.

# bind 127.0.0.1
#Specify the address redis listens on; by default it listens on all interfaces

# unixsocket /tmp/redis.sock
#Specify the socket file path of redis under Linux

timeout 0
#Number of seconds a client may be idle before it is disconnected; 0 (the default) disables the timeout.

tcp-keepalive 0
#If nonzero, use SO_KEEPALIVE to send TCP ACKs to clients that have gone quiet. A reasonable value is 60 seconds.

loglevel notice
#Log level; one of debug, verbose, notice, warning.

logfile "/data/log/redis/redis.log"
#Specify the log file path

databases 16
#Specify the number of database instances. The default is 16. The default database used is DB 0.

#RDB persistence rules
# -------------------------------
save 900 1
# write to disk if at least 1 key changed within 900 seconds
save 300 10
# write to disk if at least 10 keys changed within 300 seconds
save 60 10000
# write to disk if at least 10,000 keys changed within 60 seconds
# -------------------------------
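The three save points combine as "trigger a background save if any rule matches". A minimal Python sketch of that decision (our simplification for illustration, not Redis source code):

```python
# Save points as (seconds, changes) pairs, mirroring the config above.
SAVE_POINTS = [(900, 1), (300, 10), (60, 10000)]

def should_bgsave(elapsed_seconds, dirty_keys, save_points=SAVE_POINTS):
    """Return True if any configured save point is satisfied, i.e. at
    least `changes` keys changed within the last `seconds` window."""
    return any(elapsed_seconds >= secs and dirty_keys >= changes
               for secs, changes in save_points)
```

For example, 10,000 changes in 61 seconds triggers a save, while the same number of changes in 59 seconds does not yet match any rule.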

stop-writes-on-bgsave-error yes
#By default, if the last background save (BGSAVE) failed, redis stops accepting writes, so that users notice in a hard way that data is not being persisted correctly instead of the failure going unnoticed. Once the background save process succeeds again, redis automatically allows writes. If you have reliable monitoring in place you may not want this behavior; in that case change it to no.

rdbcompression yes
# Whether to compress string objects with LZF when dumping the .rdb file. The default is yes. Set it to no to save CPU in the saving child process, at the cost of a larger dump file.

rdbchecksum yes
#Whether to verify the RDB file

dbfilename dump.rdb
#File name for the RDB dump

dir /data/redis/db
#Directory where the RDB and AOF files are saved

# slaveof <masterip> <masterport>
#Enable master-slave replication by making this instance a slave of the given master

# masterauth <master-password>
#Authentication password required to connect to the master

slave-serve-stale-data yes
# When a slave loses its connection to the master, or while replication is still in progress, the slave can behave in two ways:
# 1) If yes, the slave still responds to client requests, but the returned data may be stale (or empty, if this is the first synchronization)
# 2) If no, the slave returns a "SYNC with master in progress" error for every command except INFO and SLAVEOF.

slave-read-only yes
#Whether the slave is read-only. Since redis 2.6, slaves are read-only by default.

repl-diskless-sync no
# Replication synchronization strategy: disk or socket.
# When a new slave connects, or an old slave reconnects and cannot just receive the differences, a full synchronization is required: a new RDB file is dumped and transferred from the master to the slave. There are two ways to do this:
# 1) Disk-backed: the master forks a child process that dumps the RDB to disk, and the parent process (the main process) then transfers the file to the slaves.
# 2) Diskless (socket-based): the master forks a child process that writes the RDB directly to the slave sockets, without touching the disk.
# With disk-backed replication, once the RDB file has been created it can serve multiple slaves at the same time. With diskless replication, slaves that arrive after a transfer has started must queue for the next one (unless they arrive within repl-diskless-sync-delay).
# When diskless replication is used, the master waits repl-diskless-sync-delay seconds before starting the transfer, so that slaves arriving in that window can be served together.
# With slow disks and a fast network, diskless may work better. (Disk-backed by default.)

repl-diskless-sync-delay 5
#When diskless replication is enabled, the server waits a configurable period before transferring the RDB file to the slaves over the socket. This matters because once the transfer has started, newly arriving slaves cannot join it and must queue for the next RDB transfer; the server therefore waits a while to let more slaves arrive. The delay is in seconds, and the default is 5 seconds. Set it to 0 to disable the wait and start the transfer immediately.

# repl-ping-slave-period 10
#The period, in seconds, at which the slave pings the master; the default is 10.

# repl-timeout 60
#The following sets the replication timeout, which applies to:
# 1) Bulk transfer I/O during synchronization, from the slave's point of view
# 2) Master timeout from the slave's point of view (data, pings)
# 3) Slave timeout from the master's point of view (REPLCONF ACK pings)
#Make sure this value is larger than the configured repl-ping-slave-period; otherwise a timeout will be detected whenever traffic between master and slave is low.

repl-disable-tcp-nodelay no
#Whether to disable TCP_NODELAY on the slave socket after synchronization.
#If yes, redis uses fewer TCP packets and less bandwidth to send data to slaves, at the cost of a small delay in the data appearing on the slave side, up to 40 milliseconds with the default Linux kernel configuration.
#If no, the delay on the slave side is smaller, but replication uses more bandwidth.
#By default we optimize for low latency, but under very high load, or when master and slave are many network hops apart, switching this to yes may be a good idea.


# repl-backlog-size 1mb
#Set the replication backlog size. The backlog is a buffer that accumulates slave data while slaves are disconnected, so that when a slave reconnects, a full resynchronization is usually not needed: a partial resynchronization passing only the data the slave missed while disconnected is enough.
#The larger the backlog, the longer a slave can be disconnected and still be able to perform a partial resynchronization later.
#The backlog is allocated once, as soon as at least one slave is connected.

# repl-backlog-ttl 3600
#After the master has had no connected slaves for some time, the backlog is freed. This option sets how many seconds to wait, counted from the moment the last slave disconnected, before freeing it. A value of 0 means the backlog is never freed.

slave-priority 100
#The slave priority is an integer published in the output of the redis INFO command. When the master stops working properly, redis sentinel uses it to select a slave to promote to master.
#Slaves with lower priority are considered better candidates for promotion: with three slaves of priority 10, 100, and 25, sentinel picks the one with priority 10, the lowest value.
#However, a slave with a priority of 0 can never take on the master role, so it will never be promoted by redis sentinel.
#The default priority is 100.

# min-slaves-to-write 3
# min-slaves-max-lag 10
#The master can stop accepting write requests when fewer than N slaves, each lagging at most M seconds, are connected to it.
#The N slaves must be online.
#The lag, in seconds, must be <= the configured value. It is measured from the last ping received from the slave, which is normally sent once per second.
#This option does not guarantee that N replicas will receive a given write, but it limits the window of lost writes to the configured number of seconds when not enough slaves are available.
#For example, to require at least 3 slaves with a lag under 10 seconds, use the configuration above.
#Setting either value to 0 disables the feature.
#By default, min-slaves-to-write is set to 0 (disabled) and min-slaves-max-lag is set to 10.
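The check described above can be sketched in a few lines of Python (our simplification of the documented behavior, not Redis internals):

```python
def accepts_writes(slave_lags, min_slaves=3, max_lag=10):
    """Decide whether the master accepts a write.

    slave_lags: seconds since each connected slave's last ACK.
    Setting either parameter to 0 disables the feature entirely.
    """
    if min_slaves == 0 or max_lag == 0:
        return True
    # Count only slaves whose lag is within the allowed window.
    good = [lag for lag in slave_lags if lag <= max_lag]
    return len(good) >= min_slaves
```

With the example settings (3 slaves, 10 seconds), three slaves lagging 1-3 seconds allow writes, while a slave lagging 12 seconds no longer counts toward the quorum.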

# requirepass foobared
#Password authentication. A password is not required in most deployments. At the same time, because redis is so fast, an attacker can attempt on the order of 150k passwords per second against it, so if you do set a password, it must be a very strong one.

# rename-command CONFIG ""
# Command renaming. In a shared environment you can rename dangerous commands such as CONFIG; you can also rename a command to an empty string to disable it completely.
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
# rename-command CONFIG ""
# Note that renaming commands that are recorded in the AOF or transmitted to slaves may cause problems.

# maxclients 10000
#Set the maximum number of simultaneous client connections. The default is 10000.

# maxmemory <bytes>
#If redis memory usage exceeds the configured limit, redis starts removing data according to the policy set by maxmemory-policy. If the policy is noeviction, all write commands such as SET and LPUSH are rejected, while read requests are still served. This is mainly useful when using redis as an LRU cache, or when you want a hard memory limit with the noeviction policy. If you have slaves attached, leave some system memory headroom below the limit for the slave output buffers (not needed with the noeviction policy).

# Memory eviction policies:
# volatile-lru -> evict keys with a TTL set, using LRU
# allkeys-lru -> evict any key, using LRU
# volatile-random -> evict random keys with a TTL set
# allkeys-random -> evict random keys
# volatile-ttl -> evict the keys closest to expiring (smallest TTL)
# noeviction -> evict nothing; return an error on writes

# maxmemory-policy noeviction
#Specifies the eviction policy; the default is noeviction.

# maxmemory-samples 5
#Default value 5. The LRU and minimal-TTL policies are not exact algorithms but approximations, so you can tune the number of keys sampled per eviction check.
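The idea behind the sampled approximation: instead of tracking a perfect LRU order over every key, pick a handful of random keys and evict the one idle the longest. A toy Python sketch (names and structure are ours for illustration, not Redis internals):

```python
import random

def pick_eviction_victim(last_access, samples=5, rng=random):
    """Approximate LRU: sample `samples` keys and evict the one with
    the oldest access time. Larger samples approach true LRU.

    last_access: dict mapping key -> last-access timestamp.
    """
    candidates = rng.sample(list(last_access), min(samples, len(last_access)))
    return min(candidates, key=lambda k: last_access[k])  # oldest wins
```

When the sample covers the whole keyspace the result equals exact LRU; with a small sample it is merely likely to pick a "reasonably old" key, which is the trade-off maxmemory-samples controls.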

appendonly no
#Whether AOF mode is enabled

appendfilename "appendonly.aof"
#Specify AOF file name

# appendfsync always
appendfsync everysec
# appendfsync no
#How often fsync() is called to flush data to disk. The options are:
#  always: fsync on every write; safest, but consumes a lot of system resources.
#  everysec: fsync once per second; a good compromise between safety and speed.
#  no: never call fsync and let the operating system decide when to flush; most efficient.

no-appendfsync-on-rewrite no
#Default no. When the AOF fsync policy is set to always or everysec and a background save process is performing heavy I/O, redis may block too long on fsync() calls in some Linux configurations. Setting this to yes skips fsync while a background save is in progress, at the cost of durability.

auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
# Automatic AOF rewrite.
# When the AOF file grows by a given percentage, BGREWRITEAOF is implicitly called.
# How it works: redis remembers the AOF file size after the last rewrite (or the AOF size at startup, if no rewrite has happened since the restart). If the current size has grown by the configured percentage relative to that base size, a rewrite is triggered. The minimum size prevents constantly rewriting a tiny file that keeps doubling.
# Setting the percentage to 0 disables automatic rewriting.
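The trigger condition described above reduces to a simple predicate; here is a hedged Python sketch of it (a simplification, not Redis source):

```python
def should_rewrite_aof(current_size, base_size,
                       percentage=100, min_size=64 * 1024 * 1024):
    """Decide whether an automatic BGREWRITEAOF should fire.

    base_size: AOF size right after the last rewrite (or at startup).
    """
    if percentage == 0:          # percentage 0 disables rewriting
        return False
    if current_size < min_size:  # below min size, never rewrite
        return False
    growth = (current_size - base_size) * 100 / base_size
    return growth >= percentage
```

With the defaults (100% growth, 64mb minimum), a 64 MB base rewrites once the file reaches 128 MB, while a 1 MB file that doubles repeatedly is left alone until it crosses the minimum size.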

aof-load-truncated yes
# The AOF file may be truncated at the end, for example if the system crashed during the last shutdown (especially with an ext4 file system mounted without the data=ordered option). This only happens when the OS crashes; if redis itself dies, the file is not left incomplete. A truncated file is a problem when redis loads it into memory on restart.
# When this happens, redis can either refuse to start with an error, or load as much of the file as possible.
# If aof-load-truncated is yes (the default), redis logs a notice and loads the truncated file. If no, the server refuses to start, and the user must repair the AOF file manually with redis-check-aof.

lua-time-limit 5000
# Maximum execution time for a Lua script, in milliseconds. When a script exceeds the limit, redis logs it and starts returning errors to queries.
# While a script is over the limit, only SCRIPT KILL and SHUTDOWN NOSAVE are available. SCRIPT KILL can stop a script that has not yet issued a write command; once the script has written, only SHUTDOWN NOSAVE can stop it.
# Set to 0 or a negative value for no limit.

# cluster-enabled yes
#Whether cluster mode is enabled. Clustering is supported from version 3.0 onward.

# cluster-config-file nodes-6379.conf
#Specify the cluster configuration file

# cluster-node-timeout 15000
#Redis Cluster uses a quorum plus heartbeat mechanism: each node periodically pings all other nodes. If a node receives no reply within cluster-node-timeout (configurable, in milliseconds), it unilaterally considers the peer node down and marks it PFAIL.


# cluster-slave-validity-factor 10
#If set to 0, a slave will always attempt to fail over its master, no matter how long the two have been out of contact. If set to a positive number, a slave will no longer attempt a failover once the disconnection time exceeds factor * cluster-node-timeout. For example, with a node timeout of 5 seconds and this factor set to 10, a slave that has lost contact with its master for more than 50 seconds will not try to replace it. Note: any nonzero value can leave a failed master with no slave able to fail over, making the cluster unavailable; in that case the cluster can only resume work when the original master rejoins.

# cluster-migration-barrier 1
#The minimum number of slaves a master must keep. When some master ends up with no working slaves, masters with more than this number of slaves can automatically migrate one of theirs to it.

# cluster-require-full-coverage yes
#If set to yes (the default), the cluster stops serving queries when some percentage of the key space is uncovered (that is, some hash slots have no live node, possibly temporarily). If set to no, the cluster keeps answering queries for the keys that are still covered, even though part of the key space cannot be reached.

slowlog-log-slower-than 10000
#Log queries slower than this threshold; a slow query blocks the thread and cannot serve other requests. Two parameters control the slow log: the first is the threshold duration in microseconds (one millionth of a second!), the second is the maximum log length, beyond which the oldest entries are dropped.
# 1,000,000 microseconds is one second; a negative value logs every request. The value below, 10000, is 10 milliseconds (0.01 s).

slowlog-max-len 128
# Maximum length of the slow log. There is no hard limit, but the log consumes memory.

latency-monitor-threshold 0
# The latency monitor records time-consuming operations of a running redis instance; the LATENCY command prints reports and graphs from the collected data.
# Only operations taking at least the configured number of milliseconds are recorded. 0 turns monitoring off. It can be enabled at runtime with CONFIG SET latency-monitor-threshold <milliseconds>.

notify-keyspace-events ""
# Keyspace notifications let pub/sub clients be told about changes to the key space. See http://redis.io/topics/notifications
# For example, with the feature enabled, a client performing a DEL on key "foo" in database 0 triggers two messages via pub/sub:
# PUBLISH __keyspace@0__:foo del
# PUBLISH __keyevent@0__:del foo
# Most users do not need this feature, and it carries some overhead, so it is off by default.
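The two published messages follow a fixed pattern; this small helper (ours, purely for illustration) builds the channel/payload pairs for an event on a given key and database:

```python
def keyspace_messages(db, key, event):
    """Return the (channel, payload) pairs Redis publishes for an event:
    a key-space notification (channel carries the key, payload the event)
    and a key-event notification (channel carries the event, payload the key)."""
    return [
        (f"__keyspace@{db}__:{key}", event),
        (f"__keyevent@{db}__:{event}", key),
    ]
```

For the DEL-on-"foo" example this yields `__keyspace@0__:foo` with payload `del` and `__keyevent@0__:del` with payload `foo`, matching the two PUBLISH lines above.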

hash-max-ziplist-entries 512
hash-max-ziplist-value 64
# Hashes are stored in a compact array-like (ziplist) encoding while small; above these thresholds they switch to a real hash table, which stores extra structure information.

list-max-ziplist-entries 512
list-max-ziplist-value 64
# Similar to hashes: lists that stay within these limits are also stored in a special space-saving encoding.

set-max-intset-entries 512
# The default value is 512. When the data in the set type are all numeric types and the number of integer elements in the set does not exceed the specified value, a special encoding is used.

zset-max-ziplist-entries 128
zset-max-ziplist-value 64
#Similar to hash and list.
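The four thresholds above all control the same kind of decision; a rough Python sketch of the hash case (simplified; real Redis re-checks this on every insert):

```python
def hash_uses_ziplist(num_entries, max_value_len,
                      max_entries=512, max_value=64):
    """A hash stays in the compact ziplist encoding only while BOTH
    limits hold: few enough entries AND every value short enough.
    Defaults mirror hash-max-ziplist-entries/-value above."""
    return num_entries <= max_entries and max_value_len <= max_value
```

Once either limit is crossed the structure is converted to the full encoding, and it never converts back, which is why these limits are worth tuning before loading data.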

hll-sparse-max-bytes 3000
#Size limit, in bytes, for the sparse representation of HyperLogLogs. Values greater than 16000 are pointless, since the dense representation is more compact at that point; around 10000 can work when CPU is plentiful. The default is 3000.

activerehashing yes
# Active rehashing: redis spends a little CPU time proactively rehashing its main hash tables, which frees memory sooner. Operations on a table being rehashed also perform rehashing steps, but if redis is mostly idle, without active rehashing the rehash may never finish and more memory stays in use.
# The default is yes; keep it if you want memory released as soon as possible.

client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
# Client output buffer limits can be used to forcibly disconnect clients that read data too slowly (for example, pub/sub clients that cannot consume messages as fast as the publisher produces them).
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
# class can be the following:
#
# normal-> normal clients including MONITOR clients
# slave-> slave clients
# pubsub-> clients subscribed to at least one pubsub channel or pattern
# When the hard limit is reached, the client is closed immediately. When the soft limit is reached, the client is closed only if it stays over that limit for soft seconds.
# For example, with a hard limit of 32mb and a soft limit of 16mb/10secs, the client is closed immediately at 32mb, or after staying above 16mb for 10 seconds.
# Set all three values to 0 to disable the limit for a class.
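The hard/soft rule can be sketched as a small decision function (our simplification of the documented behavior; the defaults below are the 32mb/16mb/10s example, not Redis defaults):

```python
def should_close_client(buf_bytes, secs_over_soft,
                        hard=32 * 2**20, soft=16 * 2**20, soft_seconds=10):
    """Decide whether to disconnect a client based on its output buffer.

    buf_bytes: current output buffer size.
    secs_over_soft: how long the buffer has continuously exceeded `soft`.
    """
    if hard and buf_bytes >= hard:
        return True                            # hard limit: close at once
    if soft and buf_bytes >= soft:
        return secs_over_soft >= soft_seconds  # soft limit: close after grace
    return False
```

A buffer at 33 MB closes immediately; one hovering at 17 MB is tolerated for up to 10 seconds before being closed.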

hz 10
# Frequency of redis's internal housekeeping tasks (closing timed-out clients, expiring keys, and so on). The higher the value, the more often these tasks run; values above 100 put a lot of pressure on the CPU unless you have strict real-time requirements. Valid range is 1 to 500.

aof-rewrite-incremental-fsync yes
# When a child process rewrites the AOF file, if this option is yes, the file is fsync()ed every 32MB of data written. This commits the file to disk incrementally and avoids large I/O spikes.
3.1. Configuration file example

daemonize no
pidfile /var/run/redis.pid
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 0
loglevel notice
logfile "/data/log/redis/redis.log"
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /data/redis/db
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
appendonly yes
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 1024mb
aof-load-truncated yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
3.2. Start the service

# /usr/local/redis/bin/redis-server /usr/local/redis/conf/redis.conf


Check that the service is running

# ps -ef | grep redis
root 24596 23241 0 14:52 pts/1 00:00:00 /usr/local/redis/bin/redis-server *:6379
502 24646 21849 0 14:52 pts/2 00:00:00 grep redis
4. Persistence

4.1. The two persistence mechanisms

Redis provides several different levels of persistence:

(1) RDB persistence can generate a point-in-time snapshot of a data set within a specified time interval.

(2) AOF persistently records all write operation commands executed by the server, and restores the data set by re-executing these commands when the server starts. All commands in the AOF file are saved in the Redis protocol format, and new commands are appended to the end of the file. Redis can also rewrite AOF files in the background so that the size of the AOF file does not exceed the actual size required to save the state of the dataset.

(3) Redis can also use AOF persistence and RDB persistence at the same time. In this case, when Redis restarts, the AOF file takes precedence for restoring the data set, because the data set saved in the AOF file is usually more complete than the one saved in the RDB file.

(4) You can even turn off the persistence function, so that the data only exists when the server is running.

It is very important to understand the similarities and differences between RDB persistence and AOF persistence. The following sections introduce these two persistence mechanisms in detail and explain how they compare.



4.2. Pros and cons of RDB

4.2.1. Advantages of RDB

(1) RDB is a very compact file that captures the Redis data set at a point in time. Such files are perfect for backups: for instance, you can archive an RDB file every hour for the latest 24 hours, and one every day for a month. This lets you restore the data set to any of those versions if you run into problems.

(2) RDB is very good for disaster recovery: it is a single compact file that can be transferred (after encryption) to a remote data center or to Amazon S3.

(3) RDB maximizes Redis performance: the only thing the parent process has to do when saving is fork a child process; the child then handles all the persistence work, and the parent never performs any disk I/O for it.

(4) The recovery speed of RDB is faster than the recovery speed of AOF.



4.2.2. Disadvantages of RDB

(1) RDB is not a good fit if you need to minimize data loss when the server fails. Although Redis lets you configure different save points to control how often RDB files are produced, RDB has to snapshot the entire data set, which is not a cheap operation, so you will typically take snapshots at most every 5 minutes or so. If Redis stops working unexpectedly, you can therefore lose the last several minutes of data.



(2) Each RDB save requires Redis to fork() a child process that performs the actual persistence work. When the data set is large, fork() can be very time-consuming and may cause the server to stop serving clients for some milliseconds, or even a full second if the data set is very large and CPU time is tight. AOF rewriting also requires fork(), but you can tune how often rewrites happen without trading away any data durability.



4.3. Advantages and disadvantages of AOF

4.3.1. Advantages of AOF

(1) Using AOF persistence makes Redis much more durable: you can choose among different fsync policies, such as no fsync, fsync every second, or fsync on every write command. With the default policy of fsync once per second, Redis still performs well, and even in a crash you lose at most one second of data (fsync runs in a background thread, so the main thread keeps processing command requests).

(2) The AOF file is an append-only log, so writes to it require no seeks. Even if the log ends with a half-written command (for example, the disk filled up or the machine crashed mid-write), the redis-check-aof tool can easily fix it.

(3) Redis can automatically rewrite the AOF in the background when the file grows too large: the rewritten AOF contains the minimal set of commands needed to rebuild the current data set. The whole rewrite operation is absolutely safe, because Redis keeps appending commands to the existing AOF file while creating the new one, so the existing file is never lost even if the rewrite crashes. Once the new AOF file is ready, Redis switches from the old file to the new one and starts appending to it.

(4) The AOF file stores all write operations performed on the database in order, saved in the Redis protocol format, so its content is easy for humans to read and easy to parse. Exporting data from an AOF file is simple too: if you accidentally run the FLUSHALL command, then as long as the AOF has not been rewritten yet, you can stop the server, remove the final FLUSHALL command from the end of the AOF file, and restart Redis to bring the data set back to its state before FLUSHALL was executed.



4.3.2. Disadvantages of AOF

(1) For the same data set, the AOF file is usually larger than the RDB file.

(2) Depending on the fsync policy used, AOF may be slower than RDB. In general, performance with fsync every second is still very high, and turning fsync off makes AOF as fast as RDB even under high load. However, under huge write loads, RDB provides more predictable maximum-latency guarantees.

(3) AOF has had bugs in the past where, because of specific commands, reloading the AOF file did not restore the data set exactly as it was when saved. (The blocking command BRPOPLPUSH once caused such a bug, for example.) Tests for this case have been added to the test suite: they automatically generate random, complex data sets and verify that everything reloads correctly. Such bugs are rare in AOF, but by contrast they are essentially impossible with RDB.



Summary: Which to use depends on your workload and on whether data safety or raw performance matters more. If safety matters, it is recommended to use both together.

This article comes from the "Deep Breath and Attack" blog, please keep this source http://ckl893.blog.51cto.com/8827818/1770766
