Redis deletion mechanism, persistence, and Master/Slave replication


Redis is mainly used for two things:

Performance
SQL queries that take a long time to run and whose results do not change often are especially good candidates for caching the result. Subsequent requests are then served from the cache, so they can be answered quickly.

 

Concurrency
Under high concurrency, if every request hits the database directly, the database runs into connection errors. Redis is used as a buffer in front of it: requests go to Redis first instead of straight to the database.
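
To make the pattern concrete, here is a minimal cache-aside sketch using the redis-py client. The query_db() function, the report:<id> key scheme, and the 5-minute TTL are hypothetical placeholders for your own slow, rarely-changing query.

import json
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

def query_db(report_id):
    # Stand-in for a slow SQL query whose result rarely changes (assumption).
    return {"report_id": report_id, "rows": []}

def get_report(report_id):
    key = f"report:{report_id}"           # hypothetical key naming scheme
    cached = r.get(key)
    if cached is not None:                # cache hit: serve straight from Redis
        return json.loads(cached)
    data = query_db(report_id)            # cache miss: fall back to the database
    r.set(key, json.dumps(data), ex=300)  # cache the result for 5 minutes
    return data

Under load, most requests are answered from the cache and only the occasional miss touches the database.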

 

What are the disadvantages of using redis?

Analysis: if you have used Redis for any length of time, you must understand this question; these are the problems you will sooner or later run into in practice.
Answer: four problems
(1) Consistency of cache and database double-writes
(2) Cache avalanche
(3) Cache breakdown
(4) Concurrent competition for cache keys

 

 

 

Redis expiration policy and memory eviction mechanism

Analysis: this question really matters, because it shows whether you know Redis inside out. For example, if your Redis instance can only hold 5 GB of data but you keep writing 10 GB, 5 GB of data has to be removed. How is it removed? Have you thought about that? Also, you have set expiration times on your data, yet memory usage is still high. Have you thought about why? Let me explain:

Redis adopts a periodic deletion + lazy deletion policy.

 

Why not use scheduled deletion?

Scheduled deletion: a timer watches each key and deletes it automatically the moment it expires. Although memory is freed promptly, this eats a lot of CPU. Under heavy concurrency the CPU should spend its time serving requests, not deleting keys, so this policy is not used.

 

How do periodic deletion + lazy deletion work?

Periodic deletion: by default, Redis checks for expired keys every 100 ms and deletes the ones it finds. Note that Redis does not scan all keys every 100 ms; it randomly samples keys to check (if it scanned every key every 100 ms, Redis would grind to a halt). So if you relied on the periodic policy alone, many keys would still be sitting there after their expiration time.

In other words, periodic deletion misses keys, which is where lazy deletion comes in.

Lazy deletion: when you read a key that has an expiration time set, Redis checks whether it has already expired; if it has, the key is deleted right then.
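
To see how the two strategies fit together, here is a toy in-memory model written in Python; it is only an illustration of the idea described above, not Redis's actual implementation (which is written in C and samples from its internal expires dictionary).

import random
import time

store = {}    # key -> value
expires = {}  # key -> absolute expiration timestamp

def set_key(key, value, ttl=None):
    store[key] = value
    if ttl is not None:
        expires[key] = time.time() + ttl

def get_key(key):
    # Lazy deletion: the expiration check happens only when the key is read.
    if key in expires and expires[key] <= time.time():
        store.pop(key, None)
        expires.pop(key, None)
        return None
    return store.get(key)

def periodic_sweep(sample_size=20):
    # Periodic deletion: randomly sample a few keys that carry a TTL and
    # remove the expired ones, instead of scanning every key each time.
    now = time.time()
    for key in random.sample(list(expires), min(sample_size, len(expires))):
        if expires[key] <= now:
            store.pop(key, None)
            expires.pop(key, None)

A key that the sweep never happens to sample, and that no client ever reads again, stays in memory; that is exactly the gap the eviction mechanism below closes.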

Is everything fine once periodic deletion and lazy deletion are both in place?

No. If the periodic deletion misses a key and you never request that key again, lazy deletion never fires either. Redis memory keeps climbing, which is why the memory eviction mechanism is also needed.

There is one line of configuration for this in redis.conf:

# maxmemory-policy volatile-lru

This line configures the memory eviction policy (what, you haven't configured it? Go reflect on that).
1) noeviction: when memory cannot hold newly written data, new writes report an error. Hardly anyone should use this.
2) allkeys-lru: when memory cannot hold newly written data, remove the least recently used key from the whole key space. Recommended; this is what our projects currently use.
3) allkeys-random: when memory cannot hold newly written data, remove a random key from the whole key space. Hardly anyone should use this; if you are not going to evict the least recently used keys, why evict at random?
4) volatile-lru: when memory cannot hold newly written data, remove the least recently used key from among the keys that have an expiration time. This only makes sense when Redis serves as both a cache and persistent storage. Not recommended.
5) volatile-random: when memory cannot hold newly written data, remove a random key from among the keys that have an expiration time. Not recommended.
6) volatile-ttl: when memory cannot hold newly written data, remove the key with the nearest expiration time from among the keys that have an expiration time. Not recommended.
PS: if no keys have an expiration time set, the prerequisite is not met, and volatile-lru, volatile-random and volatile-ttl behave essentially the same as noeviction (delete nothing).
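
As a quick sketch (assuming a local instance with no password), the memory limit and the policy can be inspected and changed at runtime through the redis-py client; note that CONFIG SET only affects the running instance and is not written back to redis.conf.

import redis

r = redis.Redis(host="localhost", port=6379, db=0)

# Inspect the current memory limit and eviction policy.
print(r.config_get("maxmemory"))
print(r.config_get("maxmemory-policy"))

# Cap memory at 100 MB and switch to allkeys-lru for the running instance.
r.config_set("maxmemory", "100mb")
r.config_set("maxmemory-policy", "allkeys-lru")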

 

 

 

Redis persistence principle and configuration details

Much of Redis's power comes from keeping all its data in memory. To ensure data is not lost after a restart, the data has to be persisted from memory to disk in some form. Redis supports two persistence methods, RDB and AOF, and you can use either (or both). (Persistence means storing data on disk; data kept only in memory is lost when the machine goes down or restarts.)


Redis has two persistence methods: snapshots (RDB files) and append-only files (AOF files):

  • RDB persistence saves a point-in-time snapshot of the dataset at configured intervals.
  • AOF persistence records every write operation the server receives. When the service starts, these recorded operations are replayed one by one to rebuild the original data. Write commands are recorded in the same format as the Redis protocol and are saved append-only.
  • Redis persistence can be disabled entirely, so that data lives only for as long as the server runs.
  • Both methods can be enabled at the same time; in that case, when Redis restarts, the AOF file is used preferentially to rebuild the data.

RDB Overview:

RDB writes the dataset to a temporary file at a certain point in time; when persistence finishes, the temporary file replaces the previous persistence file, and that file is what gets used for recovery.
Advantage: persistence is done by a forked child process, so the main process performs no disk I/O, which keeps Redis fast.
Disadvantage: RDB persists data only at intervals. If Redis fails between snapshots, the data written since the last snapshot is lost, so this method suits scenarios where the durability requirements are not strict.
RDB is Redis's default persistence method, so it is enabled by default.

The relevant configuration parameters in redis.conf are as follows:

# dbfilename: the local file that stores the persisted data
dbfilename dump.rdb
# dir: the local path where persisted data is stored. If redis-cli is started under /redis/redis-3.0.6/src, the data is stored in the current src directory
dir ./
# Snapshot triggers: save <seconds> <changes>
# Evaluate how write-intensive your system is before setting these values
# The snapshot feature can be disabled with: save ""
# The lines below mean: persist if at least 1 key changed within 900 s, 10 keys within 300 s, or 10000 keys within 60 s
save 900 1
save 300 10
save 60 10000
# Whether to block client write operations when a snapshot error occurs;
# errors may be caused by a full disk, a disk failure, OS-level exceptions, and so on
stop-writes-on-bgsave-error yes
# Whether to compress the RDB file. The default is "yes"; compression costs some extra CPU but means a smaller file and shorter network transfer time
rdbcompression yes
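
Besides the save rules above, a snapshot can also be requested by hand. A small sketch with redis-py, assuming a local instance:

import redis

r = redis.Redis(host="localhost", port=6379, db=0)

r.bgsave()           # fork a child and write the RDB file in the background
print(r.lastsave())  # timestamp of the last successful save, handy for confirming it finished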

 

Redis is often said to persist data by default because RDB is enabled by default. But if what we write never satisfies any of the save conditions above, no snapshot is taken and the data never reaches disk.

Disabling Redis snapshots:

 

If you want to disable the snapshot feature, you can comment out all the "save" lines, or add the following line after the last "save" line:

save ""

 

 

Snapshots are not very reliable. If the machine suddenly goes down, loses power, or the process gets killed, the most recent data is lost. The AOF file provides a more reliable persistence mode: whenever Redis accepts a command that modifies the dataset, it appends that command to the AOF file. When Redis restarts, the commands in the AOF are re-executed to rebuild the data.

AOF Overview

Redis's AOF persistence policy records every write command sent to the server and stores it in an AOF file on disk. Whether AOF is enabled is controlled by the appendonly parameter. The AOF file lives in the same place as the RDB file; both use the dir parameter. The default file name is appendonly.aof and can be changed with the appendfilename parameter.

Each operation and its data are appended to the end of the log file as a formatted command. Once the append returns (the command has been written, or is about to be written, to the file), the server can recover everything simply by replaying this log file when recovery is needed. AOF is comparatively reliable.

 

Advantages:

  • More reliable than RDB. You can choose among fsync policies: no fsync, fsync once per second, or fsync on every write. The default is fsync once per second, which means you lose at most one second of data.
  • The AOF log is an append-only file, so even a sudden power failure causes no seek or corruption problems. Even if, for some reason (a full disk, say), a command is only half written to the log, the redis-check-aof tool can easily repair it.
  • When the AOF file grows too large, Redis automatically rewrites it in the background. The rewrite is safe because it happens in a new file while Redis keeps appending to the old one. The new file contains the minimal set of commands needed to rebuild the current dataset; once it is ready, Redis swaps the two files and writes to the new one.
  • AOF stores the commands one by one in a simple, readable format, which makes the data easy to export and recover. For example, if we accidentally run FLUSHALL and wipe all the data, then as long as the file has not been rewritten we can stop the service, delete that last command from the AOF file, and restart Redis to get the data back.

Disadvantages:

  • For the same dataset, the AOF file is usually larger than the RDB file.
  • With some fsync policies AOF is slower than RDB. In general, fsync once per second still performs well, and with fsync disabled AOF can be as fast as RDB.

 

AOF is disabled by default. To enable it, set appendonly yes in the configuration file redis.conf.

# This option is the AOF switch. The default is "no"; set it to "yes" to enable AOF
# The AOF rewrite/file sync features below take effect only when this is "yes"
appendonly yes
# The AOF file name
appendfilename appendonly.aof
# The file sync policy used for AOF writes. Three legal values: always, everysec, no. The default is everysec
appendfsync everysec
# Whether to suspend file sync while an AOF rewrite is in progress; "no" means do not suspend, "yes" means suspend. The default is "no"
no-appendfsync-on-rewrite no
# The minimum AOF file size (MB, GB) that triggers a rewrite; a rewrite only happens when the AOF file is larger than this. The default is 64mb
auto-aof-rewrite-min-size 64mb
# The percentage by which the AOF file must grow, relative to its size after the previous rewrite, to trigger the next rewrite.
# After each rewrite, Redis records the size of the new AOF file (say A); when the file grows to A * (1 + p), the next rewrite is triggered.
# The current AOF file size is checked every time an AOF record is appended.
auto-aof-rewrite-percentage 100
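
As with the snapshot settings, here is a rough sketch of inspecting these values from a client (assuming a local instance); flipping appendonly with CONFIG SET only affects the running instance and is not persisted back to redis.conf.

import redis

r = redis.Redis(host="localhost", port=6379, db=0)

# Inspect the current AOF settings.
print(r.config_get("appendonly"))
print(r.config_get("appendfsync"))

# Turn AOF on for this running instance only.
r.config_set("appendonly", "yes")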

AOF is a file operation, so for servers with intensive writes the disk I/O load inevitably goes up. In addition, Linux handles file writes with delayed write: not every write triggers an actual disk operation; data first goes into a buffer, and the real write happens when the buffer reaches a threshold (or on other occasions). This is a Linux file-system optimization, but it carries a risk: if the buffer has not yet been flushed to disk when the physical machine fails (loses power, say), the last one or more AOF records may be lost. As the configuration above shows, Redis offers three AOF sync options:

  • always: every AOF record is synced to the file immediately. This is the safest option, but it means more disk operations and blocking latency, so the I/O cost is high.
  • everysec: sync once per second. A reasonable balance of performance and safety, and the option Redis recommends. If the physical server fails, the AOF records from the last second may be lost (possibly only partially).
  • no: Redis does not call file sync itself; it leaves flushing to the operating system, which syncs based on buffer fill or idle time. This is ordinary file-write behavior and performs well, but if the physical server fails, how much data is lost depends on the OS configuration.

In practice there is not much to choose between: everysec is the best option. If you really need every single piece of data to be absolutely durable, you should be using a relational database instead.

 

 

Master/Slave:
Through persistence, Redis ensures that no (or very little) data is lost even if the server restarts, because the in-memory data is saved to disk and loaded back from disk after the restart. However, the data still sits on a single server: if that server's disk fails, the data is lost all the same. To avoid this single point of failure, the usual practice is to replicate the database across several servers, so that even if one server fails the others keep serving. For this purpose Redis provides a replication feature that automatically synchronizes updates from one database to the others.

Overview
1. Redis replication supports data synchronization between multiple databases: one is the primary database (master) and the others are slave databases (slaves). The master can handle both reads and writes; when a write happens, the data is automatically synchronized to the slaves. Slaves are generally read-only and just receive the data synced from the master. One master can have multiple slaves, but a slave can have only one master.

2. Redis replication makes read/write splitting easy and improves the server's load capacity: the master mainly handles writes, while the slaves handle reads.

Master-slave replication process:

 

 

Process:

1: When a slave starts, it sends the SYNC command to the master.

2: When the master receives SYNC, it starts saving a snapshot in the background (an RDB operation) and buffers the write commands it receives in the meantime.

3: When the snapshot is complete, the master sends the snapshot file and all the buffered commands to the slave.

4: When the slave receives them, it loads the snapshot file and then executes the buffered commands.

Master-slave replication is optimistic: when a client sends a write command to the master, the master executes it, returns the result to the client immediately, and sends the command on to the slaves asynchronously, so replication does not hurt performance. You can also require the master to be synchronized to at least a certain number of slaves. Diskless replication: if the disk is slow, replication performance suffers; since 2.8 you can enable diskless replication with repl-diskless-sync yes.
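
A rough sketch of wiring up a replica from a client, assuming two local instances on ports 6379 (master) and 6380 (slave); in production you would normally put slaveof (or replicaof) in the slave's redis.conf instead.

import redis

master = redis.Redis(host="localhost", port=6379)
replica = redis.Redis(host="localhost", port=6380)

# Point the second instance at the first, same effect as "slaveof" in redis.conf.
replica.slaveof("localhost", 6379)

# Write on the master, then read the replicated value from the slave.
master.set("greeting", "hello")
print(replica.get("greeting"))             # may lag briefly: replication is asynchronous
print(master.info("replication")["role"])  # "master"; connected_slaves is reported alongside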

 

Sentinel:
When the master fails or the service is interrupted, a developer can manually promote one of the slaves to master so that the system keeps serving.
But that whole process needs manual intervention and is hard to automate. For this reason Redis 2.6 introduced Sentinel, and Redis 2.8 shipped the stable Sentinel 2, which automates system monitoring and failover. Sentinel's role is to watch whether the Redis master and slaves are running normally; if the master fails, a slave is automatically promoted to master.

As the name suggests, Sentinel's job is to monitor the running status of the Redis system. Its functions include the following two:

 

(1) Monitor whether the master and slave databases are running normally.
(2) When the master fails, automatically promote a slave to master.

 

 

The Sentinel watches whether your master has a problem; if it does, a slave is promoted to master. You can also run multiple Sentinels to monitor the master and slaves; the Sentinels then reach decisions about the master and slaves by voting among themselves.

If the master goes down, the Sentinels vote, and once enough votes agree, a slave is promoted to master. With Sentinel in place, clients connect to the Sentinels, and the Sentinels direct them to the database.
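
A minimal sketch of client-side usage with redis-py, assuming a Sentinel listening on localhost:26379 that monitors a master group named "mymaster" (the conventional example name); the client asks the Sentinel for the current master instead of hard-coding its address.

from redis.sentinel import Sentinel

sentinel = Sentinel([("localhost", 26379)], socket_timeout=0.5)

master = sentinel.master_for("mymaster", socket_timeout=0.5)   # resolves to whichever node is master now
replica = sentinel.slave_for("mymaster", socket_timeout=0.5)   # resolves to one of the slaves

master.set("counter", 1)       # writes go to the current master
print(replica.get("counter"))  # reads can be served by a slave

If the Sentinels promote a slave after a failure, master_for transparently resolves to the new master on subsequent connections.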

 

