Configuring Redis master-slave replication is quite simple: only two lines are required. Through master-slave replication, multiple slave servers can hold the same copy of the database as the master server.
The advantage is that read throughput increases, and master-slave communication is very efficient. As I understand it from the comments in the configuration file: once replication is enabled, the master starts a background process that saves a snapshot of the data to a file, while it also collects the new write commands that arrive in the meantime; the master then sends the snapshot file to the slave, and the slave saves it to disk and loads it into memory, after which the buffered write commands are forwarded to the slave as well.
During the experiment, I configured this directly on the slave:
- # slaveof <masterip> <masterport>
- slaveof 10.5.110.239 6379
- # If the master is password protected (using the "requirepass" configuration
- # directive below) it is possible to tell the slave to authenticate before
- # starting the replication synchronization process, otherwise the master will
- # refuse the slave request.
- #
- # masterauth <master-password>
- masterauth chen
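Before digging into the logs, a quick end-to-end check can be done from a client. Below is a minimal sketch, assuming the third-party redis-py package (pip install redis); the master address and password come from the configuration above, while the slave IP 10.5.110.234 is taken from the Accepted line in the log further down and its port 6379 is an assumption:
- # Minimal replication check; redis-py and the slave's port are assumptions.
- import time
- import redis
- master = redis.Redis(host='10.5.110.239', port=6379, password='chen', decode_responses=True)
- slave = redis.Redis(host='10.5.110.234', port=6379, decode_responses=True)
- master.set('repl:test', 'hello')  # the write goes to the master only
- time.sleep(1)                     # give replication a moment to propagate
- print(slave.get('repl:test'))     # prints 'hello' once the slave has caught up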
Then I started the slave server and could see all of the master's data on it. The master's log file shows:
- [18343] 08 Jul 19:36:19 - Accepted 10.5.110.234:41968
- [18343] 08 Jul 19:36:19 * Slave ask for synchronization
- [18343] 08 Jul 19:36:19 * Starting BGSAVE for SYNC
- [18343] 08 Jul 19:36:19 * Background saving started by pid 22405
- [22405] 08 Jul 19:36:19 * DB saved on disk
- [18343] 08 Jul 19:36:19 * Background saving terminated with success
- [18343] 08 Jul 19:36:19 * Synchronization with slave succeeded
- [18343] 08 Jul 19:36:23 - DB 0: 5 keys (0 volatile) in 8 slots HT.
- [18343] 08 Jul 19:36:23 - DB 1: 6 keys (0 volatile) in 8 slots HT.
- [18343] 08 Jul 19:36:23 - 1 clients connected (1 slaves), 565440 bytes in use
- [18343] 08 Jul 19:36:28 - DB 0: 5 keys (0 volatile) in 8 slots HT.
- [18343] 08 Jul 19:36:28 - DB 1: 6 keys (0 volatile) in 8 slots HT.
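The "(1 slaves)" in those periodic status lines can also be read programmatically. A minimal sketch, again assuming redis-py and a Redis version recent enough for INFO to accept a section argument:
- import redis
- master = redis.Redis(host='10.5.110.239', port=6379, password='chen', decode_responses=True)
- info = master.info('replication')   # the replication section of INFO
- print(info['role'])                 # 'master'
- print(info['connected_slaves'])     # 1, matching "(1 slaves)" in the log above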
My impression is that once the slave starts, it takes the initiative to contact the master and actively pulls the data, much like master-slave replication in MySQL, where the slave's I/O thread reads the master's binlog and writes it to the slave's own relay log, and the SQL thread then applies it to disk. The log above shows that the moment I started the slave server, the master reflected it.
Reading the log file carefully, I found that the master logs its state every five seconds, so any action or connection shows up there. Next, I deleted a key from a client connected to the master; the log is as follows:
- [18343] 08 Jul 19:53:44 - 0 clients connected (1 slaves), 557064 bytes in use
- [18343] 08 Jul 19:53:45 - Accepted 127.0.0.1:21599
- [18343] 08 Jul 19:53:49 - DB 0: 6 keys (0 volatile) in 8 slots HT.
- [18343] 08 Jul 19:53:49 - DB 1: 6 keys (0 volatile) in 8 slots HT.
- [18343] 08 Jul 19:53:49 - 1 clients connected (1 slaves), 565520 bytes in use
- [18343] 08 Jul 19:53:54 - DB 0: 6 keys (0 volatile) in 8 slots HT.
- [18343] 08 Jul 19:53:54 - DB 1: 6 keys (0 volatile) in 8 slots HT.
- [18343] 08 Jul 19:53:54 - 1 clients connected (1 slaves), 565520 bytes in use
- [18343] 08 Jul 19:53:59 - DB 0: 5 keys (0 volatile) in 8 slots HT.
- [18343] 08 Jul 19:53:59 - DB 1: 6 keys (0 volatile) in 8 slots HT.
- [18343] 08 Jul 19:53:59 - 1 clients connected (1 slaves), 565424 bytes in use
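Instead of inferring the propagation from the key counts in the log, the same deletion can be reproduced from a client and checked on the slave directly. A minimal sketch under the same redis-py and address assumptions as before:
- import time
- import redis
- master = redis.Redis(host='10.5.110.239', port=6379, password='chen', decode_responses=True)
- slave = redis.Redis(host='10.5.110.234', port=6379, decode_responses=True)
- master.set('repl:victim', '1')      # create a key on the master
- time.sleep(1)                       # let it replicate
- print(slave.exists('repl:victim'))  # 1: the key reached the slave
- master.delete('repl:victim')        # now delete it on the master
- time.sleep(1)
- print(slave.exists('repl:victim'))  # 0: the deletion was propagated too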
We can see clearly that the state is logged every five seconds. At 19:53:59, the deleted key is gone (DB 0 drops from 6 keys back to 5), and by then there was already a client connection (accepted at 19:53:45). Following the log further, the client count drops back to zero at around 19:59:59. In other words, whatever action is performed on the master is carried over the replication link to the slave to keep the data consistent, while an idle client connection is held for roughly five minutes and then disconnected automatically. Now for another experiment:
I added an entry to the slave database and checked for changes in both the master's log and the slave's log:
Adding or deleting data on the slave causes no change on the master at all, and no connection to it appears. The slave's own log is as follows:
- [2539] 08 Jul 20:13:23 - Accepted 127.0.0.1:44902
- [2539] 08 Jul 20:13:23 - DB 0: 5 keys (0 volatile) in 8 slots HT.
- [2539] 08 Jul 20:13:23 - DB 1: 6 keys (0 volatile) in 8 slots HT.
- [2539] 08 Jul 20:13:23 - 2 clients connected (0 slaves), 565440 bytes in use
- [2539] 08 Jul 20:13:28 - DB 0: 5 keys (0 volatile) in 8 slots HT.
- [2539] 08 Jul 20:13:28 - DB 1: 6 keys (0 volatile) in 8 slots HT.
- [2539] 08 Jul 20:13:28 - 2 clients connected (0 slaves), 565440 bytes in use
- [2539] 08 Jul 20:13:33 - DB 0: 5 keys (0 volatile) in 8 slots HT.
- [2539] 08 Jul 20:13:33 - DB 1: 6 keys (0 volatile) in 8 slots HT.
- [2539] 08 Jul 20:13:33 - 2 clients connected (0 slaves), 565440 bytes in use
- [2539] 08 Jul 20:13:38 - DB 0: 5 keys (0 volatile) in 8 slots HT.
- [2539] 08 Jul 20:13:38 - DB 1: 6 keys (0 volatile) in 8 slots HT.
- [2539] 08 Jul 20:13:38 - 2 clients connected (0 slaves), 565440 bytes in use
- [2539] 08 Jul 20:13:43 - DB 0: 5 keys (0 volatile) in 8 slots HT.
- [2539] 08 Jul 20:13:43 - DB 1: 6 keys (0 volatile) in 8 slots HT.
- [2539] 08 Jul 20:13:43 - 2 clients connected (0 slaves), 565440 bytes in use
- [2539] 08 Jul 20:13:48 - DB 0: 6 keys (0 volatile) in 8 slots HT.
- [2539] 08 Jul 20:13:48 - DB 1: 6 keys (0 volatile) in 8 slots HT.
- [2539] 08 Jul 20:13:48 - 2 clients connected (0 slaves), 565512 bytes in use
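This one-way behavior can be confirmed from a client as well. A minimal sketch, with the same redis-py and address assumptions, and additionally assuming the slave accepts writes (old versions did by default; since Redis 2.6 this requires slave-read-only no, later renamed replica-read-only):
- import time
- import redis
- master = redis.Redis(host='10.5.110.239', port=6379, password='chen', decode_responses=True)
- slave = redis.Redis(host='10.5.110.234', port=6379, decode_responses=True)
- slave.set('repl:local', 'only-on-slave')  # write directly to the slave
- time.sleep(1)
- print(slave.get('repl:local'))   # 'only-on-slave': stored locally on the slave
- print(master.get('repl:local'))  # None: nothing flows from the slave back to the master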
There is still a lot here that I don't fully understand; the replication mechanism is worth studying carefully.