Redis master-slave replication: stand-alone testing


First, Redis master-slave replication: stand-alone test
1. Installing Redis
tar -zxvf redis-2.8.4.tar.gz
cd redis-2.8.4
make && make install
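As a quick sanity check that the build succeeded (a minimal check, not part of the original steps), the freshly built binaries report their version:
./src/redis-server --version
./src/redis-cli --version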
2. Configure the master-slave relationship
Only the slave server's redis.conf needs an extra directive:
slaveof 192.168.1.1 6379  # the master's IP and port
The specific configuration used here:
cp redis.conf redis-master-6379.conf
vi redis-master-6379.conf
logfile "/appcom/redis/redis-2.8.4/redis-master-6379.log"

cp redis.conf redis-slave-6389.conf
vi redis-slave-6389.conf
port 6389
slaveof localhost 6379
logfile "/appcom/redis/redis-2.8.4/redis-slave-6389.log"

3. Start the master server and the slave server
./src/redis-server redis-master-6379.conf &
[19810] 14:18:55.825 * The server is now ready to accept connections on port 6379
[19810] 14:23:19.918 * Slave asks for synchronization
[19810] 14:23:19.919 * Full resync requested by slave.
[19810] 14:23:19.919 * Starting BGSAVE for SYNC
[19810] 14:23:19.928 * Background saving started by pid 22336
[22336] 14:23:19.947 * DB saved on disk
[22336] 14:23:19.948 * RDB: 6 MB of memory used by copy-on-write
[19810] 14:23:19.985 * Background saving terminated with success
[19810] 14:23:19.986 * Synchronization with slave succeeded
[19810] 14:23:21.038 # Connection with slave ::1:6389 lost.
[19810] 14:23:25.159 * Slave asks for synchronization
[19810] 14:23:25.159 * Full resync requested by slave.
[19810] 14:23:25.159 * Starting BGSAVE for SYNC
[19810] 14:23:25.163 * Background saving started by pid 22399
[22399] 14:23:25.177 * DB saved on disk
[22399] 14:23:25.178 * RDB: 6 MB of memory used by copy-on-write
[19810] 14:23:25.210 * Background saving terminated with success
[19810] 14:23:25.210 * Synchronization with slave succeeded

./src/redis-server redis-slave-6389.conf &
[22327] 14:23:18.915 * The server is now ready to accept connections on port 6389
[22327] 14:23:19.913 * Connecting to MASTER localhost:6379
[22327] 14:23:19.915 * MASTER <-> SLAVE sync started
[22327] 14:23:19.915 * Non blocking connect for SYNC fired the event.
[22327] 14:23:19.916 * Master replied to PING, replication can continue...
[22327] 14:23:19.917 * Partial resynchronization not possible (no cached master)
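At this point replication can be sanity-checked end to end: write on the master, read on the slave. A minimal example (the key name foo is arbitrary):
./src/redis-cli -p 6379 set foo bar
OK
./src/redis-cli -p 6389 get foo
"bar"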

After the master is shut down, data can still be read from the slave, but the slave's log keeps reporting the messages below, and the slave is not promoted to master:
[7084] 14:04:59.940 * Connecting to MASTER localhost:6379
[7084] 14:04:59.941 * MASTER <-> SLAVE sync started
[7084] 14:04:59.941 # Error condition on socket for SYNC: Connection refused
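Plain replication has no automatic failover; promoting the slave is a manual step. A minimal sketch:
./src/redis-cli -p 6389 slaveof no one      # stop replicating and become a master
./src/redis-cli -p 6389 info replication    # role: should now report master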

Second, using Redis Sentinel for automatic failover: stand-alone test
1. Redis layout (installation is the same as in the first part):
master           localhost 6379
slave1           localhost 6389
slave2           localhost 6399
master-sentinel  localhost 26379
slave1-sentinel  localhost 26389
slave2-sentinel  localhost 26399
2. Redis configuration
Master configuration:
cp redis.conf redis-master-6379.conf
vi redis-master-6379.conf
port 6379
requirepass rd123
masterauth rd123
#rename-command
appendonly yes    # enable AOF persistence
save ""           # disable RDB snapshots
slave-read-only yes
logfile "/appcom/redis/redis-2.8.4/redis-master-6379.log"

vi sentinel-6379.conf
port 26379
sentinel monitor mymaster 127.0.0.1 6379 2    # <master-name> <ip> <port> <quorum>: the master to monitor; at least <quorum> Sentinels must agree the master is unreachable before it is marked objectively down (ODOWN)
sentinel auth-pass mymaster rd123
sentinel down-after-milliseconds mymaster 30000    # time after which this Sentinel alone considers the master subjectively down (SDOWN)
sentinel parallel-syncs mymaster 1    # how many slaves may resynchronize with the new master at the same time during a failover
sentinel failover-timeout mymaster 180000    # failover timeout; if a failover does not complete within this time, this Sentinel considers it failed

Slave1 configuration:
cp redis-master-6379.conf redis-slave-6389.conf
vi redis-slave-6389.conf
port 6389
slaveof localhost 6379
logfile "/appcom/redis/redis-2.8.4/redis-slave-6389.log"

cp sentinel-6379.conf sentinel-6389.conf
vi sentinel-6389.conf
port 26389

Slave2 configuration:
cp redis-master-6379.conf redis-slave-6399.conf
vi redis-slave-6399.conf
port 6399
slaveof localhost 6379
logfile "/appcom/redis/redis-2.8.4/redis-slave-6399.log"

cp sentinel-6379.conf sentinel-6399.conf
vi sentinel-6399.conf
port 26399

3. Start
Start the master server and its Sentinel first:
./src/redis-server --include redis-master-6379.conf &
./src/redis-sentinel sentinel-6379.conf > sentinel-6379.log &
Start the slave1 server and its Sentinel:
./src/redis-server --include redis-slave-6389.conf &
./src/redis-sentinel sentinel-6389.conf > sentinel-6389.log &
Start the slave2 server and its Sentinel:
./src/redis-server --include redis-slave-6399.conf &
./src/redis-sentinel sentinel-6399.conf > sentinel-6399.log &

[45564] 15:03:37.444 * +slave slave 127.0.0.1:6389 127.0.0.1 6389 @ mymaster 127.0.0.1 6379
[45564] 15:03:37.444 * +slave slave 127.0.0.1:6399 127.0.0.1 6399 @ mymaster 127.0.0.1 6379
[45564] 15:04:02.364 * +sentinel sentinel 127.0.0.1:26389 127.0.0.1 26389 @ mymaster 127.0.0.1 6379
[45564] 15:04:19.711 * +sentinel sentinel 127.0.0.1:26399 127.0.0.1 26399 @ mymaster 127.0.0.1 6379
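Once the Sentinels are running, a client can ask any of them for the current master address instead of hard-coding it. A minimal example against the master's Sentinel:
# ./src/redis-cli -p 26379 sentinel get-master-addr-by-name mymaster
1) "127.0.0.1"
2) "6379"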

To view the status of the master:

# ./src/redis-cli -h 127.0.0.1 -p 6379 -a rd123
localhost:6379> info replication
# Replication
role:master
connected_slaves:2
slave0:ip=127.0.0.1,port=6389,state=online,offset=54505,lag=0
slave1:ip=127.0.0.1,port=6399,state=online,offset=54505,lag=1
master_repl_offset:54505
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:54504

To view the status of slave1:
# ./src/redis-cli -h localhost -p 6389 -a rd123
localhost:6389> info replication
# Replication
role:slave
master_host:127.0.0.1
master_port:6379
master_link_status:up
master_last_io_seconds_ago:2
master_sync_in_progress:0
slave_repl_offset:59720
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

To view the status of slave2:
# ./src/redis-cli -h localhost -p 6399 -a rd123
localhost:6399> info replication
# Replication
role:slave
master_host:127.0.0.1
master_port:6379
master_link_status:up
master_last_io_seconds_ago:0
master_sync_in_progress:0
slave_repl_offset:68701
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

4. Testing
(1) Scenario 1: slave1 goes down
localhost:6389> shutdown
In the Sentinel log:
[45794] 15:12:10.335 # +sdown slave 127.0.0.1:6389 127.0.0.1 6389 @ mymaster 127.0.0.1 6379

# ./src/redis-cli -h localhost -p 6379 -a rd123
localhost:6379> info replication
# Replication
role:master
connected_slaves:1
slave0:ip=127.0.0.1,port=6399,state=online,offset=120536,lag=1
master_repl_offset:120669
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:120668

(2) Scenario 2: slave recovery
Restart slave1:
./src/redis-server --include redis-slave-6389.conf &
[3] 52287

[45794] 15:15:19.726 * +reboot slave 127.0.0.1:6389 127.0.0.1 6389 @ mymaster 127.0.0.1 6379
[45794] 15:15:19.874 # -sdown slave 127.0.0.1:6389 127.0.0.1 6389 @ mymaster 127.0.0.1 6379

localhost:6379> info replication
# Replication
role:master
connected_slaves:2
slave0:ip=127.0.0.1,port=6399,state=online,offset=197860,lag=1
slave1:ip=127.0.0.1,port=6389,state=online,offset=197727,lag=1
master_repl_offset:198126
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:198125

(3) Scenario 3: master outage
localhost:6379> shutdown

[45564] 15:36:37.710 # +sdown master mymaster 127.0.0.1 6379
[45564] 15:36:37.967 # +new-epoch 1
[45564] 15:36:37.968 # +vote-for-leader 1f6f588c7c28a2176c2886e540a638ce92033e65 1
[45564] 15:36:38.892 # +odown master mymaster 127.0.0.1 6379 #quorum 3/2
[45564] 15:36:39.178 # +switch-master mymaster 127.0.0.1 6379 127.0.0.1 6399
[45564] 15:36:39.178 * +slave slave 127.0.0.1:6389 127.0.0.1 6389 @ mymaster 127.0.0.1 6399
[45564] 15:36:39.180 * +slave slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6399
[45564] 15:37:09.193 # +sdown slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6399

Slave2 (port 6399) has been promoted to master:
localhost:6399> info replication
# Replication
role:master
connected_slaves:1
slave0:ip=127.0.0.1,port=6389,state=online,offset=21724,lag=1
master_repl_offset:21990
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:21989
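The same Sentinel query from before now reports the new master, which is how clients rediscover it after a failover:
# ./src/redis-cli -p 26379 sentinel get-master-addr-by-name mymaster
1) "127.0.0.1"
2) "6399"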

(4) Scenario 4: master recovery
./src/redis-server --include redis-master-6379.conf &
[1] 67400

[45564] 15:41:47.608 # -sdown slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6399
[45564] 15:41:57.513 * +reboot slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6399

The original master rejoins as a slave of the new master; it is not automatically promoted back to master.

localhost:6379> info replication
# Replication
role:slave
master_host:127.0.0.1
master_port:6399
master_link_status:up
master_last_io_seconds_ago:0
master_sync_in_progress:0
slave_repl_offset:70642
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

localhost:6399> info replication
# Replication
role:master
connected_slaves:2
slave0:ip=127.0.0.1,port=6389,state=online,offset=93539,lag=0
slave1:ip=127.0.0.1,port=6379,state=online,offset=93539,lag=0
master_repl_offset:93553
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:93552
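If the old master should be preferred in future elections once it is back, its slave-priority can be set lower than the other slaves' (lower values are preferred for promotion; the default is 100). A sketch, not part of the original setup:
# in redis-master-6379.conf (the old master)
slave-priority 50
# in redis-slave-6389.conf and redis-slave-6399.conf (keep the default)
slave-priority 100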

Third, Redis Cluster construction
1. Download the latest development version of Redis from GitHub: https://codeload.github.com/antirez/redis/zip/unstable
2. Install Redis on the following three nodes:
Node1 10.25.22.185 6379
Node2 10.25.22.186 6379
Node3 10.25.22.187 6379
3. Modify the configuration
cluster-enabled yes
cluster-config-file nodes-6379.conf
cluster-node-timeout 15000
logfile "/appcom/redis/redis-unstable/redis.log"

Start the server on each of the three nodes:
./src/redis-server redis.conf &
[1] 6856
./src/redis-server redis.conf &
[1] 43951
./src/redis-server redis.conf &
[1] 80642

View the status of the cluster on node1:
# ./src/redis-cli
127.0.0.1:6379> cluster nodes
af6224cbc9ce9b66e21b90af442678ba096989d9 :0 myself,master - 0 0 0 connected
127.0.0.1:6379> cluster info
cluster_state:fail
cluster_slots_assigned:0
cluster_slots_ok:0
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:1
cluster_size:0
cluster_current_epoch:0
cluster_stats_messages_sent:0
cluster_stats_messages_received:0

Associate the cluster servers with the CLUSTER MEET command:
127.0.0.1:6379> cluster meet 10.25.22.186 6379
OK
127.0.0.1:6379> cluster meet 10.25.22.187 6379
OK
127.0.0.1:6379> cluster nodes
ed85b32aa566511bf917e8ecdc6150df7449dcf2 10.25.22.187:6379 master - 0 1390897200350 0 connected
af6224cbc9ce9b66e21b90af442678ba096989d9 :0 myself,master - 0 0 0 connected
918fc015490599a93e680893c7e387336dac35bc 10.25.22.186:6379 master - 0 1390897199347 0 connected
127.0.0.1:6379> cluster info
cluster_state:fail
cluster_slots_assigned:0
cluster_slots_ok:0
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:3
cluster_size:0
cluster_current_epoch:0
cluster_stats_messages_sent:23
cluster_stats_messages_received:23

Assign hash slots to each server in the cluster.
Redis Cluster partitions data by key: every key is automatically mapped to one of 16384 hash slots (the total is fixed in the source code) by hashing the key.
Which node stores a given hash slot is not decided automatically, however; the cluster administrator has to assign slot ranges to nodes.
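The slot a key maps to can be checked with the CLUSTER KEYSLOT command (assuming a build that includes it); this is where the MOVED 5798 redirection seen later comes from:
127.0.0.1:6379> cluster keyslot name
(integer) 5798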

Modify the nodes-6379.conf file on each node: keep the myself record, delete the remaining records, and append the slot range the node should serve.
node1 becomes: af6224cbc9ce9b66e21b90af442678ba096989d9 :0 myself,master - 0 0 0 connected 0-5000

node2 becomes: 918fc015490599a93e680893c7e387336dac35bc :0 myself,master - 0 0 0 connected 5001-10000

node3 becomes: ed85b32aa566511bf917e8ecdc6150df7449dcf2 :0 myself,master - 0 0 0 connected 10001-16383

Then restart the servers.

Use the CLUSTER MEET command again to associate the server nodes:

127.0.0.1:6379> cluster meet 10.25.22.186 6379
OK
127.0.0.1:6379> cluster meet 10.25.22.187 6379
OK
127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:3
cluster_size:3
cluster_current_epoch:0
cluster_stats_messages_sent:29
cluster_stats_messages_received:29
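For reference, slots can also be assigned online with the CLUSTER ADDSLOTS command instead of hand-editing nodes-6379.conf and restarting. A sketch of the same 3-way split, using bash brace expansion to enumerate the slot numbers:
./src/redis-cli -h 10.25.22.185 -p 6379 cluster addslots {0..5000}
./src/redis-cli -h 10.25.22.186 -p 6379 cluster addslots {5001..10000}
./src/redis-cli -h 10.25.22.187 -p 6379 cluster addslots {10001..16383}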

Writing a key whose slot lives on another node returns a MOVED redirection. On node1:
# ./src/redis-cli
127.0.0.1:6379> set name "make"
(error) MOVED 5798 10.25.22.186:6379
Running the same command on node2 (10.25.22.186), which owns slot 5798:
# ./src/redis-cli
127.0.0.1:6379> set name "make"
OK
127.0.0.1:6379> get name
"make"
