Building a Redis Cluster
The official Redis website shows that the latest stable version is 3.2.6. This article documents a cluster setup based on the earlier 3.2.0 release.
Note:
Redis 3.0 and later support cluster mode; the principles and features are well documented online. For O&M staff, the key benefit is that the proxy layer is removed, eliminating a single point of failure (SPOF). However, a Redis cluster needs at least three master nodes and three slave nodes, with each master paired with one slave. That calls for three servers running two instances each: a master and its own slave must not sit on the same machine, so spreading the pairs across three servers is what keeps the service available. For demonstration convenience, each server here runs two instances. Note: a production environment must use three separate machines; otherwise high availability cannot be achieved.
Deployment environment:
| Host Name | IP Address | OS Version | Purpose |
| --------- | ---------- | ---------- | ------- |
| test01 | 192.168.2.9 | CentOS 6.3 (64-bit) | Instances 7000/7001 |
| test02 | 192.168.2.10 | CentOS 6.3 (64-bit) | Instances 7002/7003 |
| test03 | 192.168.2.7 | CentOS 6.3 (64-bit) | Instances 7004/7005 |
Deployment steps:
1. Compile and install redis.
Put the installation package in /data/, then extract and compile it.
- [root@test01 data]# tar -zxf redis-3.2.0.tar.gz
- [root@test01 data]# cd redis-3.2.0
- [root@test01 redis-3.2.0]# make && make install
- [root@test01 redis-3.2.0]# ln -s /data/redis-3.2.0 /usr/local/redis
2. Create the Redis cluster node directories
- [root@test01 local]# mkdir redis_cluster
- [root@test01 local]# cd redis_cluster/
- [root@test01 redis_cluster]# mkdir 7000 7001
3. Copy the default node configuration file to the cluster node.
- [root@test01 redis_cluster]# cp /usr/local/redis/redis.conf ./7000
4. Modify the default configuration file
- [root@test01 7000]# vim redis.conf
- daemonize yes                      # run Redis in the background
- pidfile /var/run/redis_7000.pid    # pid file matching instance 7000
- port 7000                          # listening port
- cluster-enabled yes                # enable cluster mode (remove the leading #)
- cluster-config-file nodes.conf     # generated automatically on first cluster start
- cluster-node-timeout 6000          # node timeout in milliseconds
- appendonly yes                     # enable AOF persistence; recommended in production as well
- bind 192.168.2.9                   # listen on the local address
5. Copy the configuration file to 7001 and modify the corresponding port.
- [root@test01 7000]# cp redis.conf ../7001/
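Hand-editing each copied file is error-prone. A small sed helper can do the rewrite instead; this is a sketch of my own (`clone_conf` is not part of the article's procedure) and it assumes the string 7000 appears in the config only where the port is meant, i.e. in `port`, `pidfile`, and similar lines:

```shell
# Sketch: clone the 7000 config for another port by rewriting every
# literal "7000" (port, pidfile, etc.). Assumes nothing else in the
# file contains the string 7000.
clone_conf() {
  src=$1; dst=$2; newport=$3
  sed "s/7000/${newport}/g" "$src" > "$dst"
}
```

Usage: `clone_conf ./7000/redis.conf ./7001/redis.conf 7001`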
6. Repeat the above configuration on the other two servers.
7. Start each node. Be sure to start each instance from its own port directory; otherwise nodes.conf and the data files are written to the wrong place and some instances may fail to start.
- [root@test01 7000]# /usr/local/redis/src/redis-server redis.conf
- [root@test01 7000]# cd ../7001/
- [root@test01 7001]# /usr/local/redis/src/redis-server redis.conf
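With several instances per server, the manual cd-and-start routine can be wrapped in a loop. This is a sketch; `start_instances`, `BASE`, and `REDIS_SERVER` are names introduced here (not from the article), kept as variables so the loop can be dry-run:

```shell
# Sketch: start every instance from its own directory so nodes.conf
# and appendonly.aof are created next to the matching redis.conf.
# BASE and REDIS_SERVER are overridable for dry runs.
BASE=${BASE:-/usr/local/redis_cluster}
REDIS_SERVER=${REDIS_SERVER:-/usr/local/redis/src/redis-server}
start_instances() {
  for port in "$@"; do
    ( cd "$BASE/$port" && $REDIS_SERVER redis.conf )
  done
}
```

Usage: `start_instances 7000 7001`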
View the Startup Process
- [root@test01 7001]# ps -ef|grep redis
- root 8858 1 0 11:38 ? 00:00:00 /usr/local/redis/src/redis-server 192.168.2.9:7000 [cluster]
- root 8865 1 0 11:39 ? 00:00:00 /usr/local/redis/src/redis-server 192.168.2.9:7001 [cluster]
- root 8870 27799 0 11:39 pts/1 00:00:00 grep redis
Start the nodes on the other servers the same way.
8. Install the software required by the cluster creation tool on all three servers.
- [root@test01 7000]# yum -y install ruby ruby-devel rubygems rpm-build
- [root@test01 7000]# ruby -v
- ruby 1.8.7 (2013-06-27 patchlevel 374) [x86_64-linux]
- [root@redis 7000]# rpm -qa|grep rubyge
- rubygems-1.3.7-5.el6.noarch
- [root@redis 7000]# gem install redis
- Successfully installed redis-3.3.0
- 1 gem installed
- Installing ri documentation for redis-3.3.0...
- Installing RDoc documentation for redis-3.3.0.
9. Confirm that all nodes are running, then create the cluster with the create subcommand.
- [root@test01 7000]# /usr/local/redis/src/redis-trib.rb create --replicas 1 192.168.2.9:7000 192.168.2.10:7002 192.168.2.7:7004 192.168.2.10:7003 192.168.2.7:7005 192.168.2.9:7001
- >>> Creating cluster
- >>> Performing hash slots allocation on 6 nodes...
- Using 3 masters:
- 192.168.2.7:7004
- 192.168.2.9:7000
- 192.168.2.10:7002
- Adding replica 192.168.2.9:7001 to 192.168.2.7:7004
- Adding replica 192.168.2.7:7005 to 192.168.2.9:7000
- Adding replica 192.168.2.10:7003 to 192.168.2.10:7002
- M: bede5b72dfbb5274e52a1f0c5f6b43170afec8af 192.168.2.9:7000
- slots:5461-10922 (5462 slots) master
- M: 01594f84df9e743a74a47f9aaa58fa41402dfe25 192.168.2.10:7002
- slots:10923-16383 (5461 slots) master
- M: 27cedfdc0a648b9141736f156a4d89828d7bf695 192.168.2.7:7004
- slots:0-5460 (5461 slots) master
- S: 6f13ca12a9be3b0c093d02c81fed337307f295af 192.168.2.10:7003
- replicates 01594f84df9e743a74a47f9aaa58fa41402dfe25
- S: 00333fa0ac74863e86c3108f6040abe1183a2b9b 192.168.2.7:7005
- replicates bede5b72dfbb5274e52a1f0c5f6b43170afec8af
- S: f167b98d8f78bfdb4c1823c0d6be7f1a12aff194 192.168.2.9:7001
- replicates 27cedfdc0a648b9141736f156a4d89828d7bf695
- Can I set the above configuration? (type 'yes' to accept): yes
- >>> Nodes configuration updated
- >>> Assign a different config epoch to each node
- >>> Sending CLUSTER MEET messages to join the cluster
- Waiting for the cluster to join...
- >>> Performing Cluster Check (using node 192.168.2.9:7000)
- M: bede5b72dfbb5274e52a1f0c5f6b43170afec8af 192.168.2.9:7000
- slots:5461-10922 (5462 slots) master
- M: 01594f84df9e743a74a47f9aaa58fa41402dfe25 192.168.2.10:7002
- slots:10923-16383 (5461 slots) master
- M: 27cedfdc0a648b9141736f156a4d89828d7bf695 192.168.2.7:7004
- slots:0-5460 (5461 slots) master
- M: 6f13ca12a9be3b0c093d02c81fed337307f295af 192.168.2.10:7003
- slots: (0 slots) master
- replicates 01594f84df9e743a74a47f9aaa58fa41402dfe25
- M: 00333fa0ac74863e86c3108f6040abe1183a2b9b 192.168.2.7:7005
- slots: (0 slots) master
- replicates bede5b72dfbb5274e52a1f0c5f6b43170afec8af
- M: f167b98d8f78bfdb4c1823c0d6be7f1a12aff194 192.168.2.9:7001
- slots: (0 slots) master
- replicates 27cedfdc0a648b9141736f156a4d89828d7bf695
- [OK] All nodes agree about slots configuration.
- >>> Check for open slots...
- >>> Check slots coverage...
- [OK] All 16384 slots covered.
--replicas 1: create the cluster with one slave per master.
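The slot totals in the output can be checked mechanically. The awk sketch below is my own helper (not part of redis-trib); it sums the `(N slots) master` figures from the check output, and the total must come to 16384 for full coverage:

```shell
# Sketch: sum the slot counts redis-trib prints for each master.
# Reads "slots:M-N (K slots) master" lines on stdin and prints the total.
sum_master_slots() {
  awk -F'[()]' '/slots\) master/ { split($2, a, " "); total += a[1] }
                END { print total + 0 }'
}
```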
Master/slave pairings cannot be specified at creation time; redis-trib assigns them based on the order in which nodes join, so the result is effectively random. If a master and its own slave end up on the same server, that pair becomes a single point of failure, and you must break it up and move the slave to a node on a different server. As the output above shows, 7002 and its slave 7003 were allocated to the same server; in testing, this happens fairly often.
Master-slave relationship diagram:
- 192.168.2.10:7002 (master) -> 192.168.2.10:7003 (slave)   # same server, must be fixed
- 192.168.2.9:7000 (master) -> 192.168.2.7:7005 (slave)
- 192.168.2.7:7004 (master) -> 192.168.2.9:7001 (slave)
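Spotting a bad pairing by eye is easy to get wrong. The sketch below (`flag_same_host_pairs` is my own helper) scans the "Adding replica SLAVE to MASTER" lines that redis-trib prints and warns when a replica lands on the same host as its master:

```shell
# Sketch: scan redis-trib's "Adding replica SLAVE to MASTER" lines and
# warn when a replica shares a host with its master (a SPOF).
flag_same_host_pairs() {
  awk '/Adding replica/ {
         split($3, s, ":"); split($5, m, ":")
         if (s[1] == m[1])
           print "WARNING: " $5 " and its replica " $3 " share host " s[1]
       }'
}
```

Pipe the create output through it: `redis-trib.rb create ... | flag_same_host_pairs`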
10. Swap the slave roles of 192.168.2.10:7003 and 192.168.2.7:7005.
Delete node 7003
- [root@test01 7000]# /usr/local/redis/src/redis-trib.rb del-node 192.168.2.10:7003 6f13ca12a9be3b0c093d02c81fed337307f295af
- >>> Removing node 6f13ca12a9be3b0c093d02c81fed337307f295af from cluster 192.168.2.10:7003
- >>> Sending CLUSTER FORGET messages to the cluster...
- >>> SHUTDOWN the node.
del-node removes a node from the cluster: pass the host:port and the node ID, and the tool sends CLUSTER FORGET to the other nodes and shuts the instance down.
11. Change the master of 192.168.2.7:7005 to 192.168.2.10:7002.
Connect to 192.168.2.7:7005:
- [root@test02 redis_cluster]# /usr/local/redis/src/redis-cli -c -p 7005 -h 192.168.2.7
- 192.168.2.7:7005> cluster replicate 01594f84df9e743a74a47f9aaa58fa41402dfe25   # ID of the new master (7002)
- OK
View the current master/slave status:
- [root@test02 redis_cluster]# /usr/local/redis/src/redis-trib.rb check 192.168.2.10:7002
- >>> Performing Cluster Check (using node 192.168.2.10:7002)
- M: 01594f84df9e743a74a47f9aaa58fa41402dfe25 192.168.2.10:7002
- slots:10923-16383 (5461 slots) master
- 1 additional replica(s)
- M: bede5b72dfbb5274e52a1f0c5f6b43170afec8af 192.168.2.9:7000
- slots:5461-10922 (5462 slots) master
- 0 additional replica(s)
- M: 27cedfdc0a648b9141736f156a4d89828d7bf695 192.168.2.7:7004
- slots:0-5460 (5461 slots) master
- 1 additional replica(s)
- S: f167b98d8f78bfdb4c1823c0d6be7f1a12aff194 192.168.2.9:7001
- slots: (0 slots) slave
- replicates 27cedfdc0a648b9141736f156a4d89828d7bf695
- S: 00333fa0ac74863e86c3108f6040abe1183a2b9b 192.168.2.7:7005
- slots: (0 slots) slave
- replicates 01594f84df9e743a74a47f9aaa58fa41402dfe25
- [OK] All nodes agree about slots configuration.
- >>> Check for open slots...
- >>> Check slots coverage...
- [OK] All 16384 slots covered.
You can see that the master-slave relationship has changed.
12. Bring the deleted 7003 node back up.
- [root@test02 redis_cluster]# ps -ef|grep redis
- root 4441 1 0 11:45 ? 00:00:13 /usr/local/redis/src/redis-server 192.168.2.10:7002 [cluster]
- root 4673 4275 0 15:02 pts/0 00:00:00 grep redis
- [root@test02 redis_cluster]# cd 7003/
- [root@test02 7003]# ls
- appendonly.aof dump.rdb nodes.conf redis.conf
- [root@test02 7003]# rm -f appendonly.aof dump.rdb nodes.conf
- [root@test02 7003]# /usr/local/redis/src/redis-server redis.conf
- [root@test02 7003]# ps -ef|grep redis
- root 4441 1 0 11:45 ? 00:00:14 /usr/local/redis/src/redis-server 192.168.2.10:7002 [cluster]
- root 4677 1 0 15:02 ? 00:00:00 /usr/local/redis/src/redis-server 192.168.2.10:7003 [cluster]
- root 4681 4275 0 15:02 pts/0 00:00:00 grep redis
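The cleanup above matters: a node that still has an old nodes.conf or AOF data cannot rejoin the cluster as an empty slave. It can be wrapped in a small helper; this is a sketch, and `reset_node_dir` is my own name, not a tool from the article:

```shell
# Sketch: wipe a node's stale on-disk state (AOF, RDB, cluster config)
# so it can rejoin the cluster as an empty node; redis.conf is kept.
reset_node_dir() {
  dir=$1
  rm -f "$dir/appendonly.aof" "$dir/dump.rdb" "$dir/nodes.conf"
}
```

Usage: `reset_node_dir /usr/local/redis_cluster/7003`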
13. Add it back as a slave with the add-node command.
- [root@test02 7003]# /usr/local/redis/src/redis-trib.rb add-node --slave --master-id bede5b72dfbb5274e52a1f0c5f6b43170afec8af 192.168.2.10:7003 192.168.2.9:7000
- >>> Adding node 192.168.2.10:7003 to cluster 192.168.2.9:7000
- >>> Performing Cluster Check (using node 192.168.2.9:7000)
- M: bede5b72dfbb5274e52a1f0c5f6b43170afec8af 192.168.2.9:7000
- slots:5461-10922 (5462 slots) master
- 0 additional replica(s)
- S: f167b98d8f78bfdb4c1823c0d6be7f1a12aff194 192.168.2.9:7001
- slots: (0 slots) slave
- replicates 27cedfdc0a648b9141736f156a4d89828d7bf695
- S: 00333fa0ac74863e86c3108f6040abe1183a2b9b 192.168.2.7:7005
- slots: (0 slots) slave
- replicates 01594f84df9e743a74a47f9aaa58fa41402dfe25
- M: 27cedfdc0a648b9141736f156a4d89828d7bf695 192.168.2.7:7004
- slots:0-5460 (5461 slots) master
- 1 additional replica(s)
- M: 01594f84df9e743a74a47f9aaa58fa41402dfe25 192.168.2.10:7002
- slots:10923-16383 (5461 slots) master
- 1 additional replica(s)
- [OK] All nodes agree about slots configuration.
- >>> Check for open slots...
- >>> Check slots coverage...
- [OK] All 16384 slots covered.
- >>> Send CLUSTER MEET to node 192.168.2.10:7003 to make it join the cluster.
- Waiting for the cluster to join.
- >>> Configure node as replica of 192.168.2.9:7000.
- [OK] New node added correctly.
The output above shows that 7003 has joined as a slave of 7000.
14. View the current master/slave status.
- [root@test02 7003]# /usr/local/redis/src/redis-trib.rb check 192.168.2.10:7002
- >>> Performing Cluster Check (using node 192.168.2.10:7002)
- M: 01594f84df9e743a74a47f9aaa58fa41402dfe25 192.168.2.10:7002
- slots:10923-16383 (5461 slots) master
- 1 additional replica(s)
- M: bede5b72dfbb5274e52a1f0c5f6b43170afec8af 192.168.2.9:7000
- slots:5461-10922 (5462 slots) master
- 1 additional replica(s)
- S: 2e26f92f8dca2dfb13f159a59b6260f106bc8cb4 192.168.2.10:7003
- slots: (0 slots) slave
- replicates bede5b72dfbb5274e52a1f0c5f6b43170afec8af
- M: 27cedfdc0a648b9141736f156a4d89828d7bf695 192.168.2.7:7004
- slots:0-5460 (5461 slots) master
- 1 additional replica(s)
- S: f167b98d8f78bfdb4c1823c0d6be7f1a12aff194 192.168.2.9:7001
- slots: (0 slots) slave
- replicates 27cedfdc0a648b9141736f156a4d89828d7bf695
- S: 00333fa0ac74863e86c3108f6040abe1183a2b9b 192.168.2.7:7005
- slots: (0 slots) slave
- replicates 01594f84df9e743a74a47f9aaa58fa41402dfe25
- [OK] All nodes agree about slots configuration.
- >>> Check for open slots...
- >>> Check slots coverage...
- [OK] All 16384 slots covered
The cluster is now healthy. With this setup, O&M staff no longer need to dread downtime from a single Redis failure!