Redis Cluster Construction

The official Redis website shows that the latest stable version is now 3.2.6. Here I document a cluster setup I did earlier with version 3.2.0.
Note:

Redis 3.0 and later support cluster mode; you can read about the principles and features online. For operations staff, the key points are that the proxy layer is gone and there is no single point of failure. However, a Redis cluster needs at least three master nodes and three slave nodes, paired one master to one slave, so three servers are required, each running two instances. Because each master has exactly one slave, a server must not hold both the master and the slave of the same pair; the pairs have to be split across machines so the service keeps running if a server fails. My deployment here is simplified for demonstration convenience. Note: a production environment must use three separate machines, otherwise high availability cannot be achieved.
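Because every node can redirect a client to the owner of a given hash slot, applications (or redis-cli started with -c) talk to the data nodes directly and no proxy layer is needed. As a minimal sanity check once the cluster described below is running, something like this should work from any machine that can reach the nodes (a sketch; foo is just a placeholder key, and the address is one of the instances from the table that follows):

    # -c makes redis-cli follow MOVED/ASK redirections between masters
    /usr/local/redis/src/redis-cli -c -h 192.168.2.9 -p 7000 set foo bar
    /usr/local/redis/src/redis-cli -c -h 192.168.2.9 -p 7000 get foo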

Deployment environment:

    Host Name    IP Address      OS Version            Purpose
    Test01       192.168.2.9     CentOS 6.3 (64-bit)   Instances 7000/7001
    Test02       192.168.2.10    CentOS 6.3 (64-bit)   Instances 7002/7003
    Test03       192.168.2.7     CentOS 6.3 (64-bit)   Instances 7004/7005

Deployment steps:

1. Compile and install Redis.

Put the installation package under /data/, then unpack and build it:

    [root@test01 data]# tar -zxf redis-3.2.0.tar.gz
    [root@test01 data]# cd redis-3.2.0
    [root@test01 redis-3.2.0]# make && make install
    [root@test01 redis-3.2.0]# ln -s /data/redis-3.2.0 /usr/local/redis
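A quick way to confirm that the build produced the expected binaries (a sketch; --version should report v=3.2.0):

    [root@test01 redis-3.2.0]# /usr/local/redis/src/redis-server --version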
2. Create the Redis cluster node directories.

    [root@test01 local]# mkdir redis_cluster
    [root@test01 local]# cd redis_cluster/
    [root@test01 redis_cluster]# mkdir 7000 7001
3. Copy the default configuration file into the cluster node directory.

    [root@test01 redis_cluster]# cp /usr/local/redis/redis.conf ./7000

4. Modify the default configuration file.

    [root@test01 7000]# vim redis.conf
    daemonize yes                     # run Redis in the background
    pidfile /var/run/redis_7000.pid   # pidfile matching instance 7000
    port 7000                         # listen on port 7000
    cluster-enabled yes               # enable cluster mode (uncomment this line)
    cluster-config-file nodes.conf    # generated automatically the first time the cluster node starts
    cluster-node-timeout 6000         # node timeout
    appendonly yes                    # enable AOF persistence (used here for testing; recommended in production too)
    bind 192.168.2.9                  # listen on this host's own address

5. Copy the configuration file to 7001 and change the port accordingly.

    [root@test01 7000]# cp redis.conf ../7001/
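After the copy, the port must be changed in ../7001/redis.conf (and the pidfile name, which contains the port, along with it). One way to do that in a single pass is a search-and-replace on the port number (a sketch; double-check the result, since it rewrites every occurrence of 7000 in the file):

    [root@test01 7000]# sed -i 's/7000/7001/g' ../7001/redis.conf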

6. Repeat the same configuration on the other two servers (test02 uses ports 7002/7003 and test03 uses ports 7004/7005; set bind to each server's own address).

7. Start each node. Be sure to start each instance from inside its own port directory; otherwise some instances will not start correctly, because nodes.conf and the AOF file are written relative to the current directory.

    [root@test01 7000]# /usr/local/redis/src/redis-server redis.conf
    [root@test01 7000]# cd ../7001/
    [root@test01 7001]# /usr/local/redis/src/redis-server redis.conf

Check that the processes started:

    [root@test01 7001]# ps -ef|grep redis
    root  8858      1  0 11:38 ?      00:00:00 /usr/local/redis/src/redis-server 192.168.2.9:7000 [cluster]
    root  8865      1  0 11:39 ?      00:00:00 /usr/local/redis/src/redis-server 192.168.2.9:7001 [cluster]
    root  8870  27799  0 11:39 pts/1  00:00:00 grep redis

Start the nodes on the other two servers in the same way.
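If you prefer not to cd into each directory by hand, a small loop does the same thing (a sketch that assumes the /usr/local/redis_cluster/<port> layout created in step 2; adjust the port list to 7002/7003 or 7004/7005 on the other servers):

    # Start each instance from inside its own directory so that nodes.conf
    # and appendonly.aof are kept separate per instance.
    for port in 7000 7001; do
        (cd /usr/local/redis_cluster/$port && /usr/local/redis/src/redis-server redis.conf)
    done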

8. On all three servers, install the software needed to create the cluster (redis-trib.rb is written in Ruby and requires the redis gem).

    [root@test01 7000]# yum -y install ruby ruby-devel rubygems rpm-build
    [root@test01 7000]# ruby -v
    ruby 1.8.7 (2013-06-27 patchlevel 374) [x86_64-linux]
    [root@redis 7000]# rpm -qa|grep rubyge
    rubygems-1.3.7-5.el6.noarch
    [root@redis 7000]# gem install redis
    Successfully installed redis-3.3.0
    1 gem installed
    Installing ri documentation for redis-3.3.0...
    Installing RDoc documentation for redis-3.3.0...
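A quick sanity check that the gem is visible to the Ruby interpreter redis-trib.rb will use (a sketch; it should print the client version installed above, 3.3.0):

    [root@test01 7000]# ruby -e "require 'redis'; puts Redis::VERSION"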
9. Confirm that all nodes are running, then create the cluster with redis-trib.rb's create command.

    [root@test01 7000]# /usr/local/redis/src/redis-trib.rb create --replicas 1 192.168.2.9:7000 192.168.2.10:7002 192.168.2.7:7004 192.168.2.10:7003 192.168.2.7:7005 192.168.2.9:7001
    >>> Creating cluster
    >>> Performing hash slots allocation on 6 nodes...
    Using 3 masters:
    192.168.2.7:7004
    192.168.2.9:7000
    192.168.2.10:7002
    Adding replica 192.168.2.9:7001 to 192.168.2.7:7004
    Adding replica 192.168.2.7:7005 to 192.168.2.9:7000
    Adding replica 192.168.2.10:7003 to 192.168.2.10:7002
    M: bede5b72dfbb5274e52a1f0c5f6b43170afec8af 192.168.2.9:7000
       slots:5461-10922 (5462 slots) master
    M: 01594f84df9e743a74a47f9aaa58fa41402dfe25 192.168.2.10:7002
       slots:10923-16383 (5461 slots) master
    M: 27cedfdc0a648b9141736f156a4d89828d7bf695 192.168.2.7:7004
       slots:0-5460 (5461 slots) master
    S: 6f13ca12a9be3b0c093d02c81fed337307f295af 192.168.2.10:7003
       replicates 01594f84df9e743a74a47f9aaa58fa41402dfe25
    S: 00333fa0ac74863e86c3108f6040abe1183a2b9b 192.168.2.7:7005
       replicates bede5b72dfbb5274e52a1f0c5f6b43170afec8af
    S: f167b98d8f78bfdb4c1823c0d6be7f1a12aff194 192.168.2.9:7001
       replicates 27cedfdc0a648b9141736f156a4d89828d7bf695
    Can I set the above configuration? (type 'yes' to accept): yes
    >>> Nodes configuration updated
    >>> Assign a different config epoch to each node
    >>> Sending CLUSTER MEET messages to join the cluster
    Waiting for the cluster to join...
    >>> Performing Cluster Check (using node 192.168.2.9:7000)
    M: bede5b72dfbb5274e52a1f0c5f6b43170afec8af 192.168.2.9:7000
       slots:5461-10922 (5462 slots) master
    M: 01594f84df9e743a74a47f9aaa58fa41402dfe25 192.168.2.10:7002
       slots:10923-16383 (5461 slots) master
    M: 27cedfdc0a648b9141736f156a4d89828d7bf695 192.168.2.7:7004
       slots:0-5460 (5461 slots) master
    M: 6f13ca12a9be3b0c093d02c81fed337307f295af 192.168.2.10:7003
       slots: (0 slots) master
       replicates 01594f84df9e743a74a47f9aaa58fa41402dfe25
    M: 00333fa0ac74863e86c3108f6040abe1183a2b9b 192.168.2.7:7005
       slots: (0 slots) master
       replicates bede5b72dfbb5274e52a1f0c5f6b43170afec8af
    M: f167b98d8f78bfdb4c1823c0d6be7f1a12aff194 192.168.2.9:7001
       slots: (0 slots) master
       replicates 27cedfdc0a648b9141736f156a4d89828d7bf695
    [OK] All nodes agree about slots configuration.
    >>> Check for open slots...
    >>> Check slots coverage...
    [OK] All 16384 slots covered.

--replicas 1: create the cluster with one slave (replica) per master; with six nodes this yields three masters and three slaves.
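Once creation finishes, a quick health check can be run against any node with the standard CLUSTER INFO command (a sketch; cluster_state:ok and cluster_slots_assigned:16384 mean every slot is being served):

    [root@test01 7000]# /usr/local/redis/src/redis-cli -h 192.168.2.9 -p 7000 cluster info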

Because the master/slave pairings cannot be specified at creation time, they depend on the order in which the nodes were listed and are effectively random. If a master and its own slave land on the same server, that pairing must be broken up and the slave moved to a node on a different server. From the output above you can see that 7002 and 7003, which form one group, were both placed on the same server; in testing this happens quite often. The current pairings are (see the sketch below for one way to confirm them):

    Master              Slave
    192.168.2.10:7002   192.168.2.10:7003
    192.168.2.9:7000    192.168.2.7:7005
    192.168.2.7:7004    192.168.2.9:7001
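One way to confirm these pairings is the standard CLUSTER NODES command (a sketch; it can be run against any node). Each slave line includes the node ID of the master it replicates, so you can match slaves to masters and spot pairs that share a server:

    [root@test01 7000]# /usr/local/redis/src/redis-cli -h 192.168.2.9 -p 7000 cluster nodes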

10. Rearrange 192.168.2.10:7003 and 192.168.2.7:7005 so that no slave shares a server with its own master.

Delete node 7003:

    [root@test01 7000]# /usr/local/redis/src/redis-trib.rb del-node 192.168.2.10:7003 6f13ca12a9be3b0c093d02c81fed337307f295af
    >>> Removing node 6f13ca12a9be3b0c093d02c81fed337307f295af from cluster 192.168.2.10:7003
    >>> Sending CLUSTER FORGET messages to the cluster...
    >>> SHUTDOWN the node.

del-node removes a node from the cluster: you pass a host:port used to reach the cluster together with the ID of the node to remove; the other nodes are told to forget it and the instance is shut down.
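For reference, the general form is (placeholders, not literal values):

    /usr/local/redis/src/redis-trib.rb del-node <cluster-host>:<port> <node-id-to-remove>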

11. Change the master of 192.168.2.7:7005 to 192.168.2.10:7002.

Connect to 192.168.2.7:7005 with redis-cli:

    [root@test02 redis_cluster]# /usr/local/redis/src/redis-cli -c -p 7005 -h 192.168.2.7
    192.168.2.7:7005> cluster replicate 01594f84df9e743a74a47f9aaa58fa41402dfe25
    OK

The argument to cluster replicate is the node ID of the new master (192.168.2.10:7002).
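To double-check on the node itself, the standard INFO replication output should now report role:slave with master_host 192.168.2.10 and master_port 7002 (a sketch):

    [root@test02 redis_cluster]# /usr/local/redis/src/redis-cli -h 192.168.2.7 -p 7005 info replication | grep -E 'role|master_host|master_port'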

View the current master/slave status:

    [root@test02 redis_cluster]# /usr/local/redis/src/redis-trib.rb check 192.168.2.10:7002
    >>> Performing Cluster Check (using node 192.168.2.10:7002)
    M: 01594f84df9e743a74a47f9aaa58fa41402dfe25 192.168.2.10:7002
       slots:10923-16383 (5461 slots) master
       1 additional replica(s)
    M: bede5b72dfbb5274e52a1f0c5f6b43170afec8af 192.168.2.9:7000
       slots:5461-10922 (5462 slots) master
       0 additional replica(s)
    M: 27cedfdc0a648b9141736f156a4d89828d7bf695 192.168.2.7:7004
       slots:0-5460 (5461 slots) master
       1 additional replica(s)
    S: f167b98d8f78bfdb4c1823c0d6be7f1a12aff194 192.168.2.9:7001
       slots: (0 slots) slave
       replicates 27cedfdc0a648b9141736f156a4d89828d7bf695
    S: 00333fa0ac74863e86c3108f6040abe1183a2b9b 192.168.2.7:7005
       slots: (0 slots) slave
       replicates 01594f84df9e743a74a47f9aaa58fa41402dfe25
    [OK] All nodes agree about slots configuration.
    >>> Check for open slots...
    >>> Check slots coverage...
    [OK] All 16384 slots covered.

You can see that the master-slave relationship has changed: 7005 now replicates 7002, while 7000 is temporarily left without a slave.

12. Bring the deleted 7003 node back up.

del-node shut the instance down, so clean out its old state (the AOF, RDB, and generated nodes.conf files) and start it again:

    [root@test02 redis_cluster]# ps -ef|grep redis
    root  4441     1  0 11:45 ?      00:00:13 /usr/local/redis/src/redis-server 192.168.2.10:7002 [cluster]
    root  4673  4275  0 15:02 pts/0  00:00:00 grep redis
    [root@test02 redis_cluster]# cd 7003/
    [root@test02 7003]# ls
    appendonly.aof  dump.rdb  nodes.conf  redis.conf
    [root@test02 7003]# rm -f appendonly.aof dump.rdb nodes.conf
    [root@test02 7003]# /usr/local/redis/src/redis-server redis.conf
    [root@test02 7003]# ps -ef|grep redis
    root  4441     1  0 11:45 ?      00:00:14 /usr/local/redis/src/redis-server 192.168.2.10:7002 [cluster]
    root  4677     1  0 15:02 ?      00:00:00 /usr/local/redis/src/redis-server 192.168.2.10:7003 [cluster]
    root  4681  4275  0 15:02 pts/0  00:00:00 grep redis

13. Add 7003 back to the cluster as a slave of 7000.

    [root@test02 7003]# /usr/local/redis/src/redis-trib.rb add-node --slave --master-id bede5b72dfbb5274e52a1f0c5f6b43170afec8af 192.168.2.10:7003 192.168.2.9:7000
    >>> Adding node 192.168.2.10:7003 to cluster 192.168.2.9:7000
    >>> Performing Cluster Check (using node 192.168.2.9:7000)
    M: bede5b72dfbb5274e52a1f0c5f6b43170afec8af 192.168.2.9:7000
       slots:5461-10922 (5462 slots) master
       0 additional replica(s)
    S: f167b98d8f78bfdb4c1823c0d6be7f1a12aff194 192.168.2.9:7001
       slots: (0 slots) slave
       replicates 27cedfdc0a648b9141736f156a4d89828d7bf695
    S: 00333fa0ac74863e86c3108f6040abe1183a2b9b 192.168.2.7:7005
       slots: (0 slots) slave
       replicates 01594f84df9e743a74a47f9aaa58fa41402dfe25
    M: 27cedfdc0a648b9141736f156a4d89828d7bf695 192.168.2.7:7004
       slots:0-5460 (5461 slots) master
       1 additional replica(s)
    M: 01594f84df9e743a74a47f9aaa58fa41402dfe25 192.168.2.10:7002
       slots:10923-16383 (5461 slots) master
       1 additional replica(s)
    [OK] All nodes agree about slots configuration.
    >>> Check for open slots...
    >>> Check slots coverage...
    [OK] All 16384 slots covered.
    >>> Send CLUSTER MEET to node 192.168.2.10:7003 to make it join the cluster.
    Waiting for the cluster to join.
    >>> Configure node as replica of 192.168.2.9:7000.
    [OK] New node added correctly.

The output shows that node 7003 has joined as a slave of 7000.
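For reference, the general form of the add-node command used above is (placeholders, not literal values): the new node's address comes first, followed by the address of any node already in the cluster.

    /usr/local/redis/src/redis-trib.rb add-node --slave --master-id <master-node-id> <new-host>:<new-port> <existing-host>:<existing-port>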

14. View the current master/slave status.

    [root@test02 7003]# /usr/local/redis/src/redis-trib.rb check 192.168.2.10:7002
    >>> Performing Cluster Check (using node 192.168.2.10:7002)
    M: 01594f84df9e743a74a47f9aaa58fa41402dfe25 192.168.2.10:7002
       slots:10923-16383 (5461 slots) master
       1 additional replica(s)
    M: bede5b72dfbb5274e52a1f0c5f6b43170afec8af 192.168.2.9:7000
       slots:5461-10922 (5462 slots) master
       1 additional replica(s)
    S: 2e26f92f8dca2dfb13f159a59b6260f106bc8cb4 192.168.2.10:7003
       slots: (0 slots) slave
       replicates bede5b72dfbb5274e52a1f0c5f6b43170afec8af
    M: 27cedfdc0a648b9141736f156a4d89828d7bf695 192.168.2.7:7004
       slots:0-5460 (5461 slots) master
       1 additional replica(s)
    S: f167b98d8f78bfdb4c1823c0d6be7f1a12aff194 192.168.2.9:7001
       slots: (0 slots) slave
       replicates 27cedfdc0a648b9141736f156a4d89828d7bf695
    S: 00333fa0ac74863e86c3108f6040abe1183a2b9b 192.168.2.7:7005
       slots: (0 slots) slave
       replicates 01594f84df9e743a74a47f9aaa58fa41402dfe25
    [OK] All nodes agree about slots configuration.
    >>> Check for open slots...
    >>> Check slots coverage...
    [OK] All 16384 slots covered.
This is what a healthy cluster looks like ~~ O&M staff no longer have to dread downtime from a single failed Redis instance!