Redis cluster practice

Source: Internet
Author: User
Tags: redis, redis version, redis cluster, install redis
I. Description

The redis 3.0 cluster feature has been available for some time; the latest stable version is 3.0.5. I have learned that many Internet companies already run it in production, such as Vipshop and Meituan. Our company happens to have a new project whose estimated volume a single-host redis cannot handle, and the developers do not want to shard at the code level, so we recommended they try redis cluster. Below are some notes taken along the way, for future reference.

II. Environment

1. redis Node

10.10.2.70:6300  10.10.2.70:6301  master/slave
10.10.2.71:6300  10.10.2.71:6301  master/slave
10.10.2.85:6300  10.10.2.85:6301  master/slave

2. redis version

Redis version 3.0.5

III. Installation and configuration

1. Install redis

wget http://download.redis.io/releases/redis-3.0.5.tar.gz
tar -zxvf redis-3.0.5.tar.gz
cd redis-3.0.5
make
cp src/redis-trib.rb /bin/
cp src/redis-server  /bin/
cp src/redis-cli     /bin/

2. Install the redis module of ruby and ruby

yum -y install ruby rubygems
gem install redis --version 3.0.5

3. kernel Optimization

echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo "vm.overcommit_memory = 1" >> /etc/sysctl.conf
sysctl -p

4. Create a directory

mkdir -p /data/redis/6300
mkdir /data/redis/6301

5. Write the redis configuration file (copy the file for each instance and change the port number)

vim /etc/redis_6300.conf

daemonize yes
port 6300
tcp-backlog 511
timeout 0
loglevel notice
logfile "/data/redis/6300/latest"
maxmemory 10gb
databases 16
dir /data/redis/6300
slave-serve-stale-data yes
# slave is read-only
slave-read-only yes
# do not use the default: disable TCP_NODELAY on replication links
repl-disable-tcp-nodelay yes
slave-priority 100
# enable aof persistence
appendonly yes
# aof fsync once per second
appendfsync everysec
# do not fsync while an aof rewrite is in progress
no-appendfsync-on-rewrite yes
# for multiple redis instances deployed on the same machine, turn off automatic
# aof rewrite to prevent every process forking and rewriting at the same
# instant and occupying a large amount of memory
auto-aof-rewrite-percentage 0
auto-aof-rewrite-min-size 64mb
lua-time-limit 5000
# enable cluster mode
cluster-enabled yes
cluster-config-file /data/redis/6300/nodes-6300.conf
# node interconnect timeout threshold (ms)
cluster-node-timeout 15000
# when a master has enough healthy slaves, one of them may be cut over to a
# master that has been left without slaves
cluster-migration-barrier 1
# if some key space is not covered by any node (most commonly because a node
# failed), keep the cluster serving instead of stopping writes
cluster-require-full-coverage no
slowlog-max-len 128
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
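Rather than hand-editing a copy of the file per port, the per-port differences can be templated. A minimal Python sketch under the paths used in this article (directives abbreviated; the full file is above, and `write_redis_conf` is a hypothetical helper, not part of redis):

```python
import os
import tempfile

def write_redis_conf(base_dir, port):
    """Render a minimal cluster config for one port (sketch; only the
    port-dependent directives plus a few essentials are emitted)."""
    conf = "\n".join([
        "daemonize yes",
        f"port {port}",
        f"dir {base_dir}/{port}",
        "cluster-enabled yes",
        f"cluster-config-file {base_dir}/{port}/nodes-{port}.conf",
        "cluster-node-timeout 15000",
    ])
    os.makedirs(f"{base_dir}/{port}", exist_ok=True)
    path = f"{base_dir}/redis_{port}.conf"
    with open(path, "w") as f:
        f.write(conf + "\n")
    return path

# demo against a temporary directory instead of /etc and /data
base = tempfile.mkdtemp()
path = write_redis_conf(base, 6300)
write_redis_conf(base, 6301)
print(path)
```

In production the base directory would be `/data/redis` with the config dropped under `/etc`, as in the steps above.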

6. Start the service

redis-server /etc/redis_6300.conf
redis-server /etc/redis_6301.conf
echo "redis-server /etc/redis_6300.conf" >> /etc/rc.local
echo "redis-server /etc/redis_6301.conf" >> /etc/rc.local

7. initialize the Cluster

# Node roles are determined by order: masters first, then slaves.
# Here the 6300 instances become masters and the 6301 instances become slaves.
redis-trib.rb create --replicas 1 10.10.2.70:6300 10.10.2.71:6300 10.10.2.85:6300 10.10.2.70:6301 10.10.2.71:6301 10.10.2.85:6301

8. view the cluster status

redis-trib.rb check 10.10.2.70:6300

PS:

redis-trib.rb is a Ruby tool that wraps a number of redis cluster commands. It makes cluster operations very convenient: initializing the cluster and checking its status as above, as well as adding and deleting nodes, migrating slots, and so on.
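The slot migration these tools manage operates on a fixed key-to-slot mapping: redis cluster places each key into one of 16384 slots via CRC16(key) mod 16384, hashing only the {hash tag} when one is present. A small Python sketch of the published algorithm (not the redis-trib code itself):

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16-CCITT (XMODEM) as used by redis cluster: poly 0x1021, init 0."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def keyslot(key: str) -> int:
    """Slot for a key; only the non-empty {hash tag} part is hashed if present."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

print(keyslot("foo"))   # 12182, matching CLUSTER KEYSLOT foo
print(keyslot("{user1000}.following") == keyslot("{user1000}.followers"))   # True
```

The hash-tag rule is what lets multi-key operations work in a cluster: keys sharing a tag always land in the same slot.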

IV. redis cluster maintenance

A. Scenario 1

The online cluster hits a bottleneck and needs to be scaled out. Suppose we have prepared one master and one slave (10.10.2.85:6302 and 10.10.2.85:6303), as shown below:

1. Add a master node

[root@yw_0_0 ~]# redis-trib.rb add-node 10.10.2.85:6302 10.10.2.70:6300
>>> Adding node 10.10.2.85:6302 to cluster 10.10.2.70:6300
Connecting to node 10.10.2.70:6300: OK
Connecting to node 10.10.2.85:6300: OK
Connecting to node 10.10.2.85:6301: OK
Connecting to node 10.10.2.71:6300: OK
Connecting to node 10.10.2.70:6301: OK
Connecting to node 10.10.2.71:6301: OK
>>> Performing Cluster Check (using node 10.10.2.70:6300)
S: cd1f2c1f348bb4359337e7462c1e21dc82f1551b 10.10.2.70:6300
   slots: (0 slots) slave
   replicates 85412cf3d8e69354115fc0991f470b32b9213cd7
M: 6bea6afa2ee8dfb0cc3c96f804eb3fa77ce98013 10.10.2.85:6300
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
S: a74642c0fbc98f921be477eabcdd22eccd89891f 10.10.2.85:6301
   slots: (0 slots) slave
   replicates 2568dbd91fffa16ff93ea8db19275fd7ec8af41a
M: 2568dbd91fffa16ff93ea8db19275fd7ec8af41a 10.10.2.71:6300
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
M: 85412cf3d8e69354115fc0991f470b32b9213cd7 10.10.2.70:6301
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: 22d2dec483824b84571a60e8c037fff957615552 10.10.2.71:6301
   slots: (0 slots) slave
   replicates 6bea6afa2ee8dfb0cc3c96f804eb3fa77ce98013
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Connecting to node 10.10.2.85:6302: OK
>>> Send CLUSTER MEET to node 10.10.2.85:6302 to make it join the cluster.
[OK] New node added correctly.

10.10.2.85:6302 is the new node to add, and 10.10.2.70:6300 is any existing node in the cluster.

2. Add slave nodes to the master node

[root@yw_0_0 ~]# redis-trib.rb add-node --slave --master-id 5ef18f95f75756891aa948ea1f200044f1d3947c 10.10.2.85:6303 10.10.2.70:6300
>>> Adding node 10.10.2.85:6303 to cluster 10.10.2.70:6300
Connecting to node 10.10.2.70:6300: OK
Connecting to node 10.10.2.85:6300: OK
Connecting to node 10.10.2.85:6302: OK
Connecting to node 10.10.2.85:6301: OK
Connecting to node 10.10.2.71:6300: OK
Connecting to node 10.10.2.70:6301: OK
Connecting to node 10.10.2.71:6301: OK
>>> Performing Cluster Check (using node 10.10.2.70:6300)
S: cd1f2c1f348bb4359337e7462c1e21dc82f1551b 10.10.2.70:6300
   slots: (0 slots) slave
   replicates 85412cf3d8e69354115fc0991f470b32b9213cd7
M: 6bea6afa2ee8dfb0cc3c96f804eb3fa77ce98013 10.10.2.85:6300
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: 5ef18f95f75756891aa948ea1f200044f1d3947c 10.10.2.85:6302
   slots: (0 slots) master
   0 additional replica(s)
S: a74642c0fbc98f921be477eabcdd22eccd89891f 10.10.2.85:6301
   slots: (0 slots) slave
   replicates 2568dbd91fffa16ff93ea8db19275fd7ec8af41a
M: 2568dbd91fffa16ff93ea8db19275fd7ec8af41a 10.10.2.71:6300
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
M: 85412cf3d8e69354115fc0991f470b32b9213cd7 10.10.2.70:6301
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: 22d2dec483824b84571a60e8c037fff957615552 10.10.2.71:6301
   slots: (0 slots) slave
   replicates 6bea6afa2ee8dfb0cc3c96f804eb3fa77ce98013
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Connecting to node 10.10.2.85:6303: OK
>>> Send CLUSTER MEET to node 10.10.2.85:6303 to make it join the cluster.
Waiting for the cluster to join.
>>> Configure node as replica of 10.10.2.85:6302.
[OK] New node added correctly.

--slave specifies that the node being added is a slave, and --master-id specifies the master it replicates. 10.10.2.85:6303 is the slave to add, and 10.10.2.70:6300 is an existing node in the cluster.

3. migrate some slots to new nodes

[root@yw_0_0 ~]# redis-trib.rb reshard 10.10.2.70:6300
Connecting to node 10.10.2.70:6300: OK
Connecting to node 10.10.2.85:6300: OK
Connecting to node 10.10.2.85:6303: OK
Connecting to node 10.10.2.85:6302: OK
Connecting to node 10.10.2.85:6301: OK
Connecting to node 10.10.2.71:6300: OK
Connecting to node 10.10.2.70:6301: OK
Connecting to node 10.10.2.71:6301: OK
>>> Performing Cluster Check (using node 10.10.2.70:6300)
S: cd1f2c1f348bb4359337e7462c1e21dc82f1551b 10.10.2.70:6300
   slots: (0 slots) slave
   replicates 85412cf3d8e69354115fc0991f470b32b9213cd7
M: 6bea6afa2ee8dfb0cc3c96f804eb3fa77ce98013 10.10.2.85:6300
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
S: fc90d090fae909fd4f962752941c039d081d3854 10.10.2.85:6303
   slots: (0 slots) slave
   replicates 5ef18f95f75756891aa948ea1f200044f1d3947c
M: 5ef18f95f75756891aa948ea1f200044f1d3947c 10.10.2.85:6302
   slots: (0 slots) master
   1 additional replica(s)
S: a74642c0fbc98f921be477eabcdd22eccd89891f 10.10.2.85:6301
   slots: (0 slots) slave
   replicates 2568dbd91fffa16ff93ea8db19275fd7ec8af41a
M: 2568dbd91fffa16ff93ea8db19275fd7ec8af41a 10.10.2.71:6300
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
M: 85412cf3d8e69354115fc0991f470b32b9213cd7 10.10.2.70:6301
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: 22d2dec483824b84571a60e8c037fff957615552 10.10.2.71:6301
   slots: (0 slots) slave
   replicates 6bea6afa2ee8dfb0cc3c96f804eb3fa77ce98013
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 3000
    # number of slots to move
What is the receiving node ID? 5ef18f95f75756891aa948ea1f200044f1d3947c
    # ID of the node that receives the 3000 slots, i.e. the newly added master
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: 85412cf3d8e69354115fc0991f470b32b9213cd7
    # source of part of the slots (one of the three original masters)
Source node #2: 6bea6afa2ee8dfb0cc3c96f804eb3fa77ce98013
    # source of part of the slots
Source node #3: 2568dbd91fffa16ff93ea8db19275fd7ec8af41a
    # source of part of the slots
Source node #4: done
    # type done to finish entering source nodes
(the proposed moved-slot listing is omitted here)
Do you want to proceed with the proposed reshard plan (yes/no)? yes
    # type yes to confirm and start migrating slots
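The session above moves 3000 slots from the three original masters to the new one; redis-trib spreads the move roughly in proportion to how many slots each source holds. A hedged sketch of that arithmetic (a simplified model, not redis-trib's exact planner):

```python
def plan_reshard(sources, total_to_move):
    """sources: {node_id: slot_count}; returns {node_id: slots_to_take},
    taking from each source in proportion to its share of the slots."""
    total = sum(sources.values())
    plan = {}
    moved = 0
    items = sorted(sources.items())        # deterministic order
    for i, (node, count) in enumerate(items):
        if i == len(items) - 1:
            take = total_to_move - moved   # last source absorbs rounding error
        else:
            take = round(total_to_move * count / total)
        plan[node] = take
        moved += take
    return plan

# The three original masters hold 5461, 5462 and 5461 slots; moving 3000
# slots to the new master takes roughly 1000 from each.
plan = plan_reshard({"A": 5461, "B": 5462, "C": 5461}, 3000)
print(plan)   # {'A': 1000, 'B': 1000, 'C': 1000}
```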

B. Scenario 2

Contrary to the preceding scenario, the cluster needs to be scaled in for various reasons. The following example decommissions the nodes that were added above:

1. Migrate the node's slots to other nodes (a node that still holds slots cannot be decommissioned directly)

[root@yw_0_0 ~]# redis-trib.rb reshard 10.10.2.70:6300
Connecting to node 10.10.2.70:6300: OK
Connecting to node 10.10.2.85:6300: OK
Connecting to node 10.10.2.85:6303: OK
Connecting to node 10.10.2.85:6302: OK
Connecting to node 10.10.2.85:6301: OK
Connecting to node 10.10.2.71:6300: OK
Connecting to node 10.10.2.70:6301: OK
Connecting to node 10.10.2.71:6301: OK
>>> Performing Cluster Check (using node 10.10.2.70:6300)
S: cd1f2c1f348bb4359337e7462c1e21dc82f1551b 10.10.2.70:6300
   slots: (0 slots) slave
   replicates 85412cf3d8e69354115fc0991f470b32b9213cd7
M: 6bea6afa2ee8dfb0cc3c96f804eb3fa77ce98013 10.10.2.85:6300
   slots:999-5460 (4462 slots) master
   1 additional replica(s)
S: fc90d090fae909fd4f962752941c039d081d3854 10.10.2.85:6303
   slots: (0 slots) slave
   replicates 5ef18f95f75756891aa948ea1f200044f1d3947c
M: 5ef18f95f75756891aa948ea1f200044f1d3947c 10.10.2.85:6302
   slots:0-998,5461-6461,10923-11921 (2999 slots) master
   1 additional replica(s)
S: a74642c0fbc98f921be477eabcdd22eccd89891f 10.10.2.85:6301
   slots: (0 slots) slave
   replicates 2568dbd91fffa16ff93ea8db19275fd7ec8af41a
M: 2568dbd91fffa16ff93ea8db19275fd7ec8af41a 10.10.2.71:6300
   slots:6462-10922 (4461 slots) master
   1 additional replica(s)
M: 85412cf3d8e69354115fc0991f470b32b9213cd7 10.10.2.70:6301
   slots:11922-16383 (4462 slots) master
   1 additional replica(s)
S: 22d2dec483824b84571a60e8c037fff957615552 10.10.2.71:6301
   slots: (0 slots) slave
   replicates 6bea6afa2ee8dfb0cc3c96f804eb3fa77ce98013
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 3000
    # 3000 slots were migrated to this node earlier, so move 3000 back out
What is the receiving node ID? 85412cf3d8e69354115fc0991f470b32b9213cd7
    # ID of the master that receives the 3000 slots
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: 5ef18f95f75756891aa948ea1f200044f1d3947c
    # the node being decommissioned
Source node #2: done
(the proposed moved-slot listing is omitted here)
Do you want to proceed with the proposed reshard plan (yes/no)? yes

2. Confirm that no slots remain on the master 10.10.2.85:6302.

10.10.2.71:6300> cluster nodes
85412cf3d8e69354115fc0991f470b32b9213cd7 10.10.2.70:6301 master - 0 1445853133399 12 connected 0-999 6462-7460 10923-16383
22d2dec483824b84571a60e8c037fff957615552 10.10.2.71:6301 slave 6bea6afa2ee8dfb0cc3c96f804eb3fa77ce98013 0 1445853132898 10 connected
6bea6afa2ee8dfb0cc3c96f804eb3fa77ce98013 10.10.2.85:6300 master - 0 1445853134400 10 connected 1000-5461
2568dbd91fffa16ff93ea8db19275fd7ec8af41a 10.10.2.71:6300 myself,master - 0 0 11 connected 5462-6461 7461-10922
cd1f2c1f348bb4359337e7462c1e21dc82f1551b 10.10.2.70:6300 slave 85412cf3d8e69354115fc0991f470b32b9213cd7 0 1445853131395 12 connected
fc90d090fae909fd4f962752941c039d081d3854 10.10.2.85:6303 slave 5ef18f95f75756891aa948ea1f200044f1d3947c 0 1445853133899 8 connected
a74642c0fbc98f921be477eabcdd22eccd89891f 10.10.2.85:6301 slave 2568dbd91fffa16ff93ea8db19275fd7ec8af41a 0 1445853129394 11 connected
5ef18f95f75756891aa948ea1f200044f1d3947c 10.10.2.85:6302 master - 0 1445853132397 8 connected
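redis-trib's "[OK] All 16384 slots covered" check can be reproduced by parsing `cluster nodes` output like the one above. A minimal Python sketch (node IDs abbreviated in the sample):

```python
def covered_slots(cluster_nodes_output):
    """Collect every slot number claimed by some node in CLUSTER NODES output."""
    covered = set()
    for line in cluster_nodes_output.strip().splitlines():
        fields = line.split()
        # slot ranges, if any, follow the 8th field (the link state, "connected")
        for token in fields[8:]:
            if token.startswith("["):      # skip importing/migrating markers
                continue
            lo, _, hi = token.partition("-")
            covered.update(range(int(lo), int(hi or lo) + 1))
    return covered

sample = """\
id1 10.10.2.70:6301 master - 0 1445853133399 12 connected 0-999 6462-7460 10923-16383
id2 10.10.2.85:6300 master - 0 1445853134400 10 connected 1000-5461
id3 10.10.2.71:6300 myself,master - 0 0 11 connected 5462-6461 7461-10922
id4 10.10.2.85:6302 master - 0 1445853132397 8 connected
"""
slots = covered_slots(sample)
print(len(slots) == 16384)   # True: all slots covered, and 10.10.2.85:6302 holds none
```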

3. Remove the slave node

[root@yw_0_0 ~]# redis-trib.rb del-node 10.10.2.85:6303 fc90d090fae909fd4f962752941c039d081d3854
>>> Removing node fc90d090fae909fd4f962752941c039d081d3854 from cluster 10.10.2.85:6303
Connecting to node 10.10.2.85:6303: OK
Connecting to node 10.10.2.85:6301: OK
Connecting to node 10.10.2.85:6302: OK
Connecting to node 10.10.2.85:6300: OK
Connecting to node 10.10.2.70:6300: OK
Connecting to node 10.10.2.71:6301: OK
Connecting to node 10.10.2.70:6301: OK
Connecting to node 10.10.2.71:6300: OK
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.

4. Remove the master node

redis-trib.rb del-node 10.10.2.70:6301 5ef18f95f75756891aa948ea1f200044f1d3947c
>>> Removing node 5ef18f95f75756891aa948ea1f200044f1d3947c from cluster 10.10.2.70:6301
Connecting to node 10.10.2.70:6301: OK
Connecting to node 10.10.2.71:6300: OK
Connecting to node 10.10.2.85:6301: OK
Connecting to node 10.10.2.71:6301: OK
Connecting to node 10.10.2.85:6302: OK
Connecting to node 10.10.2.70:6300: OK
Connecting to node 10.10.2.85:6300: OK
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.

C. Scenario 3

If the master of a shard fails, its slave is promoted to master. If a new slave is not added under the new master in time and that master then also fails, the shard becomes completely unavailable. To guard against this we would need at least two slaves under every master, which doubles the required memory or server resources. Is there a cheaper way? Yes. The cluster-migration-barrier parameter in the configuration file above exists precisely for this: we only need to attach multiple slaves to one of the masters in the cluster. When another master is left without any working slave, the master with spare slaves cuts one of them over, preserving the availability of the whole cluster.
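The behaviour cluster-migration-barrier enables can be sketched as a toy model (a deliberate simplification of Redis's replica-migration logic, not the actual implementation; the barrier is the minimum number of working slaves a donor master must keep):

```python
def migrate_replica(masters, migration_barrier=1):
    """masters: {master: [slaves]}. If a master has no working slave, move
    one over from the master with the most slaves, provided the donor still
    keeps at least `migration_barrier` slaves afterwards."""
    orphans = [m for m, slaves in masters.items() if not slaves]
    for orphan in orphans:
        donor = max(masters, key=lambda m: len(masters[m]))
        if len(masters[donor]) > migration_barrier:
            masters[orphan].append(masters[donor].pop())
    return masters

# 10.10.2.70:6300 has two slaves; 10.10.2.71:6300 just lost both of its slaves.
state = {
    "10.10.2.70:6300": ["10.10.2.70:6301", "10.10.2.85:6302"],
    "10.10.2.71:6300": [],
    "10.10.2.85:6300": ["10.10.2.85:6301"],
}
state = migrate_replica(state, migration_barrier=1)
print(state["10.10.2.71:6300"])   # ['10.10.2.85:6302'] - the spare slave moved over
```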

1. Add an extra slave 10.10.2.85:6302 under the master 10.10.2.70:6300.

[root@yw_0_0 ~]# redis-trib.rb add-node --slave --master-id cd1f2c1f348bb4359337e7462c1e21dc82f1551b 10.10.2.85:6302 10.10.2.70:6300
>>> Adding node 10.10.2.85:6302 to cluster 10.10.2.70:6300
Connecting to node 10.10.2.70:6300: OK
Connecting to node 10.10.2.85:6300: OK
Connecting to node 10.10.2.71:6300: OK
Connecting to node 10.10.2.70:6301: OK
Connecting to node 10.10.2.85:6301: OK
Connecting to node 10.10.2.71:6301: OK
>>> Performing Cluster Check (using node 10.10.2.70:6300)
M: cd1f2c1f348bb4359337e7462c1e21dc82f1551b 10.10.2.70:6300
   slots:3000-5461,6462-7460,10923-16383 (8922 slots) master
   1 additional replica(s)
M: e36cdef7a26ed59e8d9db2cf1dbc1997bfc9dfde 10.10.2.85:6300
   slots:0-2999 (3000 slots) master
   1 additional replica(s)
M: 2568dbd91fffa16ff93ea8db19275fd7ec8af41a 10.10.2.71:6300
   slots:5462-6461,7461-10922 (4462 slots) master
   1 additional replica(s)
S: 85412cf3d8e69354115fc0991f470b32b9213cd7 10.10.2.70:6301
   slots: (0 slots) slave
   replicates cd1f2c1f348bb4359337e7462c1e21dc82f1551b
S: 89fcc4994a99ed2fe9bbb908c58dfda2cf31e7d2 10.10.2.85:6301
   slots: (0 slots) slave
   replicates e36cdef7a26ed59e8d9db2cf1dbc1997bfc9dfde
S: 1f3ea36eacbe005a4b9ac52aeef6d83337dac051 10.10.2.71:6301
   slots: (0 slots) slave
   replicates 2568dbd91fffa16ff93ea8db19275fd7ec8af41a
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Connecting to node 10.10.2.85:6302: OK
>>> Send CLUSTER MEET to node 10.10.2.85:6302 to make it join the cluster.
Waiting for the cluster to join.
>>> Configure node as replica of 10.10.2.70:6300.
[OK] New node added correctly.

2. Stop 10.10.2.71:6301, the slave of master 10.10.2.71:6300.

redis-cli -h 10.10.2.71 -p 6301 shutdown

3. Verify that 10.10.2.85:6302 has become a slave of 10.10.2.71:6300.

10.10.2.71:6300> cluster nodes
85412cf3d8e69354115fc0991f470b32b9213cd7 10.10.2.70:6301 slave cd1f2c1f348bb4359337e7462c1e21dc82f1551b 0 1445911596844 17 connected
89fcc4994a99ed2fe9bbb908c58dfda2cf31e7d2 10.10.2.85:6301 slave e36cdef7a26ed59e8d9db2cf1dbc1997bfc9dfde 0 1445911594841 20 connected
2568dbd91fffa16ff93ea8db19275fd7ec8af41a 10.10.2.71:6300 myself,master - 0 0 11 connected 5462-6461 7461-10922
cd1f2c1f348bb4359337e7462c1e21dc82f1551b 10.10.2.70:6300 master - 0 1445911593839 17 connected 3000-5461 6462-7460 10923-16383
2b34532cd6937063d1da26cd3162881b73d97a06 10.10.2.85:6302 slave 2568dbd91fffa16ff93ea8db19275fd7ec8af41a 0 1445911592838 17 connected   # successfully cut over under 10.10.2.71:6300
1f3ea36eacbe005a4b9ac52aeef6d83337dac051 10.10.2.71:6301 slave,fail 2568dbd91fffa16ff93ea8db19275fd7ec8af41a 1445911561982 1445911559778 11 disconnected
e36cdef7a26ed59e8d9db2cf1dbc1997bfc9dfde 10.10.2.85:6300 master - 0 1445911595843 20 connected 0-2999

V. cluster commands

CLUSTER INFO                                  Print information about the cluster.
CLUSTER NODES                                 List all nodes currently known to the cluster and their information.
CLUSTER MEET <ip> <port>                      Add the node at the given ip and port to the cluster, making it part of the cluster.
CLUSTER FORGET <node_id>                      Remove the node specified by node_id from the cluster.
CLUSTER REPLICATE <node_id>                   Make the current node a slave of the node specified by node_id.
CLUSTER SAVECONFIG                            Save the node's configuration file to disk.
CLUSTER ADDSLOTS <slot> [slot ...]            Assign one or more slots to the current node.
CLUSTER DELSLOTS <slot> [slot ...]            Remove the assignment of one or more slots from the current node.
CLUSTER FLUSHSLOTS                            Remove all slots assigned to the current node, turning it into a node without any slots.
CLUSTER SETSLOT <slot> NODE <node_id>         Assign the slot to the node specified by node_id. If the slot is already assigned to another node, delete it from that node first.
CLUSTER SETSLOT <slot> MIGRATING <node_id>    Migrate the slot from the current node to the node specified by node_id.
CLUSTER SETSLOT <slot> IMPORTING <node_id>    Import the slot from the node specified by node_id into the current node.
CLUSTER SETSLOT <slot> STABLE                 Cancel an ongoing import or migration of the slot.
CLUSTER KEYSLOT <key>                         Compute the slot in which the key should be placed.
CLUSTER COUNTKEYSINSLOT <slot>                Return the number of keys currently contained in the slot.
CLUSTER GETKEYSINSLOT <slot> <count>          Return up to count keys in the slot.

References:

http://www.redis.cn/topics/cluster-tutorial.html

http://www.redis.cn/topics/cluster-spec.html

http://redisdoc.com/topic/cluster-tutorial.html
