Redis Cluster has finally been released as stable, which is exciting news. Since it took such a long time to reach a stable release, let's start by simply trying it out.
I. Basic cluster concepts
A Redis cluster is a facility (installation) that shares data across multiple Redis nodes.
Redis clusters do not support Redis commands that need to operate on multiple keys at the same time, because executing such commands would require moving data between nodes; under high load these commands would degrade the cluster's performance and lead to unpredictable behavior.
Redis clusters provide a degree of availability through partitioning: the cluster can continue to process command requests even if some of its nodes fail or are unable to communicate.
The Redis cluster offers the following two benefits:
- The ability to automatically split data across multiple nodes.
- The ability to continue processing command requests when a subset of the nodes in the cluster fails or is unable to communicate.
Redis clusters use data sharding rather than consistent hashing: a Redis cluster contains 16384 hash slots, and every key in the database belongs to one of these 16384 slots. The cluster uses the formula CRC16(key) % 16384 to decide which slot a key belongs to, where CRC16(key) computes the CRC16 checksum of the key.
Each node in the cluster is responsible for a portion of the hash slots. For example, a cluster with three nodes might be laid out as follows:
- Node A is responsible for hash slots 0 to 5500.
- Node B is responsible for hash slots 5501 to 11000.
- Node C is responsible for hash slots 11001 to 16383.
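Once a cluster is running you can ask any node which slot a given key maps to, which is a quick way to verify the CRC16(key) % 16384 formula above (a minimal sketch, assuming an instance listening on port 7000 as in the deployment later in this post; the key name is arbitrary):

redis-cli -p 7000 cluster keyslot user:1000
# returns an integer between 0 and 16383; the node responsible for the key is the
# one whose slot range (as in the example assignment above) contains that number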
This method of distributing hash slots across nodes makes it easy to add nodes to or remove nodes from the cluster. For example:
- If a new node D is added to the cluster, the cluster simply moves some of the hash slots from nodes A, B, and C to node D.
- Similarly, if node A is to be removed from the cluster, the cluster simply moves all of node A's hash slots to nodes B and C, and then removes the now empty (slot-free) node A.
Because moving hash slots from one node to another does not block either node, adding a new node, removing an existing node, or changing the number of hash slots a node holds does not require taking the cluster offline.
To keep the cluster operating normally when a subset of nodes goes offline or cannot communicate with the majority of the cluster, Redis Cluster uses master-slave replication for its nodes: each node has 1 to N replicas, where one replica is the master and the remaining N-1 replicas are slaves.
In the earlier example with nodes A, B, and C, if node B went offline the cluster could not function properly, because no node would be available to handle hash slots 5501 to 11000.
On the other hand, if we add a slave node B1 to master B when the cluster is created (or at least before node B goes offline), then when master B goes down the cluster will promote B1 to be the new master, let it take over from the offline master B, and continue serving hash slots 5501 to 11000, so the cluster does not stop functioning because of B's failure.
However, if both node B and B1 are offline, the Redis cluster will stop functioning.
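A quick way to check whether the cluster as a whole is still serving requests is the CLUSTER INFO command (a minimal sketch, again assuming a node on port 7000 from the deployment below):

redis-cli -p 7000 cluster info
# cluster_state:ok    means all 16384 slots are covered and the cluster accepts commands
# cluster_state:fail  means some slots are uncovered, e.g. when both B and B1 are down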
The overall Redis Cluster architecture is as follows.
Architectural Details:
(1) All Redis nodes are interconnected with each other (PING-PONG mechanism) and use a binary protocol internally to optimize transmission speed and bandwidth.
(2) A node is considered failed (fail) only after the failure has been detected by more than half of the nodes in the cluster.
(3) Clients connect directly to Redis nodes; no intermediate proxy layer is required. A client does not need to connect to all nodes in the cluster, only to any one available node.
(4) Redis Cluster maps all physical nodes onto the slots [0-16383], and the cluster is responsible for maintaining the node <-> slot <-> value mapping.
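You can inspect this node <-> slot mapping at any time from any node (a sketch, using the ports from the deployment below):

redis-cli -p 7000 cluster nodes   # one line per node: node id, address, role, and the slot ranges it serves
redis-cli -p 7000 cluster slots   # slot ranges together with the master and replicas serving each range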
II. Building and using a Redis cluster
A working cluster requires at least 3 master nodes. Here we will create 6 Redis nodes, of which three are masters and three are slaves. The IP addresses and ports of the Redis nodes are as follows (all on the same machine for the sake of a simple demonstration):
127.0.0.1:7000
127.0.0.1:7001
127.0.0.1:7002
127.0.0.1:7003
127.0.0.1:7004
127.0.0.1:7005
1. Download the latest version of Redis.
wget http://download.redis.io/releases/redis-3.0.0.tar.gz
2. Extract and install
tar xf redis-3.0.0.tar.gz
cd redis-3.0.0
make && make install
3. Create a directory that holds multiple instances
mkdir -p /data/cluster
cd /data/cluster
mkdir 7000 7001 7002 7003 7004 7005
4. Modify the configuration file
cp redis-3.0.0/redis.conf /data/cluster/7000/
Modify the following options in the configuration file
port 7000
daemonize yes
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
The cluster-enabled option enables cluster mode for the instance, and the cluster-config-file option sets the path where the node configuration file is saved (the default value is nodes.conf). The other options should be familiar already. The node configuration file is not meant to be edited by hand; it is created by the Redis cluster at startup and updated automatically whenever necessary.
After the modification is complete, copy the modified redis.conf to the 7001-7005 directories and change the port in each copy to match its folder.
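One way to do this is with a small shell loop (a sketch based on the layout created above; it simply rewrites the port line in each copy):

for port in 7001 7002 7003 7004 7005; do
    cp /data/cluster/7000/redis.conf /data/cluster/$port/
    sed -i "s/^port 7000/port $port/" /data/cluster/$port/redis.conf
done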
5. Start 6 Redis instances separately.
cd /data/cluster/7000
redis-server redis.conf
cd /data/cluster/7001
redis-server redis.conf
cd /data/cluster/7002
redis-server redis.conf
cd /data/cluster/7003
redis-server redis.conf
cd /data/cluster/7004
redis-server redis.conf
cd /data/cluster/7005
redis-server redis.conf
Check that the processes are running:
# ps -ef | grep redis
root 4168    1 0 11:49 ?     00:00:00 redis-server *:7000 [cluster]
root 4176    1 0 11:49 ?     00:00:00 redis-server *:7001 [cluster]
root 4186    1 0 11:50 ?     00:00:00 redis-server *:7002 [cluster]
root 4194    1 0 11:50 ?     00:00:00 redis-server *:7003 [cluster]
root 4202    1 0 11:50 ?     00:00:00 redis-server *:7004 [cluster]
root 4210    1 0 11:50 ?     00:00:00 redis-server *:7005 [cluster]
root 4219 4075 0 11:50 pts/2 00:00:00 grep redis
6. Run the command that creates the cluster. Install the dependencies first, or creating the cluster will fail.
yum install ruby rubygems -y
Install the redis gem. Download it from:
https://rubygems.org/gems/redis/versions/3.0.0
gem install -l redis-3.0.0.gem
Copy the cluster management program redis-trib.rb to /usr/local/bin.
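In the Redis 3.0 source tree the script lives under src/, so the copy looks roughly like this (path assumed from the default source layout):

cp redis-3.0.0/src/redis-trib.rb /usr/local/bin/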
To create a cluster:
redis-trib.rb create --replicas 1 127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005
The meaning of the command is as follows:
- The command given to the redis-trib.rb program is create , which means we want to create a new cluster.
- Option --replicas 1 means that we want to create a slave node for each master node in the cluster.
- The remaining parameters are the addresses of the instances, and we want the program to use the instances at these addresses to create the new cluster.
In short, the command above tells redis-trib to create a cluster containing three master nodes and three slave nodes.
Next, redis-trib prints the configuration it intends to apply. If it looks correct, type yes and redis-trib will apply this configuration to the cluster:
>>> Creating cluster
Connecting to node 127.0.0.1:7000: OK
Connecting to node 127.0.0.1:7001: OK
Connecting to node 127.0.0.1:7002: OK
Connecting to node 127.0.0.1:7003: OK
Connecting to node 127.0.0.1:7004: OK
Connecting to node 127.0.0.1:7005: OK
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
127.0.0.1:7000
127.0.0.1:7001
127.0.0.1:7002
Adding replica 127.0.0.1:7003 to 127.0.0.1:7000
Adding replica 127.0.0.1:7004 to 127.0.0.1:7001
Adding replica 127.0.0.1:7005 to 127.0.0.1:7002
M: 2774f156af482b4f76a5c0bda8ec561a8a1719c2 127.0.0.1:7000
   slots:0-5460 (5461 slots) master
M: 2d03b862083ee1b1785dba5db2987739cf3a80eb 127.0.0.1:7001
   slots:5461-10922 (5462 slots) master
M: 0456869a2c2359c3e06e065a09de86df2e3135ac 127.0.0.1:7002
   slots:10923-16383 (5461 slots) master
S: 37b251500385929d5c54a005809377681b95ca90 127.0.0.1:7003
   replicates 2774f156af482b4f76a5c0bda8ec561a8a1719c2
S: e2e2e692c40fc34f700762d1fe3a8df94816a062 127.0.0.1:7004
   replicates 2d03b862083ee1b1785dba5db2987739cf3a80eb
S: 9923235f8f2b2587407350b1d8b887a7a59de8db 127.0.0.1:7005
   replicates 0456869a2c2359c3e06e065a09de86df2e3135ac
Can I set the above configuration? (type 'yes' to accept):
After you type yes and press Enter to confirm, the cluster applies the configuration to each node and joins the nodes together, i.e. the nodes begin communicating with one another:
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join......
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: 2774f156af482b4f76a5c0bda8ec561a8a1719c2 127.0.0.1:7000
   slots:0-5460 (5461 slots) master
M: 2d03b862083ee1b1785dba5db2987739cf3a80eb 127.0.0.1:7001
   slots:5461-10922 (5462 slots) master
M: 0456869a2c2359c3e06e065a09de86df2e3135ac 127.0.0.1:7002
   slots:10923-16383 (5461 slots) master
M: 37b251500385929d5c54a005809377681b95ca90 127.0.0.1:7003
   slots: (0 slots) master
   replicates 2774f156af482b4f76a5c0bda8ec561a8a1719c2
M: e2e2e692c40fc34f700762d1fe3a8df94816a062 127.0.0.1:7004
   slots: (0 slots) master
   replicates 2d03b862083ee1b1785dba5db2987739cf3a80eb
M: 9923235f8f2b2587407350b1d8b887a7a59de8db 127.0.0.1:7005
   slots: (0 slots) master
   replicates 0456869a2c2359c3e06e065a09de86df2e3135ac
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
If everything is normal, the following information is printed:
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Clients for the cluster
A simple way to test a Redis cluster is to use redis-rb-cluster or redis-cli; below we use redis-cli as the example:
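For instance, connecting with the -c (cluster) option lets redis-cli follow MOVED redirections automatically (a sketch; the key name is arbitrary and the exact slot and target node depend on the key):

redis-cli -c -p 7000
127.0.0.1:7000> set foo bar
-> Redirected to slot [12182] located at 127.0.0.1:7002
OK
127.0.0.1:7002> get foo
"bar"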
We can also check what other commands redis-trib provides:
# redis-trib.rb help
Usage: redis-trib <command> <options> <arguments ...>

  set-timeout     host:port milliseconds
  add-node        new_host:new_port existing_host:existing_port
                  --master-id <arg>
                  --slave
  fix             host:port
  help            (show this help)
  del-node        host:port node_id
  import          host:port
                  --from <arg>
  check           host:port
  call            host:port command arg arg .. arg
  create          host1:port1 ... hostN:portN
                  --replicas <arg>
  reshard         host:port
                  --yes
                  --to <arg>
                  --from <arg>
                  --slots <arg>

For check, fix, reshard, del-node, set-timeout you can specify the host and port of any working node in the cluster.
Just from the names you can tell that add-node adds a node, del-node removes a node, and check checks the cluster status.
Running cluster nodes through redis-cli shows that 7000-7002 are masters and 7003-7005 are slaves.
Failover test:
You can see that 7001 is working normally and the key's value can be retrieved. Now kill the 7000 instance and query again.
# ps -ef | grep 7000
root 4168    1 0 11:49 ?     00:00:03 redis-server *:7000 [cluster]
root 4385 4361 0 12:39 pts/3 00:00:00 grep 7000
# kill 4168
# ps -ef | grep 7000
root 4387 4361 0 12:39 pts/3 00:00:00 grep 7000
The value can still be retrieved normally. Now check the cluster state:
# redis-cli -c -p 7001 cluster nodes
2d03b862083ee1b1785dba5db2987739cf3a80eb 127.0.0.1:7001 myself,master - 0 0 2 connected 5461-10922
0456869a2c2359c3e06e065a09de86df2e3135ac 127.0.0.1:7002 master - 0 1428295271619 3 connected 10923-16383
37b251500385929d5c54a005809377681b95ca90 127.0.0.1:7003 master - 0 1428295270603 7 connected 0-5460
e2e2e692c40fc34f700762d1fe3a8df94816a062 127.0.0.1:7004 slave 2d03b862083ee1b1785dba5db2987739cf3a80eb 0 1428295272642 5 connected
2774f156af482b4f76a5c0bda8ec561a8a1719c2 127.0.0.1:7000 master,fail - 1428295159553 1428295157205 1 disconnected
The instance that was on port 7000 now shows fail, and 7003, which was originally a slave, has been automatically promoted to master.
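If you restart the old 7000 instance, it will rejoin the cluster; in Redis 3.0 a failed-over master that comes back online is expected to become a slave of the node that replaced it (a sketch of how to check):

cd /data/cluster/7000
redis-server redis.conf
redis-cli -c -p 7001 cluster nodes    # 127.0.0.1:7000 should now appear as a slave of the 7003 master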
For more on adding nodes, removing nodes, and resharding the cluster online, refer to the official documentation.
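As a rough sketch of what those operations look like with redis-trib (the 7006/7007 addresses are only illustrative; see the official documentation for the full procedures):

redis-trib.rb add-node 127.0.0.1:7006 127.0.0.1:7000          # add a new, empty master to the cluster
redis-trib.rb add-node --slave 127.0.0.1:7007 127.0.0.1:7000  # add a new node as a slave
redis-trib.rb reshard 127.0.0.1:7000                          # interactively move hash slots between masters
redis-trib.rb del-node 127.0.0.1:7000 <node-id>               # remove an (empty) node by its node id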
Summary:
Redis Cluster is a good thing, but it has only just become stable, so there are surely still plenty of pitfalls and relatively few people use it so far. It is worth learning and understanding early, but for production use it must be considered carefully and tested rigorously. For production environments you can also consider Twitter's open-source Twemproxy and Pea Pod's open-source Codis; both projects are fairly mature and are already used by many companies, as friends in the industry have confirmed. Twemproxy and Codis will be covered in later posts.