Redis sharding:
Why shard: as a single Redis instance stores more and more data, its performance degrades.
Current approaches to sharding:
1. Client-side sharding: the application itself decides which Redis instance each piece of data goes to. Advantage: very flexible. Disadvantage: adding a node to scale out is laborious.
2. Proxy sharding: a third-party proxy such as twemproxy. Disadvantage: the proxy becomes the bottleneck; whatever performance the proxy has is the performance your whole Redis deployment gets.
3. Redis Cluster.
4. Codis (open-sourced by Wandoujia).
Redis Cluster (excerpt from: http://redisdoc.com/topic/cluster-tutorial.html#id2)
Cluster sharding:
Redis Cluster distributes data by sharding rather than by consistent hashing: a Redis cluster contains 16384 hash slots, every key in the database belongs to one of these 16384 slots, and the cluster uses the formula CRC16(key) % 16384 to compute which slot a key belongs to, where CRC16(key) is the CRC16 checksum of the key. Each node in the cluster is responsible for a portion of the hash slots. For example, in a three-node cluster:
* Node A handles hash slots 0 to 5500.
* Node B handles hash slots 5501 to 11000.
* Node C handles hash slots 11001 to 16383.
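The slot formula above can be checked with a short script. This is a minimal sketch: a pure-Python CRC-16/XMODEM (the CRC16 variant Redis Cluster uses) followed by the modulo step; the key names are arbitrary examples, not from the article.

```python
def crc16(data: bytes) -> int:
    """CRC-16/XMODEM (poly 0x1021, init 0), the checksum Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Slot a key maps to: CRC16(key) % 16384."""
    return crc16(key.encode()) % 16384

# Every key lands in exactly one of the 16384 slots.
for k in ("user:1000", "user:1001", "counter"):
    print(k, "->", key_slot(k))
```

The standard check value for CRC-16/XMODEM is crc16(b"123456789") == 0x31C3, which makes the implementation easy to sanity-check.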
Distributing hash slots across nodes this way makes it easy to add nodes to, or remove nodes from, the cluster. For example:
* If the user adds the new node D to the cluster, the cluster simply moves some slots in nodes A, B, C to Node D.
* Similarly, if a user wants to remove node A from the cluster, the cluster simply moves all of A's hash slots to nodes B and C and then removes the now-empty (slotless) node A.
Because moving a hash slot from one node to another does not block either node, adding a new node, removing an existing node, or changing the number of hash slots a node holds does not require taking the cluster offline.
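The redistribution described above is just a matter of reassigning slot numbers to nodes. A toy sketch of the arithmetic (this is not Redis's actual rebalancing code; the even-split policy is an assumption for illustration), starting from the A/B/C layout in the example:

```python
# Toy model: a slot map is simply slot-number -> node-name.
SLOTS = 16384

# Initial layout from the three-node example.
slot_map = {}
slot_map.update({s: "A" for s in range(0, 5501)})
slot_map.update({s: "B" for s in range(5501, 11001)})
slot_map.update({s: "C" for s in range(11001, SLOTS)})

def add_node(slot_map, new_node):
    """Move a share of slots from each existing node to the new node."""
    nodes = sorted(set(slot_map.values()))
    target = SLOTS // (len(nodes) + 1)      # slots the new node should own
    per_donor = target // len(nodes)        # taken evenly from each donor
    for donor in nodes:
        donated = [s for s, n in slot_map.items() if n == donor][:per_donor]
        for s in donated:
            slot_map[s] = new_node          # reassigning a slot = migrating it

add_node(slot_map, "D")
counts = {n: sum(1 for v in slot_map.values() if v == n)
          for n in set(slot_map.values())}
print(counts)  # each of A, B, C, D now owns roughly 16384 / 4 slots
```

In the real cluster, reassigning a slot also means migrating the keys it contains, which redis-trib performs incrementally and without blocking.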
Master-slave replication in a Redis cluster
To keep functioning when a subset of nodes fails or cannot communicate with the majority of the cluster, Redis Cluster uses master-slave replication between nodes: each node has 1 to N replicas, of which one is the master and the remaining N-1 are slaves.
In the earlier example with nodes A, B and C, if node B went offline the cluster could not function properly, because no node would be left to handle hash slots 5501 to 11000.
On the other hand, if we add slave B1 to master B when the cluster is created (or at any point before B fails), then when master B goes down the cluster promotes B1 to be the new master and has it take over slots 5501 to 11000 in place of B, so the cluster keeps functioning despite the loss of the original master B.
However, if both node B and B1 are offline, the Redis cluster will stop functioning.
Consistency guarantees of Redis clusters
Redis clusters do not guarantee strong data consistency (strong consistency): Under certain conditions, a Redis cluster may lose a write command that has already been executed.
The use of asynchronous replication is one reason a Redis cluster may lose write commands. Consider the following write:
* The client sends a write command to master node B.
* Master Node B executes the Write command and returns the command reply to the client.
* Master Node B copies the write command just executed to its slave nodes B1, B2, and B3.
As you can see, the master replicates the command only after replying to the client: if every command request had to wait for replication to complete before being acknowledged, the master's throughput would drop considerably. This is a deliberate trade-off between performance and consistency.
If it proves really necessary, the Redis cluster may in the future provide a way to execute write commands synchronously.
Another scenario in which a Redis cluster may lose commands is a network partition in which a client ends up isolated with a minority of instances that includes at least one master.
For example, suppose the cluster contains six nodes A, B, C, A1, B1 and C1, where A, B and C are masters and A1, B1 and C1 are their respective slaves; there is also a client Z1.
Suppose a network partition occurs: the cluster may split in two, with the majority side containing nodes A, C, A1, B1 and C1, and the minority side containing node B and client Z1.
During a network split, master node B will still accept the write command sent by Z1:
* If the partition heals quickly, the cluster will continue to function normally;
* However, if the partition lasts long enough for the majority side to promote slave B1 to master in place of the original master B, the write commands Z1 sent to B will be lost.
Note that there is a limit on how long client Z1 can keep sending write commands to master B during the partition. This limit is called the node timeout, and it is an important configuration option for Redis clusters:
* On the majority side, if a master is unreachable for longer than the node timeout, the cluster considers it failed and promotes one of its slaves to continue its work.
* On the minority side, if a master cannot reconnect to the cluster within the node timeout, it stops processing write commands and reports an error to clients.
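For reference, the node timeout (and cluster mode itself) is set in each node's redis.conf. A minimal fragment (the values here are illustrative defaults, not taken from the article's setup):

```
# enable cluster mode on this instance
cluster-enabled yes
# file where this node persists its view of the cluster (auto-generated)
cluster-config-file nodes.conf
# the node timeout discussed above, in milliseconds
cluster-node-timeout 15000
```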
Redis cluster installation:
1. Environment: first make sure Redis itself is installed.
cd /opt/
mkdir `seq 7001 7008`
cp /etc/redis/6379.conf ./
Change every occurrence of 6379 (port, log file, persistence dir) to each instance's own port:
sed 's/6379/7001/g' 6379.conf > 7001/redis.conf
sed 's/6379/7002/g' 6379.conf > 7002/redis.conf
sed 's/6379/7003/g' 6379.conf > 7003/redis.conf
sed 's/6379/7004/g' 6379.conf > 7004/redis.conf
sed 's/6379/7005/g' 6379.conf > 7005/redis.conf
sed 's/6379/7006/g' 6379.conf > 7006/redis.conf
sed 's/6379/7007/g' 6379.conf > 7007/redis.conf
sed 's/6379/7008/g' 6379.conf > 7008/redis.conf
for i in `seq 7001 7008`;do cd /opt/$i && /usr/local/bin/redis-server redis.conf ; done
2. Install the management tool. The Redis source ships with a cluster management tool, redis-trib.rb, written in Ruby, so Ruby must be installed:
yum -y install ruby rubygems
Install the Ruby redis library the tool depends on:
gem install redis
3. Copy the management tool into the PATH:
cp /opt/redis-3.0.4/src/redis-trib.rb /usr/local/bin/redis-trib
To view redis-trib's help, run redis-trib with no arguments.
4. Create the cluster. Instances 7001-7006 will be the six cluster nodes; 7007-7008 are two spare instances kept back for later steps.
[[email protected]] $ redis-trib create --replicas 1 192.168.0.201:7001 192.168.0.201:7002 192.168.0.201:7003 192.168.0.201:7004 192.168.0.201:7005 192.168.0.201:7006
>>> Creating cluster
Connecting to node 192.168.0.201:7001: OK
Connecting to node 192.168.0.201:7002: OK
Connecting to node 192.168.0.201:7003: OK
Connecting to node 192.168.0.201:7004: OK
Connecting to node 192.168.0.201:7005: OK
Connecting to node 192.168.0.201:7006: OK
>>> Performing hash slots allocation on 6 nodes ...
Using 3 masters:
192.168.0.201:7001
192.168.0.201:7002
192.168.0.201:7003
Adding replica 192.168.0.201:7004 to 192.168.0.201:7001
Adding replica 192.168.0.201:7005 to 192.168.0.201:7002
Adding replica 192.168.0.201:7006 to 192.168.0.201:7003
M: 699f318027f87f3c49d48e44116820e673bd306a 192.168.0.201:7001
slots: 0-5460 (5461 slots) master
M: 96892fd3f51292e922383ddb6e8018e2f772deed 192.168.0.201:7002
slots: 5461-10922 (5462 slots) master
M: f702fd03c1e3643db7e385915842533ba5aab98d 192.168.0.201:7003
slots: 10923-16383 (5461 slots) master
S: d0994ce7ef68c0834030334afcd60013773f2e77 192.168.0.201:7004
replicates 699f318027f87f3c49d48e44116820e673bd306a
S: d880581504caff4a002242b2b259d5242b8569fc 192.168.0.201:7005
replicates 96892fd3f51292e922383ddb6e8018e2f772deed
S: a77b16c4f140c0f5c17c907ce7ee5e42ee2a7b02 192.168.0.201:7006
replicates f702fd03c1e3643db7e385915842533ba5aab98d
Can I set the above configuration? (Type ‘yes’ to accept): YES
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join ...
>>> Performing Cluster Check (using node 192.168.0.201:7001)
M: 699f318027f87f3c49d48e44116820e673bd306a 192.168.0.201:7001
slots: 0-5460 (5461 slots) master
M: 96892fd3f51292e922383ddb6e8018e2f772deed 192.168.0.201:7002
slots: 5461-10922 (5462 slots) master
M: f702fd03c1e3643db7e385915842533ba5aab98d 192.168.0.201:7003
slots: 10923-16383 (5461 slots) master
M: d0994ce7ef68c0834030334afcd60013773f2e77 192.168.0.201:7004
slots: (0 slots) master
replicates 699f318027f87f3c49d48e44116820e673bd306a
M: d880581504caff4a002242b2b259d5242b8569fc 192.168.0.201:7005
slots: (0 slots) master
replicates 96892fd3f51292e922383ddb6e8018e2f772deed
M: a77b16c4f140c0f5c17c907ce7ee5e42ee2a7b02 192.168.0.201:7006
slots: (0 slots) master
replicates f702fd03c1e3643db7e385915842533ba5aab98d
[OK] All nodes agree about slots configuration.
>>> Check for open slots ...
>>> Check slots coverage ...
[OK] All 16384 slots covered.
# create --replicas 1: the --replicas 1 option specifies how many replicas each
#   master gets, i.e. how many slaves per master.
# Redis Cluster requires a minimum of 3 masters.
# Given the node list host1:port host2:port ... hostN:port, redis-trib picks the
#   masters first and assigns the remaining nodes as their slaves: in the run
#   above, 7001-7003 became masters and 7004-7006 their respective slaves.
# With --replicas 2, each master would be given two slaves instead.
M: the node ID, automatically generated by the cluster; nodes use this ID to identify each other during cluster communication.
5. Connect to the cluster (you can connect to any node in it):
redis-cli -c -h 192.168.0.201 -p 7001 (the -c flag enables cluster mode; with it you can connect to any node and be redirected as needed)
192.168.0.201:7001> cluster nodes View cluster nodes
f702fd03c1e3643db7e385915842533ba5aab98d 192.168.0.201:7003 master-0 1444813870405 3 connected 10923-16383
699f318027f87f3c49d48e44116820e673bd306a 192.168.0.201:7001 myself, master-0 0 1 connected 0-5460
d0994ce7ef68c0834030334afcd60013773f2e77 192.168.0.201:7004 slave 699f318027f87f3c49d48e44116820e673bd306a 0 1444813870105 4 connected
a77b16c4f140c0f5c17c907ce7ee5e42ee2a7b02 192.168.0.201:7006 slave f702fd03c1e3643db7e385915842533ba5aab98d 0 1444813868605 6 connected
96892fd3f51292e922383ddb6e8018e2f772deed 192.168.0.201:7002 master-0 1444813869405 2 connected 5461-10922
d880581504caff4a002242b2b259d5242b8569fc 192.168.0.201:7005 slave 96892fd3f51292e922383ddb6e8018e2f772deed 0 1444813869105 5 connected
192.168.0.201:7001> cluster info View cluster information
cluster_state: ok
cluster_slots_assigned: 16384
cluster_slots_ok: 16384
cluster_slots_pfail: 0
cluster_slots_fail: 0
cluster_known_nodes: 6
cluster_size: 3
cluster_current_epoch: 6
cluster_my_epoch: 1
cluster_stats_messages_sent: 1809
cluster_stats_messages_received: 1809
6. Cluster expansion
redis-trib add-node 192.168.0.201:7007 192.168.0.201:7001
Command explanation:
redis-trib add-node <host:port of the node to add> <host:port of an existing node>
View the results after adding:
192.168.0.201:7001> cluster info
cluster_state: ok
cluster_slots_assigned: 16384
cluster_slots_ok: 16384
cluster_slots_pfail: 0
cluster_slots_fail: 0
cluster_known_nodes: 7
cluster_size: 3
cluster_current_epoch: 6
cluster_my_epoch: 1
cluster_stats_messages_sent: 2503
cluster_stats_messages_received: 2503
192.168.0.201:7001> cluster nodes
f702fd03c1e3643db7e385915842533ba5aab98d 192.168.0.201:7003 master-0 1444814061587 3 connected 10923-16383
699f318027f87f3c49d48e44116820e673bd306a 192.168.0.201:7001 myself, master-0 0 1 connected 0-5460
d0994ce7ef68c0834030334afcd60013773f2e77 192.168.0.201:7004 slave 699f318027f87f3c49d48e44116820e673bd306a 0 1444814062087 4 connected
a77b16c4f140c0f5c17c907ce7ee5e42ee2a7b02 192.168.0.201:7006 slave f702fd03c1e3643db7e385915842533ba5aab98d 0 1444814061087 6 connected
a1301a9e1fd24099cd8dc49c47f2263e3124e4d6 192.168.0.201:7007 master-0 1444814063089 0 connected
96892fd3f51292e922383ddb6e8018e2f772deed 192.168.0.201:7002 master-0 1444814062589 2 connected 5461-10922
d880581504caff4a002242b2b259d5242b8569fc 192.168.0.201:7005 slave 96892fd3f51292e922383ddb6e8018e2f772deed 0 1444814061587 5 connected
192.168.0.201:7001>
7. The newly added node has no data and no slots; we can use the reshard command to assign it some:
redis-trib reshard 192.168.0.201:7007
8. Add one more server, this time as a slave.
After adding 7008, we make it a slave of 7007.
[[email protected]] $ redis-trib add-node 192.168.0.201:7008 192.168.0.201:7001
After being added it is a master by default, but it holds no slots.
192.168.0.201:7001> cluster nodes
f702fd03c1e3643db7e385915842533ba5aab98d 192.168.0.201:7003 master-0 1444814915795 3 connected 11089-16383
699f318027f87f3c49d48e44116820e673bd306a 192.168.0.201:7001 myself, master-0 0 1 connected 166-5460
d0994ce7ef68c0834030334afcd60013773f2e77 192.168.0.201:7004 slave 699f318027f87f3c49d48e44116820e673bd306a 0 1444814917298 4 connected
a77b16c4f140c0f5c17c907ce7ee5e42ee2a7b02 192.168.0.201:7006 slave f702fd03c1e3643db7e385915842533ba5aab98d 0 1444814916297 6 connected
a02a66e0286ee2f0a9b5380f7584b9b20dc032ff 192.168.0.201:7008 master-0 1444814915796 0 connected
a1301a9e1fd24099cd8dc49c47f2263e3124e4d6 192.168.0.201:7007 master-0 1444814915295 7 connected 0-165 5461-5627 10923-11088
96892fd3f51292e922383ddb6e8018e2f772deed 192.168.0.201:7002 master-0 1444814916898 2 connected 5628-10922
d880581504caff4a002242b2b259d5242b8569fc 192.168.0.201:7005 slave 96892fd3f51292e922383ddb6e8018e2f772deed 0 1444814916798 5 connected
Then connect to the 7008 instance and make it replicate 7007, using 7007's node ID:
192.168.0.201:7008> cluster replicate a1301a9e1fd24099cd8dc49c47f2263e3124e4d6
OK
Then look at:
192.168.0.201:7008> cluster nodes
699f318027f87f3c49d48e44116820e673bd306a 192.168.0.201:7001 master-0 1444815074072 1 connected 166-5460
a1301a9e1fd24099cd8dc49c47f2263e3124e4d6 192.168.0.201:7007 master-0 1444815073071 7 connected 0-165 5461-5627 10923-11088
96892fd3f51292e922383ddb6e8018e2f772deed 192.168.0.201:7002 master-0 1444815073671 2 connected 5628-10922
a77b16c4f140c0f5c17c907ce7ee5e42ee2a7b02 192.168.0.201:7006 slave f702fd03c1e3643db7e385915842533ba5aab98d 0 1444815073571 3 connected
f702fd03c1e3643db7e385915842533ba5aab98d 192.168.0.201:7003 master-0 1444815072571 3 connected 11089-16383
d0994ce7ef68c0834030334afcd60013773f2e77 192.168.0.201:7004 slave 699f318027f87f3c49d48e44116820e673bd306a 0 1444815073071 1 connected
d880581504caff4a002242b2b259d5242b8569fc 192.168.0.201:7005 slave 96892fd3f51292e922383ddb6e8018e2f772deed 0 1444815073871 2 connected
a02a66e0286ee2f0a9b5380f7584b9b20dc032ff 192.168.0.201:7008 myself, slave a1301a9e1fd24099cd8dc49c47f2263e3124e4d6 0 0 0 connected
192.168.0.201:7008>
9. Note that cluster master-slave replication works by the slave requesting a sync from the master, after which the master performs a BGSAVE and the slave loads the resulting snapshot. If a master's dataset is particularly large, it is better to run multiple Redis instances, each storing part of the data.
Also note multi-key operations: a command touching several keys fails when those keys live in different slots on different nodes!
192.168.7.107:7002> set key101 shuaige
-> Redirected to slot [1601] located at 192.168.7.107:7001
OK
192.168.7.107:7001> set key102 shuaige
-> Redirected to slot [13858] located at 192.168.7.107:7003
OK
192.168.7.107:7003> set key103 shuaige
-> Redirected to slot [9731] located at 192.168.7.107:7002
OK
192.168.7.107:7002> set key104 shuaige
-> Redirected to slot [5860] located at 192.168.7.107:7007
OK
192.168.7.107:7007> set key105 shuaige
-> Redirected to slot [1733] located at 192.168.7.107:7001
OK
192.168.7.107:7001>
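One standard way around the cross-slot limitation is hash tags: if a key contains a {...} section, Redis Cluster hashes only the text inside the first such pair of braces, so keys sharing a tag land in the same slot and can be used together in multi-key commands. A minimal sketch of the tag-extraction rule (the key names are arbitrary examples):

```python
def hash_tag(key: str) -> str:
    """Return the part of the key that Redis Cluster actually hashes.

    If the key contains '{...}' with a non-empty body, only that body is
    hashed; otherwise the whole key is hashed.
    """
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:          # the braces must be non-empty
            return key[start + 1:end]
    return key

# Both keys hash the tag "user:1000", so they map to the same slot and a
# multi-key command like MGET on them succeeds in a cluster.
print(hash_tag("{user:1000}.following"))  # user:1000
print(hash_tag("{user:1000}.followers"))  # user:1000
print(hash_tag("plain-key"))              # plain-key
```

An empty tag such as "{}x" does not count; the whole key is hashed in that case.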
Part four of this series covers Redis Cluster configuration.