Source: http://www.jianshu.com/p/e8fb89bb3a61
Consistent hashing is used in distributed cache systems: keys are mapped to specific machines (by IP), and adding or removing a single machine moves only a small amount of data, so the impact on the running system is small.
Basic Scenario
Suppose you have N cache servers (hereafter simply "caches"). How do you map an object onto one of the N caches? A common approach is to compute the object's hash value and spread objects evenly across the N caches with the remainder method:
hash(object) % N
Everything runs normally, until you consider the following two cases:
1. A cache server m goes down (this must be considered in any real deployment), so all objects mapped to cache m become invalid. What to do? Cache m has to be removed from the pool; the cache count becomes N-1 and the mapping formula becomes hash(object) % (N-1).
2. Because the load has grown, a new cache server has to be added; the cache count becomes N+1 and the mapping formula becomes hash(object) % (N+1).
What do cases 1 and 2 mean? They mean that suddenly almost all cached objects become invalid. For the backend servers this is a disaster: a flood of requests pours straight through to them.
Consider a third problem as well: as hardware gets more powerful, you may want newly added nodes to take on more of the work, and the hash scheme above obviously cannot express that either.
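As a rough illustration of cases 1 and 2 (not part of the original article), the following hedged sketch counts how many of 10,000 synthetic keys land on a different server when the remainder method goes from N = 4 to N = 5 servers; md5 and the key names are assumptions made only for the demonstration.

# A minimal sketch, assuming md5 as the hash function and synthetic key names;
# it only quantifies how much the remainder method reshuffles keys.
import hashlib

def bucket(key, n):
    # map a key to one of n servers with hash(object) % N
    h = int(hashlib.md5(key.encode('utf-8')).hexdigest(), 16)
    return h % n

keys = ['key_%d' % i for i in range(10000)]
moved = sum(1 for k in keys if bucket(k, 4) != bucket(k, 5))
print('moved %d of %d keys (%.0f%%)' % (moved, len(keys), 100.0 * moved / len(keys)))
# Roughly 80% of the keys change servers, i.e. adding a single node
# invalidates almost the whole cache.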
Is there any way out of this situation? That is what consistent hashing provides...
Hash Algorithm and Monotonicity
One criterion for judging a hash algorithm is monotonicity, defined as follows:
Monotonicity means that if some content has already been assigned to buffers by hashing, and a new buffer is then added to the system, the hash result should guarantee that previously assigned content is only ever remapped to the new buffer, never to another buffer in the old buffer set.
It is easy to see that the simple remainder method hash(object) % N has difficulty satisfying monotonicity.
The Principle of Consistent Hashing
Consistent hashing is a hashing scheme that, in a nutshell, changes as few of the existing key-to-cache mappings as possible when a cache is removed or added, thereby satisfying the monotonicity requirement as far as possible.
1. Ring Hash Space
A common hash algorithm maps a value to a 32-bit key, i.e. into the value space 0 ~ 2^32 - 1. We can think of this space as a ring whose head (0) joins its tail (2^32 - 1), as shown in Figure 1 below.
Figure 1 Ring hash space
2. Map the content (objects) to be cached into the hash space
Next consider four objects, object1 ~ object4. The hash function computes a key value for each, and their distribution on the ring is shown in Figure 2.
hash(object1) = key1;
......
hash(object4) = key4;
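As a hedged sketch (the object names are just illustrative strings, and crc32 is an assumed choice of 32-bit hash, not something the article prescribes), the following computes a position on the 0 ~ 2^32 - 1 ring for each object:

from zlib import crc32

def ring_hash(s):
    # mask crc32 to an unsigned 32-bit value, i.e. a point on the ring
    return crc32(s.encode('utf-8')) & 0xffffffff

for name in ('object1', 'object2', 'object3', 'object4'):
    print('hash(%s) = %d' % (name, ring_hash(name)))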
Figure 2 Key distribution of the four objects
3. Map the servers (nodes) into the hash space
The basic idea of consistent hashing is to map both the object and the cache to the same hash value space, and use the same hash algorithm.
Assume there are currently three servers (nodes) A, B and C. The mapping result is shown in Figure 3; they are arranged in the hash space according to their hash values.
A common approach is to use the server's IP address or hostname as the hash input.
hash(cache A) = key A;
......
hash(cache C) = key C;
Figure 3 Key distribution of cache A, B and C
4. Map objects onto caches
Now that both the cache and the object have been mapped to the hash value space using the same hash algorithm, the next thing to consider is how to map the object to the cache.
In this ring space, start from the object's key value and walk clockwise until you meet a cache; the object is stored on that cache. Because the hash values of the object and of the caches are fixed, the chosen cache is unique and deterministic. And there you have it: the mapping from object to cache.
Continuing the example above, by this method object1 is stored on cache A, object2 and object3 on cache C, and object4 on cache B.
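The clockwise lookup can be sketched as follows. This is a minimal, hedged example: the cache and object names are illustrative, crc32 is an assumed hash, and the actual assignments depend on the computed hash values rather than matching the figures exactly.

import bisect
from zlib import crc32

def ring_hash(s):
    return crc32(s.encode('utf-8')) & 0xffffffff

caches = ['cache A', 'cache B', 'cache C']
ring = sorted((ring_hash(c), c) for c in caches)   # caches ordered on the ring
points = [h for h, _ in ring]

def lookup(obj):
    # first cache clockwise from the object's hash; wrap around past the end
    i = bisect.bisect_left(points, ring_hash(obj))
    if i == len(points):
        i = 0
    return ring[i][1]

for obj in ('object1', 'object2', 'object3', 'object4'):
    print('%s -> %s' % (obj, lookup(obj)))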
5. Review the changes to the cache
As noted earlier, the biggest problem with the hash-then-remainder method is that it does not satisfy monotonicity: when the set of caches changes, cached objects are massively invalidated and a huge load hits the backend servers. Now let us analyze how the consistent hashing algorithm behaves.
5.1 Removing the cache
Suppose cache B goes down. According to the mapping method described above, the only affected objects are those lying between cache B and the next cache reached by traversing counterclockwise (cache C), that is, exactly the objects that were mapped to cache B.
So only object4 needs to change; it is remapped to cache C. See Figure 4.
Figure 4 Cache Map after cache B has been removed
5.2 Adding the cache
Now consider adding a new cache D, and assume it is mapped onto the ring between objects object2 and object3. The only affected objects are those lying between cache D and the next cache reached by traversing counterclockwise (cache B); they are a portion of the objects originally mapped to cache C, and they are remapped to cache D.
So only object2 needs to change; it is remapped to cache D. See Figure 5.
Figure 5 Mapping relationship after adding cache D
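To see both changes from section 5 at once, here is a small hedged sketch (again with illustrative names and crc32 as the assumed hash): it removes one cache, adds another, and reports how many of a set of synthetic keys actually move.

import bisect
from zlib import crc32

def ring_hash(s):
    return crc32(s.encode('utf-8')) & 0xffffffff

def build_ring(caches):
    return sorted((ring_hash(c), c) for c in caches)

def lookup(ring, obj):
    points = [h for h, _ in ring]
    i = bisect.bisect_left(points, ring_hash(obj))
    return ring[0][1] if i == len(points) else ring[i][1]

keys = ['key_%d' % i for i in range(1000)]
old_ring = build_ring(['cache A', 'cache B', 'cache C'])
before = {k: lookup(old_ring, k) for k in keys}

for change, caches in [('remove cache B', ['cache A', 'cache C']),
                       ('add cache D', ['cache A', 'cache B', 'cache C', 'cache D'])]:
    new_ring = build_ring(caches)
    moved = sum(1 for k in keys if before[k] != lookup(new_ring, k))
    print('%s: %d of %d keys move' % (change, moved, len(keys)))
# Only the keys that belonged to the removed cache, or that now fall to the
# newly added cache, are remapped; everything else stays where it was.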
6. Virtual Nodes
Another criterion for judging a hash algorithm is balance, defined as follows:
Balance means that the result of the hash should be distributed across all buffers as evenly as possible, so that all buffer space is utilized.
A hash algorithm cannot guarantee absolute balance; with few caches, objects may not map evenly onto them. In the example above, if only cache A and cache C are deployed, then of the four objects cache A stores only object1 while cache C stores object2, object3 and object4; the distribution is very uneven.
To address this, consistent hashing introduces the concept of the "virtual node", defined as follows:
A "virtual node" is a replica of an actual node (machine) in the hash space. One real node corresponds to several virtual nodes, and this count is called the "replication number". Virtual nodes are arranged in the hash space by their hash values.
Take again the case where only cache A and cache C are deployed; Figure 4 showed that the distribution is not uniform. Now introduce virtual nodes and set the "replication number" to 2, which gives 4 virtual nodes: cache A1 and cache A2 represent cache A, while cache C1 and cache C2 represent cache C. Assume the more ideal layout shown in Figure 6.
Figure 6 Mapping relationship after the introduction of "Virtual Node"
At this point, the mapping of the object to the virtual node is:
object1 -> cache A2; object2 -> cache A1; object3 -> cache C1; object4 -> cache C2;
So object1 and object2 end up on cache A, while object3 and object4 end up on cache C; the balance has improved considerably.
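The effect of the replication number on balance can be checked with a short hedged sketch (illustrative server names, crc32 as the assumed hash, and the "#index" suffix borrowed from the naming scheme described later): it counts how many of 10,000 keys land on each real node with 1 virtual node per node versus 100.

import bisect
from collections import Counter
from zlib import crc32

def ring_hash(s):
    return crc32(s.encode('utf-8')) & 0xffffffff

def build_ring(nodes, replicas):
    # each real node is represented by `replicas` virtual nodes named "node#i"
    return sorted((ring_hash('%s#%d' % (n, i)), n)
                  for n in nodes for i in range(replicas))

def lookup(ring, key):
    points = [h for h, _ in ring]
    i = bisect.bisect_left(points, ring_hash(key))
    return ring[0][1] if i == len(points) else ring[i][1]

nodes = ['cache A', 'cache C']
keys = ['key_%d' % i for i in range(10000)]
for replicas in (1, 100):
    ring = build_ring(nodes, replicas)
    counts = Counter(lookup(ring, k) for k in keys)
    print('replicas=%d: %s' % (replicas, dict(counts)))
# With more virtual nodes per real node, the per-node key counts become
# much closer, i.e. the balance improves.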
After the "Virtual node" is introduced, the mapping relationship is transformed from {object---node} to {Object-and-virtual node}. The mapping relationship 7 is shown when querying the cache of an object.
Figure 7 The cache where the object is queried
The hash of a "virtual node" can be computed from the IP address of the corresponding node plus a numeric suffix. For example, assume the IP address of cache A is 202.168.14.241.
Before introducing virtual nodes, the hash value of cache A is computed as:
hash("202.168.14.241");
After introducing virtual nodes, the hash values of the virtual nodes cache A1 and cache A2 are computed as:
hash("202.168.14.241#1"); // cache A1
hash("202.168.14.241#2"); // cache A2
Below is a code implementation demo:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from zlib import crc32

import memcache


class HashConsistency(object):
    def __init__(self, nodes=None, replicas=5):
        # (hash, real node) pairs for every virtual node on the ring
        self.nodes_map = []
        # real node -> list of its virtual node hashes
        self.nodes_replicas = {}
        # real nodes
        self.nodes = nodes
        # number of virtual nodes created per real node
        self.replicas = replicas
        if self.nodes:
            for node in self.nodes:
                self._add_nodes_map(node)
            self._sort_nodes()

    def get_node(self, key):
        """Return the (hash, node) pair responsible for the given key:
        the first virtual node whose hash is not smaller than the key's hash."""
        key_hash = abs(crc32(key))
        for node in self.nodes_map:
            if key_hash > node[0]:
                continue
            return node
        # wrap around to the first node on the ring
        return self.nodes_map[0] if self.nodes_map else None

    def add_node(self, node):
        # add a node
        self._add_nodes_map(node)
        self._sort_nodes()

    def remove_node(self, node):
        # delete a node
        if node not in self.nodes_replicas:
            return
        discard_rep_nodes = self.nodes_replicas[node]
        self.nodes_map = filter(lambda x: x[0] not in discard_rep_nodes,
                                self.nodes_map)

    def _add_nodes_map(self, node):
        # add the virtual nodes of this real node to the nodes_map list
        nodes_reps = []
        for i in xrange(self.replicas):
            rep_node = '%s_%d' % (node, i)
            node_hash = abs(crc32(rep_node))
            self.nodes_map.append((node_hash, node))
            nodes_reps.append(node_hash)
        # real node -> virtual node hashes mapping
        self.nodes_replicas[node] = nodes_reps

    def _sort_nodes(self):
        # keep the virtual nodes sorted by hash value
        self.nodes_map = sorted(self.nodes_map, key=lambda x: x[0])


MEMCACHE_SERVERS = [
    '127.0.0.1:7001',
    '127.0.0.1:7002',
    '127.0.0.1:7003',
    '127.0.0.1:7004',
]

h = HashConsistency(MEMCACHE_SERVERS)
for k in h.nodes_map:
    print k

mc_servers_dict = {}
for ms in MEMCACHE_SERVERS:
    mc = memcache.Client([ms], debug=0)
    mc_servers_dict[ms] = mc

# Add 10 keys to memcache; with consistent hashing each key falls on the
# virtual node determined by its hash value
for i in xrange(10):
    key = 'key_%s' % i
    print key
    server = h.get_node(key)[1]
    mc = mc_servers_dict[server]
    mc.set(key, i)
    print 'server:%s' % server
    print mc
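Note that the demo above is written in Python 2 syntax (print statements, xrange) and assumes the python-memcache client library plus memcached instances listening on 127.0.0.1:7001 through 7004; without those instances running, the mc.set() calls have nothing to store into. The standalone sketches earlier in the article do not depend on memcached and can be run on their own.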