Consistent hashing is a distributed hash table (DHT) algorithm proposed at MIT in 1997 to address hot-spot problems on the Internet, with an intent similar to that of CARP (Cache Array Routing Protocol). Consistent hashing corrects the problems caused by the simple hashing scheme used by CARP, so that distributed hash tables can actually be used in peer-to-peer environments.
The consistent hashing paper proposes four properties for judging whether a hash algorithm is good or bad in a dynamically changing cache environment:

1. Balance: the results of hashing should be distributed among all the buffers as evenly as possible, so that every buffer is utilized.

2. Monotonicity: if some content has already been mapped to a buffer by hashing, and a new buffer is then added to the system, the hash result should guarantee that the originally mapped content is remapped only to its original buffer or to the new one, never to a different buffer in the old buffer set.

3. Spread (dispersion): in a distributed environment, a terminal may not see all the buffers, but only part of them. When a terminal maps content to a buffer by hashing, different terminals may see different buffer ranges and therefore produce inconsistent hash results, so the same content ends up mapped to different buffers by different terminals. This should clearly be avoided, because it causes the same content to be stored in multiple buffers and reduces the storage efficiency of the system. Dispersion is defined as the severity of this situation; a good hash algorithm should avoid such inconsistency as far as possible, i.e. minimize dispersion.

4. Load: load is the dispersion problem viewed from a buffer's side — for a given buffer, the amount of distinct content mapped to it by different terminals should be kept as low as possible.
In a distributed cluster, adding and removing machines, or having a machine leave the cluster automatically after a failure, is the most basic function of cluster management. With the commonly used hash(object) % n algorithm, much of the original data can no longer be found after a machine is added or removed, which seriously violates the monotonicity principle. The following explains how consistent hashing is designed to solve this.

Consistent hashing uses an ordinary hash algorithm to map each key into a space of 2^32 buckets, i.e. the numeric range 0 to 2^32 - 1. We then join the two ends of this range and treat it as a closed ring (the hash ring).

Now map four objects, object1 through object4, onto the hash ring by computing their key values with a chosen hash function:

hash(object1) = key1; hash(object2) = key2; hash(object3) = key3; hash(object4) = key4;

In a distributed cluster using consistent hashing, machines are mapped onto the same ring with the same hash algorithm used for object storage (in general, the machine's hash is computed from its IP address or a unique alias as the input value). Each object is then stored on the first machine found by moving clockwise from the object's position. Suppose there are three machines, node1, node2, and node3, whose key values are computed with the hash algorithm and mapped onto the ring:

hash(node1) = KEY1; hash(node2) = KEY2; hash(node3) = KEY3;

Looking at objects and machines in the same hash space, moving clockwise, object1 is stored on node1, object3 is stored on node2, and object2 and object4 are stored on node3. As long as the deployment does not change, the hash ring stays the same, so an object's real storage location can be found quickly just by computing its hash value.
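The clockwise-lookup rule above can be sketched as a minimal hash ring in Python. This is an illustrative sketch, not the original author's code: the choice of MD5 (truncated to 32 bits to match the 0 to 2^32 - 1 ring) and the node/object names are assumptions.

```python
import bisect
import hashlib

def ring_hash(key: str) -> int:
    """Map a string to a point on the 0 .. 2**32 - 1 ring (illustrative: MD5 truncated to 32 bits)."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % (2 ** 32)

class HashRing:
    def __init__(self, nodes):
        # Sort node positions so the clockwise lookup becomes a binary search.
        self._ring = sorted((ring_hash(n), n) for n in nodes)

    def locate(self, obj: str) -> str:
        """Return the first node clockwise from the object's position, wrapping around the ring."""
        point = ring_hash(obj)
        positions = [p for p, _ in self._ring]
        i = bisect.bisect_right(positions, point) % len(self._ring)
        return self._ring[i][1]

ring = HashRing(["node1", "node2", "node3"])
for obj in ["object1", "object2", "object3", "object4"]:
    print(obj, "->", ring.locate(obj))
```

Because the ring positions depend on the hash function, the concrete object-to-node mapping will differ from the figures described in the text, but the clockwise rule is the same.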
The biggest weakness of the ordinary hash algorithm is that adding or removing a machine invalidates the storage locations of a large number of objects, so it fails to satisfy monotonicity. The following analyzes how consistent hashing handles the two cases.

1. Removing a node (machine). Using the distribution above as an example, if node2 fails and is removed, then by the clockwise migration rule object3 is migrated to node3; only object3's mapping location changes, and none of the other objects change.

2. Adding a node (machine). If a new node node4 is added to the cluster, its key KEY4 is obtained with the same hash algorithm and mapped onto the ring. By the clockwise rule, object2 is migrated to node4, while the other objects keep their original storage locations.

This analysis of node addition and deletion shows that consistent hashing preserves monotonicity while keeping data migration to a minimum, so the algorithm is well suited to distributed clusters: it avoids large-scale data migration and reduces the pressure on the servers.
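The difference in migration cost can be checked empirically. The sketch below, a minimal illustration under assumed names and an assumed MD5-based hash, counts how many of 1000 keys change location when node2 is removed, under ordinary hash(object) % n placement versus the clockwise ring rule:

```python
import bisect
import hashlib

def h(key: str) -> int:
    # Illustrative 32-bit hash (MD5 truncated); any uniform hash would do.
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % (2 ** 32)

def modulo_assign(key, nodes):
    # Ordinary hash(object) % n placement.
    return nodes[h(key) % len(nodes)]

def ring_assign(key, nodes):
    # Consistent hashing: first node clockwise from the key's position.
    ring = sorted((h(n), n) for n in nodes)
    i = bisect.bisect_right([p for p, _ in ring], h(key)) % len(ring)
    return ring[i][1]

keys = [f"object{i}" for i in range(1000)]
before = ["node1", "node2", "node3"]
after = ["node1", "node3"]          # node2 fails and is removed

moved_mod = sum(modulo_assign(k, before) != modulo_assign(k, after) for k in keys)
moved_ring = sum(ring_assign(k, before) != ring_assign(k, after) for k in keys)
print(f"modulo hashing moved {moved_mod} of 1000 keys")
print(f"consistent hashing moved {moved_ring} of 1000 keys")
```

With the ring, exactly the keys that were stored on node2 migrate (to node2's clockwise successor); every other key keeps its node. With modulo placement, changing n reshuffles most keys.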
According to the analysis above, consistent hashing satisfies monotonicity and load, as well as the spread of an ordinary hash algorithm, but that alone is not the reason it is so widely used, because it still lacks balance. The following analyzes how consistent hashing achieves balance.

The hash algorithm itself does not guarantee balance. In the case where only node1 and node3 are deployed (node2 removed), object1 is stored on node1 while object2, object3, and object4 are all stored on node3 — a very unbalanced state. To satisfy balance as far as possible, consistent hashing introduces "virtual nodes".

A "virtual node" is a replica of an actual node (machine) in the hash space; one real node corresponds to several virtual nodes, and the number of replicas is called the "replication count". Virtual nodes are arranged on the ring by their hash values.

Take the example above with only node1 and node3 deployed (node2 removed), where the objects were distributed unevenly across the machines. Using a replication count of 2, there are now 4 virtual nodes on the whole hash ring, and the final object mapping becomes:

object1 -> node1-1, object2 -> node1-2, object3 -> node3-2, object4 -> node3-1.

With the introduction of virtual nodes, the distribution of objects is much more balanced. So how does an object query work in practice? The lookup gains one step of indirection: object -> hash -> virtual node -> actual node.

The hash of a "virtual node" can be computed by taking the corresponding node's IP address plus a numeric suffix. For example, suppose the IP address of node1 is 192.168.1.100.
Before "virtual nodes" are introduced, the hash of node1 is computed as hash("192.168.1.100"). After "virtual nodes" are introduced, the hashes of the virtual nodes node1-1 and node1-2 are computed as hash("192.168.1.100#1") for node1-1 and hash("192.168.1.100#2") for node1-2.
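Following the "IP#suffix" naming scheme above, a ring with virtual nodes can be sketched as below. The replication count of 2, the second IP address, and the MD5-based hash are illustrative assumptions; only the suffix scheme and node1's IP come from the text.

```python
import bisect
import hashlib

REPLICAS = 2  # the "replication count" from the text

def h(key: str) -> int:
    # Illustrative 32-bit hash (MD5 truncated to the 0 .. 2**32 - 1 ring).
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % (2 ** 32)

class VirtualNodeRing:
    def __init__(self, nodes):
        """nodes: mapping of node name -> IP address (node3's IP here is an assumed example)."""
        self._ring = []
        for name, ip in nodes.items():
            for i in range(1, REPLICAS + 1):
                # Each virtual node is hashed from "IP#suffix", e.g. hash("192.168.1.100#1").
                self._ring.append((h(f"{ip}#{i}"), f"{name}-{i}", name))
        self._ring.sort()

    def locate(self, obj):
        """Return (virtual node, actual node): object -> hash -> virtual node -> actual node."""
        positions = [p for p, _, _ in self._ring]
        i = bisect.bisect_right(positions, h(obj)) % len(self._ring)
        _, vnode, node = self._ring[i]
        return vnode, node

ring = VirtualNodeRing({"node1": "192.168.1.100", "node3": "192.168.1.102"})
for obj in ["object1", "object2", "object3", "object4"]:
    vnode, node = ring.locate(obj)
    print(f"{obj} -> {vnode} -> {node}")
```

The lookup first finds a virtual node on the ring and then resolves it to the real machine, which is exactly the object -> hash -> virtual node -> actual node conversion described above.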