Consistent hashing is a distributed hash table (DHT) technique proposed at MIT in 1997 to address hot-spot problems on the Internet, with an intent similar to that of CARP. Consistent hashing corrects the problems caused by the simple hashing algorithm used by CARP, so that distributed hashing can genuinely be applied in peer-to-peer environments. The consistent hashing proposal defines four properties for judging whether a hash algorithm is good in a dynamically changing cache environment:

1. Balance: Balance means that the results of the hash should be spread as evenly as possible across all buffers, so that all buffer space gets used. Many hash algorithms satisfy this condition.

2. Monotonicity: Monotonicity means that if some content has already been assigned to buffers by hashing, and a new buffer is then added to the system, the hash should guarantee that previously assigned content maps either to its original buffer or to the new one, and never to a different buffer from the old buffer set.

3. Spread: In a distributed environment, a terminal may not see all of the buffers, only a subset of them. When a terminal maps content to buffers by hashing, different terminals may see different buffer ranges and therefore compute inconsistent results, so the same content ends up mapped to different buffers by different terminals. This should clearly be avoided, because it causes the same content to be stored in multiple buffers and reduces the storage efficiency of the system. Spread is defined as the severity of this situation: a good hash algorithm should avoid such inconsistency as far as possible, that is, keep spread to a minimum.

4. Load: The load problem is really the spread problem viewed from the other side. Since different terminals may map the same content to different buffers, a particular buffer may likewise be mapped to different content by different users. As with spread, this situation should be avoided, so a good hash algorithm should keep each buffer's load as low as possible.

In a distributed cluster, adding and removing machines, and having a machine leave the cluster automatically after a failure, are the most basic functions of cluster management. With the commonly used **hash(object) % n** algorithm, a large portion of the existing data can no longer be located after a machine is added or removed, which seriously violates the monotonicity property.
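A quick sketch of this failure mode (the key names and machine counts here are made up for illustration): hashing 1000 keys with `hash(key) % n` and then growing the cluster from 3 to 4 machines relocates most of the keys.

```python
import hashlib

def bucket(key: str, n: int) -> int:
    # md5 gives a stable hash across runs; Python's built-in hash()
    # is salted per process, so it is unsuitable for placement.
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return digest % n

keys = [f"object{i}" for i in range(1000)]
before = {k: bucket(k, 3) for k in keys}   # 3 machines
after = {k: bucket(k, 4) for k in keys}    # one machine added
moved = sum(1 for k in keys if before[k] != after[k])
print(f"{moved}/{len(keys)} keys changed machines")
```

On average about three quarters of the keys land on a different machine, even though only one machine was added.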

**Mapping data onto the ring.** The data is processed by a hash function and mapped onto the ring. Suppose we have four objects, object1 through object4. A specific hash function computes a key for each object, which places it on the hash ring: hash(object1) = Key1; hash(object2) = Key2; hash(object3) = Key3; hash(object4) = Key4.

**Mapping machines onto the ring.** When a new machine is added to a distributed cluster that uses consistent hashing, the principle is to map the machine onto the ring with the same hash algorithm used for objects (in general, the machine's IP address or a unique alias is used as the hash input), and then each object is stored on the nearest machine in the clockwise direction. Suppose there are now three machines, node1, node2, and node3. The hash algorithm gives each a key on the ring: hash(NODE1) = KEY1; hash(NODE2) = KEY2; hash(NODE3) = KEY3.

As you can see, objects and machines live in the **same hash space**, so by the **clockwise rotation** rule object1 is stored on NODE1, object3 on NODE2, and object2 and object4 on NODE3. As long as the deployment does not change, the hash ring does not change, so computing an object's hash value immediately locates the machine that holds it, and the object's real storage location can be found.
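The clockwise lookup above can be sketched as a minimal ring without virtual nodes. The node and object names follow the text; the actual assignments depend on the hash function, so they will not necessarily match the example distribution.

```python
import bisect
import hashlib

def ring_hash(key: str) -> int:
    # Position on the ring: md5 digest interpreted as a big integer
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, nodes):
        # Each node sits at one point on the ring, kept sorted for lookup
        self._points = sorted((ring_hash(n), n) for n in nodes)
        self._keys = [p for p, _ in self._points]

    def locate(self, obj: str) -> str:
        # Clockwise search: the first node position at or after the
        # object's hash, wrapping around past the largest position.
        i = bisect.bisect_left(self._keys, ring_hash(obj)) % len(self._keys)
        return self._points[i][1]

ring = HashRing(["node1", "node2", "node3"])
for obj in ["object1", "object2", "object3", "object4"]:
    print(obj, "->", ring.locate(obj))
```

The sorted list plus binary search makes each lookup O(log n) in the number of ring points.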

**Removal and addition of machines.** The biggest weakness of an ordinary hash algorithm is that adding or removing a machine invalidates the storage locations of a large number of objects, failing the monotonicity property. Here is how the consistent hashing algorithm handles the same events.

1. Removing a node (machine): Taking the distribution above as an example, if NODE2 fails and is removed, then by the clockwise-migration rule object3 migrates to NODE3. Only object3's mapping location changes; no other object is affected.

2. Adding a node (machine): If a new node NODE4 is added to the cluster, the hash algorithm gives it KEY4 and maps it onto the ring.

By the clockwise rule, object2 migrates to NODE4, and the other objects keep their original storage locations. This analysis of node addition and removal shows that the consistent hashing algorithm preserves monotonicity while keeping data migration to a minimum. The algorithm is therefore well suited to distributed clusters: it avoids large-scale data migration and reduces the pressure on the servers.
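The migration analysis can be checked numerically. In this sketch (object and node names are illustrative), a fourth node is added to a three-node ring, and the assertion verifies the monotonicity claim: every relocated object lands on the new node, and nothing shuffles between the old nodes.

```python
import bisect
import hashlib

def h(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

def locate(obj: str, nodes) -> str:
    # Same clockwise lookup as before, rebuilt per call for simplicity
    points = sorted((h(n), n) for n in nodes)
    keys = [p for p, _ in points]
    return points[bisect.bisect_left(keys, h(obj)) % len(points)][1]

objects = [f"obj{i}" for i in range(10000)]
before = {o: locate(o, ["node1", "node2", "node3"]) for o in objects}
after = {o: locate(o, ["node1", "node2", "node3", "node4"]) for o in objects}

moved = [o for o in objects if before[o] != after[o]]
# Monotonicity: every object that moved went to the new node
assert all(after[o] == "node4" for o in moved)
print(f"{len(moved)}/{len(objects)} objects migrated, all to node4")
```

Only the objects falling on the arc claimed by node4 migrate; with hash(object) % n, by contrast, most objects would move.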

**Balance.** From the analysis above, the consistent hashing algorithm satisfies monotonicity and load, as well as the spread of an ordinary hash algorithm, but that alone would not have made it widely used, because it still lacks balance. The following analyzes how consistent hashing achieves balance.

Hash algorithms do not guarantee balance. For example, in the case where only NODE1 and NODE3 are deployed (NODE2 removed), object1 is stored on NODE1 while object2, object3, and object4 are all stored on NODE3, which is a very unbalanced state.

To satisfy balance as far as possible, the consistent hashing algorithm introduces the **virtual node**. A "virtual node" is a replica of an actual node (machine) in the hash space; one real node corresponds to several virtual nodes, and this count is called the "replica number". Virtual nodes are arranged on the ring by their hash values. Taking the deployment above with only NODE1 and NODE3 (NODE2 removed) as an example, the objects were previously distributed unevenly across the machines. With a replica number of 2, there are 4 virtual nodes on the entire hash ring, and the final object mapping becomes:

The resulting object mapping is: object1 -> node1-1, object2 -> node1-2, object3 -> node3-2, object4 -> node3-1. With virtual nodes introduced, the distribution of objects is much more balanced. So how does an object query work in practice? The object's hash resolves to a virtual node, and the virtual node is then translated to the actual node.

The hash of a "virtual node" can be computed from the corresponding node's IP address plus a numeric suffix. For example, suppose NODE1's IP address is 192.168.1.100. Before introducing virtual nodes, its position is computed as hash("192.168.1.100"). After introducing virtual nodes, the positions of the virtual nodes node1-1 and node1-2 are computed as hash("192.168.1.100#1") for node1-1 and hash("192.168.1.100#2") for node1-2.
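This scheme can be sketched as follows. The "ip#i" labels follow the text's example; the 192.168.1.102 address for node3 and the replica count of 2 are assumptions for illustration.

```python
import bisect
import hashlib

def h(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class VirtualRing:
    def __init__(self, node_ips, replicas=2):
        # node_ips: node name -> IP address; each physical node is placed
        # on the ring once per replica, hashed under an "ip#i" label.
        self._points = sorted(
            (h(f"{ip}#{i}"), name)
            for name, ip in node_ips.items()
            for i in range(1, replicas + 1)
        )
        self._keys = [p for p, _ in self._points]

    def locate(self, obj: str) -> str:
        # The virtual node found clockwise resolves to its physical node
        i = bisect.bisect_left(self._keys, h(obj)) % len(self._keys)
        return self._points[i][1]

# Only node1 and node3 deployed, matching the example in the text
ring = VirtualRing({"node1": "192.168.1.100", "node3": "192.168.1.102"})
for obj in ["object1", "object2", "object3", "object4"]:
    print(obj, "->", ring.locate(obj))
```

Because each physical node owns several scattered arcs of the ring instead of one, the load spreads more evenly; production systems typically use a much larger replica number than 2.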
