The consistent hashing algorithm was proposed by Karger and others at the Massachusetts Institute of Technology in 1997 to solve distributed caching problems. The design goal was to address hot spot issues on the Internet, an intent very similar to that of CARP. Consistent hashing corrected the problems caused by the simple hashing used by CARP, so that DHTs could be truly applied in P2P environments.
However, consistent hash algorithms are also widely used in distributed systems. Anyone who has studied memcached knows that the memcached server itself does not provide distributed cache consistency; that is left to the client, which computes the consistent hash in the following steps (a minimal code sketch follows the list):
- First, compute the hash value of each memcached server (node) and place it on a circle (continuum) of points from 0 to 2^32.
- Then, compute the hash value of the key of the data to be stored using the same method, and map it onto the same circle.
- Starting from the position where the data maps, search clockwise and save the data to the first server found. If no server is found before exceeding 2^32, the data is saved to the first memcached server.
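These three steps map naturally onto a sorted map keyed by ring position. Below is a minimal Java sketch of such a ring; it is not the memcached client's actual code, and the class name `ConsistentHashRing`, the use of MD5 truncated to 32 bits, and the method names are all illustrative assumptions:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.TreeMap;

/** Minimal consistent-hash ring sketch; names are illustrative, not memcached's. */
public class ConsistentHashRing {
    // Sorted ring positions (0 .. 2^32 - 1) mapped to server names.
    private final TreeMap<Long, String> ring = new TreeMap<>();

    // Hash a string to a point on the 0 .. 2^32 - 1 circle (first 4 bytes of MD5).
    public static long hash(String key) {
        try {
            byte[] d = MessageDigest.getInstance("MD5")
                    .digest(key.getBytes(StandardCharsets.UTF_8));
            return ((long) (d[0] & 0xFF) << 24) | ((d[1] & 0xFF) << 16)
                 | ((d[2] & 0xFF) << 8) | (d[3] & 0xFF);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public void addNode(String node)    { ring.put(hash(node), node); }  // step 1
    public void removeNode(String node) { ring.remove(hash(node)); }

    // Steps 2 and 3: hash the key, then walk clockwise; wrap around to the
    // first server if nothing is found before the end of the circle.
    public String locate(String key) {
        Long point = ring.ceilingKey(hash(key));
        if (point == null) point = ring.firstKey();   // wrapped past 2^32
        return ring.get(point);
    }
}
```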
Now consider adding a memcached server to this arrangement. With the remainder (modulo) distribution algorithm, the server that stores each key changes drastically, which hurts the cache hit rate. With consistent hashing, however, only the keys lying between the new server and the first server counterclockwise from it are affected, as shown in the figure:
Consistent Hashing Properties
Every node in a distributed system may fail, and new nodes may be added dynamically. How can we ensure the system still provides good service when the number of nodes changes? This question deserves attention, especially when designing a distributed cache system. If a server fails and the system has no suitable algorithm to maintain consistency, all of the cached data may effectively become invalid: when a client requests an object, it recalculates the object's hash value, which usually depends on the number of nodes in the system. Because the node count has changed, the hash value changes too, and the server node actually storing the object most likely cannot be found. Consistent hashing is therefore crucial. A good consistent hash algorithm for a distributed cache system should satisfy the following requirements:
Balance means that hash results should be distributed among all the buffers as evenly as possible, so that all of the buffer space gets used. Many hash algorithms meet this condition.
Monotonicity means that if some content has already been hashed to its buffer and new buffers are then added to the system, the hash result should guarantee that previously allocated content is mapped either to its original buffer or to one of the new buffers, never to another buffer in the old set. Simple hash algorithms often fail this requirement, such as the simplest linear hash h(x) = (ax + b) mod P, where P is the total number of buffers. It is not hard to see that when the buffer count changes (from P1 to P2), all the original hash results change, violating monotonicity. A changed hash result means that when the buffer space changes, every mapping in the system must be updated. In P2P systems, a change in buffers corresponds to peers joining or leaving the system, which happens frequently and would cause enormous computation and transfer load. Monotonicity requires the hash algorithm to cope with this situation.
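To make the problem concrete, here is a tiny demonstration, assuming the linear hash above with a = 1 and b = 0, that is, h(x) = x mod P: when P grows from 4 to 5, the vast majority of keys land in a different buffer.

```java
// Counts how many of 100,000 keys change buffers when P grows from 4 to 5.
public class ModuloRemap {
    public static void main(String[] args) {
        int moved = 0, total = 100_000;
        for (int key = 0; key < total; key++) {
            if (key % 4 != key % 5) moved++;   // h(x) = x mod P, with a = 1, b = 0
        }
        System.out.printf("%d of %d keys remapped (%.0f%%)%n",
                moved, total, 100.0 * moved / total);
    }
}
```

Only keys for which key mod 4 happens to equal key mod 5 stay put, about 20% of them; a monotone scheme such as consistent hashing would move only the roughly one fifth of keys claimed by the new buffer, and none between old buffers.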
Dispersion concerns the following situation: in a distributed environment, a terminal may not see all of the buffers, only some of them. When a terminal maps content to a buffer through hashing, different terminals may see different buffer ranges and thus produce inconsistent hash results, so the same content ends up mapped to different buffers by different terminals. This should be avoided, because storing the same content in multiple buffers reduces the system's storage efficiency. Dispersion is defined as the severity of this situation; a good hash algorithm should keep such inconsistency, and therefore dispersion, as low as possible.
The load problem is really the dispersion problem viewed from the other side. Just as different terminals may map the same content to different buffers, different terminals may map different content to one particular buffer. Like dispersion, this should be avoided, so a good hash algorithm should keep each buffer's load as low as possible.
Smoothness means that a smooth change in the number of cache servers should be accompanied by an equally smooth change in the placement of cached objects.
Basic Concept
Consistent hashing was first proposed in the paper Consistent Hashing and Random Trees: Distributed Caching Protocols for Relieving Hot Spots on the World Wide Web. In short, consistent hashing organizes the entire hash value space into a virtual ring. For example, if the space of a hash function H is 0 to 2^32 - 1 (that is, the hash value is a 32-bit unsigned integer), the hash space ring looks like this:
The entire space is organized clockwise, with 0 and 2^32 - 1 coinciding at the zero point.
Next, each server is run through the same hash function. You can use the server's IP address or hostname as the key, so that each machine gets a fixed position on the hash ring. Here we assume four servers hashed by IP address, positioned on the ring as follows:
Data is then located on the corresponding server with the following algorithm: compute the hash of the data's key with the same function, determine the data's position on the ring, and walk clockwise from that position; the first server encountered is the one the data belongs to.
For example, suppose we have four data objects: Object A, Object B, Object C, and Object D. After hashing, their positions on the ring are as follows:
According to the consistent hash algorithm, Object A is routed to Node A, Object B to Node B, Object C to Node C, and Object D to Node D.
Now consider the fault tolerance and scalability of the consistent hash algorithm. Suppose Node C unfortunately goes down. Objects A, B, and D are unaffected; only Object C is relocated, to Node D. In general, under consistent hashing, if a server becomes unavailable, the only affected data lies between that server and the previous server on the ring (that is, the first server encountered walking counterclockwise); all other data is untouched.
In the other direction, suppose a new server, Node X, is added to the system, as shown below:
Objects A, B, and D are again unaffected; only Object C needs to be relocated, to the new Node X. In general, under consistent hashing, when a server is added, the only affected data lies between the new server and the previous server on the ring (that is, the first server encountered walking counterclockwise); all other data is untouched.
To sum up, when nodes are added or removed, the consistent hash algorithm only needs to relocate a small portion of the data on the ring, giving it good fault tolerance and scalability.
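This behavior is easy to check with the `ConsistentHashRing` sketch from earlier. The demo below uses the node and object names from the example; with a real hash function the actual ring positions (and therefore which node each object lands on) will differ from the figures, but the invariant holds: after Node C is removed, only the objects that lived on Node C resolve to a new server.

```java
// Uses the ConsistentHashRing sketch from above (illustrative names).
public class FailoverDemo {
    public static void main(String[] args) {
        ConsistentHashRing ring = new ConsistentHashRing();
        for (String node : new String[]{"Node A", "Node B", "Node C", "Node D"}) {
            ring.addNode(node);
        }
        String[] objects = {"Object A", "Object B", "Object C", "Object D"};
        for (String obj : objects) {
            System.out.println(obj + " -> " + ring.locate(obj));
        }
        ring.removeNode("Node C");   // simulate a failure of one server
        for (String obj : objects) {
            // Only objects that lived on the failed node move; the rest print
            // exactly the same assignment as before.
            System.out.println(obj + " -> " + ring.locate(obj));
        }
    }
}
```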
One remaining issue: when there are too few service nodes, consistent hashing can produce data skew because the nodes divide the ring unevenly. For example, if the system has only two servers, the ring distribution might look like this:
In that case a large amount of data inevitably concentrates on Node A, with only a small amount landing on Node B. To solve this skew, consistent hashing introduces the virtual node mechanism: each service node is hashed multiple times, and a node is placed at each resulting position; these are called virtual nodes. They can be generated by appending a number to the server's IP address or hostname. In the example above, we can compute three virtual nodes per server by hashing "Node A #1", "Node A #2", "Node A #3", "Node B #1", "Node B #2", and "Node B #3", which yields six virtual nodes:
The data-locating algorithm is unchanged; there is simply one extra step of mapping virtual nodes back to actual nodes. For example, data located at any of the three virtual nodes "Node A #1", "Node A #2", and "Node A #3" is stored on Node A. This solves the data-skew problem when service nodes are few. In practice the number of virtual nodes is usually set to 32 or more, so even a handful of service nodes can achieve a relatively even data distribution.
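Here is a minimal sketch of the virtual node mechanism, reusing the `ConsistentHashRing.hash` helper from the earlier sketch; the replica count of 32 and the "Node A #1" naming convention follow the text, while the class and method names are illustrative:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

// Virtual-node sketch: each real server is hashed several times ("Node A #1",
// "Node A #2", ...), and lookups resolve the virtual node back to its server.
public class VirtualNodeRing {
    private final TreeMap<Long, String> ring = new TreeMap<>(); // point -> real node
    private final int replicas;

    public VirtualNodeRing(int replicas) { this.replicas = replicas; }

    public void addNode(String node) {
        for (int i = 1; i <= replicas; i++) {
            // Each virtual node stores the name of the real node it maps to.
            ring.put(ConsistentHashRing.hash(node + " #" + i), node);
        }
    }

    public String locate(String key) {
        Long point = ring.ceilingKey(ConsistentHashRing.hash(key));
        if (point == null) point = ring.firstKey();   // wrap around the circle
        return ring.get(point);
    }

    public static void main(String[] args) {
        VirtualNodeRing ring = new VirtualNodeRing(32);
        ring.addNode("Node A");
        ring.addNode("Node B");
        Map<String, Integer> counts = new HashMap<>();
        for (int i = 0; i < 10_000; i++) {
            counts.merge(ring.locate("key-" + i), 1, Integer::sum);
        }
        System.out.println(counts);  // roughly even split between the two nodes
    }
}
```

Counting which real node serves each of 10,000 sample keys shows a roughly even split between the two servers, in contrast to the skew a two-point ring would produce.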
Implementation
- Consistent hashing implementation in C++
- Consistent hashing implementation in Erlang
- Consistent hashing implementation in C#
- Consistent hashing implementation in Java
- Consistent hashing implementation in C
References
[1] D. Karger, E. Lehman, T. Leighton, M. Levine, D. Lewin, and R. Panigrahy. Consistent Hashing and Random Trees: Distributed Caching Protocols for Relieving Hot Spots on the World Wide Web. ACM Symposium on Theory of Computing, 1997: 654-663.
[2] Consistent hash. http://baike.baidu.com/view/1588037.htm
[3] Memcached comprehensive analysis, part 4: memcached's distributed algorithm. http://tech.idv2.com/2008/07/24/memcached-004/
[4] Consistent Hashing Algorithm and Its Application in Distributed Systems. http://www.codinglabs.org/html/consistent-hashing.html
[5] Consistent hashing. http://en.wikipedia.org/wiki/Consistent_hashing
[6] http://www.lexemetech.com/2007/11/consistent-hashing.html