Distributed hash tables (DHT) and consistent hashing are two concepts that come up constantly in distributed storage and P2P networks, and many papers have been written about them. This is an introduction to the essence of each.
Distributed hash table (DHT)
Two key points: each node maintains only part of the routing information, and each node stores only part of the data. Together these make addressing and storage possible across the entire network.
DHT is just a concept: it proposes such a network model and shows that the model suits distributed storage, but how to implement it is outside DHT's scope.
Consistent hashing:
An implementation of DHT. In essence it is still a hashing algorithm. Recall everyday load balancing: the simplest and most common algorithm hashes the query string and takes the result modulo the number of backend nodes, but the problem when nodes are added or removed is obvious: almost none of the original requests still map to the same machine afterwards. An improvement is the CARP algorithm (hash the machine's IP address together with the query string, and select the node with the smallest hash value), which affects only 1/N of the data.
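As a rough sketch of the difference (the node addresses and key names here are made up for illustration), the following compares how many keys remap when one backend is removed under plain modulo hashing versus a CARP-style smallest-hash pick:

```python
import hashlib

def h(s: str) -> int:
    # Stable hash so the demo is deterministic across runs.
    return int(hashlib.sha1(s.encode()).hexdigest(), 16)

def mod_n(key: str, nodes: list[str]) -> str:
    # Simplest scheme: hash the key and take it modulo the node count.
    return nodes[h(key) % len(nodes)]

def carp(key: str, nodes: list[str]) -> str:
    # CARP-style: hash (node, key) together; pick the smallest combined hash.
    return min(nodes, key=lambda n: h(n + key))

keys = [f"key-{i}" for i in range(10000)]
nodes = [f"10.0.0.{i}" for i in range(1, 6)]   # 5 hypothetical backends
fewer = nodes[:-1]                             # one backend removed

moved_mod = sum(mod_n(k, nodes) != mod_n(k, fewer) for k in keys)
moved_carp = sum(carp(k, nodes) != carp(k, fewer) for k in keys)
print(f"mod-N remapped: {moved_mod / len(keys):.0%}")   # roughly 80% move
print(f"CARP remapped:  {moved_carp / len(keys):.0%}")  # roughly 1/N = 20% move
```

With mod-N, only keys where hash % 5 happens to equal hash % 4 stay put; with CARP, only the keys owned by the removed node move.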
Consistent hashing seems to have first been proposed for distributed caching, where minimizing the impact of node churn improves the cache hit rate. Nowadays, though, it is applied more in distributed storage and P2P systems.
Consistent hashing itself only puts forward four concepts and principles, without specifying an implementation:
1. Balance: hash results should be distributed as evenly as possible across the nodes, so that every node is fully utilized.
2. Monotonicity: as mentioned above, with the modulo algorithm a node change alters the mapping of the entire network, and with CARP it alters 1/N of the mapping. The goal of consistent hashing is for node changes to leave the network's mapping unchanged as far as possible.
3. Spread: the same piece of data being stored on different nodes, i.e., system redundancy. Consistent hashing aims to keep this spread low.
4. Load: load should be spread out, which is similar to balance, except that load refers to balancing both data storage and data access.
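The properties above can be illustrated with a minimal consistent-hash ring sketch (the class name, node names, and virtual-node count are my own choices): virtual nodes smooth the distribution (balance), and removing a node only remaps the keys that node owned (monotonicity).

```python
import bisect
import hashlib

def h(s: str) -> int:
    return int(hashlib.sha1(s.encode()).hexdigest(), 16)

class Ring:
    """Minimal consistent-hash ring; VNODES virtual points per node
    smooth out the key distribution."""
    VNODES = 100

    def __init__(self, nodes):
        self._points = sorted((h(f"{n}#{i}"), n)
                              for n in nodes for i in range(self.VNODES))
        self._hashes = [p for p, _ in self._points]

    def lookup(self, key: str) -> str:
        # Walk clockwise to the first virtual point at or after the key's hash.
        i = bisect.bisect(self._hashes, h(key)) % len(self._hashes)
        return self._points[i][1]

ring_a = Ring(["node1", "node2", "node3", "node4"])
ring_b = Ring(["node1", "node2", "node3"])          # node4 removed
keys = [f"user:{i}" for i in range(10000)]
moved = sum(ring_a.lookup(k) != ring_b.lookup(k) for k in keys)
print(f"keys remapped after removing 1 of 4 nodes: {moved/len(keys):.0%}")
```

Only keys that mapped to node4's virtual points change owner, so roughly a quarter of the keys move and the rest stay exactly where they were.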
Chord algorithm:
Consistent hashing has multiple implementation algorithms. The key problems are how to define the data-partitioning policy and how to look up nodes quickly.
Chord is the most classic implementation; the DHT in Cassandra is a simplified version of Chord.
Each node in the network is assigned a unique ID, which can be obtained, for example, as the SHA-1 hash of the machine's MAC address. This is the basis for network discovery.
Assume the network has n nodes arranged in a ring, and define the distance between two nodes as the difference of their positions on the ring. Each node stores a routing table (finger table) recording the addresses of roughly log2(n) other nodes, at clockwise distances of 2, 4, 8, 16, 32, … (2^i) from the current node, mainly to speed up queries.
Storage: data is partitioned according to some rule, and each piece of data also gets an independent ID (the query key) drawn from the same value range as the node IDs. To store a piece of data, find the node whose ID equals the data ID and store it there; if no such node exists, store it on the node closest to the data ID. In addition, to ensure reliability, the data is also stored on K redundant nodes found clockwise; typically K = 3.
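The storage rule can be sketched like this (a toy 6-bit ID space and made-up node IDs; real systems hash into a much larger space, and the function names here are mine):

```python
import hashlib

M = 6                                   # toy ID space: 0 .. 2^M - 1 (64 slots)

def chord_id(s: str) -> int:
    # The text suggests deriving IDs via SHA-1 (e.g. of a MAC address).
    return int(hashlib.sha1(s.encode()).hexdigest(), 16) % (2 ** M)

def successor(i: int, ring: list[int]) -> int:
    # The node whose ID equals i, or failing that the closest node clockwise.
    return min(ring, key=lambda n: (n - i) % (2 ** M))

def store_nodes(data_id: int, ring: list[int], k: int = 3) -> list[int]:
    # Primary successor plus further clockwise nodes for K-way redundancy.
    ordered = sorted(ring)
    start = ordered.index(successor(data_id, ring))
    return [ordered[(start + j) % len(ordered)] for j in range(k)]

nodes = [0, 8, 21, 32, 45, 56]
print(store_nodes(chord_id("some-key"), nodes))   # the three replica node IDs
```

For example, a data ID of 10 lands on node 21 (its successor) with replicas on 32 and 45.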
(Figure: the layout of a Chord network. Green nodes are machines, labeled with their hash values; the finger table of node N0 shows N0's routing rules, and the other nodes have similar finger tables. Blue nodes are data items, each found and stored on the nearest node by hash value; dotted lines indicate redundant storage.)
Query: first, from your own routing table, find the node closest to the data ID that is still alive in the network; call it next. If next's ID equals the data ID, congratulations; if not, hand off to next and search recursively. In general it takes several hops to reach the node holding the data, and the number of hops can be proved to be at most log2(n).
This query process shows the point of how the routing table is chosen: it effectively implements a binary search. Viewed from any node, the network is divided into log2(n) segments, the largest of which contains n/2 nodes, and the routing table records the first node of each segment. Each query therefore rules out at least half of the remaining nodes, guaranteeing the target node is found within log2(n) hops.
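The finger-table lookup can be sketched as follows (again a toy 6-bit ring with made-up node IDs; the greedy hop rule below is my paraphrase of Chord's closest-preceding-finger step):

```python
M = 6                                   # toy ring of 2^M = 64 IDs

def dist(a: int, b: int) -> int:
    # Clockwise distance from a to b on the ring.
    return (b - a) % (2 ** M)

def successor(i: int, ring: list[int]) -> int:
    # First node at ID i or clockwise after it.
    return min(ring, key=lambda n: dist(i, n))

def finger_table(node: int, ring: list[int]) -> list[int]:
    # Entry i is the first node at clockwise distance >= 2^i from `node`.
    return [successor((node + 2 ** i) % (2 ** M), ring) for i in range(M)]

def lookup(start: int, data_id: int, ring: list[int]) -> tuple[int, int]:
    """Return (owning node, hop count), hopping via finger tables."""
    owner = successor(data_id, ring)
    node, hops = start, 0
    while node != owner:
        # Jump to the finger that gets closest to the owner without passing it;
        # each hop at least halves the remaining distance, hence <= log2(n) hops.
        reachable = [f for f in finger_table(node, ring)
                     if dist(node, f) <= dist(node, owner)]
        node = max(reachable, key=lambda f: dist(node, f))
        hops += 1
    return owner, hops

print(lookup(0, 50, [0, 8, 21, 32, 45, 56]))   # data ID 50 lives on node 56
```

Starting at node 0, the query for ID 50 hops to 32 (the farthest finger short of the owner), then to 56, finding the data in two hops.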
(Figure: node N0 querying data stored on node N21; the request is forwarded through finger tables and reaches the destination after 2 hops.)
Adding a new node i: the new node must know some live node j in the network in advance; by interacting with j, it updates the routing tables of itself and the other nodes. It also needs to copy data from its nearest neighbor before it can serve data.
Losing a node: the routing algorithm automatically skips the dead node and relies on data redundancy to keep providing service.
Kad algorithm (kademlia)
The Kad algorithm is essentially an optimization of Chord, with two main points:
1. Node IDs are represented in binary (32/64/128 bits); the distance between two nodes is the XOR of their IDs.
2. Each node maintains richer routing information. The network is again divided into log2(n) segments. Where Chord keeps log2(n) routing nodes, Kad keeps log2(n) queues (k-buckets), each of a fixed length K, recording multiple nodes from the corresponding segment of the network; nodes are swapped in and out of each queue according to how recently they were active.
The first point makes it easy to partition the network: taking each bit of an ID as 0 or 1 splits the nodes as in a binary tree.
The second point makes node lookup faster. From the partitioning we can see that the worst case is no worse than Chord, but storing more nodes per segment makes the hit probability higher. Moreover, swapping queue entries in and out by activity time helps quickly find live nodes in P2P networks where nodes churn frequently.
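Both points can be sketched briefly (the IDs are shrunk to small integers for readability, and the class and function names are mine; note that real Kademlia pings the least-recently-seen peer before evicting it, which this sketch omits):

```python
from collections import OrderedDict

def xor_distance(a: int, b: int) -> int:
    # Kad's metric: the distance between two IDs is their bitwise XOR.
    return a ^ b

def bucket_index(self_id: int, other_id: int) -> int:
    # Which queue (k-bucket) a peer falls into: the position of the highest
    # differing bit, i.e. floor(log2(distance)) -- one bucket per segment.
    return xor_distance(self_id, other_id).bit_length() - 1

class KBucket:
    """Fixed-length queue of peers for one segment, refreshed by activity."""
    def __init__(self, k: int = 20):
        self.k, self.peers = k, OrderedDict()

    def seen(self, peer_id: int) -> None:
        if peer_id in self.peers:
            self.peers.move_to_end(peer_id)   # refresh: most recently active
        elif len(self.peers) < self.k:
            self.peers[peer_id] = True
        # else: bucket full -- real Kad pings the least-recently-seen peer
        # and only evicts it if it fails to respond; omitted here.

print(bucket_index(0b1010_1010, 0b1010_0110))  # IDs differ at bit 3 -> bucket 3
```

Because the bucket index is the highest differing bit, all peers in bucket i lie in the same subtree at depth i of the binary split described above.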
For more information about Kad, see wenku.baidu.com/view/ee91580216fc700abb68fcae.html.