Many load-balancing algorithms are available for different application scenarios, including Round Robin, Hash, Least Connections, Response Time, and Weighted variants; hash algorithms are among the most commonly used. A typical scenario: N servers provide a cache service, and load balancing must spread requests evenly across them so that each machine serves 1/N of the load. The common algorithm takes the remainder of the hash result (hash() mod N): number the machines 0 through N-1, compute hash() for each request with some custom hash function, take that value modulo N to get a remainder i, and dispatch the request to the machine numbered i.

However, this algorithm has a fatal problem. If a machine goes down, the requests that fall on it cannot be handled correctly, and once that server is removed from the algorithm, (N-1)/N of the servers' cached data must be recomputed; if a machine is added, N/(N+1) of the cached data must be recomputed. This is usually unacceptable for a caching system, because it means a large amount of cache is invalidated at once or data must be migrated. So how do we design a load-balancing policy that minimizes the number of affected requests?

The Consistent Hashing algorithm is used in Memcached, key-value stores, BitTorrent DHTs, and LVS; it is fair to call consistent hashing the preferred load-balancing algorithm for distributed systems. To restate the example: given N cache servers (hereinafter "caches"), how do you map an object onto one of the N caches? You would most likely compute the object's hash value and map it uniformly onto the N caches:

hash(object) % N

Everything runs normally, until you consider the following two cases:

1. A cache server m goes down (a situation that must be anticipated in real applications), so all objects mapped onto cache m become invalid. What then? Cache m must be removed, the cache count becomes N-1, and the mapping formula changes to hash(object) % (N-1).

2. Because traffic has increased, a cache server must be added; the cache count becomes N+1, and the mapping formula changes to hash(object) % (N+1).

What do cases 1 and 2 mean? They mean that almost the entire cache suddenly becomes invalid. For the servers this is a disaster: flood-like traffic rushes straight through to the backend servers. There is also a third problem: hardware keeps getting more capable, and you may want nodes added later to do more of the work, which the hash algorithm above obviously cannot express. Is there any way to change this situation? Yes: consistent hashing.

One metric of a hash algorithm is monotonicity, defined as follows: monotonicity means that if some content has already been hashed into its buffer and a new buffer is then added to the system, the hash result should guarantee that previously allocated content is mapped either to its original buffer or to the new one, never to some other buffer in the old buffer set. It is easy to see that the simple hash algorithm hash(object) % N has difficulty meeting the monotonicity requirement.

The principle of the consistent hashing algorithm: consistent hashing is a hash algorithm that, simply put, changes the existing key mappings as little as possible when a cache is removed or added, satisfying the monotonicity requirement as far as possible.
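Before walking through the algorithm, here is a minimal Java sketch of why the naive modulo approach hurts so much. It is an illustration rather than anything from the original text: the class name ModuloRemapDemo and the stand-in hash are invented for the demo. It counts how many of 100,000 keys land on a different server when one of ten servers is removed under hash(key) % N; roughly (N-1)/N of them, about 90%, move.

```java
public class ModuloRemapDemo {
    public static void main(String[] args) {
        int keys = 100_000;
        int n = 10; // servers before the change
        int moved = 0;
        for (int key = 0; key < keys; key++) {
            int before = hash(key) % n;       // server chosen with N machines
            int after = hash(key) % (n - 1);  // server chosen after one machine is removed
            if (before != after) {
                moved++;
            }
        }
        // Expect roughly (N-1)/N = 90% of the keys to change servers.
        System.out.printf("moved %d of %d keys (%.1f%%)%n",
                moved, keys, 100.0 * moved / keys);
    }

    // A simple stand-in hash; any reasonably uniform hash shows the same effect.
    private static int hash(int key) {
        return String.valueOf(key).hashCode() & 0x7fffffff; // force nonnegative for modulo
    }
}
```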
The following describes the basic principles of the consistent hashing algorithm in five steps.

Step 1: the ring hash space. A common hash algorithm maps a value into a 32-bit key, that is, into the numeric space 0 to 2^32 - 1. We can think of this space as a ring in which the head (0) is joined to the tail (2^32 - 1), as shown in Figure 1.

Figure 1: the circular hash space.

Step 2: mapping objects onto the hash space. Next, consider four objects, object1 through object4, and the distribution on the ring of the key values the hash function computes for them (Figure 2):

hash(object1) = key1;
......
hash(object4) = key4;

Figure 2: the key-value distribution of the four objects.

Step 3: mapping the caches onto the hash space. The basic idea of consistent hashing is to map the objects and the caches into the same hash value space, using the same hash algorithm. Suppose there are currently three caches: A, B, and C. The mapping result is shown in Figure 3; they are arranged in the hash space at their corresponding hash values:

hash(cache A) = key A;
......
hash(cache C) = key C;

Figure 3: the distribution of cache and object key values.

Incidentally, regarding the hash computation for a cache: the cache machine's IP address or host name can generally be used as the hash input.

Step 4: mapping objects to caches. Both the caches and the objects have now been mapped into the same hash value space by the same hash algorithm, so next consider how to map an object to a cache. In this circular space, start from the object's key value and move clockwise until a cache is met; the object is stored on that cache. Because the hash values of the objects and the caches are fixed, that cache is unique and deterministic. We have thus found the mapping between objects and caches! Continuing the example above (see Figure 3) and applying this method: object1 is stored on cache A, object2 and object3 on cache C, and object4 on cache B.

Step 5: examining what happens when the caches change. As mentioned earlier, the biggest problem with the modulo method is that it cannot satisfy monotonicity: when the set of caches changes, the entire cache becomes invalid, which has a huge impact on the backend servers. Now let us analyze how the consistent hashing algorithm behaves.

Removing a cache: suppose cache B fails and is removed. According to the mapping method above, the only objects affected are those lying between cache B and the previous cache encountered moving counterclockwise along the ring, that is, exactly the objects that were mapped onto cache B; they simply continue clockwise to the next surviving cache. So here only object4 needs to change, and it is remapped onto cache C. See Figure 4.

Figure 4: the mapping after cache B is removed.

Adding a cache: now consider adding a new cache D, and suppose that on this circular hash space cache D is mapped between object2 and object3. The only objects affected are those between cache D and the previous cache encountered moving counterclockwise (cache B), which are a subset of the objects originally mapped onto cache C; these objects are remapped onto cache D. So here only object2 needs to change, and it is remapped onto cache D. See Figure 5.

Figure 5: the mapping after cache D is added.
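The behavior just described is easy to sketch in code. The following minimal Java sketch is illustrative only: the class and method names (RingSketch, nodeFor) are invented, and a plain hashCode stands in for a real hash function. It keeps node hashes in a sorted map, resolves a key to the first node clockwise, and shows that removing a node only affects the keys that were mapped to it.

```java
import java.util.SortedMap;
import java.util.TreeMap;

// A bare-bones ring: node hashes are keys of a sorted map; lookup walks clockwise.
public class RingSketch {
    private final SortedMap<Integer, String> ring = new TreeMap<>();

    void addNode(String node)    { ring.put(hash(node), node); }
    void removeNode(String node) { ring.remove(hash(node)); }

    // Clockwise lookup: first node hash >= the key's hash, wrapping to the smallest.
    String nodeFor(String key) {
        SortedMap<Integer, String> tail = ring.tailMap(hash(key));
        int h = tail.isEmpty() ? ring.firstKey() : tail.firstKey();
        return ring.get(h);
    }

    private static int hash(String s) { return s.hashCode() & 0x7fffffff; }

    public static void main(String[] args) {
        RingSketch r = new RingSketch();
        r.addNode("cache-A"); r.addNode("cache-B"); r.addNode("cache-C");
        String before = r.nodeFor("object4");
        r.removeNode("cache-B"); // only keys that were on cache-B move
        System.out.println(before + " -> " + r.nodeFor("object4"));
    }
}
```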
Virtual nodes. Another metric of a hash algorithm is balance, defined as follows: balance means that the hash results should be spread across all the buffers as far as possible, so that all the buffer space gets used. A hash algorithm cannot guarantee absolute balance; when there are few caches, objects may fail to map evenly onto them.

For example, in the preceding scenario, when only cache A and cache C are deployed, cache A stores only object1 among the four objects, while cache C stores object2, object3, and object4; the distribution is clearly unbalanced.

To solve this problem, consistent hashing introduces the concept of the "virtual node", which can be defined as follows: a "virtual node" is a replica of an actual node in the hash space. One actual node corresponds to several "virtual nodes", and this number is called the "replica count"; virtual nodes are arranged in the hash space by their hash values.

Again take the deployment with only cache A and cache C as an example; as shown in Figure 4, the cache distribution is uneven. Now introduce virtual nodes and set the replica count to 2, which means there will be four "virtual nodes" in total: cache A1 and cache A2 represent cache A, and cache C1 and cache C2 represent cache C. An idealized arrangement is shown in Figure 6.

Figure 6: the mapping after introducing "virtual nodes".

The mapping from objects to virtual nodes is now:

object1 -> cache A2;
object2 -> cache A1;
object3 -> cache C1;
object4 -> cache C2;

Thus object1 and object2 are mapped onto cache A, while object3 and object4 are mapped onto cache C; the balance has improved greatly.

After "virtual nodes" are introduced, the mapping relationship is converted from {object -> node} to {object -> virtual node}. Figure 7 shows the mapping used when querying which cache an object resides on.

Figure 7: querying the cache on which an object resides, via virtual nodes.

The hash of a "virtual node" can be computed from the corresponding node's IP address plus a numeric suffix. For example, suppose the IP address of cache A is 202.168.14.241. Before introducing "virtual nodes", the hash of cache A is computed as:

Hash("202.168.14.241");

After introducing "virtual nodes", the hashes of the virtual nodes cache A1 and cache A2 are computed as:

Hash("202.168.14.241#1"); // cache A1
Hash("202.168.14.241#2"); // cache A2

The following is Java code implementing this algorithm. First, we implement a HashFunction, following the KETAMA_HASH algorithm in net.spy.memcached.DefaultHashAlgorithm:

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class HashFunction {
    private MessageDigest md5 = null;

    public long hash(String key) {
        if (md5 == null) {
            try {
                md5 = MessageDigest.getInstance("MD5");
            } catch (NoSuchAlgorithmException e) {
                throw new IllegalStateException("no md5 algorithm found");
            }
        }
        md5.reset();
        md5.update(key.getBytes());
        byte[] bKey = md5.digest();
        // Combine the first four bytes of the MD5 digest into an unsigned 32-bit value.
        long res = ((long) (bKey[3] & 0xFF) << 24)
                 | ((long) (bKey[2] & 0xFF) << 16)
                 | ((long) (bKey[1] & 0xFF) << 8)
                 | (long) (bKey[0] & 0xFF);
        return res & 0xffffffffL;
    }
}
```

Then implement ConsistentHash<T>, following the code at https://weblogs.java.net/blog/2007/11/27/consistent-hashing:

```java
import java.util.Collection;
import java.util.SortedMap;
import java.util.TreeMap;

public class ConsistentHash<T> {
    private final HashFunction hashFunction;
    private final int numberOfReplicas; // number of virtual nodes per real node
    // The ring: maps each virtual node's hash value to its real node.
    private final SortedMap<Long, T> circle = new TreeMap<Long, T>();

    public ConsistentHash(HashFunction hashFunction, int numberOfReplicas, Collection<T> nodes) {
        this.hashFunction = hashFunction;
        this.numberOfReplicas = numberOfReplicas;
        for (T node : nodes) {
            add(node);
        }
    }

    public void add(T node) {
        for (int i = 0; i < numberOfReplicas; i++) {
            circle.put(hashFunction.hash(node.toString() + i), node);
        }
    }

    public void remove(T node) {
        for (int i = 0; i < numberOfReplicas; i++) {
            circle.remove(hashFunction.hash(node.toString() + i));
        }
    }

    /**
     * Hash the given key and return the nearest node clockwise on the ring.
     */
    public T get(Object key) {
        if (circle.isEmpty()) {
            return null;
        }
        long hash = hashFunction.hash((String) key);
        if (!circle.containsKey(hash)) {
            // tailMap is the view of the ring whose keys are >= hash;
            // if it is empty we have passed the last node, so wrap to the first.
            SortedMap<Long, T> tailMap = circle.tailMap(hash);
            hash = tailMap.isEmpty() ? circle.firstKey() : tailMap.firstKey();
        }
        return circle.get(hash);
    }

    public long getSize() {
        return circle.size();
    }
}
```

Finally, write a small test program:

```java
import java.util.HashSet;
import java.util.Set;

public class MainApp {
    public static void main(String[] args) {
        Set<String> nodes = new HashSet<String>();
        nodes.add("A");
        nodes.add("B");
        nodes.add("C");

        ConsistentHash<String> consistentHash =
                new ConsistentHash<String>(new HashFunction(), 160, nodes);
        consistentHash.add("D");

        System.out.println(consistentHash.getSize()); // 640
        System.out.println(consistentHash.get("test5"));
    }
}
```

The run result:

640
B

Here 640 is the total number of virtual nodes (4 real nodes x 160 replicas each). The size of this replica count affects how balanced the allocation is.
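To see the effect of the replica count on balance concretely, here is a small test program, not from the original article, that reuses the ConsistentHash and HashFunction classes above. It distributes 10,000 keys across four nodes with replica counts of 1 and 160 and prints the per-node counts; with 160 replicas the counts typically come out much closer to even.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class BalanceTest {
    public static void main(String[] args) {
        for (int replicas : new int[] {1, 160}) {
            Set<String> nodes = new HashSet<String>(Arrays.asList("A", "B", "C", "D"));
            ConsistentHash<String> ch =
                    new ConsistentHash<String>(new HashFunction(), replicas, nodes);

            // Count how many of 10,000 keys each node receives.
            Map<String, Integer> counts = new HashMap<String, Integer>();
            for (int i = 0; i < 10_000; i++) {
                counts.merge(ch.get("key" + i), 1, Integer::sum);
            }

            // With replicas = 1 the counts are typically skewed;
            // with replicas = 160 they are much closer to 2,500 each.
            System.out.println("replicas=" + replicas + " -> " + counts);
        }
    }
}
```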