1. The business scenario
Suppose we have 12 Redis servers (the exact number does not matter). A large amount of user data arrives from the front end and has to be stored across those 12 servers, and that is where the trouble starts: a few of the servers (call them cluster A) may end up holding a great deal of data while the others (cluster B) hold very little, so the read and write pressure on cluster A becomes very heavy (this depends on your data volume, of course; with a small amount of data there is basically no pressure, but with a large volume it really hurts). What is the usual solution to this kind of problem? The usual idea is to hash and take the remainder, i.e. hash(userid) % N (N = 12). That spreads the load and relieves a lot of the pressure, but it also creates new problems as soon as nodes are added or removed (the sketch after this list makes the damage concrete):
1. Removing a node:
If one of the Redis servers dies, is all the data that hash(userid) % N (N = 12) mapped onto it simply lost? With that node gone only 11 servers remain, so the original mapping hash(userid) % N (N = 12) becomes hash(userid) % (N - 1); the hard part is how to recover the lost data.
2. Adding a node:
If one day we want to add a Redis server, the trouble starts again: the original mapping hash(userid) % N (N = 12) has to become hash(userid) % (N + 1). Just as when a node is removed, nearly all of the previous mappings are invalidated. Isn't that a trap?! The data gets scrambled, the backend may collapse in an instant, and the cost of that is huge...
3. Hardware keeps getting cheaper, the company keeps growing, and the number of servers keeps increasing:
hash(userid) % N (N = 12) simply cannot keep up with this; the mapping cannot be changed without pain, and every change is extremely troublesome.
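To see just how bad the remapping is, here is a minimal sketch (my own illustration, in Java, with made-up user IDs and CRC32 standing in for whatever hash a real client would use) that shards keys by hash(userid) % N and counts how many keys land on a different server when N changes from 12 to 11 or 13:

```java
import java.util.zip.CRC32;

public class ModuloShardingDemo {

    // Hash a user id to a non-negative value; CRC32 is only a stand-in
    // for whatever hash function a real client library would use.
    static long hash(String userId) {
        CRC32 crc = new CRC32();
        crc.update(userId.getBytes());
        return crc.getValue();
    }

    // Simple modulo sharding: server index = hash(userid) % N.
    static int serverFor(String userId, int n) {
        return (int) (hash(userId) % n);
    }

    public static void main(String[] args) {
        int total = 100_000;
        int movedAfterRemove = 0;  // N: 12 -> 11 (one server dies)
        int movedAfterAdd = 0;     // N: 12 -> 13 (one server added)

        for (int i = 0; i < total; i++) {
            String userId = "user-" + i;            // hypothetical user ids
            int before = serverFor(userId, 12);
            if (before != serverFor(userId, 11)) movedAfterRemove++;
            if (before != serverFor(userId, 13)) movedAfterAdd++;
        }

        System.out.printf("keys remapped after removing a node: %.1f%%%n",
                100.0 * movedAfterRemove / total);
        System.out.printf("keys remapped after adding a node:    %.1f%%%n",
                100.0 * movedAfterAdd / total);
    }
}
```

With 100,000 keys, the vast majority (typically well over 90%) end up on a different server after either change, which is exactly the "all previous mappings invalidated" problem described above.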
So what should we do? There is of course a way out of this situation: consistent hashing.
I am too lazy to write up the principle from scratch, so what follows draws on http://blog.csdn.net/sparkliang/article/details/5279393, which explains consistent hashing in detail and does it well.
The hash algorithm and monotonicity
One quality metric for a hash algorithm is monotonicity, defined as follows:
Monotonicity means that if some content has already been hashed to its corresponding buffers, and new buffers are then added to the system, the hash result should be such that the originally allocated content is either left where it is or mapped to the new buffers, but never remapped to other buffers in the old buffer set.
It is easy to see that the simple hash algorithm hash(object) % N above has a hard time meeting this monotonicity requirement. For example, a key with hash value 26 maps to buffer 2 when N = 12 (26 % 12 = 2), but to buffer 0 when a 13th buffer is added (26 % 13 = 0); it has moved between two old buffers, which is exactly what monotonicity forbids.
The principle of the consistent hashing algorithm
Consistent hashing is a hash algorithm which, simply put, changes the existing key-to-cache mappings as little as possible when a cache is removed or added, thereby satisfying the monotonicity requirement as far as possible.
The following five steps give a brief account of how the consistent hashing algorithm works.
4. The ring hash space
Consider that the usual hash algorithms map a value to a 32-bit key, i.e. into the numeric space 0 ~ 2^32 - 1. We can think of this space as a ring whose head (0) joins its tail (2^32 - 1), as shown in Figure 1 below.
Figure 1 The ring hash space
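As a minimal sketch of the ring idea (my own illustration, not the referenced article's code), any hash that yields a 32-bit value can place a key on this ring; CRC32 is used here simply because it already returns a number in 0 ~ 2^32 - 1:

```java
import java.util.zip.CRC32;

public class HashRing {

    static final long RING_SIZE = 1L << 32;    // the ring covers 0 .. 2^32 - 1

    // Map an arbitrary string to a point on the ring.
    static long ringPosition(String key) {
        CRC32 crc = new CRC32();
        crc.update(key.getBytes());
        return crc.getValue();                 // CRC32 already yields a 32-bit value
    }

    // Clockwise distance from one point to another, wrapping past 2^32 - 1 back to 0;
    // this wrap-around is what makes the space a ring rather than a line.
    static long clockwiseDistance(long from, long to) {
        return (to - from + RING_SIZE) % RING_SIZE;
    }

    public static void main(String[] args) {
        System.out.println("position of 'some-key': " + ringPosition("some-key"));
        // Head meets tail: stepping clockwise from 2^32 - 1 to 0 is a distance of 1.
        System.out.println(clockwiseDistance(RING_SIZE - 1, 0));
    }
}
```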
5. Mapping objects onto the hash space
Next, consider four objects, object1 ~ object4. The hash function computes a key for each, and the distribution of these keys on the ring is shown in Figure 2.
hash(object1) = key1;
...
hash(object4) = key4;
Figure 2 The distribution of the key values of the four objects
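Continuing the sketch, the same illustrative CRC32-based helper (again an assumption; the article does not fix a particular hash function) can be applied to object1 ~ object4, printing their keys in ascending order, i.e. in clockwise order around the ring, which is the distribution Figure 2 depicts:

```java
import java.util.TreeMap;
import java.util.zip.CRC32;

public class ObjectsOnRing {

    // Same illustrative helper as in the previous sketch:
    // map a string to a point on the 0 .. 2^32 - 1 ring.
    static long ringPosition(String key) {
        CRC32 crc = new CRC32();
        crc.update(key.getBytes());
        return crc.getValue();
    }

    public static void main(String[] args) {
        // key1 = hash(object1), ..., key4 = hash(object4)
        TreeMap<Long, String> ring = new TreeMap<>();
        for (String obj : new String[]{"object1", "object2", "object3", "object4"}) {
            ring.put(ringPosition(obj), obj);
        }
        // Iterating the TreeMap in ascending key order walks the ring clockwise.
        ring.forEach((pos, name) ->
                System.out.println(name + " -> key " + pos));
    }
}
```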