Consistent hashing algorithm and Go language implementation

Source: Internet
Author: User
Tags: crc32

When I first heard the name "consistent hashing algorithm", it sounded particularly advanced. It tends to come up in connection with distributed systems, or, to be more exact, with distributed caches.

In web services, a cache sits between the database and the server-side program. When a site's traffic is small, it is generally unnecessary: every request can simply query the database. But as traffic grows, having every request hit the database puts it under heavy pressure and slows responses, so a cache database is needed. Cache databases usually have simple storage structures and keep their data in memory; typical examples are Redis and Memcached. With a cache in place, the server program reads from the cache first; if the data is not there, it reads from the database and puts the result into the cache. By the principle of locality, data that has just been accessed is very likely to be accessed again, so the next time it is requested it can be read straight from the cache. Because the performance gap between disk and memory is large, the gap between a cache hit and a cache miss is also large.

To raise the cache hit probability, adding machines is the simple and effective solution, but the memory of a single machine is limited, so we need several machines working together as a cache cluster. We then spread the data across those machines, and this spreading requires a hash algorithm. Without further thought, we can number the n machines 0 to n-1 and place each key on the machine given by its hash modulo n; by the usual probability argument, each machine ends up with roughly the same amount of data.
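Here is a minimal sketch of this naive modulo placement (the machine names and the key are made up for illustration, and crc32 is just one possible choice of hash function):

package main

import (
    "fmt"
    "hash/crc32"
)

// pickNode chooses a cache machine for a key by plain modulo hashing:
// hash the key to a uint32, then take it modulo the number of machines.
func pickNode(key string, nodes []string) string {
    h := crc32.ChecksumIEEE([]byte(key))
    return nodes[h%uint32(len(nodes))]
}

func main() {
    nodes := []string{"cache-0", "cache-1", "cache-2"} // hypothetical machines
    fmt.Println(pickNode("user:42", nodes))            // same machine every time, as long as len(nodes) is unchanged
}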

As the site grows further, n machines are no longer enough to hold the cached data and more machines have to be added. If we keep the modulo method and grow from n to n+1 machines, only about 1/(n+1) of the keys still map to their old machine, so the cache hit rate collapses to roughly 1/(n+1) of what it was, which is far too low. In theory, if load balance is preserved as far as possible, the best we can do after adding one machine is to keep the hit rate at n/(n+1) of the original: it is enough to take 1/(n+1) of the data from each existing machine and move it onto the new one. By induction, adding m machines to n machines can, while keeping the load balanced, preserve a hit rate of n/(n+m) of the original. So what kind of hashing algorithm lets the hit rate after expansion approach this theoretical maximum? That is the consistent hashing algorithm.
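To see the gap concretely, here is a small simulation sketch (the key count and machine counts are arbitrary): it counts how many keys keep the same machine index when growing from n to n+1 machines under modulo placement, and compares that with the theoretical ceiling of n/(n+1):

package main

import (
    "fmt"
    "hash/crc32"
    "strconv"
)

func main() {
    const keys = 100000 // arbitrary number of cached keys
    const n = 9         // machines before scaling out

    same := 0
    for i := 0; i < keys; i++ {
        h := crc32.ChecksumIEEE([]byte("key-" + strconv.Itoa(i)))
        if h%n == h%(n+1) { // key would still be found on its old machine
            same++
        }
    }
    fmt.Printf("modulo placement keeps about %.1f%% of keys\n", 100*float64(same)/keys)
    fmt.Printf("theoretical maximum is       %.1f%%\n", 100*float64(n)/float64(n+1))
}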

Now what we need is a hashing algorithm that meets this requirement. The expectation is that when a new member (cache machine) joins, we can pull a small fraction of data off the original machines onto the newly added one, so that the rest of the data on the original machines stays where it is. A concrete analogy: eight of us share a bottle of cola, each getting a not-quite-full cup, and the bottle is poured out exactly. When two more people arrive, we should not pour all the cola back into the big bottle and redistribute it; the best approach is for everyone to pour a little into the two new empty cups. It is only an analogy, and not necessarily practical, but the idea is the same. So what should we do in a distributed cache system? In quantum mechanics, energy comes in discrete parts rather than being continuous. Inspired by that, we can split each machine into many small parts, each responsible for one hash range, so that a single machine is responsible for several hash ranges; these small parts are what we call virtual nodes. Consider adding m machines to an existing n machines: it is enough to move m/(n+m) of each original machine's parts to the new machines. As for how many parts the whole hash range should be divided into, that takes practice and experience to decide: too many, and management and lookup efficiency suffer; too few, and the load easily becomes unbalanced as machines are added.

First, picture the whole hash range as a line segment, typically represented by an unsigned 32-bit integer. We can cut this segment into many small segments, each described by its head and tail coordinates; to keep adjacent segments from overlapping, we can define every small segment to be left-closed and right-open. Each machine then manages several small segments. When machines are added, we calculate, following the maximum-load-balance principle, how many segments need to be taken from each existing machine based on how many machines were added. This means every machine maintains a data structure holding the set of intervals it manages, and that structure must be updated every time the membership changes. To speed up queries, an inverted index is also needed so that the machine responsible for a given key can be found quickly (see the sketch below). This, I think, is the intent of consistent hashing: it meets the requirement, but it is clearly troublesome to manage, and the exact number of parts is a hard value to decide. Below, let's simplify it.
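As an illustration of this segment-based bookkeeping (not the final design; every name here is invented for the example), a sorted list of left-closed, right-open intervals can serve as the inverted index from hash value to owning machine:

package main

import (
    "fmt"
    "hash/crc32"
    "sort"
)

// segment is a left-closed, right-open slice [start, end) of the 32-bit hash
// range, owned by one machine. All names here are hypothetical.
type segment struct {
    start, end uint32
    owner      string
}

// lookup finds the machine whose segment contains the key's hash,
// using binary search over segments sorted by their start coordinate.
func lookup(segs []segment, key string) string {
    h := crc32.ChecksumIEEE([]byte(key))
    i := sort.Search(len(segs), func(j int) bool { return segs[j].end > h })
    if i == len(segs) { // hash landed on the very last value; the tail segment covers it
        i = len(segs) - 1
    }
    return segs[i].owner
}

func main() {
    // Three machines, each managing a single slice of the range for brevity;
    // in the scheme described above each machine would hold many such slices.
    segs := []segment{
        {0, 1 << 30, "cache-0"},
        {1 << 30, 3 << 30, "cache-1"},
        {3 << 30, 1<<32 - 1, "cache-2"},
    }
    fmt.Println(lookup(segs, "user:42"))
}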

Above, each machine manages whole line segments, which is clearly more complex: for every segment a two-value tuple has to be maintained. In fact, it is enough to maintain a single point, and we can define each point as governing the interval from itself clockwise up to, but excluding, the next point. The last point and the first point are a bit awkward to handle, so we join the two ends of the original segment together to form a ring. This way every point can look clockwise and find the area it covers, and each machine only needs to maintain the list of its own points. On top of that, when a machine is added, instead of moving existing points onto the new machine (which would require an algorithm to keep the load balanced), we can simply add new points to the existing ring to split the existing machines' data: a new point falling inside an existing segment divides it into two, which solves the data splitting neatly. Of course, after doing this the total number of points is no longer a fixed value (in fact we do not care about the total); it is determined by the number of machines and the number of points per machine. How many points per machine is reasonable? One answer that has been given is 150, which is, of course, also an empirical value. Now the consistent hashing algorithm becomes this: each machine owns a fixed number of points distributed over a ring of unsigned 32-bit integers, and each point manages itself and the area clockwise up to the next point. How are these points generated? Randomly: hash the machine name (combined with a per-point index) with a hash function to get an unsigned 32-bit integer; in Go, crc32.ChecksumIEEE can be used for this. Therefore, to find the machine responsible for a key, we only need to find the first point in the counter-clockwise direction from the key's hash, and then the machine that point belongs to.

Below is a simple Go implementation, for reference only; it mainly follows the implementation linked at the end of this article (the reprint source). For node lookup it keeps the points sorted and uses binary search (Go's built-in sort package), so a query costs O(log N), while an insertion or deletion costs O(N log N) because the ring is rebuilt and re-sorted. With a red-black tree both operations would be O(log N), so from a purely algorithmic point of view a red-black tree is more appropriate; in practice, however, consistent hashing is used mostly for lookups and node changes are rare, so the difference is not significant.

Import ("HASH/CRC32"    "Sort"    "StrConv"    "Sync") Type errorstring struct {sstring}func (e *errorstring) Error ()string{return"Consistenterror:"+ e.s}//Defining error TypesFunc Consistenterror (textstring) Error {return &errorstring{text}}//Defining ring Typestype Circle []uint32func (c circle) len () int {return Len (c)}func (c circle) Less (i, J int)BOOL{return c[i] < C[j]}func (c Circle) Swap (i, J int) {C[i], c[j] = c[j], C[i]}type Hash func (Date []byte) UI Nt32type consistent struct {hash hash//functions that produce uint32 typesCircle Circle//RingVirtualnodes int//The number of virtual nodes, the text saidVirtualmap Map[uint32]string  //Point-to-host mappingsMembers map[string]BOOL  //Host Listsync. Rwmutex}func Newconsisten () *consistent {return &consistent{Hash:CRC32. Checksumieee,Circle:circle{},Virtualnodes:  Max,        Virtualmap:Make (Map[uint32]string),        Members :Make (map[string]BOOL),    }}//generate a String key for an element with an indexFunc (c *consistent) Eltkey (keystring, idx int)string{return key +"|"+ StrConv. Itoa (IDX)}func (c *consistent) updatecricle () {c.circle = circle{} for k: = Range C.virtualmap {c.circle = Append (C.circle, k)} sort. Sort (c.circle)}func (c *consistent) members () []string{c.rlock () defer C.runlock () m: = Make ([]string, Len (c.members)) var i =0For k: = Range c.members {m[i] = k i++} return M}func (c *consistent) Get (Keystring)string{hashkey: = C.hash ([]byte (Key)) C.rlock () defer c.runlock () I: = C.search (HashKey) return C.virtualma P[c.circle[i]]}//search nearly vnode around key//sort. Search uses binary search to find key//every Vnode cover its self and clockwise areafunc (c *consistent) search (key UInt32) int {f: = func (x int)BOOL{return c.circle[x] >= key} I: = sort. Search (Len (c.circle), f) i = i-1If I <0{i = len (c.circle)-1} return i}//This function is beautifulFunc (c *consistent) Forceset (Keys ...string) {MEMS: = C.members () for _, ELT: = range MEMS {var found = FalseFoundloop:for _, K: = Range keys {if k = = ELT {found = True BreakFoundloop}} If!found {c.remove (ELT)}} for _, K: = Range keys {C.rlock () _, OK: = c.members[ K] C.runlock () if!ok {C.add (k)}}}func (c *consistent)ADD(ELTstring{c.lock () defer c.unlock () If _, OK: = C.members[elt]; OK {return} C.members[elt] = True For idx: =0; idx < c.virtualnodes; idx++ {c.virtualmap[c.hash (C.eltkey (ELT, idx))] = ELT} c.updatecricl E ()}func (c *consistent) Remove (ELTstring{c.lock () defer c.unlock () If _, OK: = C.members[elt];!ok {return} delete (C.members, ELT) For idx: =0; idx < c.virtualnodes; idx++ {Delete (C.virtualmap, C.hash (]byte (ELT, idx))} C.eltkey CLE ()}

Reprint: http://cloudaice.com/consistent-hash/
