Redis itself is a single point, yet in real projects it is almost unavoidable to use multiple Redis cache servers. How do we map cache keys evenly across multiple Redis servers, and lose as few cache hits as possible when a cache server is added or removed? This requires us to implement the distribution ourselves.
Memcached should be familiar to everyone: keys are mapped to Memcached servers for fast reads, and nodes can be added dynamically without breaking the existing mapping between keys and servers, because a consistent hash is used.
Memcached's hashing strategy is implemented in its clients, so different clients implement it differently; spymemcached and xmemcached, for example, both use Ketama.
Therefore, we can use the same consistent hash algorithm to make Redis distributed. Before introducing the consistent hash algorithm, let me first describe a method I came up with earlier for mapping keys evenly onto multiple Redis servers.
My own level is limited and I have not studied Redis deeply, so please correct me wherever the writing is wrong.
Scheme 1
This scheme is one I thought up a few days ago. The main idea is to sum the ASCII codes of the letters and digits in the cache key; that sum modulo the number of Redis servers identifies the server the key maps to. The big drawback of this approach is that when a Redis server is added or removed, essentially no key still maps to the server it was on before. The code is as follows:
using System;
using System.Collections.Generic;
using System.Linq;
using ServiceStack.Redis; // the Redis client library used here

/// <summary>
/// Map the cache key to the corresponding Redis server
/// </summary>
/// <param name="key"></param>
/// <returns></returns>
public static RedisClient GetRedisClientByKey(string key)
{
    List<RedisClientInfo> redisClientList = new List<RedisClientInfo>();
    redisClientList.Add(new RedisClientInfo() { Num = 0, IpPort = "127.0.0.1:6379" });
    redisClientList.Add(new RedisClientInfo() { Num = 1, IpPort = "127.0.0.1:9001" });

    char[] charKey = key.ToCharArray();
    // Sum of the ASCII codes of all letters and digits in the key
    int keyNum = 0;
    // The remainder
    int num = 0;
    foreach (var c in charKey)
    {
        if ((c >= 'a' && 'z' >= c) || (c >= 'A' && 'Z' >= c))
        {
            System.Text.ASCIIEncoding asciiEncoding = new System.Text.ASCIIEncoding();
            keyNum = keyNum + (int)asciiEncoding.GetBytes(c.ToString())[0];
        }
        if (c >= '0' && '9' >= c)
        {
            keyNum += Convert.ToInt32(c.ToString());
        }
    }
    num = keyNum % redisClientList.Count;
    return new RedisClient(redisClientList.Where(it => it.Num == num).First().IpPort);
}

// Redis client information
public class RedisClientInfo
{
    // Redis server number
    public int Num { get; set; }
    // Redis server IP address and port
    public string IpPort { get; set; }
}
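To see that drawback concretely, here is a minimal sketch (not from the original post) that applies the same sum-and-modulo idea to the ten test keys used later and counts how many land on a different server when the server count grows from 2 to 3. The KeyNum helper is a simplified variant that sums the ASCII codes of digits as well as letters:

using System;

class ModuloDemo
{
    // Simplified Scheme 1 hash: sum the ASCII codes of all letters and digits
    static int KeyNum(string key)
    {
        int sum = 0;
        foreach (char c in key)
            if (char.IsLetterOrDigit(c)) sum += (int)c;
        return sum;
    }

    static void Main()
    {
        int moved = 0;
        for (int i = 0; i < 10; i++)
        {
            string key = "user_" + i;
            int before = KeyNum(key) % 2;  // with 2 servers
            int after = KeyNum(key) % 3;   // with 3 servers
            if (before != after) moved++;
            Console.WriteLine("{0}: server {1} -> server {2}", key, before, after);
        }
        Console.WriteLine("{0}/10 keys moved to a different server", moved);
    }
}

With these particular keys, seven of the ten end up on a different server after the change, so most of the existing cache becomes unreachable.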
Scheme 2

1. Distributed implementation
Keys are distributed to the Redis nodes by applying a consistent hash to the keys.
Implementation of a consistent hash:
- Hash calculation: both MD5 and MurmurHash are supported; MurmurHash is the default because it computes hashes efficiently.
- Consistency: Java's TreeMap is used to simulate a ring structure and distribute the nodes evenly (the C# port below uses a SortedList instead).
Not much more to say; let's go straight to the code. I only know the surface of this, and there are places in the code I don't fully understand yet that I'll keep puzzling over later.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

public class KetamaNodeLocator
{
    // The original Java class uses a TreeMap with a comparator; for simplicity I
    // use .NET's SortedList here (which accepts a comparer in the same spirit)
    private SortedList<long, string> ketamaNodes = new SortedList<long, string>();
    private int numReps = 160;

    // The parameters differ from the Java version: static methods are used here,
    // so the HashAlgorithm alg parameter is no longer passed in
    public KetamaNodeLocator(List<string> nodes /*, int nodeCopies*/)
    {
        ketamaNodes = new SortedList<long, string>();
        //numReps = nodeCopies;
        // For every node, generate numReps virtual nodes
        foreach (string node in nodes)
        {
            // Virtual nodes are created four at a time
            for (int i = 0; i < numReps / 4; i++)
            {
                // node + i serves as a unique name for this group of virtual nodes
                byte[] digest = HashAlgorithm.ComputeMd5(node + i);
                /* The MD5 digest is 16 bytes; each group of 4 bytes yields one
                   virtual node, which is why the virtual nodes above come in
                   groups of four */
                for (int h = 0; h < 4; h++)
                {
                    long m = HashAlgorithm.Hash(digest, h);
                    ketamaNodes[m] = node;
                }
            }
        }
    }

    public string GetPrimary(string k)
    {
        byte[] digest = HashAlgorithm.ComputeMd5(k);
        string rv = GetNodeForKey(HashAlgorithm.Hash(digest, 0));
        return rv;
    }

    string GetNodeForKey(long hash)
    {
        string rv;
        long key = hash;
        // If this exact position exists on the ring, use its node directly
        if (!ketamaNodes.ContainsKey(key))
        {
            // Otherwise take the sub-map of keys larger than the current hash and
            // use its first key, i.e. the closest key greater than the hash;
            // see http://www.javaeye.com/topic/684087
            var tailMap = from coll in ketamaNodes
                          where coll.Key > hash
                          select new { coll.Key };
            if (tailMap == null || tailMap.Count() == 0)
                key = ketamaNodes.FirstOrDefault().Key;
            else
                key = tailMap.FirstOrDefault().Key;
        }
        rv = ketamaNodes[key];
        return rv;
    }
}

public class HashAlgorithm
{
    public static long Hash(byte[] digest, int nTime)
    {
        long rv = ((long)(digest[3 + nTime * 4] & 0xFF) << 24)
                | ((long)(digest[2 + nTime * 4] & 0xFF) << 16)
                | ((long)(digest[1 + nTime * 4] & 0xFF) << 8)
                | ((long)digest[0 + nTime * 4] & 0xFF);
        return rv & 0xffffffffL; /* Truncate to 32 bits */
    }

    /* Get the MD5 of the given key. */
    public static byte[] ComputeMd5(string k)
    {
        MD5 md5 = new MD5CryptoServiceProvider();
        byte[] keyBytes = md5.ComputeHash(Encoding.UTF8.GetBytes(k));
        md5.Clear();
        //md5.update(keyBytes);
        //return md5.digest();
        return keyBytes;
    }
}
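One thing worth noting about the port above: the LINQ query in GetNodeForKey scans the whole ring on every lookup, while Java's TreeMap.tailMap finds the successor in O(log n) time. Since SortedList keeps its Keys collection sorted, a binary search can restore that complexity. Below is a minimal sketch of an alternative lookup, assuming the same ketamaNodes field as in the class above:

// Drop-in alternative to GetNodeForKey for the KetamaNodeLocator class above
string GetNodeForKeyFast(long hash)
{
    IList<long> keys = ketamaNodes.Keys;
    // Binary search for the first ring position at or after the hash
    int lo = 0, hi = keys.Count - 1, idx = keys.Count;
    while (lo <= hi)
    {
        int mid = lo + (hi - lo) / 2;
        if (keys[mid] >= hash) { idx = mid; hi = mid - 1; }
        else lo = mid + 1;
    }
    // Wrap around to the first node when the hash is past the last ring position
    long key = (idx == keys.Count) ? keys[0] : keys[idx];
    return ketamaNodes[key];
}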
2. Distributed testing
1. Suppose there are two servers, 0001 and 0002, and loop ten times to see whether the key values map evenly onto the servers. The code is as follows:
static void Main(string[] args)
{
    // Assumed list of servers
    List<string> nodes = new List<string>() { "0001", "0002" };
    KetamaNodeLocator k = new KetamaNodeLocator(nodes);
    string str = "";
    for (int i = 0; i < 10; i++)
    {
        string key = "user_" + i;
        str += string.Format("key:{0} is assigned to server: {1}\n\n", key, k.GetPrimary(key));
    }
    Console.WriteLine(str);
    Console.ReadLine();
}
The results of running the program twice are as follows; the keys turn out to be distributed fairly evenly across the server nodes.
2. Now we add a server node, 0003; the code is as follows:
static void Main(string[] args)
{
    // Assumed list of servers
    List<string> nodes = new List<string>() { "0001", "0002", "0003" };
    KetamaNodeLocator k = new KetamaNodeLocator(nodes);
    string str = "";
    for (int i = 0; i < 10; i++)
    {
        string key = "user_" + i;
        str += string.Format("key:{0} is assigned to server: {1}\n\n", key, k.GetPrimary(key));
    }
    Console.WriteLine(str);
    Console.ReadLine();
}
The results of running the program twice are as follows:
Comparing with the results of the first run, only the user_5, user_7, and user_9 caches miss; all the other caches still hit.
3. Now we remove server 0002 and run twice; the results are as follows:
Comparing with the results of the second run, only the user_0, user_1, and user_6 caches miss.
Conclusion
The consistent hash algorithm solves the problem of distributing Redis very well: when Redis servers are added or removed, the cache hit ratio for previously stored keys remains relatively high.
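To finish, here is a minimal sketch of how the locator might be wired to real Redis servers by using "ip:port" strings as the node names on the ring. It assumes the ServiceStack.Redis client from Scheme 1, and the endpoints are placeholders:

using System;
using System.Collections.Generic;
using ServiceStack.Redis;

class Demo
{
    static void Main()
    {
        // Placeholder endpoints; the node name itself is what gets hashed onto the ring
        List<string> nodes = new List<string>() { "127.0.0.1:6379", "127.0.0.1:9001" };
        KetamaNodeLocator locator = new KetamaNodeLocator(nodes);

        string key = "user_1";
        // Pick the server this key hashes to on the ring
        string ipPort = locator.GetPrimary(key);
        string[] parts = ipPort.Split(':');
        using (RedisClient client = new RedisClient(parts[0], int.Parse(parts[1])))
        {
            client.Set(key, "some value");
            Console.WriteLine("{0} stored on {1}", key, ipPort);
        }
    }
}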