Redis 2.4.15 does not yet provide cluster functionality; the Redis author has said in a blog post that a cluster mechanism will be implemented in 3.0. At present, Redis clusters are mainly built with consistent-hash sharding (shard), which assigns different keys to different Redis servers to achieve horizontal scaling. Here are two common distributed scenarios:
When read and write operations are fairly evenly balanced and real-time requirements are high, the distributed mode shown in the diagram below can be used:
When read operations greatly outnumber write operations, the distributed mode shown in the diagram below can be used:
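Since the diagrams are not reproduced here, a minimal sketch of the read-heavy mode may help. It assumes hypothetical addresses (one master at 10.10.224.44, two slaves) and plain Jedis connections: writes go to the master, reads are spread across the slaves.

import java.util.Arrays;
import java.util.List;
import java.util.Random;
import redis.clients.jedis.Jedis;

public class ReadWriteSplitSketch {
    public static void main(String[] args) {
        // Write path: all writes go to the master.
        Jedis master = new Jedis("10.10.224.44", 6379);
        // Read path: reads are spread across the master's slaves.
        List<Jedis> slaves = Arrays.asList(
                new Jedis("10.10.224.48", 6379),
                new Jedis("10.10.224.49", 6379));

        master.set("counter", "1");
        Random rnd = new Random();
        Jedis reader = slaves.get(rnd.nextInt(slaves.size()));
        System.out.println(reader.get("counter"));
    }
}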
Jedis 2.0.0 already provides a consistent-hash sharding implementation. The following sample code shows how to use it (taking ShardedJedisPool as an example):
package com.jd.redis.client;

import java.util.ArrayList;
import java.util.List;

import redis.clients.jedis.JedisPoolConfig;
import redis.clients.jedis.JedisShardInfo;
import redis.clients.jedis.ShardedJedis;
import redis.clients.jedis.ShardedJedisPool;
import redis.clients.util.Hashing;
import redis.clients.util.Sharded;

public class RedisShardPoolTest {

    static ShardedJedisPool pool;

    static {
        JedisPoolConfig config = new JedisPoolConfig(); // jedis pool configuration
        config.setMaxActive(500);       // maximum number of active objects
        config.setMaxIdle(1000 * 60);   // maximum number of idle objects
        config.setMaxWait(1000 * 10);   // maximum wait time (ms) to get an object
        config.setTestOnBorrow(true);

        String hostA = "10.10.224.44";
        int portA = 6379;
        String hostB = "10.10.224.48";
        int portB = 6379;

        List<JedisShardInfo> jdsInfoList = new ArrayList<JedisShardInfo>(2);
        JedisShardInfo infoA = new JedisShardInfo(hostA, portA);
        infoA.setPassword("Redis.360buy");
        JedisShardInfo infoB = new JedisShardInfo(hostB, portB);
        infoB.setPassword("Redis.360buy");
        jdsInfoList.add(infoA);
        jdsInfoList.add(infoB);

        pool = new ShardedJedisPool(config, jdsInfoList,
                Hashing.MURMUR_HASH, Sharded.DEFAULT_KEY_TAG_PATTERN);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 100; i++) {
            String key = generateKey();
            // Key tag: with DEFAULT_KEY_TAG_PATTERN only the text inside {} is hashed,
            // so keys sharing a tag are routed to the same shard.
            key += "{AAA}";
            ShardedJedis jds = null;
            try {
                jds = pool.getResource();
                System.out.println(key + ":" + jds.getShard(key).getClient().getHost());
                System.out.println(jds.set(key, "1111111111111111111111111111111"));
            } catch (Exception e) {
                e.printStackTrace();
            } finally {
                pool.returnResource(jds);
            }
        }
    }

    private static int index = 1;

    public static String generateKey() {
        return String.valueOf(Thread.currentThread().getId()) + "_" + (index++);
    }
}
The output shows which redis-server each key lands on. Note that because every key in this sample carries the same {AAA} tag, the default key-tag pattern hashes on the tag alone, so these keys all route to one redis-server; remove (or vary) the tag and different keys are assigned to different redis-servers.
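To make the tag behavior concrete, here is a small fragment reusing the pool from the example above (the key names are hypothetical): keys sharing a tag always resolve to the same shard.

ShardedJedis jds = pool.getResource();
try {
    // Both keys carry the tag AAA, so only "AAA" is hashed and
    // both resolve to the same redis-server.
    String host1 = jds.getShard("user:1{AAA}").getClient().getHost();
    String host2 = jds.getShard("user:2{AAA}").getClient().getHost();
    System.out.println(host1.equals(host2)); // prints true
} finally {
    pool.returnResource(jds);
}

This is useful when related keys must live on the same server, for example when they are read or written together.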
In fact, there are two problems with the cluster pattern above:
1. Expansion issues:
Because consistent hashing is used for sharding, different keys are distributed to different redis-servers. When we need to add capacity, we add a machine to the shard list; at that point, some keys that used to hash to an existing machine now hash to the new one, so a lookup for such a key may fail to find its value. For this scenario, the Redis author proposes a method called Pre-Sharding.
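Before looking at pre-sharding, the remapping problem itself can be sketched with Jedis's own Sharded class. This assumes a hypothetical third machine, 10.10.224.49, is added to the two shards from the earlier example; no connections are opened, since only the hash ring is consulted.

import java.util.ArrayList;
import java.util.List;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisShardInfo;
import redis.clients.util.Hashing;
import redis.clients.util.Sharded;

public class ReshardingSketch {
    public static void main(String[] args) {
        List<JedisShardInfo> two = new ArrayList<JedisShardInfo>();
        two.add(new JedisShardInfo("10.10.224.44", 6379));
        two.add(new JedisShardInfo("10.10.224.48", 6379));

        List<JedisShardInfo> three = new ArrayList<JedisShardInfo>(two);
        three.add(new JedisShardInfo("10.10.224.49", 6379)); // hypothetical new machine

        Sharded<Jedis, JedisShardInfo> before =
                new Sharded<Jedis, JedisShardInfo>(two, Hashing.MURMUR_HASH);
        Sharded<Jedis, JedisShardInfo> after =
                new Sharded<Jedis, JedisShardInfo>(three, Hashing.MURMUR_HASH);

        for (int i = 0; i < 20; i++) {
            String key = "key_" + i;
            String oldHost = before.getShardInfo(key).getHost();
            String newHost = after.getShardInfo(key).getHost();
            if (!oldHost.equals(newHost)) {
                // A GET for this key would now go to a server that never saw it.
                System.out.println(key + " moved: " + oldHost + " -> " + newHost);
            }
        }
    }
}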
The Pre-Sharding method is to run several Redis instances on each physical machine, each listening on a different port. For example, if there are three physical machines, each running three Redis instances, then our shard list actually contains 9 Redis instances. When we need to add capacity by one physical machine, the steps are as follows (a configuration sketch follows the list):
A. Run redis-server on the new physical machine;
B. Make this redis-server a slave (slaveof) of one redis-server in the shard list (call it RedisA);
C. When master-slave replication (replication) is complete, change RedisA's IP and port in the client's shard list to the IP and port of the redis-server on the new physical machine;
D. Stop RedisA.
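A minimal sketch of the idea, with hypothetical hosts and ports, reusing the config from the sample above: three physical machines each run three redis-server instances on different ports, so the shard list has nine entries from day one, and step C amounts to replacing one entry in place. Note that, at least in Jedis of this vintage, a shard with no explicit name is placed on the hash ring by its list index, which is what makes swapping RedisA's host and port at the same index safe: no keys move.

List<JedisShardInfo> shards = new ArrayList<JedisShardInfo>();
for (String host : new String[] { "10.10.224.44", "10.10.224.48", "10.10.224.49" }) {
    for (int port : new int[] { 6379, 6380, 6381 }) {
        shards.add(new JedisShardInfo(host, port)); // 9 shards across 3 machines
    }
}
// Step C: replace RedisA (say, index 0) with the new machine, then rebuild the pool.
shards.set(0, new JedisShardInfo("10.10.224.50", 6379)); // hypothetical new host
ShardedJedisPool newPool = new ShardedJedisPool(config, shards,
        Hashing.MURMUR_HASH, Sharded.DEFAULT_KEY_TAG_PATTERN);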
This is equivalent to transferring one redis-server in the shard list to a new machine. Pre-Sharding is essentially a way to scale out online, but it still relies on Redis's own replication: if the master's snapshot data file is large, the replication process will take a long time and put pressure on the master. So this splitting process is best performed during off-peak business hours.
http://blog.nosqlfan.com/html/3153.html
2. Single point of failure:
Here again Redis's master-slave replication is used: run redis-server on two physical machines, one being the slave of the other, in a hot-standby setup. The client accesses the master's physical IP through a virtual IP; when the master goes down, the virtual IP is switched to the slave's physical IP. When the failed master is later repaired, the former slave should be promoted to master (with the command slaveof no one) and the repaired machine made its slave (with the command slaveof IP PORT), so that the data written during the outage stays consistent.
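In code, the repair step can be sketched with Jedis's replication commands (addresses reused from the sample above; this assumes the virtual IP has already been switched to the surviving slave):

import redis.clients.jedis.Jedis;

public class FailoverRepairSketch {
    public static void main(String[] args) {
        // Promote the surviving slave: SLAVEOF NO ONE stops replication
        // and lets it accept writes as the new master.
        Jedis newMaster = new Jedis("10.10.224.48", 6379);
        newMaster.auth("Redis.360buy");
        newMaster.slaveofNoOne();

        // Demote the repaired old master: SLAVEOF ip port makes it resync
        // the data written during the outage from the new master.
        Jedis oldMaster = new Jedis("10.10.224.44", 6379);
        oldMaster.auth("Redis.360buy");
        oldMaster.slaveof("10.10.224.48", 6379);
    }
}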