Redis 2.4.15 does not yet provide cluster functionality; the Redis blog says a cluster mechanism will be implemented in 3.0. At present, the main way to scale Redis horizontally is consistent-hashing sharding (shard): different keys are assigned to different Redis servers. Two common distributed scenarios follow:
When reads and writes are roughly balanced and low latency is required, you can use the distributed mode shown in the following diagram:
When reads far outnumber writes, you can use the distributed pattern shown in the following illustration:
Jedis 2.0.0 already provides an implementation of the consistent-hashing sharding algorithm. Sample code (using ShardedJedisPool as an example) follows:
package com.jd.redis.client;

import java.util.ArrayList;
import java.util.List;

import redis.clients.jedis.JedisPoolConfig;
import redis.clients.jedis.JedisShardInfo;
import redis.clients.jedis.ShardedJedis;
import redis.clients.jedis.ShardedJedisPool;
import redis.clients.util.Hashing;
import redis.clients.util.Sharded;

public class RedisShardPoolTest {

    static ShardedJedisPool pool;

    static {
        JedisPoolConfig config = new JedisPoolConfig(); // Jedis pool configuration
        config.setMaxActive(500);      // maximum active objects
        config.setMaxIdle(1000 * 60);  // note: setMaxIdle caps the number of idle objects
        config.setMaxWait(1000 * 10);  // maximum wait time when getting an object
        config.setTestOnBorrow(true);

        String hostA = "10.10.224.44";
        int portA = 6379;
        String hostB = "10.10.224.48";
        int portB = 6379;

        List<JedisShardInfo> jdsInfoList = new ArrayList<JedisShardInfo>(2);
        JedisShardInfo infoA = new JedisShardInfo(hostA, portA);
        infoA.setPassword("redis.360buy");
        JedisShardInfo infoB = new JedisShardInfo(hostB, portB);
        infoB.setPassword("redis.360buy");
        jdsInfoList.add(infoA);
        jdsInfoList.add(infoB);

        pool = new ShardedJedisPool(config, jdsInfoList, Hashing.MURMUR_HASH,
                Sharded.DEFAULT_KEY_TAG_PATTERN);
    }

    /**
     * @param args
     */
    public static void main(String[] args) {
        for (int i = 0; i < 100; i++) {
            String key = generateKey();
            // key += "{aaa}"; // uncomment to append a key tag: with
            // DEFAULT_KEY_TAG_PATTERN only the tag is hashed, so all
            // tagged keys land on the same shard
            ShardedJedis jds = null;
            try {
                jds = pool.getResource();
                System.out.println(key + ":" + jds.getShard(key).getClient().getHost());
                System.out.println(jds.set(key, "1111111111111111111111111111111"));
            } catch (Exception e) {
                e.printStackTrace();
            } finally {
                if (jds != null) {
                    pool.returnResource(jds);
                }
            }
        }
    }

    private static int index = 1;

    public static String generateKey() {
        return String.valueOf(Thread.currentThread().getId()) + "_" + (index++);
    }
}
As the output shows, different keys are assigned to different redis-server instances.
In fact, the cluster pattern above has two problems:
1. Expansion issues:
Because consistent hashing is used for sharding, different keys are distributed to different redis-servers. When we need to expand capacity and add a machine to the shard list, some keys will hash to a different machine than before, so a read for such a key will no longer find its value. For this situation, the author of Redis proposes an approach called pre-sharding:
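To see the scale of the problem, here is a small self-contained sketch. It is hypothetical (not from the article) and uses plain "hash % N" placement rather than Jedis's consistent hashing, but it illustrates the same failure mode: growing from 2 to 3 servers moves keys, so reads after expansion miss the server that holds the old value.

```java
// Hypothetical demo: with naive "hash % N" sharding, adding one server
// to a 2-server cluster changes the placement of most keys.
public class ReshardDemo {
    // Map a key to a server index; & 0x7fffffff keeps the hash non-negative.
    public static int shard(String key, int servers) {
        return (key.hashCode() & 0x7fffffff) % servers;
    }

    public static void main(String[] args) {
        int moved = 0, total = 1000;
        for (int i = 0; i < total; i++) {
            String key = "key_" + i;
            // Compare the key's server before (2 nodes) and after (3 nodes).
            if (shard(key, 2) != shard(key, 3)) moved++;
        }
        System.out.println("keys remapped: " + moved + " of " + total);
    }
}
```

Consistent hashing keeps the remapped fraction much smaller than modulo placement does, which is why it is used here, but the keys on the moved range are still lost after expansion, hence pre-sharding.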
The pre-sharding approach is to run several Redis instances, on different ports, on every physical machine. If there are three physical machines and each runs three Redis instances, the shard list actually contains 9 Redis instances. When we need to expand by adding a physical machine, the steps are as follows:
A. Run redis-server on the new physical machine;
B. Make this redis-server a slave (slaveof) of one of the redis-servers in the shard list (call it RedisA);
C. When master-slave replication (Replication) is complete, change RedisA's IP and port in the client's shard list to the IP and port of the redis-server on the new physical machine;
D. Stop RedisA.
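The steps above can be sketched as the sequence of Redis commands an operator would issue. The IP of the new machine is an assumption for illustration (RedisA's address is taken from the sharding example), the explicit SLAVEOF NO ONE promotion is implied by step D, and this program only prints the plan rather than contacting any server.

```java
// Prints the operator's command plan for pre-sharding steps B-D.
// Addresses are assumptions; nothing here talks to a real Redis.
public class PreShardingPlan {
    static String oldHost = "10.10.224.44"; // RedisA, the shard being moved
    static String newHost = "10.10.224.50"; // new physical machine (assumed)
    static int port = 6379;

    public static String[] plan() {
        return new String[] {
            // Step B: replicate RedisA onto the new machine
            "on " + newHost + ":" + port + " -> SLAVEOF " + oldHost + " " + port,
            // Step C happens client-side: once replication catches up,
            // replace oldHost:port with newHost:port in the shard list.
            // Step D: detach (promote) the new instance, then stop RedisA
            "on " + newHost + ":" + port + " -> SLAVEOF NO ONE",
            "on " + oldHost + ":" + port + " -> SHUTDOWN",
        };
    }

    public static void main(String[] args) {
        for (String step : plan()) System.out.println(step);
    }
}
```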
This is equivalent to transferring one redis-server to a new machine. Pre-sharding is in effect a way to scale online, but it still depends heavily on Redis replication itself: if the master's snapshot data file is very large, replication will take a long time and also put pressure on the master. So this splitting process is best scheduled for an off-peak period of business traffic.
Http://blog.nosqlfan.com/html/3153.html
2. Single point of failure:
Again, use Redis's master-slave replication: run redis-server on two physical hosts, one instance being the slave of the other, in a dual-machine hot-standby setup. Clients reach the master's physical IP through a virtual IP; when the master goes down, the virtual IP is switched to the slave's physical IP. After the old master is repaired, the previous slave should be made the master (with the command slaveof no one) and the repaired machine made its slave (with the command slaveof IP PORT), so that data written while the old master was down stays consistent.
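The role swap after repair can likewise be sketched as a command plan, using the two commands named in the text (slaveof no one / slaveof IP PORT). The hosts reuse the addresses from the sharding example as an assumption; the program only prints the sequence, it does not contact Redis.

```java
// Prints the failback plan: promote the surviving slave, then re-attach
// the repaired former master as its slave. Hosts are assumptions.
public class FailbackPlan {
    public static String[] plan(String newMaster, String oldMaster, int port) {
        return new String[] {
            // promote the surviving slave so it serves writes on its own
            "on " + newMaster + ":" + port + " -> SLAVEOF NO ONE",
            // re-attach the repaired machine so it replays the writes
            // it missed while it was down
            "on " + oldMaster + ":" + port + " -> SLAVEOF " + newMaster + " " + port,
        };
    }

    public static void main(String[] args) {
        for (String s : plan("10.10.224.48", "10.10.224.44", 6379))
            System.out.println(s);
    }
}
```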