Typically, after a Redis client sends a request, it blocks and waits for the Redis server to process it; once the server finishes, the result is returned to the client in a response message. This is a bit like HBase's Scan, where by default the client fetches each record from the server with a separate RPC call. Does Redis have something like HBase's scanner caching, where a single request returns multiple records? Yes: the pipeline. The official introduction is at http://redis.io/topics/pipelining.
In pipeline mode, when there are a large number of operations, we can save much of the time that would otherwise be wasted on network round trips. Note, though, that because a pipeline delivers commands in a batch, Redis must buffer the replies to all of them before the client reads the results: the more commands you pack into one batch, the more memory that buffering consumes. So it is not the case that the bigger the batch, the better; for large workloads it is worth flushing the pipeline in chunks, as sketched below.
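If a batch could grow very large, a common pattern is to sync the pipeline every N commands so that neither client nor server has to buffer an unbounded number of queued replies. Here is a minimal sketch of that pattern using the Jedis client; the key names and the batch size of 1,000 are illustrative choices, not values from the original test:

import redis.clients.jedis.Jedis;
import redis.clients.jedis.Pipeline;

public class ChunkedPipelineDemo {
    public static void main(String[] args) {
        Jedis redis = new Jedis("127.0.0.1", 6379);
        Pipeline p = redis.pipelined();
        int batchSize = 1000; // arbitrary chunk size; tune for your payload sizes
        for (int i = 0; i < 1000000; i++) {
            p.set("key_" + i, "value_" + i);
            // Flush periodically so queued replies don't pile up in memory.
            if (i % batchSize == batchSize - 1) {
                p.sync();
            }
        }
        p.sync(); // flush whatever remains in the last partial batch
        redis.disconnect();
    }
}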
With a pipeline, there is a significant performance improvement for bulk reads and writes in Redis. Let's test it with Java, using the Jedis client:
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.Pipeline;
import redis.clients.jedis.Response;

public class Test {
    public static void main(String[] args) throws Exception {
        Jedis redis = new Jedis("127.0.0.1", 6379, 400000);
        Map<String, String> data = new HashMap<String, String>();
        redis.select(8);
        redis.flushDB();

        // hmset without pipeline: one network round trip per command
        long start = System.currentTimeMillis();
        for (int i = 0; i < 10000; i++) {
            data.clear();
            data.put("k_" + i, "v_" + i);
            redis.hmset("key_" + i, data);
        }
        long end = System.currentTimeMillis();
        System.out.println("dbsize:[" + redis.dbSize() + "]");
        System.out.println("hmset without pipeline used [" + (end - start) / 1000 + "] seconds");

        redis.select(8);
        redis.flushDB();

        // hmset with pipeline: commands are queued and shipped in one batch
        Pipeline p = redis.pipelined();
        start = System.currentTimeMillis();
        for (int i = 0; i < 10000; i++) {
            data.clear();
            data.put("k_" + i, "v_" + i);
            p.hmset("key_" + i, data);
        }
        p.sync();
        end = System.currentTimeMillis();
        System.out.println("dbsize:[" + redis.dbSize() + "]");
        System.out.println("hmset with pipeline used [" + (end - start) / 1000 + "] seconds");

        // hgetAll without pipeline
        Set<String> keys = redis.keys("*");
        start = System.currentTimeMillis();
        Map<String, Map<String, String>> result = new HashMap<String, Map<String, String>>();
        for (String key : keys) {
            result.put(key, redis.hgetAll(key));
        }
        end = System.currentTimeMillis();
        System.out.println("result size:[" + result.size() + "]");
        System.out.println("hgetAll without pipeline used [" + (end - start) / 1000 + "] seconds");

        // hgetAll with pipeline: collect Response handles now, read them after sync()
        Map<String, Response<Map<String, String>>> responses =
                new HashMap<String, Response<Map<String, String>>>(keys.size());
        result.clear();
        start = System.currentTimeMillis();
        for (String key : keys) {
            responses.put(key, p.hgetAll(key));
        }
        p.sync();
        for (String k : responses.keySet()) {
            result.put(k, responses.get(k).get());
        }
        end = System.currentTimeMillis();
        System.out.println("result size:[" + result.size() + "]");
        System.out.println("hgetAll with pipeline used [" + (end - start) / 1000 + "] seconds");

        redis.disconnect();
    }
}

Test result: with a pipeline, bulk reading and writing 10,000 records is a piece of cake; it finishes in under a second.
dbsize:[10000]
hmset without pipeline used [243] seconds
dbsize:[10000]
hmset with pipeline used [0] seconds
result size:[10000]
hgetAll without pipeline used [243] seconds
result size:[10000]
hgetAll with pipeline used [0] seconds
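The numbers are easy to sanity-check: 10,000 hmset calls in 243 seconds works out to roughly 24 ms per command, which is consistent with paying one network round trip per request on this setup. The pipelined runs pay for only a handful of round trips in total, which is why they register as [0] seconds at whole-second resolution.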