1. Request/response protocol and RTT:
Redis is a typical TCP server based on the client/server model. In this model the client initiates a request, the server performs the corresponding work after receiving it, and finally returns the resulting data or processing result to the client as a reply. While this happens, the client blocks until the server's reply arrives. See the following command sequence:
Client: INCR x
Server: 1
Client: INCR x
Server: 2
Client: INCR x
Server: 3
Client: INCR x
Server: 4
Every request/response pair carries the extra overhead of a network round trip, usually called the RTT (Round Trip Time). Suppose each round trip takes 250 milliseconds: even if the server can process 100k requests per second, this client can get through at most 4 requests per second, because each request must wait a full round trip before the next one can be sent. How do we optimize away this performance problem?
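To make the cost concrete, here is a minimal sketch of the synchronous pattern above using the redis-rb gem (an illustration only, assuming the gem is installed and a Redis server is listening on the default 127.0.0.1:6379); each call blocks for one full round trip before the next command can go out:

require 'redis'

r = Redis.new
4.times do
  # incr sends the command and blocks until the reply arrives,
  # so every iteration pays one full RTT (about 250 ms in the
  # example above), no matter how fast the server itself is.
  puts r.incr('x')   # prints 1, 2, 3, 4 on a fresh key
end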
2. Pipeline (pipelining):
Redis has supported command pipelining since an early version. Before explaining it in detail, let us rewrite the synchronous request/response example above as a pipelined exchange, to get an intuitive feel for the difference:
Client: INCR x
Client: INCR x
Client: INCR x
Client: INCR x
Server: 1
Server: 2
Server: 3
Server: 4
As the example shows, the client does not have to wait for the server's reply after sending a command; it can keep sending the subsequent commands and only read all of the replies once the whole batch has been sent. This saves the RTT overhead paid on every single command in the synchronous mode.
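For readers who want to see what this looks like on the wire, here is a minimal sketch that pipelines the four INCR commands over a plain TCP socket, with no client library involved (an illustration only, assuming a Redis server on 127.0.0.1:6379; it relies on Redis's inline command format, where a command is a text line ending in \r\n):

require 'socket'

sock = TCPSocket.new('127.0.0.1', 6379)

# Send all four commands first, without waiting for any reply.
4.times { sock.write("INCR x\r\n") }

# Only now read the replies, in the order the commands were sent.
# INCR returns an integer reply, which arrives as a line like ":1\r\n".
4.times { puts sock.gets }

sock.close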
Finally, when the Redis server receives pipelined requests from a client, it processes each command in turn and queues the reply to each command in memory, then sends the queued replies back to the client together.
3. Benchmark:
The following test case and results come from the official Redis website. Note that the test runs over the loopback interface (127.0.0.1), where the RTT is already very small; over a real network interface the performance gain from pipelining would be even more significant.
require 'rubygems'
require 'redis'

def bench(descr)
  start = Time.now
  yield
  puts "#{descr} #{Time.now - start} seconds"
end

def without_pipelining
  r = Redis.new
  10000.times { r.ping }
end

def with_pipelining
  r = Redis.new
  r.pipelined { 10000.times { r.ping } }
end

bench("without pipelining") { without_pipelining }
bench("with pipelining") { with_pipelining }

# without pipelining 1.185238 seconds
# with pipelining 0.250783 seconds
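One caveat on the client API: in newer releases of the redis-rb gem the pipelined block yields a pipeline object, and the commands should be called on that object rather than on the connection itself. Under that assumption (redis-rb 4.6 or later), the with_pipelining helper above would look roughly like this sketch:

def with_pipelining
  r = Redis.new
  # Newer redis-rb versions pass a pipeline object into the block;
  # commands are queued on it and flushed in a single round trip.
  r.pipelined do |pipeline|
    10000.times { pipeline.ping }
  end
end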