Redis is a TCP server using the client-server model and what is called a request/response protocol. This means that usually a request is accomplished with the following steps:
- The client sends a query to the server, and reads from the socket, usually in a blocking way, for the server response.
- The server processes the command and sends the response back to the client.
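To make one such round trip concrete, here is a minimal sketch in Ruby that talks to a local Redis server over a plain TCP socket; the host, port, and key name are assumptions for illustration:

require "socket"

sock = TCPSocket.new("127.0.0.1", 6379)   # assumed local Redis server

# Send one command, then block on the socket until the reply arrives:
# that is one full request/response round trip.
sock.write("INCR x\r\n")   # inline form of the Redis protocol
puts sock.gets             # e.g. ":1" -- an integer reply
sock.close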
So for instance a four-command sequence is something like this:
- Client: INCR X
- Server: 1
- Client: INCR X
- Server: 2
- Client: INCR X
- Server: 3
- Client: INCR X
- Server: 4
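As a minimal sketch with the Ruby client (assuming the redis-rb gem, a local server, and a fresh key named x, all illustrative), the same exchange without pipelining looks like this, with each call blocking for one full round trip:

require "redis"

r = Redis.new   # assumes a Redis server on localhost:6379

# Each INCR waits for its reply before the next command is sent,
# so four commands cost four round trips.
4.times { puts r.incr("x") }   # prints 1, 2, 3, 4 on a fresh key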
Clients and servers are connected via a networking link. Such a link can be very fast (a loopback interface) or very slow (a connection established over the Internet with many hops between the hosts). Whatever the network latency is, it takes time for the packets to travel from the client to the server, and back from the server to the client to carry the reply. This time is called RTT (Round Trip Time). It is easy to see how this can affect performance when a client needs to perform many requests in a row (for instance adding many elements to the same list, or populating a database with many keys). For instance, if the RTT is 250 milliseconds (in the case of a very slow link over the Internet), even if the server is able to process 100k requests per second, we'll be able to process at most four requests per second. If the interface used is a loopback interface, the RTT is much shorter (for instance my host reports 0.044 milliseconds pinging 127.0.0.1), but it is still a lot if you need to perform many writes in a row.
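A quick back-of-the-envelope check of those numbers in Ruby (using the figures from the examples above):

# With a 250 ms round trip, a client issuing commands one at a time
# can never exceed 1 / 0.250 = 4 requests per second.
puts 1.0 / 0.250              # => 4.0

# Even on loopback (~0.044 ms RTT), the ceiling is roughly 22k requests
# per second -- still far below what the server itself can handle.
puts (1.0 / 0.000044).round   # => 22727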
Fortunately, there is a way to improve this use case.
Redis pipelining

A request/response server can be implemented so that it is able to process new requests even if the client didn't already read the old responses. This way it is possible to send multiple commands to the server without waiting for the replies at all, and finally read the replies in a single step.

This is called pipelining, and it is a technique that has been widely in use for decades. For instance many POP3 protocol implementations already supported this feature, dramatically speeding up the process of downloading new emails from the server.

Redis has supported pipelining since the very early days, so whatever version you are running, you can use pipelining with Redis. This is an example using the raw netcat utility:
$ (printf "PING\r\nPING\r\nPING\r\n"; sleep 1) | nc localhost 6379+PONG+PONG+PONG
This time we are not paying the cost of RTT for every call, but just once for the three commands. To be very explicit, with pipelining the order of operations of our very first example will be the following:
- Client: INCR X
- Client: INCR X
- Client: INCR X
- Client: INCR X
- Server: 1
- Server: 2
- Server: 3
- Server: 4
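As a concrete sketch with the redis-rb client (key name illustrative; the block-argument form shown here works with recent versions of the gem), the same four increments can be issued in a single round trip:

require "redis"

r = Redis.new

# All four INCR commands are buffered and flushed together,
# and the four replies come back as an array.
replies = r.pipelined do |pipe|
  4.times { pipe.incr("x") }
end
p replies   # => e.g. [1, 2, 3, 4] on a fresh key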
< Span style= "BACKGROUND-COLOR:INHERIT; line-height:24px ">important NOTE :While the Client sends commands using pipelining, the server would be a forced to queue the replies, using memory. So if you need to send a lot of commands with pipelining, it's better to send them as batches have a reasonable number, For instance 10k commands, read the replies, and then send another 10k commands again, and so forth. The speed'll be nearly the same, but the additional memory used'll be in max the amount needed to queue the replies fo R this 10k commands.
When a client uses pipelining to make some commands, Redis server uses memory to create a queue for the answer. So you use pipelining to take place a lot of commands, a good practice is to reasonably bulk send, for example, 10k a batch, get back in the event of 10k. The speed is probably the same, but Redis server will spend memory to create the answer queue for this 10k command. Some Benchmarkin The following benchmark we ' ll use the Redis Ruby client, supporting pipelining, to test the speed improve ment due to pipelining:
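A minimal sketch of that batching pattern with the Ruby client (key names and the 100k/10k figures are illustrative):

require "redis"

r = Redis.new
keys = (1..100_000).map { |i| "key:#{i}" }

# Flush the pipeline every 10k commands so the server only has to
# queue at most 10k replies in memory at any time.
keys.each_slice(10_000) do |batch|
  r.pipelined do |pipe|
    batch.each { |k| pipe.set(k, "value") }
  end
end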
Some benchmark

In the following benchmark we'll use the Redis Ruby client, which supports pipelining, to test the speed improvement due to pipelining:
require 'rubygems'
require 'redis'

def bench(descr)
  start = Time.now
  yield
  puts "#{descr} #{Time.now - start} seconds"
end

def without_pipelining
  r = Redis.new
  10000.times { r.ping }
end

def with_pipelining
  r = Redis.new
  r.pipelined { 10000.times { r.ping } }
end

bench("without pipelining") { without_pipelining }
bench("with pipelining") { with_pipelining }
Running the above simple script on my Mac OS X system, over the loopback interface, gives the following figures. This is the setup where pipelining provides the smallest improvement, since the RTT is already pretty low:
without pipelining 1.185238 seconds
with pipelining 0.250783 seconds
As you can see, using pipelining we improved the transfer by a factor of five.
Pipelining VS Scripting

Using Redis scripting (available in Redis version 2.6 or greater), a number of use cases for pipelining can be addressed more efficiently using scripts that perform a lot of the work needed at the server side. A big advantage of scripting is that it is able to both read and write data with minimal latency, making operations like read, compute, write very fast (pipelining can't help in this scenario, since the client needs the reply of the read command before it can call the write command).
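As a rough sketch of that read, compute, write pattern (the key name and Lua script are illustrative, not from the original text), a single EVAL can read a value, transform it, and write it back in one round trip:

require "redis"

r = Redis.new

# Read a counter, double it, and write it back -- all inside the server,
# in a single round trip.
script = <<~LUA
  local v = tonumber(redis.call('GET', KEYS[1]) or '0')
  v = v * 2
  redis.call('SET', KEYS[1], v)
  return v
LUA

puts r.eval(script, keys: ["mycounter"])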
< Span style= "BACKGROUND-COLOR:INHERIT; line-height:24px "> A lot of experiments proved that for Pipelining&NBSP; using footsteps can handle more work on the server side. One big advantage is that using footsteps can make reading and writing delays small, making reading, writing, and computing fast (Pipelining is not satisfied ) < Span style= "BACKGROUND-COLOR:INHERIT; line-height:24px ">Sometimes the application May also want to Send&NBSP; EVAL &NBSP; or&NBSP; Evalsha&NBSP; commands in a pipeline. This was entirely possible and Redis explicitly supports it with The&NBSP; SCRIPT load&NBSP; command (it guarantees That&NBSP; evalsha&NBSP; can be called Without the risk of failing).
Some applications may want to have an eval or Evalsha command in the pipeline. Using the Scriot Load command is fully supported (it guarantees execution without the risk of failure)
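A minimal sketch of that pattern with the Ruby client (method names per redis-rb; the key and script are purely illustrative): load the script once, then send many EVALSHA calls in one pipeline:

require "redis"

r = Redis.new

# SCRIPT LOAD caches the script on the server and returns its SHA1 digest,
# so the EVALSHA calls below are guaranteed to find it.
sha = r.script(:load, "return redis.call('INCRBY', KEYS[1], ARGV[1])")

r.pipelined do |pipe|
  10.times { pipe.evalsha(sha, keys: ["mycounter"], argv: ["5"]) }
end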