Pipeline technology is not unique to Redis; it is used in many places in computer science.
Wikipedia's explanation:
In computing, a pipeline, also known as a data pipeline, is a set of data processing elements connected in series, where the output of one element is the input of the next one. The elements of a pipeline are often executed in parallel or in time-sliced fashion. Some amount of buffer storage is often inserted between elements.
Put simply, a pipeline passes data between processing elements. There are many pipelines in the computer field; the one most often encountered in everyday application development is the HTTP pipelining of the network protocol.

One. HTTP pipelining
HTTP pipelining is a technique in which multiple HTTP requests are sent on a single TCP connection without waiting for the corresponding responses.
The introduction of pipelining in HTTP is primarily intended to improve performance. On a high-latency network, if the next request cannot be sent until the response to the previous HTTP request has arrived, network latency becomes a performance bottleneck. To solve this problem, pipelining lets the client send HTTP requests without waiting for the previous responses, as long as the server returns the responses in the order the requests were made. This does not fundamentally eliminate network latency, but it reduces the round-trip time that the request/response pattern incurs.
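Concretely, a pipelined HTTP client writes the next request onto the connection before the previous response arrives, so several requests sit back-to-back in one outgoing buffer. A minimal sketch of what that buffer looks like (the host name and paths are placeholders):

```java
public class HttpPipelineBuffer {
    // Build one minimal GET request; example.com is a placeholder host.
    static String request(String path) {
        return "GET " + path + " HTTP/1.1\r\n"
             + "Host: example.com\r\n\r\n";
    }

    public static void main(String[] args) {
        // Three requests sent back-to-back on one TCP connection without
        // waiting for responses; the server must answer them in this order.
        String buffer = request("/a") + request("/b") + request("/c");
        System.out.print(buffer);
    }
}
```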
Pipelining brings the following advantages:
1. Multiple requests share one TCP connection, and back-to-back requests can be packed into fewer TCP packets.
2. Overall load time drops, because the client no longer pays a full round trip for every request.
At the same time, pipelining has many limitations:
1. The server must return responses in request order, so one slow response blocks all the responses behind it (head-of-line blocking).
2. Only idempotent methods such as GET and HEAD are safe to pipeline; non-idempotent methods like POST should not be pipelined.
3. Many proxies and servers handle pipelining incorrectly, which is why most browsers ended up shipping with it disabled.
For a description of HTTP pipelining, refer to:
Wikipedia, "HTTP pipelining"
"HTTP: The Definitive Guide"
After this introduction, we have a first impression of pipeline (pipelining) technology. Let's look at how it is applied in Redis, and why.
Redis works in a client-server, request/response mode: the client talks to the Redis server over a TCP connection.
This is the same as the HTTP request/response model described above. In this mode, a request packet must travel from the client to the server, and the response back from the server to the client; the time this takes is called the RTT (round-trip time).
Assume the Redis server can handle 100k requests per second but the RTT is 250 ms. A client that waits for each response before sending the next request can then complete only 4 requests per second, and the effect worsens as network latency grows. The direct consequence is that throughput is capped by network latency rather than by the server's processing power, while the server sits mostly idle.
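The arithmetic above can be checked directly: with one outstanding request at a time, throughput is capped at 1/RTT regardless of server capacity. A minimal sketch:

```java
public class RttThroughput {
    // A client that waits for each response can complete at most one
    // request per round trip, regardless of server capacity.
    static double clientThroughput(double rttSeconds) {
        return 1.0 / rttSeconds;
    }

    public static void main(String[] args) {
        double rtt = 0.250;                 // 250 ms round-trip time
        double serverOpsPerSec = 100_000;   // what the server could handle
        System.out.println(clientThroughput(rtt));                   // 4.0 requests/s
        System.out.println(serverOpsPerSec / clientThroughput(rtt)); // 25000.0x capacity unused
    }
}
```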
To solve this problem, the client needs a way to send requests to the server without waiting for the responses: that is Redis pipelining.
Redis pipelining enables the client to send multiple command requests to the Redis server without waiting for the responses; the server then returns the responses in the order the requests were made, similar to the following:
```
client> SET k0 v0
client> SET k1 v1
client> SET k2 v2
server> OK
server> OK
server> OK
```

Two. Issues addressed by Redis Pipelining
As the introduction above shows, Redis pipelining reduces the round-trip time that the request/response pattern consumes on a high-latency network.
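At the protocol level, each Redis command is encoded as a RESP array, and pipelining simply concatenates several encoded commands into one buffer so they can leave the client in a single write. A sketch of that encoding (the `encode` helper is illustrative, not Jedis internals):

```java
public class RespPipeline {
    // Encode one command as a RESP array:
    // *<argc>\r\n followed by $<len>\r\n<arg>\r\n for each argument.
    static String encode(String... args) {
        StringBuilder sb = new StringBuilder("*").append(args.length).append("\r\n");
        for (String a : args) {
            sb.append("$").append(a.length()).append("\r\n").append(a).append("\r\n");
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Three SET commands packed into one buffer: a pipelined client
        // hands this whole buffer to a single socket write.
        String buffer = encode("SET", "k0", "v0")
                      + encode("SET", "k1", "v1")
                      + encode("SET", "k2", "v2");
        System.out.print(buffer.replace("\r\n", "\\r\\n"));
    }
}
```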
More importantly, in plain request/response mode, each request sent to the server requires the server to perform a read and a write. These are syscalls, which involve switching between kernel mode and user mode and consume considerable system resources. Redis pipelining batches many commands into far fewer read/write calls, minimizing this mode-switching overhead.

Three. Use of pipelining in Jedis
Pipelining needs support from both the Redis server and the Redis client: the client must be able to send multiple commands without waiting for responses, and the server must return the responses to the client in the order the requests were made.
Jedis, the classic Java client for Redis, supports pipelining. There are two ways to use it, and two ways to obtain the responses.
1. Using the pipelining API:
2. Obtaining the responses:
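The original code samples are not reproduced here; the following sketch shows the usual Jedis pipelining pattern, assuming a Redis server reachable at localhost:6379 (the keys and values are placeholders). Responses can be obtained either as `Response<T>` handles resolved after `sync()`, or all at once via `syncAndReturnAll()`:

```java
import java.util.List;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Pipeline;
import redis.clients.jedis.Response;

public class JedisPipelineDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // 1. Using the pipelining API: queue commands without waiting.
            Pipeline p = jedis.pipelined();
            p.set("k0", "v0");
            p.set("k1", "v1");

            // 2a. Obtaining responses via Response handles:
            // r.get() is only valid after sync() has flushed the pipeline.
            Response<String> r = p.get("k0");
            p.sync();
            System.out.println(r.get()); // "v0"

            // 2b. Or collect every reply, in request order:
            Pipeline p2 = jedis.pipelined();
            p2.get("k0");
            p2.get("k1");
            List<Object> replies = p2.syncAndReturnAll(); // ["v0", "v1"]
            System.out.println(replies);
        }
    }
}
```

Requires the Jedis dependency and a running Redis server, so it is a sketch rather than a standalone test.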
The test code and further pipelining API usage details can be found in the Jedis project.

Four. The benchmark of pipelining in Jedis
The analysis in section Two above shows that pipelining exists to mask network latency and improve throughput. So, compared with not using it, does pipelining actually improve performance, and by how much?
Everything should be backed by data, so let's run the tests and compare. The Jedis test suite includes a benchmark module; the numbers below come from running its GetSet benchmark and its pipelined GetSet benchmark directly:
| Number of commands | OPS without pipelining | OPS with pipelining |
|---|---|---|
| 100 | 6666 | 9090 |
| 100000 | 26870 | 316455 |
| 1000000 | 28967 | 422922 |
From the above, pipelining improves throughput by more than an order of magnitude for large batches (422922 vs 28967 ops, roughly 15 times).

Five. The application of Redis pipelining in a real project
First, recognize the two key characteristics of pipelining:
1. The client sends a batch of commands without waiting for the individual responses, so a later command in the batch cannot depend on an earlier command's result.
2. Pipelining is not atomic: the commands are executed one by one, and some may fail while others succeed.
These two characteristics mean the application scenario must have matching properties: a batch of independent commands whose results are not needed immediately, and where partial failure can be tolerated.
It generally applies to batch-writing cache data or preloading data into a cache. Because the responses to the cache writes are not needed, the commands can be sent in pipelining mode; and because it is cache data, the scenario can tolerate an occasional failed write: the cost is nothing more than a cache miss, after which that single item is loaded into the cache again.
Recently, in a business-data early-warning inspection project, we used the chain-of-responsibility pattern: each piece of data to be checked traverses a chain of inspectors that together perform the consistency checks. Each inspector in the chain can be disabled by configuration, and new inspectors can be added, so extensibility and configurability are strong. The problem is that as every piece of data traverses the chain of inspectors, they repeatedly query data and configuration from other tables. This causes heavy disk I/O and creates a performance bottleneck.
If the data from these other tables could be preloaded into a cache, each traversal of the data to be checked would fetch it directly from the cache instead of from the DB. Because the service runs as a cluster and local cache capacity is limited, a distributed cache is needed.
In the end, the data and configuration from the other tables are cached lazily: when the first piece of data is checked, each inspector loads what it needs from the other tables and writes it to Redis in pipelining mode.
When subsequent warning data traverses the chain, each inspector reads the other tables' data and configuration from the cache, dramatically reducing DB I/O and removing the performance bottleneck.
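A sketch of that lazy, pipelined preload; the key names, TTL, and the `loadConfigTable` helper are hypothetical stand-ins, and a Redis server is assumed at localhost:6379:

```java
import java.util.Map;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Pipeline;

public class InspectorCachePreload {
    // Hypothetical stand-in for querying a configuration table from the DB.
    static Map<String, String> loadConfigTable() {
        return Map.of("rule:1", "enabled", "rule:2", "disabled");
    }

    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Lazy preload on the first check: push every row through one
            // pipeline instead of paying one round trip per SET.
            Pipeline p = jedis.pipelined();
            for (Map.Entry<String, String> row : loadConfigTable().entrySet()) {
                p.setex("inspector:config:" + row.getKey(), 3600, row.getValue());
            }
            p.sync(); // one flush; a lost write just means a later cache miss

            // Subsequent checks read from the cache instead of the DB.
            System.out.println(jedis.get("inspector:config:rule:1"));
        }
    }
}
```

As with the previous sketch, this needs the Jedis dependency and a running Redis server to execute.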
Redis (VI): pipelining