Redis (VI): Pipelining

Pipelining is not unique to Redis; the technique is used in many places in computer science.

Wikipedia's definition:

In computing, a pipeline, also known as a data pipeline, is a set of data processing elements connected in series, where the output of one element is the input of the next one. The elements of a pipeline are often executed in parallel or in time-sliced fashion. Some amount of buffer storage is often inserted between elements.

Put simply, a pipeline passes data between processing elements. There are many pipeline techniques in computing; the one most commonly encountered in everyday application development is HTTP pipelining in the network protocol stack.

HTTP pipelining

HTTP pipelining is a technique in which multiple HTTP requests are sent on a single TCP connection without waiting for the corresponding responses.

The introduction of pipelining in HTTP is primarily intended to improve performance. On a high-latency network, if each HTTP request must wait for the response to the previous request before it can be sent, network latency becomes a performance bottleneck. Pipelining solves this by letting the client send subsequent HTTP requests without waiting for earlier responses, as long as the server returns the responses in the order the requests were made. This does not eliminate network latency, but it reduces the round-trip overhead that the strict request/response pattern brings.

Pipelining brings the following advantages:

    • As described above, reducing the overall round-trip time of multiple requests improves performance
    • The server can receive more requests per unit of time; provided server-side capacity can keep up, this improves throughput

As with any trade-off, pipelining also has its limitations:

    • There must be no dependency between requests: a later request cannot depend on the response to an earlier one. Scenarios with such dependencies are not suitable for pipelining
    • Non-idempotent requests (such as POST) should not be pipelined: a POST alters the state of the resource, so reordering or retrying pipelined POSTs can produce different results

See the Wikipedia articles and HTTP: The Definitive Guide for more on HTTP pipelining:

References

Pipelining (Wikipedia)
HTTP pipelining (Wikipedia)
HTTP: The Definitive Guide

Redis pipelining

With the background above, we now have a first impression of pipelining. Let's look at how it is applied in Redis, and why.

    • About Redis pipelining
    • Issues solved by Redis pipelining
    • Use of pipelining in Jedis
    • The benchmark of pipelining in Jedis
    • The application of Redis pipelining in a real project

I. Introduction to Redis pipelining

Redis follows the client-server request/response model, with the Redis server listening on its own TCP port:

    • The client sends a request to the server and then blocks, waiting for the server's response
    • The server processes the request and returns the response to the client

This is the same request/response pattern as in HTTP above. In this mode, a packet must travel from the client to the server and the response back from the server to the client; the time this round trip takes is called the RTT (round-trip time).

Suppose the Redis server can handle 100k requests per second, but the RTT is 250 ms: the client can then actually complete only 4 requests per second, and the effect worsens as network latency grows. The direct consequences:
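The arithmetic behind this claim is worth making explicit. A minimal sketch (the class and method names here are illustrative, not part of any Redis client):

```java
public class RttThroughput {
    // With a blocking request/response client, each command costs one full
    // round trip, so the client can never exceed 1/RTT requests per second,
    // no matter how fast the server itself is.
    static long effectiveOpsPerSec(long serverOpsPerSec, double rttSeconds) {
        long rttBound = (long) (1.0 / rttSeconds);
        return Math.min(serverOpsPerSec, rttBound);
    }

    public static void main(String[] args) {
        // The numbers from the text: a server capable of 100k ops/s,
        // reached over a link with 250 ms RTT.
        System.out.println(effectiveOpsPerSec(100_000, 0.250)); // prints 4
    }
}
```

The server's capacity barely matters once the RTT bound dominates, which is exactly why batching requests per round trip pays off.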

    • Client threads or processes block on every response, consuming resources; with many requests this clearly hurts client performance
    • Server-side throughput drops

To solve this problem, the client needs a mode in which it can send requests to the server without waiting for each response: that is Redis pipelining.

Redis pipelining lets the client send multiple command requests to the Redis server without waiting for responses; the server then returns the responses in the order the requests were sent, similar to the following:

client> set k0 v0
client> set k1 v1
client> set k2 v2
server> OK
server> OK
server> OK
II. Issues addressed by Redis pipelining

As the introduction above shows, on a high-latency network Redis pipelining reduces the round-trip overhead that the request/response pattern incurs:

    • Requests are sent without waiting for responses, reducing wait time (especially on high-latency networks)
    • The responses are returned in one batch, reducing the time spent on many separate response round trips

More importantly, in request/response mode each request forces the server to perform a read and a write on the socket. These reads and writes are syscalls, which involve switching between kernel mode and user mode and are comparatively expensive. By batching many commands, pipelining minimizes this mode-switching overhead.
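The syscall savings can be illustrated with a toy model. This is a sketch with invented names (FakeSocket and friends), not how any real Redis client is implemented internally:

```java
import java.util.ArrayList;
import java.util.List;

public class SyscallBatching {
    // Toy model of a socket: each call to write() stands in for one
    // write(2) syscall and one user/kernel mode switch.
    static class FakeSocket {
        int writeCalls = 0;
        final StringBuilder wire = new StringBuilder();
        void write(String data) { writeCalls++; wire.append(data); }
    }

    // Request/response style: one write (one syscall) per command.
    static int sendOneByOne(FakeSocket s, List<String> commands) {
        for (String c : commands) s.write(c + "\r\n");
        return s.writeCalls;
    }

    // Pipelined style: accumulate commands in a user-space buffer and
    // flush them with a single write.
    static int sendPipelined(FakeSocket s, List<String> commands) {
        StringBuilder buf = new StringBuilder();
        for (String c : commands) buf.append(c).append("\r\n");
        s.write(buf.toString());
        return s.writeCalls;
    }

    public static void main(String[] args) {
        List<String> commands = new ArrayList<>();
        for (int i = 0; i < 1000; i++) commands.add("SET k" + i + " v" + i);
        System.out.println(sendOneByOne(new FakeSocket(), commands));  // prints 1000
        System.out.println(sendPipelined(new FakeSocket(), commands)); // prints 1
    }
}
```

A real client flushes the buffer once it reaches some size limit rather than holding everything, but the effect is the same: far fewer syscalls per command.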

III. Use of pipelining in Jedis

Pipelining needs support on both the Redis server side and the client side: the client must be able to send multiple commands without reading a response, and the server must queue the responses and return them to the client in request order.

Jedis, the classic Java client for Redis, supports pipelining. There are two ways to use it and two ways to retrieve the responses.

1. Ways to use pipelining:

    • Use pipelining directly
    • Use pipelining inside a transaction

2. Ways to get the responses:

    • Synchronously fetch the responses to all requests at once, returned as a list
    • Synchronously obtain a response handle for each command as it is queued, and read its value after the pipeline is synced

The full test code and further pipelining API usage details are not reproduced here.
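In Jedis itself, a pipeline is obtained with jedis.pipelined(); each queued command returns a Response<T> whose value becomes readable after sync(), and syncAndReturnAll() flushes the queue and returns every response as a List<Object>. Since this page cannot assume a running Redis server, here is a self-contained mock that mirrors that API shape (MockPipeline and the in-memory map are stand-ins, not Jedis code):

```java
import java.util.*;

public class PipelineSketch {
    // Mimics Jedis's Response<T>: a holder filled in when the pipeline syncs.
    static class Response<T> {
        private T value;
        public T get() { return value; }
    }

    // Mimics Jedis's Pipeline: commands are queued locally and executed
    // in order when the pipeline is synced.
    static class MockPipeline {
        private final Map<String, String> store;        // stands in for the server
        private final List<Runnable> queued = new ArrayList<>();
        private final List<Object> results = new ArrayList<>();

        MockPipeline(Map<String, String> store) { this.store = store; }

        Response<String> set(String k, String v) {
            Response<String> r = new Response<>();
            queued.add(() -> { store.put(k, v); r.value = "OK"; results.add("OK"); });
            return r;
        }

        Response<String> get(String k) {
            Response<String> r = new Response<>();
            queued.add(() -> { r.value = store.get(k); results.add(r.value); });
            return r;
        }

        // Style 2: flush the queue, then read individual Response objects.
        void sync() { for (Runnable cmd : queued) cmd.run(); queued.clear(); }

        // Style 1: flush the queue and return every response as one list.
        List<Object> syncAndReturnAll() { sync(); return new ArrayList<>(results); }
    }

    public static void main(String[] args) {
        Map<String, String> server = new HashMap<>();

        // Style 1: all responses at once, as a list.
        MockPipeline p1 = new MockPipeline(server);
        p1.set("k0", "v0");
        p1.set("k1", "v1");
        p1.get("k0");
        System.out.println(p1.syncAndReturnAll()); // prints [OK, OK, v0]

        // Style 2: hold a Response per command, read it after sync().
        MockPipeline p2 = new MockPipeline(server);
        Response<String> r = p2.get("k1");
        p2.sync();
        System.out.println(r.get()); // prints v1
    }
}
```

With real Jedis, calling get() on a Response before sync() throws, for exactly the reason the mock shows: the value does not exist until the batch has gone over the wire.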

IV. The benchmark of pipelining in Jedis

The analysis in section II above says that pipelining mitigates network latency and improves throughput. So how much does performance actually improve with pipelining compared to without it?

Numbers speak louder than words, so let's run the tests and compare. The Jedis test sources include a benchmark module; we use its pipelining and get/set benchmark programs directly:

Number of commands    OPS without pipelining    OPS with pipelining
100                   6,666                     9,090
100,000               26,870                    316,455
1,000,000             28,967                    422,922

From the table above, pipelining improves throughput by more than an order of magnitude at large command counts (roughly 12x to 15x in these runs).
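For reference, the speedups implied by the table can be computed directly (plain division over the numbers above; no benchmark is rerun here):

```java
public class BenchmarkRatio {
    // Speedup = pipelined throughput divided by non-pipelined throughput.
    static double speedup(double pipelinedOps, double plainOps) {
        return pipelinedOps / plainOps;
    }

    public static void main(String[] args) {
        // Ratios from the table above.
        System.out.printf("%.1f%n", speedup(316455, 26870)); // prints 11.8
        System.out.printf("%.1f%n", speedup(422922, 28967)); // prints 14.6
    }
}
```

Note also that the non-pipelined numbers barely grow past 100k commands while the pipelined ones keep climbing: the blocking client is RTT-bound, the pipelined one is not.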

V. The application of Redis pipelining in a real project

First, recall the two characteristics of pipelining:

    • Requests are delivered to the server in batches
    • The corresponding responses are returned in one batch

These two characteristics mean that a suitable application scenario must have the following properties:

    • Because requests travel as a batch, there must be no state shared between them; a later request must not depend on the response to an earlier one
    • Because the responses come back all at once, some of the batched commands may succeed while others fail; the scenario must tolerate partial failure or data loss, or have some other mechanism to compensate for it

This generally fits batch-writing cache data or pre-loading a cache. Cache writes do not depend on one another, so they can be sent pipelined; and because it is cached data, the scenario tolerates the occasional failed write: at worst there is a cache miss and the missing entry is loaded individually.
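The pre-loading pattern can be sketched as follows. Batch size, key names, and class names are illustrative; a real implementation would push each batch through jedis.pipelined() and then call sync():

```java
import java.util.ArrayList;
import java.util.List;

public class BatchPreload {
    // Pre-loading M cache entries in batches of B costs ceil(M / B) round
    // trips instead of M, at the price of buffering B commands client-side.
    static int roundTrips(int entries, int batchSize) {
        return (entries + batchSize - 1) / batchSize;
    }

    // Partition the keys into pipeline-sized batches; each inner list is
    // what one pipelined flush would carry.
    static List<List<String>> batches(List<String> keys, int batchSize) {
        List<List<String>> out = new ArrayList<>();
        for (int i = 0; i < keys.size(); i += batchSize) {
            out.add(keys.subList(i, Math.min(i + batchSize, keys.size())));
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> keys = new ArrayList<>();
        for (int i = 0; i < 10_000; i++) keys.add("cfg:" + i);
        System.out.println(roundTrips(keys.size(), 500)); // prints 20
        System.out.println(batches(keys, 500).size());    // prints 20
    }
}
```

Bounding the batch size matters: flushing periodically keeps the client-side buffer (and the server's response queue) from growing without limit.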

Recently, in a business-data early-warning inspection project, we used the chain-of-responsibility pattern: every piece of data to be checked traverses a chain of inspectors that perform data-consistency checks. Each inspector in the chain can be enabled or disabled through configuration, and new inspectors can be plugged in, so the design is extensible and highly configurable. The problem is that for every piece of data traversing the chain, the inspectors repeatedly query other tables and configuration, which introduces heavy disk I/O and creates a performance bottleneck.

The fix was to preload the data from those other tables into a cache, so that each traversal of the chain reads from the cache instead of the DB. Because the service runs as a cluster and local cache capacity is limited, a distributed cache is required.

In the end, the other tables' data and configuration are cached lazily: when the first piece of data is checked, each inspector loads the data or configuration it needs and writes it to Redis in pipelining mode.

When subsequent warning data traverses the inspectors, each inspector reads the other tables' data and configuration from the cache, dramatically reducing DB I/O and breaking the performance bottleneck.
