Python Redis Pipeline operation

Source: Internet
Author: User
Tags: redis, server

Redis uses a client-server architecture built on the TCP protocol: clients interact with the Redis server in a request-response fashion.

Generally speaking, each command costs the client two TCP transmissions, one packet to submit the request and one to carry back the server's response, i.e. one full network round trip.

Imagine a scenario where you want to execute a series of Redis commands in bulk, say 100 GETs. Done one at a time, that means 100 requests to Redis and 100 responses back. If you could instead submit all 100 requests to the Redis server at once, have them executed as a batch, and fetch all the results in one go, you would pay for only a single round trip. Wouldn't performance be much better?

The answer is yes. The time saved is the round-trip network latency between the client and the Redis server, which you can measure with the ping command.

High network latency: batch execution gives a significant performance improvement.

Low network latency (local machine): batch execution gives no noticeable performance improvement.

Some clients (e.g., the Java and Python clients) provide a programming pattern called a pipeline for submitting requests in bulk.

Here we use the Python client to illustrate.

1. Pipeline

Network Latency

The network latency between the client and server machines is approximately 30ms.

Test Cases

Run try_pipeline and without_pipeline and compare their processing times.

```python
# -*- coding: utf-8 -*-
import time
from concurrent.futures import ProcessPoolExecutor

import redis

r = redis.Redis(host='10.93.84.53', port=6379, password='bigdata123')

def try_pipeline():
    start = time.time()
    with r.pipeline(transaction=False) as p:
        # Queue all five commands, then send them in a single round trip.
        p.sadd('seta', 1).sadd('seta', 2).srem('seta', 2) \
         .lpush('lista', 1).lrange('lista', 0, -1)
        p.execute()
    print(time.time() - start)

def without_pipeline():
    start = time.time()
    # Each call below is a separate request/response round trip.
    r.sadd('seta', 1)
    r.sadd('seta', 2)
    r.srem('seta', 2)
    r.lpush('lista', 1)
    r.lrange('lista', 0, -1)
    print(time.time() - start)

def worker():
    while True:
        try_pipeline()

# Multi-process load generation.
with ProcessPoolExecutor(max_workers=12) as pool:
    for _ in range(10):
        pool.submit(worker)
```

Results analysis

try_pipeline average processing time: 0.04659

without_pipeline average processing time: 0.16672

With 5 operations in the batch, we get roughly a 4x improvement in processing time!

Network latency is about 30ms, so without batching, the time lost on the network alone exceeds 0.15s (30ms × 5). The pipelined batch makes only one network round trip, so its latency is only about 0.03s. As you can see, the time saved is essentially the network latency.
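The arithmetic above can be checked with a quick back-of-the-envelope sketch (the round-trip time and batch size are the values from this article):

```python
# Back-of-the-envelope check of the latency arithmetic above.
rtt = 0.030       # measured round-trip latency, ~30 ms
n_commands = 5    # commands in the test batch

# One round trip per command without a pipeline;
# one round trip for the whole batch with it.
time_without_pipeline = n_commands * rtt
time_with_pipeline = 1 * rtt

print(round(time_without_pipeline, 2))  # 0.15 -- close to the measured 0.16672 s
print(round(time_with_pipeline, 2))     # 0.03 -- close to the measured 0.04659 s
```

The small gap between the estimate and the measurement is the server-side execution time, which pipelining does not remove.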

2. Pipeline and Transactions

A pipeline is not only used for batch-committing commands; it is also used for transactions.

Redis transactions are not discussed in depth here, just a demo. For a detailed description, you can refer to this blog post: Redis transactions.

The careful reader may have noticed that the transactional and non-transactional usages differ in how the pipeline instance is created: the transaction flag controls whether a transaction is opened, and it is open by default.

```python
# -*- coding: utf-8 -*-
from concurrent.futures import ProcessPoolExecutor

import redis
from redis import WatchError

r = redis.Redis(host='127.0.0.1', port=6379)

# Decrement-stock function; loops until the decrement completes.
# Sufficient stock: decrement succeeds, returns True.
# Insufficient stock: decrement fails, returns False.
def decr_stock():
    # Redis transactions in Python are implemented through the pipeline wrapper.
    with r.pipeline() as pipe:
        while True:
            try:
                # WATCH the stock key; if the key is changed by another client
                # before EXEC, the transaction raises WatchError.
                pipe.watch('stock:count')
                count = int(pipe.get('stock:count'))
                if count > 0:  # stock available
                    # Transaction starts.
                    pipe.multi()
                    pipe.decr('stock:count')
                    # Push the order over ...
                    # execute() returns a list of command results; the only
                    # command here is DECR, which returns the current value.
                    print(pipe.execute()[0])
                    return True
                else:
                    return False
            except WatchError as ex:
                # Print the WatchError to observe contention on the watched key.
                print(ex)
                pipe.unwatch()

def worker():
    while True:
        # Exit when out of stock.
        if not decr_stock():
            break

# Experiment starts: set the inventory to 100.
r.set('stock:count', 100)

# Simulate multiple client submissions with multiple processes.
with ProcessPoolExecutor(max_workers=2) as pool:
    for _ in range(10):
        pool.submit(worker)
```
