I have been using Redis for a long time. As business requirements grow, so do the demands on Redis read and write speed. Recently I had a requirement to fetch 1000+ values per second. Fetching them one by one in a loop is expensive, and some people might reach for multithreading, but that is not the best solution here. Instead, consider Redis's pipeline feature, which batches many commands into a single network round trip. Below is a short demonstration of using a pipeline from Python.
1. Inserting data
>>> import redis
>>> conn = redis.Redis(host='192.168.8.176', port=6379)
>>> pipe = conn.pipeline()
>>> pipe.hset("hash_key", "leizhu900516", 8)
Pipeline<ConnectionPool<Connection ...
>>> pipe.execute()

Note that commands queued on a pipeline are only buffered on the client; nothing is sent to the server until pipe.execute() is called.
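As a fuller sketch of bulk insertion, the helper below queues many HSET commands on one pipeline and flushes every few hundred commands, so each flush costs a single network round trip. The function name, the batch size, and the assumption that `conn` is a `redis.Redis` connection are illustrative, not from the original post.

```python
def bulk_hset(conn, key, mapping, batch_size=500):
    """Write many hash fields through a pipeline.

    `conn` is assumed to be a redis.Redis connection. hset() on a
    pipeline only buffers the command locally; execute() sends the
    whole batch to the server in one round trip.
    """
    pipe = conn.pipeline(transaction=False)
    for n, (field, value) in enumerate(mapping.items(), start=1):
        pipe.hset(key, field, value)
        if n % batch_size == 0:  # flush a full batch
            pipe.execute()
    pipe.execute()  # flush any remaining commands
```

In redis-py, calling execute() on an empty pipeline simply returns an empty list, so the final flush is harmless even when the mapping size is an exact multiple of batch_size.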
2. Reading data in bulk
>>> pipe.hget("hash_key", "leizhu900516")
Pipeline<ConnectionPool<Connection ...
>>> pipe.execute()

The hget() call is only queued; the reply comes back as an element of the list returned by pipe.execute().
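To read many fields back, the same idea applies: queue the HGETs, then call execute(), which returns the replies as a list in the same order the commands were queued. As above, `conn` and the helper name are assumptions for illustration.

```python
def bulk_hget(conn, key, fields, batch_size=500):
    """Fetch many hash fields, one network round trip per batch.

    execute() returns replies in queue order, so the result list
    lines up index-for-index with `fields`.
    """
    results = []
    for i in range(0, len(fields), batch_size):
        pipe = conn.pipeline(transaction=False)
        for field in fields[i:i + batch_size]:
            pipe.hget(key, field)
        results.extend(pipe.execute())
    return results
```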
Summary: using a Redis pipeline really is that simple; in a real production environment, write the corresponding code as the need arises, following the same idea. Production Redis generally runs in cluster mode, and when using a pipeline against a cluster you need to disable the transaction (MULTI/EXEC) wrapper when creating the pipeline object:

pipe = conn.pipeline(transaction=False)
In an online test, fetching 3,500 values through a pipeline took about 900 ms. Combined with threading or batching in the application, returning 10,000 values per second is no problem, which meets most business needs.
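A rough way to reproduce that comparison is to time a naive field-by-field loop against the pipelined version. The helper names are illustrative, `conn` is again assumed to be a `redis.Redis` connection, and absolute numbers depend entirely on network latency.

```python
import time

def fetch_one_by_one(conn, key, fields):
    """Naive baseline: one network round trip per field."""
    return [conn.hget(key, field) for field in fields]

def fetch_pipelined(conn, key, fields):
    """Pipelined: one round trip for the whole list of fields."""
    pipe = conn.pipeline(transaction=False)
    for field in fields:
        pipe.hget(key, field)
    return pipe.execute()

def timed(fn, *args):
    """Return (result, elapsed_seconds) for a single call."""
    start = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - start
```

Both functions return the values in the same order, so the two timings are directly comparable.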
This article is from the "People on the Run" blog; please keep this source when sharing: http://leizhu.blog.51cto.com/3758740/1825733