Redis is a key-value storage system. Like memcached, it caches data in memory, but it supports a richer set of value types: strings, lists (linked lists), sets, zsets (sorted sets), and hashes. These types support operations such as push/pop, add/remove, set intersection, union, and difference, and all of these operations are atomic. On this basis, Redis supports sorting in a variety of ways. As with memcached, data is cached in memory for efficiency. The difference is that Redis periodically writes updated data to disk, or appends modification operations to a log file, and implements master-slave (Master-Slave) synchronization on top of this.
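As a rough illustration, the five value types map loosely onto familiar Python structures. This is a local analogy only (real Redis holds these structures on the server, addressed by key); the Redis command names in the comments are the server-side equivalents:

```python
# Rough local analogies for Redis's value types (illustration only)
string_val = 'bar'                      # string: SET / GET

lst = []                                # list: LPUSH / RPOP
lst.insert(0, 'a')                      # like LPUSH a
lst.insert(0, 'b')                      # like LPUSH b
popped = lst.pop()                      # like RPOP -> 'a'

s1, s2 = {1, 2, 3}, {2, 3, 4}           # set: SADD builds the members
print(s1 & s2, s1 | s2, s1 - s2)        # SINTER, SUNION, SDIFF

zset = {'alice': 3.0, 'bob': 1.0}       # zset: ZADD member -> score
ranked = sorted(zset, key=zset.get)     # ZRANGE returns members by score

h = {'field': 'value'}                  # hash: HSET / HGET
```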
Python also provides a module for operating on Redis. With this module we can write data to and read data from Redis.
Here is a simple example of inserting and querying data.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import redis

# Create a Redis instance, specifying the address and port of the Redis server
r = redis.Redis(host='192.168.10.105', port=6379)

"""
Write data to Redis with the set() method. Redis stores data as k-v
pairs, like a dictionary, so the code below is equivalent to writing
the dictionary {'foo': 'bar'} into Redis.
"""
r.set('foo', 'bar')

# Fetch the value of the key 'foo' from Redis with the get() method
print r.get('foo')
The result is as follows:
bar
Connection pool
First, the difference between a thread pool (or process pool) and a connection pool. As I understand it, a thread pool (process pool) limits how many threads (processes) can be running at once while tasks are executed. A thread (process) dies when it finishes its task, and a new one is immediately created to take the next task from the queue, execute it, and die in turn; this cycle repeats. The pool only limits the number of threads (processes) that exist at the same time. The advantage of this approach is that you do not have to keep a large number of idle threads waiting for work. The disadvantage is that threads are created and destroyed repeatedly, which is expensive.
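As a concrete illustration, Python's standard library provides a thread pool in `concurrent.futures`. A minimal sketch (note that, unlike the description above, `ThreadPoolExecutor` actually reuses its worker threads rather than destroying them after each task):

```python
from concurrent.futures import ThreadPoolExecutor

def square(n):
    return n * n

# At most 4 worker threads exist at the same time,
# no matter how many tasks we submit
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square, range(10)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```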
A connection pool, by contrast, keeps its threads alive. These threads connect the application to the database. Each pool starts with a minimum of n threads waiting for connections. When the number of connection requests exceeds the number of existing threads, new threads are created to serve them; but once the preset maximum is reached, no more threads are created, and further requests must wait until a thread becomes free. In this way threads are only created during busy periods and wait rather than die. When the peak passes, idle threads die off one by one, and once the count falls back to the minimum n it stops shrinking. The advantage of this approach is that you avoid the overhead of repeatedly creating new threads. The drawback is that at startup you must create many threads that just wait for connections, and over long idle periods this wastes resources.
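The core idea — a fixed set of reusable resources, with callers blocking when none are free — can be sketched in a few lines with a queue. This is a toy illustration (hypothetical class names, not the redis-py implementation), pooling fake connections instead of threads:

```python
import queue

class FakeConnection:
    """Stand-in for a real database connection (hypothetical)."""
    def __init__(self, conn_id):
        self.conn_id = conn_id

class SimplePool:
    """Toy pool: a fixed number of connections, reused instead of recreated."""
    def __init__(self, max_connections=5):
        self._pool = queue.Queue(maxsize=max_connections)
        for i in range(max_connections):
            self._pool.put(FakeConnection(i))

    def acquire(self):
        # Blocks until a connection is free, mirroring
        # "requests wait until a thread becomes free"
        return self._pool.get()

    def release(self, conn):
        self._pool.put(conn)

pool = SimplePool(max_connections=2)
c1 = pool.acquire()
c2 = pool.acquire()
print(pool._pool.qsize())  # 0 -- pool exhausted; a third acquire() would block
pool.release(c1)
print(pool._pool.qsize())  # 1 -- the released connection is reusable, not destroyed
```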
Redis-py uses a connection pool to manage all connections to a Redis server, avoiding the overhead of establishing and releasing a connection for every operation. By default, each Redis instance maintains its own connection pool. You can also create a connection pool yourself and pass it to Redis as a parameter, which lets multiple Redis instances share a single pool. Look at the code:
import redis

"""
Call the ConnectionPool() method to create a connection pool instance,
specifying the IP and port of the Redis server. You can also specify the
pool's maximum number of connections; if not specified, it defaults to
the system maximum (check the source code for the details).
"""
pool = redis.ConnectionPool(host='192.168.10.105', port=6379, max_connections=10)

"""
When instantiating the Redis API, pass the pool you just defined
as the connection_pool parameter.
"""
r = redis.Redis(connection_pool=pool)
r.set('foo', 'bar')
print r.get('foo')
The result is no different from calling the ordinary method, except that with the connection pool at most 10 such set operations can be connected to the Redis server at the same time.
bar
Pipeline
All of the commands above perform one set operation at a time, and each set connects to the database once. If there are many sets and the operations are dense, you can use a pipeline. A pipeline collects operations such as set for Redis and finally executes them all in one batch. The benefit is that a single connection executes multiple commands, reducing the number of connections to Redis. Look at the code:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import redis

"""
Call the ConnectionPool() method to create a connection pool instance,
specifying the IP and port of the Redis server. You can also specify the
pool's maximum number of connections; if not specified, it defaults to
the system maximum (check the source code for the details).
"""
pool = redis.ConnectionPool(host='192.168.10.105', port=6379, max_connections=10)
r = redis.Redis(connection_pool=pool)

"""
By default, a pipeline runs as an atomic operation, i.e. a transaction:
if the sets succeed, fine; if not, the inserted data is rolled back.
A non-atomic pipeline keeps going regardless of success or failure.
Whether to run atomically is set with the transaction keyword.
"""
# pipe = r.pipeline(transaction=False)
# Create a pipeline instance
pipe = r.pipeline(transaction=True)
# Queue the insert statements in the pipeline
pipe.set('name', 'alex')
pipe.set('role', 'sb')
# Execute all the commands in the pipeline
pipe.execute()
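The buffering idea behind a pipeline can be sketched without a server. This is a toy class (hypothetical, not the redis-py implementation) that queues commands and sends them in one batch, so many writes cost one round-trip:

```python
class MiniPipeline:
    """Toy illustration of pipelining: buffer commands, flush once."""
    def __init__(self, store):
        self.store = store      # a dict standing in for the Redis server
        self.commands = []      # buffered commands, not yet sent
        self.round_trips = 0    # how many times we "hit the network"

    def set(self, key, value):
        self.commands.append((key, value))
        return self             # allow chaining, as redis-py pipelines do

    def execute(self):
        self.round_trips += 1   # one round-trip for the whole batch
        for key, value in self.commands:
            self.store[key] = value
        self.commands = []

store = {}
pipe = MiniPipeline(store)
pipe.set('name', 'alex').set('role', 'sb')
pipe.execute()
print(store, pipe.round_trips)  # two keys written, but only one round-trip
```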
Redis subscribe and publish
Redis has a very useful feature that works like a radio station: a subscriber is the audience, a publisher is the host, and Redis is the station. The host (publisher) reaches the audience (subscribers) through the station (Redis). Note also that hosts and audiences are not in a one-to-many relationship but a many-to-many one: there can be multiple hosts broadcasting to multiple listeners. Let's demonstrate this with code.
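Before the Redis code, the many-to-many idea can be sketched with a toy in-memory broker (hypothetical names; real Redis delivers across processes and machines, and only to subscribers connected at publish time):

```python
from collections import defaultdict

class ToyBroker:
    """Many-to-many pub/sub in one process: any publisher can send to a
    channel, and every subscriber of that channel receives the message."""
    def __init__(self):
        self.channels = defaultdict(list)   # channel -> subscriber inboxes

    def subscribe(self, channel):
        inbox = []
        self.channels[channel].append(inbox)
        return inbox

    def publish(self, channel, message):
        for inbox in self.channels[channel]:
            inbox.append(message)

broker = ToyBroker()
listener_a = broker.subscribe('wgw_channel')
listener_b = broker.subscribe('wgw_channel')

# Two "hosts" publish to the same channel; both "listeners" hear both messages
broker.publish('wgw_channel', 'hello from host 1')
broker.publish('wgw_channel', 'hello from host 2')
print(listener_a)
print(listener_b)
```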
Let's set up the station first, on the publisher side.
#!/usr/bin/env python
# coding: utf-8
import redis

r = redis.Redis(host='192.168.10.105', port=6379)

# It's that simple: send a message to the specified channel
r.publish('wgw_channel', 'hello everyone')
Here is the code for the audience, on the subscriber side.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import redis

r = redis.Redis(host='192.168.10.105', port=6379)

# Turn on the radio
sub = r.pubsub()

# Tune the receiver's frequency; both sides must use the same
# channel name, otherwise nothing will be received
sub.subscribe('wgw_channel')

# Once tuned in, loop to receive the station's signal and play it
while True:
    print sub.parse_response()
After the subscriber client runs, it receives the following output:
D:\Python27\python.exe f:/python_file/day12/test.py
# This line always appears on the first run; it is the channel's subscription confirmation
['subscribe', 'wgw_channel', 1L]
# From here on are the messages sent by the station host
['message', 'wgw_channel', 'hello everyone']
This article is from the "Thunderbolt Tofu" blog; reprinting declined.