Using redis-py and redisco to operate Redis from Python

Source: Internet
Author: User
Tags: install, redis

1. Install redis


1.1 Preparation:

What is redis?

Redis is short for REmote DIctionary Server, a non-relational (NoSQL) database.

Why redis?

1. Redis is fast. Really fast: around 110,000 SETs/second and 81,000 GETs/second.

2. It saves you from writing complex SQL statements.

3. It can stand in for memcached, so you don't need a separate cache layer.


1.2 download, decompress, and compile:


$ wget http://redis.googlecode.com/files/redis-2.6.13.tar.gz
$ tar xzf redis-2.6.13.tar.gz
$ cd redis-2.6.13
$ make

Why doesn't the standard Linux three-step install (./configure && make && make install) apply here? The official wiki says this: Redis can run just fine without a configuration file (when executed without a config file, a standard configuration is used). With the default configuration, Redis will log to the standard output, so you can check what happens. Later, you can change the default settings.

1.3 run redis

Run the compiled executable in the src directory:

$ src/redis-server

1.4 connect to Redis

You can use a built-in client to connect to Redis:

$ src/redis-cli
redis> set foo bar
OK
redis> get foo
"bar"

1.5 configure redis


Before running Redis, we need to configure it. The Redis configuration file, redis.conf, is in the installation directory.

To put it simply, in redis.conf:

Redis does not run as a daemon by default. If you need that, change daemonize no to daemonize yes. (During testing you can leave it unchanged and watch the printed output.)

If you are not comfortable with Redis's default port 6379, change the port 6379 line.

If you want the data files in a specified folder, change dir, e.g. dir /opt/data/

The default is dir ./, i.e. data is stored in the installation directory by default.

Connection timeout: timeout 300; fine to leave unchanged.

dir is the data file path; by default it is the installation directory.

* Choose one of the following two persistence configurations. For more information, see the notes later in this article.

###### SNAPSHOTTING ###### memory snapshot method:

The default memory snapshot policy is:

at least 1 data change within 900 seconds (15 minutes);

or at least 10 data changes within 300 seconds;

or at least 10000 data changes within 60 seconds. The combination of elapsed time and number of data changes determines when a memory snapshot is taken.
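The time-plus-changes policy above can be sketched in a few lines of Python. This is a simplified model of the trigger logic, not Redis's actual implementation; SAVE_RULES mirrors the default redis.conf rules:

```python
# Simplified model of the RDB snapshot trigger: a snapshot fires when any
# "save <seconds> <changes>" rule is satisfied.
SAVE_RULES = [(900, 1), (300, 10), (60, 10000)]  # default redis.conf rules

def should_snapshot(elapsed_seconds, changes_since_last_save):
    """Return True if any rule's time window has passed with enough changes."""
    return any(elapsed_seconds >= secs and changes_since_last_save >= count
               for secs, count in SAVE_RULES)

print(should_snapshot(900, 1))    # one change in 15 minutes -> True
print(should_snapshot(59, 9999))  # many changes, but under a minute -> False
```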

###### Append only mode ###### AOF method:

appendfsync everysec syncs to disk once per second. Comment out the other option, appendfsync no.
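The difference between the appendfsync settings comes down to when fsync() is called. A rough stdlib sketch for intuition; the function names are illustrative, not part of Redis:

```python
import os

def append_always(path, records):
    # "appendfsync always": fsync after every record -- safest, slowest.
    with open(path, 'ab') as f:
        for rec in records:
            f.write(rec)
            f.flush()
            os.fsync(f.fileno())

def append_everysec(path, records):
    # "appendfsync everysec" (roughly): buffer the writes, then fsync once per
    # batch, so at most the last interval's records are lost on a crash.
    with open(path, 'ab') as f:
        for rec in records:
            f.write(rec)
        f.flush()
        os.fsync(f.fileno())
```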

For the other options, the comments in the conf file are quite clear, so I won't belabor them; just review your own configuration.

Note:

▲The default port number of Redis is 6379. (According to a blog post by Redis author antirez, 6379 is MERZ on a phone keypad, and MERZ is taken from the name of Alessia Merz, an Italian showgirl; among antirez and his friends, MERZ had long been a synonym for stupid.)

▲Redis has two storage methods. The default is the snapshot method, implemented by periodically saving a snapshot of memory to disk; its drawback is that if Redis crashes after a persistence pass, the data changed since then is lost. So, driven by perfectionism, the author added the AOF method. AOF is append-only mode: as data is written to memory, the operation commands are also saved to a log file.
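The phone-keypad trivia about port 6379 above can be checked in a few lines of Python:

```python
# Letters on a standard phone keypad, digit -> letters.
KEYPAD = {'2': 'ABC', '3': 'DEF', '4': 'GHI', '5': 'JKL',
          '6': 'MNO', '7': 'PQRS', '8': 'TUV', '9': 'WXYZ'}
LETTER_TO_DIGIT = {ch: digit for digit, letters in KEYPAD.items() for ch in letters}

def keypad_digits(word):
    """Spell a word as phone-keypad digits."""
    return ''.join(LETTER_TO_DIGIT[ch] for ch in word.upper())

print(keypad_digits("MERZ"))  # -> 6379
```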

Refer:

A hub for NoSQL enthusiasts:

http://blog.nosqlfan.com

Several misunderstandings about Redis:

http://blog.nosqlfan.com/html/868.html

References:

http://shopscor.javaeye.com/blog/792817


2. Install redis-py


Open the Python interpreter:

>>> import redis
>>> r = redis.Redis(host='localhost', port=6379, db=0)  # if a password is set, add a password argument
>>> r.set('foo', 'bar')  # or r['foo'] = 'bar'
True
>>> r.get('foo')
'bar'
>>> r.delete('foo')
True
>>> r.dbsize()  # number of keys in the database
0
>>> r['test'] = 'OK!'

>>> r.save()  # force a save of the database to disk; blocks while saving
True

--------------------------------

>>> r.flushdb()  # delete all data in the current database
True

>>> a = r.get('chang')
>>> a  # a is None, so nothing is displayed
>>> dir(a)
['__class__', '__delattr__', '__doc__', '__format__', '__getattribute__', '__hash__', '__init__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__']

>>> r.exists('chang')  # check whether the key exists
False

>>> r.keys()  # list all keys (four are stored at this point)
['aaa', 'test', 'bbb', 'key1']

import redis
r = redis.Redis(host='localhost', port=6379, db=0)
r['test'] = 'test'   # or r.set('test', 'test'): set a key
r.get('test')        # get the value of test
r.delete('test')     # delete this key
r.flushdb()          # clear the current database
r.keys()             # list all keys
r.exists('test')     # check whether this key exists
r.dbsize()           # number of keys in the database


Note:

Let's take a look at the __init__() definition of redis.Redis:

__init__(self, host='localhost', port=6379, db=0, password=None, socket_timeout=None, connection_pool=None, charset='utf-8', errors='strict', decode_responses=False, unix_socket_path=None)

redis-py added a connection pool in version 2.6.0. For details on its usage, see the author's blog.

Note B:

For the other command APIs, refer to the redis-py author's documentation, which is clearly written:

https://github.com/andymccurdy/redis-py


3. redisco

Redisco:

(1) is developed in Python, so you can read its source code directly (github address: https://github.com/iamteem/redisco );

(2) has all the functionality of Redis, because it is built on the official redis client;

(3) can store data in Redis through a built-in Django-style ORM, and provides much the same query interface as Django's ORM.

In fact, it is mainly the third point that I want to introduce. redisco's Model class (as it is also called in Django) lets data be stored in Redis in a form similar to a Python dict or class; this brings our use of Redis close to that of a NoSQL document database (since Redis comes with persistent storage built in).


Tutorial example:


Create a Model to store in Redis. You can think of it as a table in MySQL; it inherits from the models.Model class.


Python code

from redisco import models

class Person(models.Model):
    name = models.Attribute(required=True)
    created_at = models.DateTimeField(auto_now_add=True)
    fave_colors = models.ListField(str)

The field types a model supports are as follows:

Model Attributes:

Attribute: stores unicode strings; if used for large bodies of text, turn indexing of this field off by setting indexed=False.
IntegerField: stores an int; ints are stringified using unicode() before saving to Redis.
Counter: an IntegerField that can only be accessed via Model.incr and Model.decr.
DateTimeField: stores a DateTime object, saved in the Redis store as a float.
DateField: stores a Date object, saved in Redis as a float.
FloatField: stores floats.
BooleanField: stores bools, saved in Redis as 1's and 0's.
ReferenceField: references another Model class. For example, an address-book model can have a field whose type is another Model; instances can reference each other this way.
ListField: a list type similar to the Python list (later articles will introduce it in detail); it can hold unicode, int, float, and other Model classes.

With the Model class in place, create a Person instance:

>>> person = Person(name="hupu")

Because name is a required field, you must supply it.


Then call the save method to store it in Redis:

>>> person.save()
True

If True is returned, the operation succeeded.



Query the data we just stored; it really is similar to Django's ORM.

>>> conchita = Person.objects.filter(name='hupu')[0]


Query all persons:

>>> all_person = Person.objects.all()


Query persons whose age is greater than 5:

>>> all_person = Person.objects.zfilter(age__gt=5)

(For less than, use fieldname__lt.)


A query returns a list; don't forget [0] to take a single result.


Finally, when the redisco Model class stores data, fields of every type except Counter can be changed directly: e.g. person.age = 5 changes age to 5, and calling the person.save() method makes the change take effect.

For more features, see the examples on the project homepage and in its test code.

4. test

In Python you generally use redis-py directly, or redisco, which builds on it;

Test environment:

On a 100 Mb/s LAN

Server: Redis 2.4 RC8 / RHEL 6.0

Client: Python 2.7.2 / Win7

import datetime

N = 10000  # loop count; the exact value was lost in the original formatting

def test1(string):
    import redis
    r = redis.StrictRedis(host='2017.69.*', port=211, db=1)  # host masked as in the original
    now = datetime.datetime.now()
    for i in range(N):
        r.set(i, string)
    print datetime.datetime.now() - now

def test2(string):
    import redisco
    from redisco.containers import Hash, List, SortedSet, Set
    redisco.connection_setup(host='2017.69.*', port=211, db=1)
    h = Hash("h")
    now = datetime.datetime.now()
    for i in range(N):
        h[i] = string
    print datetime.datetime.now() - now

if __name__ == "__main__":
    string = "0"
    test1(string)
    test2(string)

Result: the CPU is not the bottleneck. Throughput is about 1,000 writes/s, far short of the advertised 100K. It seems network I/O is no small bottleneck, and not because of the amount of data transferred but because of the number of operations (round trips).


Redis Configuration

The following is a copied explanation of the configuration:

The redis.conf configuration options are as follows:


daemonize: whether to run as a daemon; the default is no.
pidfile: when running as a daemon, the pid file to write; the default is /var/run/redis.pid.
bind: host IP to bind to; the default is 127.0.0.1 (note).
port: listening port; the default is 6379.
timeout: client idle timeout; the default is 300 (seconds).
loglevel: logging level; four values are possible: debug, verbose (default), notice, and warning.
logfile: where to log; the default is stdout.
databases: number of available databases; the default is 16, and the default database is 0.
save <seconds> <changes>: how long and how many update operations it takes to trigger synchronization to the data file. Multiple conditions can be combined; for example, the default configuration file sets three:
save 900 1: at least 1 key changed within 900 seconds (15 minutes).
save 300 10: at least 10 keys changed within 300 seconds (5 minutes).
save 60 10000: at least 10000 keys changed within 60 seconds.
rdbcompression: whether data is compressed when stored in the local database; the default is yes.
dbfilename: name of the local database file; the default is dump.rdb.
dir: path for storing the local database; the default is ./
slaveof <masterip> <masterport>: when this machine is a slave, the IP address and port of the master (note).
masterauth <master-password>: when this machine is a slave, the password for connecting to the master (note).
requirepass: connection password (note).
maxclients: maximum number of client connections; unlimited by default (note).
maxmemory <bytes>: maximum memory. When the limit is reached, Redis first tries to evict expired or expiring keys; if the limit is still exceeded after that, no more write operations are accepted. (note)
appendonly: whether to log every update operation. If disabled, data may be lost for a period on power failure, because Redis syncs the data file only according to the save conditions above, so some data exists only in memory for a while. The default is no.
appendfilename: name of the update log file; the default is appendonly.aof (note).
appendfsync: when to sync the update log. Three values are possible: no (let the operating system flush the cache to disk), always (call fsync() to write to disk after every update), and everysec (sync once per second; the default).
vm-enabled: whether to use virtual memory; the default is no.
vm-swap-file: virtual memory file path; the default is /tmp/redis.swap. It must not be shared by multiple Redis instances.
vm-max-memory: data above this amount is stored in virtual memory. No matter how small vm-max-memory is set, all index data (Redis's index data is the keys) stays in memory; that is, with vm-max-memory set to 0, all values are actually stored on disk. The default is 0.

# Whether to compress data objects when dumping the rdb database
rdbcompression yes
# Name of the dump database file
dbfilename dump.rdb
# Redis working directory
dir /var/lib/redis/
########### Replication #####################
# Redis replication configuration
# slaveof <masterip> <masterport>
# masterauth <master-password>

############# SECURITY ###########
# requirepass foobared

############## LIMITS ##############
# Maximum number of client connections
# maxclients 128
# Maximum memory usage
# maxmemory <bytes>

######### Append only file mode #########
# Whether to enable the log function
appendonly no
# Rules for flushing the log to disk
# appendfsync always
appendfsync everysec
# appendfsync no
############### Virtual memory ###########
# Whether to enable the VM function
vm-enabled no
# vm-enabled yes
vm-swap-file logs/redis.swap
vm-max-memory 0
vm-page-size 32
vm-pages 134217728
vm-max-threads 4
############ Advanced config ###############
glueoutputbuf yes
hash-max-zipmap-entries 64
hash-max-zipmap-value 512
# Whether to rehash the hash table
activerehashing yes

The official Redis documentation offers some suggestions for VM usage: VM works best when your keys are small and your values are large, since this saves a lot of memory. When your keys are not small, consider swapping things around: for example, combine the key and value into a new, larger value. It is best to keep your swap file on a filesystem that supports sparse files, such as Linux ext3. The vm-max-threads parameter sets the number of threads that access the swap file; it is best not to exceed the number of machine cores. If it is set to 0, all operations on the swap file are serial, which may cause long delays but guarantees data integrity.
