Redis installation, configuration, and application in Python

Source: Internet
Author: User
Tags: syslog, install redis, redis server

Recently, while using the API of Kazoo (an open-source telephony system), I found that handling a single request called several APIs just to resolve the mapping between a name and an ID, which was very time consuming. My first idea for a simple fix was to cache the mapping in a static class variable, but testing showed that the cached mapping was frequently lost. After a quick look at Redis, it turned out to fit this situation well. The specific steps are as follows:

Redis installation and simple configuration

The development environment is Ubuntu 12.04. Installing Redis on Ubuntu is very simple and can be done directly with apt-get, as follows:

# Install the Redis server
~ sudo apt-get install redis-server

After the installation completes, the Redis server starts automatically. Check for the Redis server process:

root@host:~# ps -ef | grep redis
redis    29481      1  0 ?      xx:xx  /usr/local/bin/redis-server *:6379
root     30117  25288  0 pts/0  xx:xx  grep --color=auto redis
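As an additional sanity check from Python, the following is a minimal sketch, assuming the redis-py client is installed (for example via pip install redis) and that the server listens on the default 127.0.0.1:6379; both the address and the use of ping()/info() here are illustrative rather than part of the original article:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Minimal connectivity check against a freshly installed Redis server.
import redis

conn = redis.Redis(host='127.0.0.1', port=6379, db=0)
print conn.ping()                      # True if the server is reachable
print conn.info()['redis_version']     # server version reported by INFO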

You can also compile and install Redis from source, as follows:

Redis: http://redis.io/download

$ wget http://download.redis.io/releases/redis-2.8.19.tar.gz
$ tar xzf redis-2.8.19.tar.gz
$ cd redis-2.8.19
$ make
$ make install

For details, refer to: http://rubyer.me/blog/638/

The Redis configuration is stored in the redis.conf file. The main options are as follows:

General settings:
daemonize yes: default no; controls whether the Redis service runs as a daemon.
pidfile /usr/local/webserver/redis/run/redis.pid: default /var/run/redis.pid; path of the PID file; this parameter must be set when running in daemon mode.
port 6379: default 6379; the port the Redis service listens on.
bind 127.0.0.1: IP address to bind to; by default Redis listens on all network devices of the machine.
timeout 0: disconnect a client after it has been idle for n seconds; the default of 0 keeps connections open indefinitely.
loglevel notice: server log level. Options: debug (records details, for development or debugging), verbose (lots of useful information, but not as detailed as debug; this is the default according to the original notes), notice, warning (displays only important warning messages).
logfile stdout: log output path; default stdout, meaning output to the screen (in daemon mode output goes to /dev/null). To send logs to syslog, set syslog-enabled yes (the default is no).
databases 16: number of databases; default 16, with DB 0 used by default.

Snapshot (RDB) settings:
save <seconds> <changes>: how often to flush a snapshot to disk; it is triggered only when both values are satisfied, and multiple levels can be set. The defaults in the parameter file are: save 900 1 (triggered every 900 seconds, i.e. 15 minutes, if at least 1 key changed), save 300 10 (every 300 seconds, i.e. 5 minutes, if at least 10 keys changed), and save 60 10000 (every 60 seconds if at least 10,000 keys changed).
rdbcompression yes: default yes; compress string objects with LZF when dumping the database; if CPU resources are tight, set it to no.
rdbchecksum yes
dbfilename dump.rdb: default dump.rdb; the filename the database is dumped to.
dir /usr/local/webserver/redis/db: default "." (the current directory); path where the dumped data files are stored.

Replication settings (replication is not enabled by default, so these parameters are commented out in the default file):
slaveof <masterip> <masterport>: IP and port of the master to mirror.
masterauth <master-password>: required here if the master is configured with a password.
slave-serve-stale-data yes: default yes. When the slave loses its connection to the master, or replication is still in progress: if yes, the slave keeps responding to client requests even though the data may be out of date or missing (as during the initial synchronization); if no, the slave returns a "SYNC with master in progress" error.
slave-read-only yes: slaves are read-only by default.
repl-ping-slave-period 10: default 10; the interval at which the slave periodically pings the master.
repl-timeout 60: default 60; the timeout, which covers both bulk data transfer and ping responses.
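Most of the values above can also be inspected at runtime instead of opening redis.conf. The following is a small sketch using redis-py's config_get; the connection parameters and the sample output in the comments are assumptions for illustration:

# Inspect a few of the options described above via CONFIG GET.
import redis

conn = redis.Redis(host='127.0.0.1', port=6379, db=0)
print conn.config_get('save')        # e.g. {'save': '900 1 300 10 60 10000'}
print conn.config_get('loglevel')    # current log level
print conn.config_get('dbfilename')  # RDB dump filename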
Security settings:
requirepass foobared: set a password; clients must then authenticate with this password to connect successfully (commented out by default).
rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52: rename a command, for example give CONFIG a long, hard-to-guess name; rename-command CONFIG "" disables the command entirely.

Resource limits:
maxclients 10000: maximum number of concurrent client connections. The default is no limit (Redis keeps accepting connections until it can no longer create one); setting the value to 0 also means no limit. Once the limit is reached, Redis closes all new connections and returns a 'max number of clients reached' error.
maxmemory <bytes>: maximum memory Redis may use. When the limit is reached, Redis tries to evict keys according to the configured eviction policy; if no key can be removed, or the policy is set not to evict, Redis returns an error to commands that would use more memory. This parameter is useful when Redis is used as an LRU cache.
maxmemory-policy volatile-lru: default volatile-lru; the eviction policy. Options: volatile-lru (remove a key with an expire set, using an LRU algorithm), allkeys-lru (remove any key according to the LRU algorithm), volatile-random (remove a random key with an expire set), allkeys-random (remove a random key, any key), volatile-ttl (remove the key with the nearest expire time, i.e. minor TTL), noeviction (don't evict at all, just return an error on write operations).
maxmemory-samples 3: default 3; the LRU and minimum-TTL policies are not exact but approximate, so Redis checks this many sampled keys.

Append-only (AOF) settings: by default Redis dumps data to disk asynchronously, which in extreme cases (such as a sudden server outage) may lose some data. If the data is important and must not be lost, enable append-only mode: Redis then appends every received write to the appendonly.aof file and rebuilds all the data in memory from that file when the service starts. Note that this mode has a very large impact on performance.
appendonly no: default no; whether append-only (write-through) mode is enabled.
appendfilename appendonly.aof: default filename for the append-only file.
appendfsync: controls how fsync() is called to make the operating system write data to disk. Modes: always (sync on every write; safest but slowest), everysec (sync once per second; the default), no (never call fsync, let the operating system decide when to sync; the fastest mode).
no-appendfsync-on-rewrite: default no. When the AOF fsync policy is always or everysec and a background save process is performing heavy I/O, Redis may block on too many fsync() calls under some Linux configurations.
auto-aof-rewrite-percentage: default 100.
auto-aof-rewrite-min-size: default 64mb.

Advanced settings:
hash-max-zipmap-entries: default 512. When a hash has no more than this many fields and no value exceeds the length threshold below, it is stored with a special, more memory-efficient encoding. This parameter sets the field count; the next one sets the value length.
hash-max-zipmap-value: default 64; maximum length of a value in the hash for the special encoding to be used.
list-max-ziplist-entries: default 512; as with hashes, lists that satisfy the condition are stored in a space-saving encoding.
list-max-ziplist-value: default 64.
set-max-intset-entries: default 512; when a set contains only integer values and has no more than this many elements, a special encoding is used.
zset-max-ziplist-entries: default 128; similar to hash and list.
zset-max-ziplist-value: default 64.
activerehashing: default yes; controls whether hash tables are actively rehashed. Active rehashing uses 1 microsecond of CPU time out of every 100 microseconds to reorganize the Redis hash tables. Rehashing is done lazily: the more you write into a hash table, the more rehashing steps are performed, and rehashing also runs when the server is idle. If real-time requirements are strict and the 2 millisecond delay Redis occasionally introduces is unacceptable, set activerehashing to no; otherwise the recommended setting is yes, to save memory.
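To illustrate two of the settings above: if requirepass is enabled, redis-py accepts the password as a constructor argument, and maxmemory / maxmemory-policy can be changed at runtime with CONFIG SET. This is a hedged sketch, not part of the original article; the password reuses the example value from the config notes, and the memory limit is a made-up figure:

# Connect to a password-protected server and adjust memory limits at runtime.
import redis

# 'foobared' matches the example requirepass value above; use your own password.
conn = redis.Redis(host='127.0.0.1', port=6379, db=0, password='foobared')

conn.config_set('maxmemory', '100mb')                 # illustrative cap on memory usage
conn.config_set('maxmemory-policy', 'volatile-lru')   # evict keys that have an expire set, LRU order
print conn.config_get('maxmemory-policy')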
Application of Redis in Python

Using Redis from Python is also very simple; not much needs to be said, just look at the code:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import redis
import random

from kazoo_api_config import redis_host, redis_port, redis_db


class MyRedis(object):
    # A small pool of connections; entries are created lazily in get_conn().
    __redis_conn = [None, None, None, None, None]
    __host = redis_host
    __port = redis_port
    __db = redis_db

    @staticmethod
    def get_conn():
        # Fill in any missing connections, then hand back one chosen at random.
        for i in xrange(len(MyRedis.__redis_conn)):
            if MyRedis.__redis_conn[i] is None:
                MyRedis.__redis_conn[i] = redis.Redis(
                    host=MyRedis.__host, port=MyRedis.__port, db=MyRedis.__db)
        count = random.randint(0, 4)
        return count, MyRedis.__redis_conn[count]

    @staticmethod
    def get_val(key, retries=5):
        count, conn = MyRedis.get_conn()
        flag = False
        try:
            if conn.exists(key):
                return conn.get(key)
            else:
                return False
        except Exception:
            flag = True
        finally:
            if flag and retries > 0:
                # Rebuild the broken connection and retry.
                MyRedis.__redis_conn[count] = redis.Redis(
                    host=MyRedis.__host, port=MyRedis.__port, db=MyRedis.__db)
                return MyRedis.get_val(key, retries - 1)

    @staticmethod
    def set_val(key, value, retries=5):
        count, conn = MyRedis.get_conn()
        flag = False
        try:
            return conn.set(key, value)
        except Exception:
            flag = True
        finally:
            if flag and retries > 0:
                MyRedis.__redis_conn[count] = redis.Redis(
                    host=MyRedis.__host, port=MyRedis.__port, db=MyRedis.__db)
                return MyRedis.set_val(key, value, retries - 1)

    @staticmethod
    def get_all_keys(retries=5):
        count, conn = MyRedis.get_conn()
        flag = False
        try:
            return conn.keys()
        except Exception:
            flag = True
        finally:
            if flag and retries > 0:
                MyRedis.__redis_conn[count] = redis.Redis(
                    host=MyRedis.__host, port=MyRedis.__port, db=MyRedis.__db)
                return MyRedis.get_all_keys(retries - 1)

    @staticmethod
    def batch_set(val_dict, retries=5):
        count, conn = MyRedis.get_conn()
        flag = False
        try:
            return conn.mset(val_dict)
        except Exception:
            flag = True
        finally:
            if flag and retries > 0:
                MyRedis.__redis_conn[count] = redis.Redis(
                    host=MyRedis.__host, port=MyRedis.__port, db=MyRedis.__db)
                return MyRedis.batch_set(val_dict, retries - 1)

    @staticmethod
    def delete_keys(key, retries=5):
        count, conn = MyRedis.get_conn()
        flag = False
        try:
            return conn.delete(key)
        except Exception:
            flag = True
        finally:
            if flag and retries > 0:
                MyRedis.__redis_conn[count] = redis.Redis(
                    host=MyRedis.__host, port=MyRedis.__port, db=MyRedis.__db)
                return MyRedis.delete_keys(key, retries - 1)
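A quick usage sketch of the class above; the key names and values are made up for illustration and assume kazoo_api_config points at a reachable Redis server:

# Example usage of the MyRedis helper defined above (illustrative keys/values).
if __name__ == '__main__':
    MyRedis.set_val('account_id:1001', 'alice')           # cache one ID/name mapping
    MyRedis.batch_set({'account_id:1002': 'bob',
                       'account_id:1003': 'carol'})       # cache several at once
    print MyRedis.get_val('account_id:1001')              # -> 'alice'
    print MyRedis.get_all_keys()                          # all cached keys
    MyRedis.delete_keys('account_id:1003')                # drop one entry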
