One. What is it?
Baidu's definition:
"Redis is a key-value storage system. Like memcached, it supports a relatively rich set of value types, including five data types: string, list (linked list), set, zset (sorted set), and hash. These types support push/pop, add/remove, intersection, union, difference, and many richer operations, all of which are atomic. On top of this, Redis supports several different ways of sorting. As with memcached, data is cached in memory for efficiency. The difference is that Redis periodically writes updated data to disk, or appends modification operations to a log file, and on this basis implements master-slave synchronization. Redis is a high-performance key-value database. Its appearance largely makes up for the shortcomings of key/value stores such as memcached, and in some cases it complements relational databases well. It provides clients for Java, C/C++, C#, PHP, JavaScript, Perl, Objective-C, Python, Ruby, Erlang, and more, and is easy to use. Redis supports master-slave synchronization: data can be synchronized from a master server to any number of slaves, and a slave can in turn act as a master for other slaves, so Redis can perform single-layer tree replication. Persistence of write operations can be deliberately enabled or disabled. Combined with the full publish/subscribe implementation, you can subscribe to a channel anywhere in the synchronization tree and receive the master's complete message publishing record. Synchronization is useful for the scalability of reads and for data redundancy. Redis's official website address is easy to remember: redis.io (for the record, the .io suffix is a country-code domain belonging to the British Indian Ocean Territory). VMware currently funds the development and maintenance of the Redis project."
Two. The difference between it and memcached
See also: http://blog.csdn.net/tonysz126/article/details/8280696/
Three. Its application scenarios
1. Session Cache
One of the most common Redis use cases is session caching. The advantage of caching sessions with Redis rather than other stores such as memcached is that Redis offers persistence. When maintaining a cache that is not strictly consistent, most users would be unhappy if their shopping-cart data were lost; with Redis, it won't be.
Fortunately, as Redis has improved over the years, it is easy to find documentation on how to use Redis for session caching properly. Even the well-known commercial platform Magento provides a Redis plug-in.
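The idea can be sketched in plain Python. This is a minimal in-process stand-in for the SETEX/GET semantics a session cache relies on; the class and key names are illustrative, not part of Redis or any client library:

```python
import time

class SessionCache:
    """Toy model of session caching with a TTL, as Redis does via SETEX."""
    def __init__(self):
        self._store = {}  # key -> (value, absolute expiry time)

    def setex(self, key, ttl_seconds, value):
        # Store the session payload together with its expiry time.
        self._store[key] = (value, time.time() + ttl_seconds)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.time() >= expires_at:
            # An expired session behaves as if it were never stored.
            del self._store[key]
            return None
        return value

cache = SessionCache()
cache.setex("session:42", 1800, {"user_id": 42, "cart": ["sku-1"]})
print(cache.get("session:42"))  # → {'user_id': 42, 'cart': ['sku-1']}
```

The point of the real thing, of course, is that Redis keeps this data across application restarts, which a plain in-process dictionary cannot do.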
2. Full page caching (FPC)
In addition to basic session tokens, Redis also provides a fairly simple FPC platform. Returning to the consistency issue: even if the Redis instance is restarted, disk persistence means users will not see a drop in page load speed, which is a great improvement, similar to PHP's local FPC.
Again taking Magento as an example, Magento provides a plug-in for using Redis as a full-page cache backend.
In addition, for WordPress users, Pantheon provides a very good plug-in, wp-redis, which helps pages you have already visited load as fast as possible.
3. Queue
One of the great advantages of Redis as an in-memory storage engine is its list and set operations, which make it a good platform for message queuing. The list operations Redis uses as a queue are similar to the push/pop operations on lists in native programming languages such as Python.
If you do a quick Google search for "Redis queues", you will soon find plenty of open source projects whose goal is to build very good back-end queuing tools on top of Redis. For example, Celery has a backend that uses Redis as a broker.
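The queue pattern itself is simple: producers push jobs onto the tail of a list and workers pop them off the head. A minimal in-process sketch of those semantics in Python, with collections.deque standing in for the Redis list (none of these function names are a real Redis client API):

```python
from collections import deque

queue = deque()  # stands in for a Redis list used as a job queue

def rpush(job):
    # Producer side: append the job to the tail (Redis RPUSH).
    queue.append(job)

def lpop():
    # Worker side: take the oldest job from the head (Redis LPOP).
    # A real worker would use BLPOP to block until a job arrives.
    return queue.popleft() if queue else None

rpush("send-email:1")
rpush("send-email:2")
print(lpop())  # → send-email:1  (FIFO order)
```

Because RPUSH and LPOP are atomic in Redis, many workers can pop from the same list without handing the same job to two of them.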
4. Leaderboards/Counters
Redis does in-memory increments and decrements of numbers very well. Sets and sorted sets also make it very easy to perform these kinds of operations; Redis just happens to provide exactly these two data structures. So, to get the top 10 users from a sorted set, let's call it "user_scores", we only need to do something like:
ZRANGE user_scores 0 10
Of course, this assumes you rank users by score in ascending order. If you want to return both the users and their scores, you need to do this:
ZRANGE user_scores 0 10 WITHSCORES
Agora Games is a good example of this, implemented in Ruby; its leaderboards use Redis to store the data.
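The sorted-set semantics behind a leaderboard can be modeled in a few lines of Python. This is a toy stand-in for ZINCRBY and ZREVRANGE ... WITHSCORES, not a Redis client; the member names are made up:

```python
scores = {}  # member -> score, standing in for a Redis sorted set

def zincrby(member, amount):
    # Bump a member's score (Redis ZINCRBY); atomic in real Redis.
    scores[member] = scores.get(member, 0) + amount

def zrevrange_withscores(start, stop):
    # Top entries by score, highest first (Redis ZREVRANGE ... WITHSCORES).
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[start:stop + 1]

zincrby("alice", 50)
zincrby("bob", 120)
zincrby("alice", 30)
print(zrevrange_withscores(0, 9))  # → [('bob', 120), ('alice', 80)]
```

In Redis the ranking is maintained incrementally on every ZINCRBY, so reading the top N is cheap even with millions of members, which is exactly why it suits leaderboards.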
5. Publish/Subscribe
Last (but certainly not least) is Redis's publish/subscribe feature. The usage scenarios for pub/sub are indeed numerous. I have seen people use it for social network connections, as a trigger for scripts based on pub/sub events, and even to build chat systems on top of Redis's pub/sub capability. (Yes, really; you can look it up.)
Of all the features Redis provides, I feel this is the one the fewest people appreciate, even though it offers users so much versatility.
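The pattern is easy to picture with a small Python sketch. This is an in-process model of SUBSCRIBE/PUBLISH delivery, purely illustrative (real Redis pub/sub delivers over connections, and PUBLISH returns the number of receiving subscribers, which is mimicked here):

```python
from collections import defaultdict

channels = defaultdict(list)  # channel name -> subscriber callbacks

def subscribe(channel, callback):
    # Register interest in a channel (Redis SUBSCRIBE).
    channels[channel].append(callback)

def publish(channel, message):
    # Deliver the message to every current subscriber (Redis PUBLISH)
    # and report how many subscribers received it, as Redis does.
    for callback in channels[channel]:
        callback(message)
    return len(channels[channel])

received = []
subscribe("chat:lobby", received.append)
delivered = publish("chat:lobby", "hello")
print(delivered, received)  # → 1 ['hello']
```

Note that, as in real Redis pub/sub, messages are fire-and-forget: a subscriber that joins after a publish never sees the earlier message.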
And so on.
Four. Its advantages
Extremely high performance: Redis can support read/write rates of more than 100k+ operations per second.
Rich data types: Redis supports operations on strings, lists, hashes, sets, and sorted sets, with binary-safe values.
Atomicity: all Redis operations are atomic, and Redis also supports executing several operations together atomically.
Rich features: Redis also supports publish/subscribe, notifications, key expiration, and so on.
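The atomicity point deserves a small illustration. Because Redis executes commands one at a time, INCR never loses updates under concurrency, whereas a naive read-modify-write from many clients does. In this hedged Python sketch the lock plays the role of Redis's single-threaded command loop; the counter name is made up:

```python
import threading

counter = {"hits": 0}
lock = threading.Lock()

def incr_atomic():
    # Models Redis INCR: the read-modify-write happens as one unit,
    # so concurrent increments can never overwrite each other.
    with lock:
        counter["hits"] += 1

threads = [threading.Thread(target=incr_atomic) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter["hits"])  # → 100, no lost updates
```

With Redis you get this guarantee without any client-side locking, which is why INCR-based counters and rate limiters are such a common pattern.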
Five. Its disadvantages
The database capacity is limited by physical memory, so Redis cannot be used for high-performance reading and writing of massive datasets. Suitable Redis scenarios are therefore mainly confined to high-performance operations and computations on smaller data volumes.
Six. Its deployment
1. Download: https://github.com/MSOpenTech/redis/releases
2. Download and decompress the archive.
3. Configuration item description
# Run as a background (daemon) process? Default is no; change to yes to daemonize.
daemonize no
# When running as a daemon a PID file is written; you can customize its location.
pidfile /var/run/redis.pid
# Port to accept connections on; if 0, Redis will not listen on a TCP socket.
port 6379
# If a bind address is specified, Redis listens on that single interface;
# otherwise all interfaces will listen for incoming connections.
# bind 127.0.0.1
# Path of the Unix socket used to listen for incoming connections. There is
# no default, so Redis will not listen on a Unix socket unless specified.
# unixsocket /tmp/redis.sock
# unixsocketperm 755
# Connection timeout in seconds (0 to disable).
timeout 300
# Log level; the default is verbose. Available levels:
#   debug   - very detailed information, suitable for development and testing
#   verbose - many rarely useful messages, but cleaner than debug
#   notice  - better suited to production
#   warning - warning messages only
loglevel verbose
# Log file name; the default, stdout, makes Redis log to standard output.
# Note: if Redis runs as a daemon with stdout logging, logs go to /dev/null.
logfile stdout
# Set to yes to also send logs to the system log; default is no.
# syslog-enabled no
# Syslog identifier; ignored if syslog-enabled is no.
# syslog-ident redis
# Syslog facility: must be USER or LOCAL0 through LOCAL7.
# syslog-facility local0
# Number of databases. The default database is DB 0; you can select another
# with SELECT <dbid>, where dbid is a number in [0, databases - 1].
databases 16

################################ SNAPSHOTTING #################################
# Save the data on disk:
#   save <seconds> <changes>
# triggers a save when both conditions are met. In the example below, a save
# is triggered:
#   after 900 seconds if at least 1 key changed
#   after 300 seconds if at least 10 keys changed
#   after 60 seconds if at least 10000 keys changed
# Note: comment these out if you do not want Redis to save automatically.
save 900 1
save 300 10
save 60 10000
# Compress the data when dumping? The default is yes.
rdbcompression yes
# File name for the dump data.
dbfilename dump.rdb
# Working directory. Data is persisted to the file named by dbfilename inside
# this directory. Note this must be a directory, not a file.
dir ./

################################ REPLICATION ##################################
# Master-slave replication. Use slaveof to make a Redis instance a replica
# (hot standby) of another Redis server. Note: the configuration applies only
# to the current slave, so a slave can save data at different intervals,
# listen on other ports, and so on. Command format:
# slaveof <masterip> <masterport>
# If the master is password protected, the slave must authenticate before
# data synchronization, otherwise the master rejects the slave's requests.
# masterauth <master-password>
# When a slave loses its connection to the master, or is still synchronizing
# (not yet consistent with the master), it can respond to clients in two ways:
#   1) if slave-serve-stale-data is yes (the default), the slave still answers
#      client requests, possibly with stale data;
#   2) if no, the slave returns "SYNC with master in progress" for every
#      command except INFO and SLAVEOF.
slave-serve-stale-data yes

################################## SECURITY ###################################
# Require clients to issue AUTH <password> before executing any other command.
# requirepass foobared
# Command renaming (rename a command to "" to disable it entirely).
# rename-command CONFIG ""

#################################### LIMITS ###################################
# Maximum number of client connections. Default is no limit; 0 means no limit.
# maxclients 128
# Maximum memory usage. When the limit is exceeded, Redis tries to delete keys
# in the expire set, releasing keys that are about to expire first, to protect
# keys with a long life cycle. If that is still not enough, Redis returns
# errors on writes, but read requests such as GET are still answered.
# WARNING: do not set maxmemory if you treat Redis as a real database; set it
# only when using Redis as a cache or a "state" server.
# maxmemory <bytes>
# Eviction policy once maxmemory is reached:
#   volatile-lru    -> evict using LRU among keys with an expire set
#   allkeys-lru     -> evict any key using LRU
#   volatile-random -> evict a random key among keys with an expire set
#   allkeys-random  -> evict a random key
#   volatile-ttl    -> evict the key with the nearest expire time (minor TTL)
#   noeviction      -> evict nothing, return an error on writes
# The default policy:
# maxmemory-policy volatile-lru
# The LRU and minor-TTL algorithms are not exact but approximate (estimated):
# Redis checks a sample of keys. The default sample size is 3; you can change it.
# maxmemory-samples 3

############################## APPEND ONLY MODE ###############################
# By default Redis saves data to disk asynchronously. If losing the latest
# writes in an extreme event such as a system crash is acceptable for your
# scenario, that is already enough. Otherwise, enable append only mode: Redis
# appends every write operation to appendonly.aof, which is read at startup
# to rebuild the dataset in memory.
# Note: you can enable both append only mode and asynchronous dumps; if both
# files exist, Redis uses appendonly.aof at startup and ignores dump.rdb.
appendonly no
# Append only file name (default: appendonly.aof).
# appendfilename appendonly.aof
# How often fsync() is called to tell the OS to actually write data to disk:
#   no       - never fsync, just let the OS decide when to flush. Fastest.
#   always   - fsync after every write to the append only log. Very safe,
#              poor performance.
#   everysec - fsync once per second. A compromise; this is the default.
# appendfsync always
appendfsync everysec
# appendfsync no
# When the fsync policy is always or everysec and a background save process
# is performing a large amount of I/O, Redis may block on fsync() for a
# long time.
no-appendfsync-on-rewrite no
# Automatic rewrite of the append only file: Redis remembers the AOF size
# after the last rewrite, compares it with the current size, and triggers
# BGREWRITEAOF when the current size exceeds it by the given percentage.
# The minimum size avoids rewriting when the percentage is reached but the
# file is actually still small (and a rewrite would be unnecessary).
# Set auto-aof-rewrite-percentage to 0 to disable the feature.
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb

################################## SLOW LOG ###################################
# The Redis slow log records queries exceeding a specified execution time.
# Two parameters: the threshold in microseconds (a negative number disables
# the slow log, 0 logs every command), and the length of the log, which
# behaves like a queue.
slowlog-log-slower-than 10000
# The log consumes memory, so its length is capped; you can reclaim the
# memory used by the slow log with SLOWLOG RESET.
slowlog-max-len 1024

############################### VIRTUAL MEMORY ################################
# WARNING: virtual memory is deprecated since Redis 2.4 and its use is
# strongly discouraged. (If the dataset does not fit in RAM, add machines.)
# VM allows Redis to work with datasets bigger than the RAM needed to hold
# the whole dataset in memory: frequently used keys stay in memory while
# other keys are moved to a swap file, much as operating systems do with
# memory pages. To enable it, set vm-enabled to yes and tune the parameters.
vm-enabled no
# vm-enabled yes
# Path of the Redis swap file. Swap files cannot be shared between Redis
# instances, so use a distinct file per process; Redis will complain if the
# file is already in use. The best storage for the swap file, which is
# accessed at random, is a solid state disk (SSD). WARNING: on shared hosting
# the default of putting it under /tmp is not secure; create a directory
# accessible only to the Redis user and point the swap file there.
vm-swap-file /tmp/redis.swap
# vm-max-memory caps the RAM the VM uses; everything that does not fit is
# swapped to disk if there is enough contiguous space in the swap file.
# With vm-max-memory 0 the system swaps everything it can, which is not a
# good default; better to specify roughly 60% to 80% of your free RAM.
vm-max-memory 0
# Redis swap files are split into pages. An object can span multiple
# contiguous pages, but a page cannot be shared by different objects. With a
# large page size, small objects swapped to disk waste a lot of space; with a
# small page size, the swap file holds less data overall. If you store many
# small objects use a smaller page size; for large objects use a bigger one;
# if unsure, use the default.
vm-page-size 32
# Total number of pages in the swap file. The page table (a bitmap of
# free/used pages) is kept in memory: every 8 pages on disk consume 1 byte
# of RAM. Total swap size = vm-page-size * vm-pages; with the default 32-byte
# pages and 134217728 pages, Redis uses a 4 GB swap file and 16 MB of RAM for
# the page table. Use the smallest acceptable value for your application; the
# default is large so it works in most conditions.
vm-pages 134217728
# Maximum number of VM I/O threads running at the same time. These threads
# read/write data from/to the swap file and also encode/decode objects, so
# more threads help with big objects even though they cannot speed up the
# I/O itself if the physical device cannot handle many concurrent reads and
# writes. The special value 0 turns off threaded I/O and enables blocking
# virtual memory.
vm-max-threads 4

############################### ADVANCED CONFIG ###############################
# Hashes are encoded in a special, much more memory-efficient way when they
# have at most a given number of entries and the biggest entry does not
# exceed a given threshold, configurable with these directives:
hash-max-zipmap-entries 512
hash-max-zipmap-value 64
# Similarly to hashes, small lists are specially encoded to save a lot of
# space. The special representation is only used under these limits:
list-max-ziplist-entries 512
list-max-ziplist-value 64
# Sets have a special encoding in just one case: when the set is composed
# only of strings that happen to be integers within the range of 64-bit
# signed integers. This setting limits the set size for that encoding:
set-max-intset-entries 512
# Sorted sets are also specially encoded to save a lot of space, but only
# when their length and element sizes are below the following limits:
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
# Active rehashing uses 1 millisecond out of every 100 milliseconds of CPU
# time to rehash the main hash table (the one mapping top-level keys to
# values). Redis's hash implementation (dict.c) performs lazy rehashing: the
# more operations you run on a table being rehashed, the more rehashing
# steps are performed, so if the server is idle the rehashing may never
# complete and some extra memory stays used by the table. Use
# "activerehashing no" if you have hard latency requirements and occasional
# 2-millisecond delays in replies are not acceptable in your environment;
# use "activerehashing yes" if you don't have such hard requirements but
# want to free memory as soon as possible.
activerehashing yes

################################## INCLUDES ###################################
# Include one or more other config files here. This is useful if you have a
# standard template for all Redis servers but need to customize a few
# per-server settings. Included files can include other files, so use this
# wisely.
# include /path/to/local.conf
# include /path/to/other.conf
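To make the snapshot rules concrete, here is a small Python sketch that parses `save <seconds> <changes>` lines out of a config and decides whether a snapshot would trigger. It is purely illustrative and not part of Redis or any client library:

```python
def parse_save_points(config_text):
    """Extract (seconds, changes) pairs from 'save' directives."""
    points = []
    for line in config_text.splitlines():
        parts = line.split()
        if len(parts) == 3 and parts[0] == "save":
            points.append((int(parts[1]), int(parts[2])))
    return points

def should_save(points, elapsed_seconds, changed_keys):
    # A snapshot triggers if ANY save point has both conditions met.
    return any(elapsed_seconds >= s and changed_keys >= c
               for s, c in points)

conf = """save 900 1
save 300 10
save 60 10000"""
points = parse_save_points(conf)
print(points)                        # → [(900, 1), (300, 10), (60, 10000)]
print(should_save(points, 120, 50))  # → False (no rule fully satisfied)
print(should_save(points, 320, 50))  # → True (300 s elapsed, >=10 changes)
```

This mirrors how Redis evaluates its save points: each rule is independent, and the first one whose two thresholds are both reached triggers a background save.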
Upon completion of configuration:
4. Open cmd, cd into the Redis folder, and run: redis-server.exe redis.conf
If the startup banner appears (shown as a screenshot in the original post), the server started successfully.
Seven. Its practice
.NET supports Redis through ServiceStack.
1. The assemblies to reference in the project are as follows:
ServiceStack.Common.dll
ServiceStack.Interfaces.dll
ServiceStack.Text.dll
and most importantly: ServiceStack.Redis.dll
2. .NET source code (UserInfo is a simple POCO with Id, UserName, and Age properties):

static RedisClient redis = new RedisClient("127.0.0.1", 6379); // Redis server IP and port

static void Main(string[] args)
{
    // Add a list of strings to Redis
    List<string> storeMembers = new List<string> { "one", "two", "three" };
    storeMembers.ForEach(x => redis.AddItemToList("AddItemToList", x));

    // Get the set of values for the specified key
    var members = redis.GetAllItemsFromList("AddItemToList");
    members.ForEach(s => Console.WriteLine("AddItemToList: " + s));

    // Get the data at a specified index
    var item = redis.GetItemFromList("AddArrangeToList", 2);
    Console.WriteLine(item);

    // Remove data
    var list = redis.Lists["AddArrangeToList"];
    list.Clear();        // empty the list
    list.Remove("two");  // remove the specified value
    // list.RemoveAt(2); // remove the item at the specified index

    // Store an object (JSON serialization); this is more efficient than
    // binary object serialization
    redis.Set<UserInfo>("userinfo", new UserInfo { UserName = "dick", Age = 45 });
    UserInfo userInfo = redis.Get<UserInfo>("userinfo");
    Console.WriteLine("Name=" + userInfo.UserName + " Age=" + userInfo.Age);

    // Store value-type data
    redis.Set<int>("my_age", 12); // or redis.Set("my_age", 12);
    int age = redis.Get<int>("my_age");
    Console.WriteLine("age=" + age);

    // Binary object serialization; ObjectSerializer lives in the
    // ServiceStack.Redis.Support namespace
    var ser = new ObjectSerializer();
    bool result = redis.Set<byte[]>("userinfo2",
        ser.Serialize(new UserInfo { UserName = "john", Age = 12 }));
    UserInfo userInfo2 = ser.Deserialize(redis.Get<byte[]>("userinfo2")) as UserInfo;
    Console.WriteLine("Name=" + userInfo2.UserName + " Age=" + userInfo2.Age);

    // Lists of objects are supported as well
    List<UserInfo> userInfoList = new List<UserInfo>
    {
        new UserInfo { UserName = "zzl", Age = 1, Id = 1 },
        new UserInfo { UserName = "zhz", Age = 3, Id = 2 },
    };
    redis.Set<byte[]>("userinfolist_serialize", ser.Serialize(userInfoList));
    List<UserInfo> userList =
        ser.Deserialize(redis.Get<byte[]>("userinfolist_serialize")) as List<UserInfo>;
    userList.ForEach(i => Console.WriteLine("Name=" + i.UserName + " Age=" + i.Age));
}
References:
http://jiangwenfeng762.iteye.com/blog/1283676
http://blog.jobbole.com/88383/
http://www.aboutyun.com/thread-9223-1-1.html
http://www.cnblogs.com/lori/p/3435483.html