Download: servicestack.googlecode.com/files/redis-2.0.2.zip
After downloading, extract it somewhere on your disk; mine is F:\redis-2.0.2.
You also need to add a Redis configuration file, redis.conf, in the Redis root directory. Its contents are:
# Redis configuration file example
# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
daemonize no
# When run as a daemon, Redis write a pid file in /var/run/redis.pid by default.
# You can specify a custom pid file location here.
pidfile /var/run/redis.pid
# Accept connections on the specified port, default is 6379
port 6379
# If you want you can bind a single interface, if the bind option is not
# specified all the interfaces will listen for connections.
#
# bind 127.0.0.1
# Close the connection after a client is idle for N seconds (0 to disable)
timeout 300
# Set server verbosity to 'debug'
# it can be one of:
# debug (a lot of information, useful for development/testing)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
loglevel debug
# Specify the log file name. Also 'stdout' can be used to force
# the demon to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
logfile stdout
# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
databases 16
################################ SNAPSHOTTING #################################
#
# Save the DB on disk:
#
#   save <seconds> <changes>
#
#   Will save the DB if both the given number of seconds and the given
#   number of write operations against the DB occurred.
#
#   In the example below the behaviour will be to save:
#   after 900 sec (15 min) if at least 1 key changed
#   after 300 sec (5 min) if at least 10 keys changed
#   after 60 sec if at least 10000 keys changed
save 900 1
save 300 10
save 60 10000
# Compress string objects using LZF when dump .rdb databases?
# For default that's set to 'yes' as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no' but
# the dataset will likely be bigger if you have compressible values or keys.
rdbcompression yes
# The filename where to dump the DB
dbfilename dump.rdb
# For default save/load DB in/from the working directory
# Note that you must specify a directory not a file name.
dir ./
################################# REPLICATION #################################
# Master-Slave replication. Use slaveof to make a Redis instance a copy of
# another Redis server. Note that the configuration is local to the slave
# so for example it is possible to configure the slave to save the DB with a
# different interval, or to listen to another port, and so on.
#
# slaveof <masterip> <masterport>
# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the slave to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the slave request.
#
# masterauth <master-password>
################################## SECURITY ###################################
# Require clients to issue AUTH <PASSWORD> before processing any other
# commands. This might be useful in environments in which you do not trust
# others with access to the host running redis-server.
#
# This should stay commented out for backward compatibility and because most
# people do not need auth (e.g. they run their own servers).
#
# requirepass foobared
################################### LIMITS ####################################
# Set the max number of connected clients at the same time. By default there
# is no limit, and it's up to the number of file descriptors the Redis process
# is able to open. The special value '0' means no limts.
# Once the limit is reached Redis will close all the new connections sending
# an error 'max number of clients reached'.
#
# maxclients 128
# Don't use more memory than the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys with an
# EXPIRE set. It will try to start freeing keys that are going to expire
# in little time and preserve keys with a longer time to live.
# Redis will also try to remove objects from free lists if possible.
#
# If all this fails, Redis will start to reply with errors to commands
# that will use more memory, like SET, LPUSH, and so on, and will continue
# to reply to most read-only commands like GET.
#
# WARNING: maxmemory can be a good idea mainly if you want to use Redis as a
# 'state' server or cache, not as a real DB. When Redis is used as a real
# database the memory usage will grow over the weeks, it will be obvious if
# it is going to use too much memory in the long run, and you'll have the time
# to upgrade. With maxmemory after the limit is reached you'll start to get
# errors for write operations, and this may even lead to DB inconsistency.
#
# maxmemory <bytes>
############################## APPEND ONLY MODE ###############################
# By default Redis asynchronously dumps the dataset on disk. If you can live
# with the idea that the latest records will be lost if something like a crash
# happens this is the preferred way to run Redis. If instead you care a lot
# about your data and don't want to that a single record can get lost you should
# enable the append only mode: when this mode is enabled Redis will append
# every write operation received in the file appendonly.log. This file will
# be read on startup in order to rebuild the full dataset in memory.
#
# Note that you can have both the async dumps and the append only file if you
# like (you have to comment the "save" statements above to disable the dumps).
# Still if append only mode is enabled Redis will load the data from the
# log file at startup ignoring the dump.rdb file.
#
# The name of the append only file is "appendonly.log"
#
# IMPORTANT: Check the BGREWRITEAOF to check how to rewrite the append
# log file in background when it gets too big.
appendonly yes
# The fsync() call tells the Operating System to actually write data on disk
# instead to wait for more data in the output buffer. Some OS will really flush
# data on disk, some other OS will just try to do it ASAP.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log . Slow, Safest.
# everysec: fsync only if one second passed since the last fsync. Compromise.
#
# The default is "always" that's the safer of the options. It's up to you to
# understand if you can relax this to "everysec" that will fsync every second
# or to "no" that will let the operating system flush the output buffer when
# it want, for better performances (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting).
appendfsync always
# appendfsync everysec
# appendfsync no
############################### ADVANCED CONFIG ###############################
# Glue small output buffers together in order to send small replies in a
# single TCP packet. Uses a bit more CPU but most of the times it is a win
# in terms of number of queries per second. Use 'yes' if unsure.
glueoutputbuf yes
# Use object sharing. Can save a lot of memory if you have many common
# string in your dataset, but performs lookups against the shared objects
# pool so it uses more CPU and can be a bit slower. Usually it's a good
# idea.
#
# When object sharing is enabled (shareobjects yes) you can use
# shareobjectspoolsize to control the size of the pool used in order to try
# object sharing. A bigger pool size will lead to better sharing capabilities.
# In general you want this value to be at least the double of the number of
# very common strings you have in your dataset.
#
# WARNING: object sharing is experimental, don't enable this feature
# in production before of Redis 1.0-stable. Still please try this feature in
# your development environment so that we can test it better.
# shareobjects no
# shareobjectspoolsize 1024
Start Redis
Open a command prompt window:
F:\>cd redis-2.0.2
F:\redis-2.0.2>redis-server.exe redis.conf
[2944] 15 Jun 22:44:29 * Server started, Redis version 2.0.2
[2944] 15 Jun 22:44:29 * DB loaded from append only file: 0 seconds
[2944] 15 Jun 22:44:29 * The server is now ready to accept connections on port 6379
[2944] 15 Jun 22:44:30 - DB 0: 1 keys (0 volatile) in 4 slots HT.
[2944] 15 Jun 22:44:30 - 0 clients connected (0 slaves), 450888 bytes in use
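To quickly confirm the server is actually accepting connections (a minimal check, assuming the default port 6379 from redis.conf), you can ping it from another window:
F:\redis-2.0.2>redis-cli.exe ping
PONG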
Open another window and run the client:
F:\redis-2.0.2>redis-cli.exe
redis>
Set a value:
redis> set ajun ajun
Reconnecting... OK
OK
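A key set this way has no expiration; the "(0 volatile)" in the startup log counts keys that do have one. As a small illustration (reusing the ajun key set above; the replies shown are the usual integer replies and may differ slightly in this build), giving the key a TTL makes it volatile:
redis> expire ajun 120
(integer) 1
redis> ttl ajun
(integer) 120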
Get the value:
redis> get ajun
"ajun"
Stop the Redis service:
redis> shutdown
If you want Redis to persist data durably, you need to enable the append-only log.
This logs every write operation; if it is not enabled, data from a recent window may be lost on power failure, because Redis only syncs its data file according to the save conditions above, so some data exists only in memory for a while. The default is no.
To enable it, modify or add the following in redis.conf:
appendonly yes
The append-only file name defaults to appendonly.aof.
# fsync policy; there are 3 options: no means let the operating system sync the data buffers to disk, always means call fsync() after every write to flush the data to disk, everysec means sync once per second (the default).
# appendfsync always
appendfsync everysec
# appendfsync no
Shut down the Redis service and restart it:
F:\redis-2.0.2>redis-server.exe redis.conf
[2944] 15 Jun 22:44:29 * Server started, Redis version 2.0.2
[2944] 15 Jun 22:44:29 * DB loaded from append only file: 0 seconds
[2944] 15 Jun 22:44:29 * The server is now ready to accept connections on port 6379
[2944] 15 Jun 22:44:30 - DB 0: 1 keys (0 volatile) in 4 slots HT.
[2944] 15 Jun 22:44:30 - 0 clients connected (0 slaves), 450888 bytes in use
At this point an appendonly.aof file is created in the Redis root directory to record the log.
Reconnect with the client:
F:\redis-2.0.2>redis-cli.exe
redis>set ajun wahaha
Then shut down the Redis service.
Check appendonly.aof; it is now 1 KB.
Start the Redis service again:
F:\redis-2.0.2>redis-server.exe redis.conf
[2944] 15 Jun 22:44:29 * Server started, Redis version 2.0.2
[2944] 15 Jun 22:44:29 * DB loaded from append only file: 0 seconds
[2944] 15 Jun 22:44:29 * The server is now ready to accept connections on port 6379
[2944] 15 Jun 22:44:30 - DB 0: 1 keys (0 volatile) in 4 slots HT.
[2944] 15 Jun 22:44:30 - 0 clients connected (0 slaves), 450888 bytes in use
Start the client again:
F:\redis-2.0.2>redis-cli.exe
redis>get ajun
"wahaha"
The value is still there, which shows it was persisted.
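Because every write is appended, appendonly.aof keeps growing over time. The configuration comments above point to BGREWRITEAOF for rewriting the append-only file in the background when it gets too big; a minimal sketch (the reply text may differ slightly in this Windows build):
redis> bgrewriteaof
Background append only file rewriting started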
The steps on Linux are similar.
redis.conf parameter reference; for details see the comments in the official configuration file.
1. redis.conf configuration parameters:
# Whether to run as a daemon
daemonize yes
# If running as a background process, a pid file must be specified; default is /var/run/redis.pid
pidfile redis.pid
# Host IP to bind; default is 127.0.0.1
#bind 127.0.0.1
# Port Redis listens on; default is 6379
port 6379
# Close the connection after a client has been idle for this many seconds; default is 300
timeout 300
# Log level; 4 options: debug, verbose (default), notice, warning
loglevel verbose
# Log file name; default is stdout, or set to /dev/null to discard logs
logfile stdout
# Number of available databases; default is 16, and the default database is 0
databases 16
# Policy for saving data to disk
# Save to disk after 900 seconds if at least 1 key has changed
save 900 1
# Save to disk after 300 seconds if at least 10 keys have changed
save 300 10
# Save to disk after 60 seconds if at least 10000 keys have changed
save 60 10000
# Whether to compress data objects when dumping the .rdb database
rdbcompression yes
# Local database file name; default is dump.rdb
dbfilename dump.rdb
# Local database directory; default is ./
dir /usr/local/redis/var/
########### Replication #####################
# Redis replication configuration
# slaveof <masterip> <masterport>   When this instance is a slave, set the master's IP and port
# masterauth <master-password>   When this instance is a slave, set the password for connecting to the master
# Connection password
# requirepass foobared
# Maximum number of simultaneous client connections; unlimited by default
# maxclients 128
# Maximum memory limit. When the limit is reached, Redis first tries to evict keys that have already expired or are about to expire; if the limit is still exceeded after that, no further write operations are possible.
# maxmemory <bytes>
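# For example (hypothetical value, not from this guide), a 100 MB cap would be 100 * 1024 * 1024 bytes:
# maxmemory 104857600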
# Whether to log every write operation. If this is not enabled, data from a recent window may be lost on power failure, because Redis only syncs its data file according to the save conditions above, so some data exists only in memory for a while. Default is no.
appendonly no
# Append-only file name; default is appendonly.aof
#appendfilename
# fsync policy; there are 3 options: no means let the operating system sync the data buffers to disk, always means call fsync() after every write to flush the data to disk, everysec means sync once per second (the default).
# appendfsync always
appendfsync everysec
# appendfsync no
################ VIRTUAL MEMORY ###########
# Whether to enable the VM (virtual memory) feature; default is no
vm-enabled no
# vm-enabled yes
# Swap file path; default is /tmp/redis.swap; it must not be shared by multiple Redis instances
vm-swap-file logs/redis.swap
# All values larger than vm-max-memory are stored in virtual memory. However small vm-max-memory is set, all index data stays in memory (Redis's index data is the keys); in other words, when vm-max-memory is 0, all values are actually stored on disk. Default is 0.
vm-max-memory 0
vm-page-size 32
vm-pages 134217728
vm-max-threads 4
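# With the sample values above, the maximum swap size works out to vm-page-size * vm-pages = 32 * 134217728 bytes = 4294967296 bytes (4 GB); these are just the example values from this file, not a recommendation.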
############# ADVANCED CONFIG ###############
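# Glue small output buffers together so that small replies are sent in a single TCP packet (uses a bit more CPU but usually improves queries per second)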
glueoutputbuf yes
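# Hashes with at most this many entries, whose values are no longer than hash-max-zipmap-value bytes, are stored in the compact zipmap encoding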
hash-max-zipmap-entries 64
hash-max-zipmap-value 512
# Whether to actively rehash the hash tables
activerehashing yes
Note: the official Redis documentation gives some advice on using VM:
** VM works best when your keys are small and your values are large, because that saves the most memory.
** If your keys are not small, consider some workaround to turn large keys into large values; for example, you can combine the key and the value into a new value.
** It is best to store your swap file on a filesystem with good sparse-file support, such as Linux ext3.
** The vm-max-threads parameter sets the number of threads used to access the swap file; it is best not to exceed the number of cores on the machine. If it is set to 0, all swap-file operations are serialized, which can cause fairly long delays but gives a strong guarantee of data integrity.