# Run as a daemon (background process)? The default is no; change it to yes if you want Redis to run in the background.
daemonize no
# When running as a daemon Redis needs a pid file; you can customize the location of redis.pid here.
pidfile /var/run/redis.pid
# Port number to accept connections on. If the port is 0, Redis will not listen on a TCP socket.
port 6379
# If you want you can bind a single interface, if the bind option is not
# specified all the interfaces will listen for incoming connections.
#
# bind 127.0.0.1
# Specify the path for the unix socket that will be used to listen for
# incoming connections. There is no default, so Redis will not listen
# on a unix socket when not specified.
#
# unixsocket /tmp/redis.sock
# unixsocketperm 755
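# Purely as an illustration (not part of the stock file): if the unix socket
# above is enabled, a client can connect to it with redis-cli, for example:
#
#   redis-cli -s /tmp/redis.sock ping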
# Close the connection after a client has been idle for N seconds (0 to disable).
timeout 300000000
# Log level. The default is verbose. The available levels are:
# debug: very detailed information, useful for development and testing
# verbose: many rarely useful messages, but not as noisy as the debug level
# notice: moderately verbose, suitable for production
# warning: only warnings and critical messages are logged
loglevel verbose
# Specify the log file name. The default is stdout, which makes Redis log to standard output. Note that if you use stdout while running Redis as a daemon, the logs are sent to /dev/null.
logfile stdout
# Setting 'syslog-enabled' to yes sends the log output to the system logger. The default is no.
# syslog-enabled no
# Specify the syslog identity. This option has no effect if 'syslog-enabled' is no.
# syslog-ident redis
# Specify the syslog facility. Must be USER or one of LOCAL0-LOCAL7.
# syslog-facility local0
# Set the number of databases. The default database is DB 0. You can select a database with SELECT <dbid>, where dbid is a number in [0, 'databases' - 1].
databases 16
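# For illustration only (not part of the stock file): selecting a database from
# redis-cli with the -n option; the key and value names here are just examples.
#
#   redis-cli -n 1 SET foo bar    # write to DB 1
#   redis-cli -n 0 GET foo        # returns (nil), DB 0 does not see the key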
################## SNAPSHOTTING ##################################
#
# Save the dataset to disk:
#
# save <seconds> <changes>
#
# A save is triggered when both the <seconds> and the <changes> conditions are met.
#
#
# For example, with the settings below:
# a save is triggered after 900 seconds if at least 1 key changed
# a save is triggered after 300 seconds if at least 10 keys changed
# a save is triggered after 60 seconds if at least 10000 keys changed
#
# Note: comment out the lines below if you do not want Redis to save data automatically.
save 900 1
save 300 10
save 60 10000
# Whether to compress the data when dumping it. The default is yes.
rdbcompression yes
# The file name used for the dump data
dbfilename dump.rdb
# The working directory.
#
# The data is persisted into the file named by 'dbfilename' inside this directory.
#
#
# Note that you must specify a directory here, not a file name.
dir ./
######## REPLICATION #################################
# Master-Slave replication. Use slaveof to make a Redis instance a replica (hot standby) of another Redis server. Note: the configuration is local to the slave,
# so a given slave can be configured to save data with a different interval, listen on a different port, and so on.
# Format:
# slaveof <masterip> <masterport>
# If the master is password protected, the slave must authenticate before the data synchronization starts, otherwise the master will refuse the slave's requests.
#
# masterauth <master-password>
# When a slave loses its connection to the master, or while the slave is still synchronizing with the master (not yet consistent with it), the slave can respond to client requests in two ways:
#
# 1) If slave-serve-stale-data is set to 'yes' (the default), the slave will still reply to client requests, possibly with stale data.
#
# 2) If slave-serve-stale-data is set to 'no', the slave will reply with the error "SYNC with master in progress" to every command except INFO and SLAVEOF.
#
slave-serve-stale-data yes
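# A minimal slave-side sketch (example values only, not part of the stock file),
# assuming the master listens on 192.168.1.10:6379 and requires the password
# "mysecret":
#
#   slaveof 192.168.1.10 6379
#   masterauth mysecret
#   slave-serve-stale-data yes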
############### SECURITY ###################################
# Require clients to issue AUTH <PASSWORD> before executing any other command.
#
# requirepass foobared
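# For illustration only (not part of the stock file): with requirepass enabled,
# clients must authenticate first, e.g. from redis-cli:
#
#   redis-cli -a foobared ping        # or run AUTH foobared after connecting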
# Command renaming.
#
#
# For example:
#
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
#
# A command can also be killed entirely by renaming it to an empty string, for example:
#
# rename-command CONFIG ""
#################### LIMITS ####################################
# Set the maximum number of client connections. There is no limit by default; '0' means unlimited.
#
# maxclients 128
# Maximum amount of memory to use. When the limit is reached, Redis will try to remove keys that have an EXPIRE set: it frees keys that are about to expire first, protecting keys that still have a long time to live.
#
# If that is not enough, Redis starts returning errors for writes, but read queries such as GET are still served.
#
# WARNING: do not set <maxmemory> if you want to treat Redis as a real DB. Set it only
# when you use Redis as a cache or as a 'state' server.
#
# maxmemory <bytes>
# Memory eviction policy: when maxmemory is reached, you can choose among the following behaviours:
#
# volatile-lru -> remove keys with an expire set using an LRU algorithm
# allkeys-lru -> remove any key according to the LRU algorithm
# volatile-random -> remove a random key among those with an expire set
# allkeys-random -> remove a random key, any key
# volatile-ttl -> remove the key with the nearest expire time (minor TTL)
# noeviction -> don't evict anything at all; write operations simply return an error
#
#
# The default policy:
#
# maxmemory-policy volatile-lru
# For Redis memory management, the LRU and minor TTL algorithms are not exact but approximated (estimated) algorithms, so Redis checks a sample of keys when reclaiming memory. The default sample size is 3 and you can change it.
#
# maxmemory-samples 3
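# A hedged example (not part of the stock file) of a pure-cache setup built
# from the directives above: cap memory at 256 MB and evict any key by LRU.
#
#   maxmemory 268435456
#   maxmemory-policy allkeys-lru
#   maxmemory-samples 3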
################# APPEND ONLY MODE ###############################
# By default Redis saves data to disk asynchronously. If your application can tolerate losing the latest data in extreme situations such as a system crash, this is already good enough. Otherwise you should enable 'append only' mode: with it enabled, Redis appends every write operation to the appendonly.aof file, and that file is read at startup to rebuild the dataset in memory.
#
# Note: if you need to, you can enable 'append only' mode and the asynchronous dumps at the same time (comment out the 'save' directives above to disable the dumps). In that case, when rebuilding the dataset Redis uses appendonly.aof and ignores dump.rdb.
#
appendonly no
# The append only file name (default: "appendonly.aof")
# appendfilename appendonly.aof
# Calling fsync() tells the operating system to write the data to disk immediately.
#
# Redis supports three modes:
#
# no: don't fsync, just tell the OS that it can flush the data; whether it actually flushes is up to the OS. Better performance.
# always: fsync after every write to the append only log. Poor performance, but very safe.
# everysec: fsync once per second. A compromise.
#
# The default is "everysec"
# appendfsync always
appendfsync everysec
# appendfsync no
# When the AOF fsync policy is set to always or everysec and a background saving process is doing a lot of I/O,
# Redis may block for too long on the fsync() call.
#
no-appendfsync-on-rewrite no
# Automatic rewrite of the append only file.
# Redis can automatically rewrite the append only file by calling BGREWRITEAOF once the AOF log has grown by the specified percentage.
#
# This is how it works: Redis remembers the size of the AOF file after the latest rewrite, then compares that size with the current size. If the current size is bigger by the specified percentage, a rewrite is triggered. You also specify a minimum size for the AOF file to be rewritten; this is useful to avoid rewriting the file when the percentage has been reached but the file is actually still small (in which case a rewrite is pointless).
#
#
# Setting auto-aof-rewrite-percentage to 0 disables the AOF rewrite feature.
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
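# Worked example (illustrative): with the values above, if the AOF measured
# 80mb right after the last rewrite, the next BGREWRITEAOF is triggered once
# the file grows past 160mb (100% bigger than the 80mb base and above the
# 64mb minimum).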
################## SLOW LOG ###################################
# The Redis slow log records queries that exceed a specified execution time.
#
# You can configure two parameters: one is the slow-query threshold, in microseconds; the other is the length of the slow log, which works like a queue.
# A negative number disables the slow log; 0 causes every command to be logged.
slowlog-log-slower-than 10000
# Without a limit this would consume too much memory, so do set one. You can reclaim the memory used by the slow log with the SLOWLOG RESET command.
slowlog-max-len 1024
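# For illustration only (not part of the stock file): inspecting and clearing
# the slow log from redis-cli:
#
#   redis-cli SLOWLOG GET 10     # show the 10 most recent slow entries
#   redis-cli SLOWLOG LEN        # number of entries currently stored
#   redis-cli SLOWLOG RESET      # discard the entries and reclaim their memory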
################ VIRTUAL MEMORY ###############################
# Don't use Virtual Memory with Redis; it is definitely not a good idea. Add another machine instead. The comments below are therefore left in the original English!
### WARNING! Virtual Memory is deprecated in Redis 2.4
### The use of Virtual Memory is strongly discouraged.
# Virtual Memory allows Redis to work with datasets bigger than the actual
# amount of RAM needed to hold the whole dataset in memory.
# In order to do so very used keys are taken in memory while the other keys
# are swapped into a swap file, similarly to what operating systems do
# with memory pages.
#
# To enable VM just set 'vm-enabled' to yes, and set the following three
# VM parameters accordingly to your needs.
vm-enabled no
# vm-enabled yes
# This is the path of the Redis swap file. As you can guess, swap files
# can't be shared by different Redis instances, so make sure to use a swap
# file for every redis process you are running. Redis will complain if the
# swap file is already in use.
#
# The best kind of storage for the Redis swap file (that's accessed at random)
# is a Solid State Disk (SSD).
#
# *** WARNING *** if you are using a shared hosting the default of putting
# the swap file under /tmp is not secure. Create a dir with access granted
# only to Redis user and configure Redis to create the swap file there.
vm-swap-file /tmp/redis.swap
# vm-max-memory configures the VM to use at max the specified amount of
# RAM. Everything that does not fit will be swapped on disk *if* possible, that
# is, if there is still enough contiguous space in the swap file.
#
# With vm-max-memory 0 the system will swap everything it can. Not a good
# default, just specify the max amount of RAM you can in bytes, but it's
# better to leave some margin. For instance specify an amount of RAM
# that's more or less between 60 and 80% of your free RAM.
vm-max-memory 0
# Redis swap files is split into pages. An object can be saved using multiple
# contiguous pages, but pages can't be shared between different objects.
# So if your page is too big, small objects swapped out on disk will waste
# a lot of space. If your page is too small, there is less space in the swap
# file (assuming you configured the same number of total swap file pages).
#
# If you use a lot of small objects, use a page size of 64 or 32 bytes.
# If you use a lot of big objects, use a bigger page size.
# If unsure, use the default :)
vm-page-size 32
# Number of total memory pages in the swap file.
# Given that the page table (a bitmap of free/used pages) is taken in memory,
# every 8 pages on disk will consume 1 byte of RAM.
#
# The total swap size is vm-page-size * vm-pages
#
# With the default of 32-bytes memory pages and 134217728 pages Redis will
# use a 4 GB swap file, that will use 16 MB of RAM for the page table.
#
# It's better to use the smallest acceptable value for your application,
# but the default is large in order to work in most conditions.
vm-pages 134217728
# Max number of VM I/O threads running at the same time.
# These threads are used to read/write data from/to the swap file; since they
# also encode and decode objects from disk to memory or the reverse, a bigger
# number of threads can help with big objects even if they can't help with
# I/O itself, as the physical device may not be able to cope with many
# read/write operations at the same time.
#
# The special value of 0 turns off threaded I/O and enables the blocking
# Virtual Memory implementation.
vm-max-threads 4
################ ADVANCED CONFIG ###############################
# Hashes are encoded in a special way (much more memory efficient) when they
# have at max a given number of elements, and the biggest element does not
# exceed a given threshold. You can configure these limits with the following
# configuration directives.
hash-max-zipmap-entries 512
hash-max-zipmap-value 64
# Similarly to hashes, small lists are also encoded in a special way in order
# to save a lot of space. The special representation is only used when
# you are under the following limits:
list-max-ziplist-entries 512
list-max-ziplist-value 64
# Sets have a special encoding in just one case: when a set is composed
# of just strings that happen to be integers in radix 10 in the range
# of 64 bit signed integers.
# The following configuration setting sets the limit in the size of the
# set in order to use this special memory saving encoding.
set-max-intset-entries 512
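# For illustration only (not part of the stock file): the encoding actually used
# for a set can be checked with OBJECT ENCODING; the key and member names below
# are arbitrary examples.
#
#   redis-cli SADD nums 1 2 3
#   redis-cli OBJECT ENCODING nums    # "intset" while all members are integers
#   redis-cli SADD nums hello
#   redis-cli OBJECT ENCODING nums    # falls back to "hashtable"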
# Similarly to hashes and lists, sorted sets are also specially encoded in
# order to save a lot of space. This encoding is only used when the length and
# elements of a sorted set are below the following limits:
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
# order to help rehashing the main Redis hash table (the one mapping top-level
# keys to values). The hash table implementation redis uses (see dict.c)
# performs a lazy rehashing: the more operations you run into a hash table
# that is rehashing, the more rehashing "steps" are performed, so if the
# server is idle the rehashing is never complete and some more memory is used
# by the hash table.
#
# The default is to use this millisecond 10 times every second in order to
# actively rehash the main dictionaries, freeing memory when possible.
#
# If unsure:
# use "activerehashing no" if you have hard latency requirements and it is
# not a good thing in your environment that Redis can reply from time to time
# to queries with 2 milliseconds delay.
#
# use "activerehashing yes" if you don‘t have such hard requirements but
# want to free memory asap when possible.
activerehashing yes
################## INCLUDES ###################################
# Include one or more other config files here. This is useful if you
# have a standard template that goes to all Redis servers but also need
# to customize a few per-server settings. Include files can include
# other files, so use this wisely.
#
# include /path/to/local.conf
# include /path/to/other.conf
redis.conf configuration options explained