# Redis configuration file example
# By default redis does not run as a daemon. Use 'yes' if you need it.
# Note that redis will write a pid file in /var/run/redis.pid when daemonized.
daemonize no
# When run as a daemon, redis writes a pid file in /var/run/redis.pid by default.
# You can specify a custom pid file location here.
pidfile /var/run/redis.pid
# Accept connections on the specified port, default is 6379
port 6379
# If you want you can bind a single interface, if the bind option is not
# specified all the interfaces will listen for connections.
#
# bind 127.0.0.1
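#
# As a quick sanity check (assuming redis-cli, the bundled command line
# client, is available), you can verify the server answers on the expected
# interface and port:
#
#   redis-cli -h 127.0.0.1 -p 6379 ping
#
# A running server replies with PONG.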
# Close the connection after a client is idle for n seconds (0 to disable)
timeout 300
# Set server verbosity to 'debug'
# It can be one of:
# debug (a lot of information, useful for development/testing)
# notice (moderately verbose, what you want in production probably)
# warning (only very important/critical messages are logged)
loglevel debug
# Specify the log file name. Also 'stdout' can be used to force
# the daemon to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
logfile stdout
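#
# To log to a file instead (the path below is only an illustration), point
# logfile at a writable location:
#
# logfile /var/log/redis.log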
# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
databases 16
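#
# For example, from redis-cli a connection can switch to DB 1 and write a key
# there without touching DB 0 (the key and value below are arbitrary):
#
#   redis-cli
#   redis> SELECT 1
#   OK
#   redis> SET foo bar
#   OK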
################################ Snapshotting #################################
#
# Save the DB on disk:
#
# save <seconds> <changes>
#
# Will save the DB if both the given number of seconds and the given
# number of write operations against the DB occurred.
#
# In the example below the behaviour will be to save:
# after 900 sec (15 min) if at least 1 key changed
# after 300 sec (5 min) if at least 10 keys changed
# after 60 sec if at least 10000 keys changed
save 900 1
save 300 10
save 60 10000
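#
# Besides these automatic snapshots, a dump can be forced at any time from a
# client: SAVE blocks the server while it writes, BGSAVE forks a child and
# saves in the background (the example assumes a default local instance):
#
#   redis-cli bgsave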
# Compress string objects using LZF when dumping .rdb databases?
# By default that's set to 'yes' as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no' but
# the dataset will likely be bigger if you have compressible values or keys.
rdbcompression yes
# The filename where to dump the DB
dbfilename dump.rdb
# By default the DB is saved to / loaded from the working directory.
# Note that you must specify a directory not a file name.
dir ./
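#
# A common production choice (the path is only an illustration) is a
# dedicated absolute data directory:
#
# dir /var/lib/redis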
################################# Replication #################################
# Master-slave replication. Use slaveof to make a redis instance a copy of
# another redis server. Note that the configuration is local to the slave
# so for example it is possible to configure the slave to save the DB with a
# different interval, or to listen to another port, and so on.
#
# slaveof <masterip> <masterport>
# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the slave to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the slave request.
#
# masterauth <master-password>
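#
# The same relationship can also be set up at runtime with the SLAVEOF
# command; the master address below is purely an example, and "slaveof no
# one" promotes the instance back to a stand-alone master:
#
#   redis-cli slaveof 10.0.0.1 6379
#   redis-cli slaveof no one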
################################## Security ###################################
# Require clients to issue AUTH <password> before processing any other
# commands. This might be useful in environments in which you do not trust
# others with access to the host running redis-server.
#
# This should stay commented out for backward compatibility and because most
# people do not need auth (e.g. they run their own servers).
#
# requirepass foobared
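#
# Once a password is set, a client has to authenticate before any other
# command is accepted; for instance with redis-cli (the password is the
# placeholder used above, the key name is arbitrary):
#
#   redis-cli
#   redis> AUTH foobared
#   OK
#   redis> GET mykey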
################################### Limits ####################################
# Set the max number of connected clients at the same time. By default there
# is no limit, and it's up to the number of file descriptors the redis process
# is able to open. The special value '0' means no limits.
# Once the limit is reached redis will close all the new connections sending
# an error 'max number of clients reached'.
#
# maxclients 128
# Don't use more memory than the specified amount of bytes.
# When the memory limit is reached redis will try to remove keys with an
# EXPIRE set. It will try to start freeing keys that are going to expire
# in little time and preserve keys with a longer time to live.
# Redis will also try to remove objects from free lists if possible.
#
# If all this fails, redis will start to reply with errors to commands
# that will use more memory, like SET, LPUSH, and so on, and will continue
# to reply to most read-only commands like GET.
#
# WARNING: maxmemory can be a good idea mainly if you want to use redis as a
# 'state' server or cache, not as a real DB. When redis is used as a real
# database the memory usage will grow over the weeks, it will be obvious if
# it is going to use too much memory in the long run, and you'll have the time
# to upgrade. With maxmemory after the limit is reached you'll start to get
# errors for write operations, and this may even lead to DB inconsistency.
#
# maxmemory <bytes>
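#
# For example, to cap memory usage at roughly 100 megabytes
# (100 * 1024 * 1024 bytes):
#
# maxmemory 104857600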
############################# Append only mode ################################
# By default redis asynchronously dumps the dataset on disk. If you can live
# with the idea that the latest records will be lost if something like a crash
# happens this is the preferred way to run redis. If instead you care a lot
# about your data and don't want that a single record can get lost you should
# enable the append only mode: when this mode is enabled redis will append
# every completed write operation to the file appendonly.log. This file will
# be read on startup in order to rebuild the full dataset in memory.
#
# Note that you can have both the async dumps and the append only file if you
# like (you have to comment the "save" statements above to disable the dumps).
# Still if append only mode is enabled redis will load the data from the
# log file at startup ignoring the dump.rdb file.
#
# The name of the append only file is "appendonly.log"
#
# IMPORTANT: check BGREWRITEAOF to see how to rewrite the append
# log file in background when it gets too big.
appendonly no
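#
# A rough sketch of the BGREWRITEAOF workflow mentioned above: issue it from
# a client once the append only file has grown too large, and redis rebuilds
# a compact version of it in a background child:
#
#   redis-cli bgrewriteaof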
# The fsync() call tells the operating system to actually write data on disk
# instead of waiting for more data in the output buffer. Some OS will really
# flush data on disk, some other OS will just try to do it ASAP.
#
# redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log. Slow, safest.
# everysec: fsync only if one second passed since the last fsync. Compromise.
#
# The default is "always" as that's the safest of the options. It's up to you
# to understand if you can relax this to "everysec" that will fsync every
# second, or to "no" that will let the operating system flush the output
# buffer when it wants, for better performances (but if you can live with the
# idea of some data loss consider the default persistence mode that's
# snapshotting).
appendfsync always
# appendfsync everysec
# appendfsync no
############################## Advanced config ################################
# Glue small output buffers together in order to send small replies in a
# single TCP packet. Uses a bit more CPU but most of the times it is a win
# in terms of number of queries per second. Use 'yes' if unsure.
glueoutputbuf yes
# Use object sharing. Can save a lot of memory if you have many common
# strings in your dataset, but performs lookups against the shared objects
# pool so it uses more CPU and can be a bit slower. Usually it's a good
# idea.
#
# When object sharing is enabled (shareobjects yes) you can use
# shareobjectspoolsize to control the size of the pool used in order to try
# object sharing. A bigger pool size will lead to better sharing capabilities.
# In general you want this value to be at least double the number of
# very common strings you have in your dataset.
#
# WARNING: object sharing is experimental, don't enable this feature
# in production before redis 1.0-stable. Still, please try this feature in
# your development environment so that we can test it better.
#
# shareobjects no
# shareobjectspoolsize 1024