Redis installation and configuration on Linux


Redis is a non-relational database, an open source NoSQL project. For simple key-value storage in read-heavy, very high-speed access scenarios, Redis can be used in place of MySQL.
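For instance, a counter that might otherwise sit in a MySQL table can be read and updated directly as a key/value pair. A minimal redis-cli sketch (the key name site:pageviews is made up for illustration):

redis> SET site:pageviews 1024
OK
redis> GET site:pageviews
"1024"
redis> INCR site:pageviews
(integer) 1025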
Redis is installed as follows.
1. Download the installation package. The download is hosted by ServiceStack; the version used here is redis-2.0.2.rar.
2. Extract the files to a directory of your choice. After extraction you can see the extracted files:

3. The configuration file has to be created by yourself; a sample is given below.

The code is as follows:
# Redis configuration file example

# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
daemonize no

# When run as a daemon, Redis writes a pid file in /var/run/redis.pid by default.
# You can specify a custom pid file location here.
pidfile /var/run/redis.pid

# Accept connections on the specified port, default is 6379
port 6379

# If you want you can bind a single interface; if the bind option is not
# specified all the interfaces will listen for connections.
#
# bind 127.0.0.1

# Close the connection after a client is idle for N seconds (0 to disable)
timeout 300

# Set server verbosity to 'debug'
# it can be one of:
# debug (a lot of information, useful for development/testing)
# notice (moderately verbose, what you probably want in production)
# warning (only very important / critical messages are logged)
loglevel debug

# Specify the log file name. Also 'stdout' can be used to force
# the daemon to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
logfile stdout

# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
databases 16

################################ SNAPSHOTTING #################################
#
# Save the DB on disk:
#
# save <seconds> <changes>
#
# Will save the DB if both the given number of seconds and the given
# number of write operations against the DB occurred.
#
# In the example below the behaviour will be to save:
# after 900 sec (15 min) if at least 1 key changed
# after 300 sec (5 min) if at least 10 keys changed
# after 60 sec if at least 10000 keys changed
save 900 1
save 300 10
save 60 10000

# Compress string objects using LZF when dumping .rdb databases?
# By default it is set to 'yes' as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no' but
# the dataset will likely be bigger if you have compressible values or keys.
rdbcompression yes

# The filename where to dump the DB
dbfilename dump.rdb

# By default save/load the DB in/from the working directory
# Note that you must specify a directory, not a file name.
dir ./

################################# REPLICATION #################################

# Master-Slave replication. Use slaveof to make a Redis instance a copy of
# another Redis server. Note that the configuration is local to the slave,
# so for example it is possible to configure the slave to save the DB with a
# different interval, or to listen on another port, and so on.
#
# slaveof <masterip> <masterport>

# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the slave to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the slave request.
#
# masterauth <master-password>

################################## SECURITY ###################################

# Require clients to issue AUTH <PASSWORD> before processing any other
# commands. This might be useful in environments in which you do not trust
# others with access to the host running redis-server.
#
# This should stay commented out for backward compatibility and because most
# people do not need auth (e.g. they run their own servers).
#
# requirepass foobared

################################### LIMITS ####################################

# Set the max number of connected clients at the same time. By default there
# is no limit, and it's up to the number of file descriptors the Redis process
# is able to open. The special value '0' means no limits.
# Once the limit is reached Redis will close all the new connections, sending
# an error 'max number of clients reached'.
#
# maxclients 128

# Don't use more memory than the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys with an
# EXPIRE set. It will try to start freeing keys that are going to expire
# in little time and preserve keys with a longer time to live.
# Redis will also try to remove objects from free lists if possible.
#
# If all this fails, Redis will start to reply with errors to commands
# that will use more memory, like SET, LPUSH, and so on, and will continue
# to reply to most read-only commands like GET.
#
# WARNING: maxmemory can be a good idea mainly if you want to use Redis as a
# 'state' server or cache, not as a real DB. When Redis is used as a real
# database the memory usage will grow over the weeks, it will be obvious if
# it is going to use too much memory in the long run, and you'll have the time
# to upgrade. With maxmemory, after the limit is reached you'll start to get
# errors for write operations, and this may even lead to DB inconsistency.
#
# maxmemory <bytes>
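# As an illustration only (not part of the original example file), a limit of
# roughly 100 MB would be expressed in plain bytes like this:
# maxmemory 104857600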

############################## APPEND ONLY MODE ###############################

# By default Redis asynchronously dumps the dataset on disk. If you can live
# with the idea that the latest records will be lost if something like a crash
# happens, this is the preferred way to run Redis. If instead you care a lot
# about your data and don't want a single record to be lost, you should
# enable the append only mode: when this mode is enabled Redis will append
# every write operation received to the file appendonly.log. This file will
# be read on startup in order to rebuild the full dataset in memory.
#
# Note that you can have both the async dumps and the append only file if you
# like (you have to comment out the "save" statements above to disable the dumps).
# Still, if append only mode is enabled, Redis will load the data from the
# log file at startup, ignoring the dump.rdb file.
#
# The name of the append only file is "appendonly.log"
#
# IMPORTANT: Check BGREWRITEAOF to see how to rewrite the append
# log file in the background when it gets too big.

appendonly no

# The fsync() call tells the operating system to actually write data on disk
# instead of waiting for more data in the output buffer. Some OS will really flush
# data on disk, some other OS will just try to do it ASAP.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log. Slow, safest.
# everysec: fsync only if one second passed since the last fsync. Compromise.
#
# The default is "always", which is the safest of the options. It's up to you to
# understand if you can relax this to "everysec", which will fsync every second,
# or to "no", which will let the operating system flush the output buffer when
# it wants, for better performance (but if you can live with the idea of
# some data loss consider the default persistence mode, which is snapshotting).

appendfsync always
# appendfsync everysec
# appendfsync no

############################### ADVANCED CONFIG ###############################

# Glue small output buffers together in order to send small replies in a
# single TCP packet. Uses a bit more CPU but most of the time it is a win
# in terms of number of queries per second. Use 'yes' if unsure.
glueoutputbuf yes

# Use object sharing. Can save a lot of memory if you have many common
# strings in your dataset, but performs lookups against the shared objects
# pool, so it uses more CPU and can be a bit slower. Usually it's a good
# idea.
#
# When object sharing is enabled (shareobjects yes) you can use
# shareobjectspoolsize to control the size of the pool used in order to try
# object sharing. A bigger pool size will lead to better sharing capabilities.
# In general you want this value to be at least double the number of
# very common strings you have in your dataset.
#
# WARNING: object sharing is experimental, don't enable this feature
# in production before Redis 1.0-stable. Still, try this feature in
# your development environment so that we can test it better.
# shareobjects no
# shareobjectspoolsize 1024


4. From the command line, change into the installation directory (or configure the environment variables so the Redis binaries are on your PATH), then start the server as shown below.
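A minimal sketch of starting the server, assuming the configuration file from step 3 was saved as redis.conf in the installation directory (adjust the binary name to your build):

./redis-server redis.conf
redis-server.exe redis.conf

With daemonize no and logfile stdout as configured above, the server stays in the foreground and prints its log to the console.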


5. Open another cmd window and run redis-cli.exe -h 127.0.0.1 -p 6379 (the port configured above). From there you can start issuing commands.
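A few commands to try once connected (an illustrative session; key names are arbitrary):

redis> SET greeting hello
OK
redis> EXPIRE greeting 120
(integer) 1
redis> TTL greeting
(integer) 120
redis> BGSAVE
Background saving started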
