Installing Redis on Linux with a Custom Port

Source: Internet
Author: User
Tags: allkeys, benchmark, lua, volatile, install redis, redis server

1. Download Redis: https://redis.io/download

2. Extract and compile:

$ tar xzf redis-3.2.9.tar.gz
$ cd redis-3.2.9
$ make
3. After the build, the src directory contains three executables: redis-server, redis-cli, and redis-benchmark. Copy them to a dedicated folder (used later to start and stop Redis), and copy redis.conf from the source root into the same folder:

$ mkdir -p /usr/redis
$ cp src/redis-server src/redis-benchmark src/redis-cli /usr/redis
$ cp redis.conf /usr/redis

Open redis.conf:



bind 127.0.0.1  # Accept requests only from this IP address; if unset, all requests are processed. Best to set this in a production environment for security. Enabled by default since version 3.2.
protected-mode yes
port 6379  # Listening port, defaults to 6379
tcp-backlog 511
timeout 0  # Client idle timeout in seconds; the connection is closed when the client issues no commands within this time. 0 disables the timeout.
tcp-keepalive 0  # Whether to keep TCP connections alive with server-side keepalive probes. The default 0 disables it.
################################# General #####################################
daemonize yes  # Whether to start as a daemon
supervised no
pidfile /var/run/redis_6379.pid  # Location of the pid file
loglevel notice  # Log levels: debug, verbose, notice, and warning. notice is typical in a production environment
logfile ""
databases 16  # Number of databases; switch between them with the SELECT command. Database 0 is used by default; the default count is 16


# Snapshot frequency: how often data is persisted to the dump.rdb file. Each line
# means "trigger a snapshot save if at least N changes occurred within M seconds".
save 900 1
save 300 10
save 60 10000
These default settings mean:
if (10000 keys changed within 60 seconds) {
    perform a snapshot backup
} else if (10 keys changed within 300 seconds) {
    perform a snapshot backup
} else if (1 key changed within 900 seconds) {
    perform a snapshot backup
}
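If RDB snapshots are not wanted at all (for example when relying on the append-only file instead, as discussed further down), the save lines can be replaced with an empty string; this is standard redis.conf syntax:

```conf
# Disable RDB snapshotting entirely (assumes AOF, or no persistence, is acceptable)
save ""
```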


# Whether to stop accepting writes when persistence fails. The default "yes" means all client write
# requests are refused once a snapshot save fails, so the server effectively becomes read-only until
# snapshots succeed again. With "no", the failed snapshot is simply skipped and the next one is
# unaffected, but after a failure the data can only be restored to the last successful snapshot.
stop-writes-on-bgsave-error yes


# Whether to compress the RDB file during snapshot backup; the default is yes. Compression costs
# some extra CPU, but it effectively reduces the RDB file size, which helps with storage, backup,
# transfer, and data recovery.
rdbcompression yes

rdbchecksum yes

# File name of the snapshot backup, defaults to dump.rdb
dbfilename dump.rdb

# Directory where the RDB/AOF files are placed. The path and file name are configured separately
# because Redis writes the current database state to a temporary file while the backup is in
# progress, then replaces it with the file named above once the backup completes. Both the
# temporary file and the backup file live in this directory.
dir ./

# Whether clients may still read possibly stale data while the master is down or replication is in
# progress. With "yes", the slave keeps serving read-only requests even though its data may be out
# of date; with "no", any data request sent to this server (whether from clients or from this
# server's own slaves) is answered with an error.
slave-serve-stale-data yes

# Whether the slave is read-only; "yes" is strongly recommended
slave-read-only yes

# Make this server a replica of another database and specify its master:
# slaveof <masterip> <masterport>
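As a sketch, a replica pointed at a hypothetical master at 192.168.1.10 (address and password are placeholders, not values from this guide) would look like:

```conf
# Hypothetical master address and password, for illustration only
slaveof 192.168.1.10 6379
masterauth mysecret
```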


# If the master requires password authentication, specify the password here
# masterauth <master-password>


# Interval in seconds at which the slave pings its master; defaults to 10
repl-ping-slave-period 10


# Maximum idle time for slave-master communication, default 60 seconds; a timeout closes the connection
# repl-timeout 60




repl-diskless-sync no
repl-diskless-sync-delay 5


# Whether to disable the TCP_NODELAY option on the slave's connection to the master. "yes" batches
# data into larger packets (packet size limited by the socket buffer), which makes socket traffic
# (TCP interactions) more efficient, but small writes are buffered rather than sent immediately,
# adding latency for the receiver. "no" enables TCP_NODELAY so all data is sent immediately:
# better timeliness, lower efficiency. "no" is recommended.
repl-disable-tcp-nodelay no


# Used by the Sentinel module (unstable; master-slave cluster management and monitoring; requires
# an additional configuration file). Slave priority value, default 100. When the master fails,
# Sentinel promotes the slave with the lowest priority value (> 0) from the slave list. A priority
# of 0 marks the slave as an "observer" that never takes part in master election.
slave-priority 100

# Data eviction policy when memory runs out:
# volatile-lru    -> apply an LRU (least recently used) algorithm to keys in the "expired set"
#                    (a key enters this set when it is given an EXPIRE). Expired/LRU data is
#                    removed first; if removing the entire expired set still does not free enough
#                    memory, an OOM error results.
# allkeys-lru     -> apply the LRU algorithm to all keys
# volatile-random -> randomly remove keys from the expired set until memory suffices; if removing
#                    the entire expired set is still not enough, an OOM error results
# allkeys-random  -> randomly remove keys from all data until memory suffices
# volatile-ttl    -> among keys in the expired set, remove those closest to expiring (smallest TTL)
# noeviction      -> evict nothing; simply return an OOM error
# If data expiring causes no problems for the application and the system is write-intensive,
# allkeys-lru is recommended.
# maxmemory-policy volatile-lru
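An eviction policy only takes effect together with a memory cap, which is set by the separate maxmemory directive. A minimal sketch (the 2gb limit is an arbitrary example, not a value from this guide):

```conf
# Cap memory at an illustrative 2 GB and evict least-recently-used keys when full
maxmemory 2gb
maxmemory-policy allkeys-lru
```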


# By default, Redis asynchronously backs up the database to disk in the background, but each
# backup is time-consuming and backups are therefore infrequent. So Redis offers another, more
# robust backup and disaster-recovery approach: once append-only mode is enabled, Redis appends
# every write request it receives to the appendonly.aof file, and on restart it recovers the
# previous state from that file. This makes appendonly.aof grow very large over time, so Redis
# also supports the BGREWRITEAOF command to compact it. Unless you frequently migrate data, a
# common production practice is to turn off RDB snapshots, enable appendonly.aof, and rewrite it
# periodically during off-peak hours. On the master, which mostly handles writes, AOF is
# recommended; among the slaves, which mostly handle reads, enable AOF on one or two and leave it
# off on the rest.
appendonly no




appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes


# Maximum time in milliseconds a Lua script may run
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""


############################### ADVANCED CONFIG ###############################
# Hashes with no more than hash-max-ziplist-entries fields are encoded as a ziplist; larger ones use a real hash table.
hash-max-ziplist-entries 512
# Hashes whose values are no larger than hash-max-ziplist-value bytes use a ziplist; larger values force a real hash table.
hash-max-ziplist-value 64


# Lists with no more than list-max-ziplist-entries elements use a ziplist; larger ones use a linked list.
list-max-ziplist-entries 512
# Lists whose values are no larger than list-max-ziplist-value bytes use a ziplist; larger values use a linked list.
list-max-ziplist-value 64


# Sets with no more than set-max-intset-entries integer members use an intset; larger ones use a regular set encoding.
set-max-intset-entries 512


# Sorted sets with no more than zset-max-ziplist-entries elements use a ziplist; larger ones use the full zset encoding.
zset-max-ziplist-entries 128
# Sorted sets whose values are no larger than zset-max-ziplist-value bytes use a ziplist; larger values use the full zset encoding.
zset-max-ziplist-value 64


# HyperLogLogs no larger than hll-sparse-max-bytes use the sparse representation; larger ones use
# the dense representation. Values above 16000 are almost useless; about 3000 is recommended. If
# CPU is plentiful but space is tight, around 10000 can be used instead.
hll-sparse-max-bytes 3000
# Every 100 milliseconds, Redis spends 1 millisecond of CPU time incrementally rehashing its main
# hash table, which reduces memory use. If your scenario has very strict real-time requirements
# and an occasional 2-millisecond delay on a request is unacceptable, set this to no. Without such
# stringent requirements, leave it at yes so memory is freed as quickly as possible.
activerehashing yes


client-output-buffer-limit normal 0 0 0


# For slave and monitor clients: the server disconnects the client immediately if its output
# buffer exceeds 256mb, or stays above 64mb for 60 seconds.
client-output-buffer-limit slave 256mb 64mb 60


# For pub/sub clients: the server disconnects the client immediately if its output buffer exceeds
# 32mb, or stays above 8mb for 60 seconds.
client-output-buffer-limit pubsub 32mb 8mb 60
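The three lines above all follow the same redis.conf syntax, which is worth spelling out once:

```conf
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
# <class> is one of: normal, slave, pubsub
# Hard limit: disconnect immediately when exceeded.
# Soft limit: disconnect when exceeded for <soft seconds> continuously. 0 disables a limit.
```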


# Redis runs its background tasks hz times per second (i.e. every 1/hz seconds).
hz 10
# During an AOF rewrite, if the aof-rewrite-incremental-fsync switch is on, the file is fsynced
# every 32mb written. Flushing the file to disk incrementally this way avoids large latency spikes.
aof-rewrite-incremental-fsync yes



Start the server with the configuration file:

$ ./redis-server redis.conf

Starting multiple Redis instances:
One Redis server can host several nodes, each assigned its own port (6380, 6381, ...); the default port is 6379.
Each node gets its own configuration file, e.g. redis6380.conf, redis6381.conf.

# cp redis.conf redis6380.conf

# vi redis6380.conf

In redis6380.conf, change:

pidfile /var/run/redis/redis_6380.pid

port 6380

logfile /var/log/redis/redis_6380.log

dbfilename dump_6380.rdb

(The other instances' configuration files are modified in the same way.)
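The per-instance edits above can also be scripted. The sketch below derives the 6380 instance's config by rewriting the port-specific values; the base file here is a minimal stand-in for the real redis.conf, and the paths are illustrative:

```shell
# Illustrative stand-in for the shipped redis.conf (only the port-specific lines)
cat > redis.conf <<'EOF'
port 6379
pidfile /var/run/redis/redis_6379.pid
logfile /var/log/redis/redis_6379.log
dbfilename dump_6379.rdb
EOF

# Every occurrence of the old port number becomes the new one
sed 's/6379/6380/g' redis.conf > redis6380.conf
cat redis6380.conf
```

The same one-liner with 6381 produces the next instance's file.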

To start the instances:

# redis-server /usr/local/redis/redis6380.conf

# redis-server /usr/local/redis/redis6381.conf

