Redis-related operations


PHP DLL:
https://github.com/phpredis/phpredis/downloads

Redis remote connection
vim redis.conf
requirepass {* *}    # set a password

# restart redis
kill {Redis PID}
redis-server

# PHP auth verification
$connect = $redis->connect($cfg['host'], $cfg['port'], $cfg['timeout']);
$cfg['password'] && $redis->auth($cfg['password']);

------------------------------------------------------------------------------
[Add Redis extension]
1. Install phpredis
wget https://github.com/nicolasff/phpredis/archive/2.2.4.tar.gz
Upload phpredis-2.2.4.tar.gz to the /usr/local/src directory
cd /usr/local/src    # enter the package directory
tar zxvf phpredis-2.2.4.tar.gz    # unpack
cd phpredis-2.2.4    # enter the source directory
/usr/local/php/bin/phpize    # generate the configure script with phpize
./configure --with-php-config=/usr/local/php/bin/php-config    # configure
make    # compile
make install    # install
After the installation completes, the extension path appears:
/usr/local/php/lib/php/extensions/no-debug-non-zts-20090626/

2. Configure PHP support
vi /usr/local/php/etc/php.ini    # edit the configuration file and add the following on the last line
extension=redis.so
:wq!    # save and exit

3. Restart services
sudo service nginx restart
sudo /etc/init.d/php-fpm restart

---------------------------------------A detailed look at the Redis configuration file---------------------------------------
Redis is an open-source, high-performance key-value store, similar to memcached. It is often described as a key-value in-memory storage system or in-memory database, and because it supports rich data structures it is also called a data structure server.

After compiling Redis, its configuration file redis.conf sits in the source directory; copy it into your working directory to use it. The parameters in redis.conf are explained in detail below:


1 daemonize no

By default, Redis does not run in the background; if you need it to, change this value to yes.

2 pidfile /var/run/redis.pid

When Redis runs in the background, it writes its PID to /var/run/redis.pid by default; you can configure a different path. When running multiple Redis services, specify a different PID file and port for each.

3 port 6379

The listening port; defaults to 6379.

4 #bind 127.0.0.1

Specifies that Redis accept requests only on the given IP address; if unset, requests on all interfaces are processed. For security, this should be set in production. It is commented out by default.
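For instance, to make Redis listen only on one internal interface, the line would be uncommented and set like this (the address is purely illustrative):

```
bind 192.168.1.100
```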

5 timeout 0

The client connection idle timeout, in seconds. When a client issues no commands within this time, the connection is closed; 0 disables the timeout.

6 tcp-keepalive 0

The interval for TCP keepalive probes, which the server sends to detect dead peers on long-lived connections. The default 0 disables them.

7 loglevel notice

There are four log levels: debug, verbose, notice, and warning. Production environments generally use notice.

8 logfile stdout

The log file path. Standard output is used by default, so logs are printed in the terminal window; change this to a log file path.

9 databases 16

Sets the number of databases; use the SELECT command to switch between them. Database 0 is used by default, and there are 16 databases by default.

10 save

save 900 1
save 300 10
save 60 10000

How often data snapshots are saved, i.e. how frequently data is persisted to the dump.rdb file. Each rule means "trigger a snapshot save when at least <changes> change operations have occurred within <seconds> seconds."


The default settings mean:

if (10000 keys changed within 60 seconds) {
    make a snapshot backup
} else if (10 keys changed within 300 seconds) {
    make a snapshot backup
} else if (1 key changed within 900 seconds) {
    make a snapshot backup
}
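The triggering logic above can be sketched in a few lines of Python (an illustration only, not Redis's implementation; the function name is made up):

```python
# Sketch of how "save <seconds> <changes>" rules trigger a snapshot.
# The rule list mirrors the defaults quoted above.
SAVE_RULES = [(900, 1), (300, 10), (60, 10000)]

def should_snapshot(seconds_since_last_save, changes_since_last_save):
    """True if any rule's time window has elapsed with enough changes."""
    return any(
        seconds_since_last_save >= seconds and changes_since_last_save >= changes
        for seconds, changes in SAVE_RULES
    )

print(should_snapshot(70, 10000))  # True: 10000 changes and >= 60s elapsed
print(should_snapshot(100, 5))     # False: too few changes for every window
print(should_snapshot(901, 1))     # True: a single change after 900s
```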

stop-writes-on-bgsave-error yes

Whether to stop accepting client write requests when a persistence error occurs. The default yes means writes are refused: once saving snapshot data fails, the server becomes read-only until a snapshot succeeds. With no, a failed snapshot is simply ignored and the next one is unaffected, but after a failure the data can only be restored to the "last successful point".

rdbcompression yes

Whether RDB files are compressed when a data snapshot backup is taken; default yes. Compression needs some extra CPU but effectively reduces the RDB file size, which helps storage, backup, transfer, and data recovery.

rdbchecksum yes

Whether to checksum RDB files: when yes (the default), a CRC64 checksum is appended to the end of each RDB file so that third-party tools can verify file integrity. This costs roughly 10% performance when saving and loading RDB files.

dbfilename dump.rdb

The file name of the snapshot backup; defaults to dump.rdb.

dir ./

The directory where the RDB/AOF files for database backups are placed. The path and the file name are configured separately because, during a backup, Redis writes the current database state to a temporary file and only replaces the file named above with it once the backup completes; both the temporary file and the configured backup file are placed in this directory.

# slaveof <masterip> <masterport>

Makes this instance a replica (slave) of another Redis instance by specifying its master's address and port.

masterauth <master-password>

When the master requires password authentication, specify the password here.

slave-serve-stale-data yes

Whether the slave may still serve clients when the master is down or a master-slave sync is in progress. With yes, the slave continues to provide read-only service, though the data it returns may be stale. With no, any data request sent to this server (whether from a client or from this server's own slaves) is answered with an error.

slave-read-only yes

Makes the slave read-only; yes is strongly recommended.
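Putting the replication directives above together, a minimal slave-side configuration might look like this (the master address and password are placeholders):

```
slaveof 192.168.1.10 6379
masterauth mymasterpassword
slave-serve-stale-data yes
slave-read-only yes
```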

# repl-ping-slave-period 10

The interval, in seconds, at which the slave sends ping messages to its master; defaults to 10.

# repl-timeout 60

The maximum idle time for slave-master communication; default 60 seconds. A timeout causes the connection to be closed.

repl-disable-tcp-nodelay no

Whether to disable the TCP_NODELAY option on the slave's connection to the master. yes disables it: replication data is coalesced into larger packets (packet size limited by the socket buffers), which makes the TCP interaction more efficient, but small writes are buffered rather than sent immediately, so the receiver may see some delay. no enables TCP_NODELAY: data is sent immediately, with better timeliness but lower efficiency. no is recommended.

slave-priority 100

Used by the Sentinel module (unstable; master-slave cluster management and monitoring, which needs additional configuration file support). The slave's priority weight, default 100. When the master fails, Sentinel finds the slave with the lowest weight value (>0) in the slave list and promotes it to master. A weight of 0 marks the slave as an "observer" that does not participate in master election.

# requirepass foobared

Sets the password clients must supply (with AUTH) before any other command is accepted. Warning: because Redis is very fast, an external attacker can attempt around 150k passwords per second against a well-equipped server, so you need a very strong password to resist brute force.

# rename-command CONFIG 3ed984507a5dcd722aeade310065ce5d (e.g. MD5('config^!'))

Renames commands. For "server control" commands that you may not want remote clients (non-administrator users) to use, rename them to hard-to-guess strings.
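An obscured replacement name can be generated the way the comment suggests, e.g. with Python's hashlib (the salt string 'config^!' is just the example from the comment above; any hard-to-guess string works):

```python
# Derive a hard-to-guess replacement name for a renamed command.
import hashlib

digest = hashlib.md5(b"config^!").hexdigest()
print(digest)  # a 32-character hex string to use in rename-command
```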

# maxclients 10000

Limits the number of simultaneously connected clients. When the connection count exceeds this value, Redis stops accepting new connections and a connecting client receives an error. The default is 10000; take the system's file-descriptor limits into account and do not set it needlessly large.

# maxmemory <bytes>

The maximum memory (in bytes) the Redis cache may use; the default 0 means "no limit", so usage is ultimately bounded by the OS's physical memory (swap may be used when physical memory is insufficient). The value should not exceed the machine's physical memory; from a performance and deployment perspective, about 3/4 of physical memory is reasonable. This setting works together with maxmemory-policy: when the in-memory data reaches maxmemory, the configured "purge policy" is triggered, and in the "out of memory" case any write operation (SET, LPUSH, etc.) triggers its execution. In a real environment, it is recommended that all Redis machines have consistent hardware (consistent memory) and that the maxmemory policy be configured identically on master and slaves.

When memory is full and a SET command is received, Redis first tries to remove keys that carry expire information, regardless of whether their expiry time has arrived, removing the keys closest to expiry first. If removing all keys with expire information still does not free enough memory, an error is returned; Redis then no longer accepts write requests and serves only GET requests. The maxmemory setting is most appropriate when Redis is used as a memcached-like cache.

# maxmemory-policy volatile-lru

The data eviction policy used when memory is exhausted; defaults to volatile-lru.

volatile-lru: apply the LRU (least recently used) algorithm to the keys in the expired set. A key enters the expired set when it is given a time-to-live with the EXPIRE command; expired/least-recently-used data is removed first. If evicting everything in the expired set still does not satisfy the memory requirement, an OOM error results.
allkeys-lru: apply the LRU algorithm to all keys.
volatile-random: remove randomly chosen keys from the expired set until memory is sufficient. If removing everything from the expired set is still not enough, OOM.
allkeys-random: remove randomly chosen keys from all keys until memory is sufficient.
volatile-ttl: among the keys in the expired set, remove the ones closest to expiry (smallest TTL) first.
noeviction: evict nothing; return an OOM error directly.
In addition, if expired data disappearing causes no problems for the application and write operations are dense, allkeys-lru is recommended.
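A rough way to see how these policies differ is by their candidate sets: volatile-* policies only ever consider keys that carry a TTL, while allkeys-* consider everything. The sketch below illustrates just that distinction (it is not Redis's actual sampled, approximate LRU):

```python
def eviction_candidates(keys, policy):
    """keys: dict of name -> {'ttl': seconds or None}; returns evictable keys."""
    if policy.startswith("volatile"):
        # only keys that were given expire information are candidates
        return {k: v for k, v in keys.items() if v["ttl"] is not None}
    if policy.startswith("allkeys"):
        return dict(keys)
    return {}  # noeviction: nothing may be removed

keys = {
    "session:1": {"ttl": 60},     # has expire information
    "cache:hot": {"ttl": None},   # no TTL set
}
print(sorted(eviction_candidates(keys, "volatile-lru")))  # ['session:1']
print(sorted(eviction_candidates(keys, "allkeys-lru")))   # ['cache:hot', 'session:1']
print(eviction_candidates(keys, "noeviction"))            # {}
```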
# maxmemory-samples 3

Default 3. The LRU and minimum-TTL policies above are not exact algorithms but approximations estimated by sampling; this value sets how many keys are sampled per check.

appendonly no

By default, Redis persists database snapshots to disk asynchronously in the background, but a snapshot is time-consuming and cannot run very often. Redis therefore offers another, more robust way to do backup and disaster recovery: with append-only mode enabled, Redis appends every write request it receives to the appendonly.aof file, and when Redis restarts it replays that file to recover the previous state. This causes appendonly.aof to grow large, so Redis also supports the BGREWRITEAOF command to compact it. If data migrations are not performed frequently, a common production practice is to turn snapshotting off, turn appendonly.aof on, and optionally run BGREWRITEAOF once a day during low-traffic hours.

Additionally, for the master, which mainly handles writes, AOF is recommended; for the slaves, which mainly handle reads, enable AOF on one or two of them and leave it off on the rest.
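The practice described above (snapshotting off, AOF on) corresponds roughly to the following redis.conf lines; treat this as a sketch rather than a drop-in configuration:

```
save ""                          # disable RDB snapshots
appendonly yes                   # enable the append-only log
appendfilename "appendonly.aof"
appendfsync everysec             # fsync once per second
```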
# appendfilename appendonly.aof

The AOF file name; defaults to appendonly.aof.


# appendfsync always
appendfsync everysec
# appendfsync no

Sets how often the appendonly.aof file is fsynced to disk. always syncs after every write operation; everysec accumulates writes and syncs once per second; no never fsyncs actively and leaves it to the OS. Configure this according to the actual business scenario.

no-appendfsync-on-rewrite no

Controls the fsync policy for appending new AOF records while an AOF rewrite is in progress, trading off disk I/O load against request blocking time. The default no means "no delay": new AOF records are still synced immediately.
auto-aof-rewrite-percentage 100

Rewrite the AOF log once it has grown by this percentage over its size after the last rewrite. Setting it to 0 disables automatic AOF rewriting: the file is never compacted, but the most complete history is preserved.

auto-aof-rewrite-min-size 64mb

The minimum AOF file size required to trigger an automatic rewrite.

lua-time-limit 5000

The maximum time, in milliseconds, a Lua script is allowed to run.
slowlog-log-slower-than 10000

"Slow log" threshold in microseconds (1/1,000,000 of a second, so 10000 = 10 ms). If an operation takes longer than this, the command is recorded in the slow log (which lives in memory, not in a file). "Operation time" excludes network I/O; it covers only the in-memory execution time after the request reaches the server. 0 records all operations.

slowlog-max-len 128

The maximum number of slow-log entries retained. The log behaves as a queue: once this length is exceeded, the oldest records are removed. Entries can be inspected with "SLOWLOG <subcommand> args" (e.g. SLOWLOG GET 10, SLOWLOG RESET).


hash-max-ziplist-entries 512

Hash-type data can be encoded as either a ziplist or a hashtable. A ziplist needs little space for file storage (and memory storage), and when the content is small its performance is almost identical to a hashtable's, so Redis uses ziplist for hashes by default. If the number of entries in the hash or the value length reaches the threshold, it is converted to a hashtable.

This parameter is the maximum number of entries allowed in the ziplist; the default is 512, and 128 is recommended.
hash-max-ziplist-value 64

The maximum number of bytes allowed per ziplist entry value; the default is 64, and 1024 is recommended.


list-max-ziplist-entries 512
list-max-ziplist-value 64

For the list type, two encoding types are used: ziplist and linkedlist. The explanation is the same as above.

set-max-intset-entries 512

The maximum number of entries allowed in an intset; when the threshold is reached, the set is converted to a hashtable.


zset-max-ziplist-entries 128
zset-max-ziplist-value 64

A zset is a sorted set with two encoding types: ziplist and skiplist. Because "sorting" consumes extra performance, a zset is converted to skiplist once it holds more data.

activerehashing yes

Whether to enable incremental rehashing of the top-level hash tables; if memory allows, enable it. Rehashing can greatly improve key-value access efficiency.


client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60

Client buffer control. In client-server interaction, each connection has an associated buffer that queues the responses waiting for the client to consume. If the client cannot consume responses in time, the buffer keeps growing and puts memory pressure on the server; once the backlog reaches the threshold, the connection is closed and the buffer discarded.

The buffer classes are: normal (ordinary connections), slave (replica connections), and pubsub (pub/sub connections). The pubsub class runs into this problem most often, because the publishing side can produce messages densely while the subscriber side may not keep up.
The directive format is: client-output-buffer-limit <class> <hard> <soft> <seconds>. soft is the "tolerable" value and works together with seconds: if the buffer stays above soft continuously for seconds, the connection is closed; if it exceeds soft but drops back below within seconds, the connection is kept. Exceeding hard closes the connection immediately.
Setting both hard and soft to 0 disables buffer control. Usually the hard value is greater than soft.
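The soft/hard rule can be sketched as follows (an illustration of the decision just described, not Redis's source; the names are made up):

```python
MB = 1024 * 1024

def should_close(buffer_bytes, secs_over_soft, hard, soft, soft_seconds):
    """Close when the hard limit is exceeded, or when the soft limit has
    been exceeded continuously for soft_seconds. Zeroed limits are disabled."""
    if hard and buffer_bytes > hard:
        return True
    if soft and buffer_bytes > soft and secs_over_soft >= soft_seconds:
        return True
    return False

# slave class defaults from above: hard 256mb, soft 64mb over 60 seconds
print(should_close(300 * MB, 0, 256 * MB, 64 * MB, 60))   # True: hard limit hit
print(should_close(100 * MB, 10, 256 * MB, 64 * MB, 60))  # False: over soft, but only briefly
print(should_close(100 * MB, 60, 256 * MB, 64 * MB, 60))  # True: over soft for 60s
print(should_close(100 * MB, 999, 0, 0, 0))               # False: limits disabled
```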

hz 10

The frequency (in times per second) at which the Redis server runs its periodic background tasks; default 10, and the value must be greater than 0 and less than 500. "Background tasks" include scanning the expired-key set, closing idle-timeout connections, and so on. The larger the value, the more CPU cycles are spent and the more frequently the tasks are polled; the smaller the value, the poorer the "memory sensitivity" (expired keys and dead connections are noticed later). The default value is recommended.


# include /path/to/local.conf
# include /path/to/other.conf

Load additional configuration files.
