There are many KV caches. The most commonly used is memcached, but it suffers from a single-point-of-failure problem; a replicated fork of it (repcached, developed in Japan) exists, though few people use it. The arrival of Redis made replicated in-memory KV storage a practical reality. Today's topic is a simple Redis master-slave cluster. Redis installation and configuration were already covered in an earlier TTLSA article, so without further ado, let's get to the point.
About Redis
Redis is a key-value storage system. Like memcached, it keeps data in memory, but it supports a richer set of value types: strings, lists (linked lists), sets, and zsets (sorted sets). These types support operations such as push/pop, add/remove, and set intersection, union, and difference, and all of these operations are atomic. On top of this, Redis supports sorting in a variety of ways. As with memcached, data is cached in memory for efficiency. The difference is that Redis periodically writes updated data to disk, or appends each modifying operation to a log file, and implements master-slave synchronization on this basis.
Redis is a high-performance key-value database. Its appearance largely compensates for the shortcomings of key/value stores such as memcached, and in some scenarios it is a good complement to a relational database. It provides Python, Ruby, Erlang, and PHP clients, which makes it very convenient to use.
1. Download the package
# cd /usr/local/src/
# wget http://redis.googlecode.com/files/redis-2.6.11.tar.gz
2. Redis Installation
Redis needs to be installed on both the master and the slave.
# tar -xzvf redis-2.6.11.tar.gz
# mv redis-2.6.11 /usr/local/
# cd /usr/local/redis-2.6.11/
# make
Note: there is no need to run make install; we use the freshly built binaries directly.
3. Redis Configuration
Locate the configuration file /usr/local/redis-2.6.11/redis.conf
Modify the following content:
daemonize no   changed to yes                                        # run in the background as a daemon
port 6379      changed to 12002                                      # listening port
dir ./         changed to /data/redis_12002/ or /www/redis_12002/    # data directory
See the documentation for other settings; the full set of configuration parameters is appended at the end of this article.
4. Redis Startup and shutdown
Start
/usr/local/redis-2.6.11/src/redis-server /usr/local/redis-2.6.11/redis.conf
Stop it
/usr/local/redis-2.6.11/src/redis-cli -p 12002 shutdown
5. Redis Command Test
First log in with the command-line client
/usr/local/redis-2.6.11/src/redis-cli -p 12002
Set test
redis 127.0.0.1:12002> set name ABC
OK <--- success
Get test
redis 127.0.0.1:12002> get name
"ABC"
Lists, hashes, and the other types are not demonstrated here; see the relevant documentation for details.
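Under the hood, redis-cli encodes every command in Redis's simple text protocol (RESP) before sending it over the socket. The following is an illustrative sketch of that encoding; it is not needed for day-to-day use of the client:

```python
def encode_resp(*args):
    """Encode a Redis command as a RESP array of bulk strings.

    For example, SET name ABC becomes:
    *3\r\n$3\r\nSET\r\n$4\r\nname\r\n$3\r\nABC\r\n
    """
    out = "*%d\r\n" % len(args)            # array header: number of arguments
    for a in args:
        out += "$%d\r\n%s\r\n" % (len(a), a)  # bulk string: length, then bytes
    return out.encode()

print(encode_resp("SET", "name", "ABC"))
print(encode_resp("GET", "name"))
```

Sending these bytes over a TCP connection to port 12002 would produce exactly the `OK` and `"ABC"` replies shown above.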
6. Redis Master-slave configuration
6.1 Only the slave's configuration needs to be modified
Locate the configuration file /usr/local/redis-2.6.11/redis.conf
Modify the following content:
slaveof 192.168.77.211 12002   # slaveof <master IP> <master port>
6.2 Master-Slave testing
In the master set
redis 192.168.77.211:12002> set testms GoGoGo
OK
In slave get
redis 192.168.77.197:12002> get testms
"GoGoGo" <--- value retrieved from the slave
7. Appendix: the Redis configuration file
daemonize yes
pidfile /var/run/redis.pid
port 12002
timeout 0
tcp-keepalive 0
loglevel notice
logfile stdout
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /www/redis_12002/
slave-serve-stale-data yes
slave-read-only yes
repl-disable-tcp-nodelay no
slave-priority 100
appendonly no
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
To turn a standalone Redis into a slave, you only need to add to its configuration file:
slaveof 192.168.77.211 12002   # <master IP> <master port>
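The three `save` lines in the configuration above mean: take an RDB snapshot if there has been at least 1 change in 900 s, 10 changes in 300 s, or 10000 changes in 60 s. A hypothetical re-implementation of that trigger logic, for illustration only (the real check lives inside redis-server):

```python
# Hypothetical sketch of Redis's RDB "save" rule check, matching the
# three rules from the configuration file above.
SAVE_RULES = [(900, 1), (300, 10), (60, 10000)]  # (seconds, min_changes)

def should_bgsave(seconds_since_last_save, changes_since_last_save,
                  rules=SAVE_RULES):
    """A snapshot triggers when any rule's time AND change thresholds are met."""
    return any(seconds_since_last_save >= secs and
               changes_since_last_save >= changes
               for secs, changes in rules)

print(should_bgsave(61, 10000))   # heavy write burst -> True
print(should_bgsave(100, 5))      # few changes, too recent -> False
```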
8. Concluding remarks
Of course, this is only the first step toward a cluster; you can use keepalived to implement master failover. In our work the most common setup is plain Redis master-slave, so a keepalived + Redis high-availability cluster is not covered in the main text.
Based on a number of tests, the following plan has been put together:
1. Redis Performance
Some simple Redis benchmarks, for reference only:
Test environment: RedHat 6.2, Xeon E5520 (4 cores) x2 / 8 GB RAM, 1000M NIC
Redis version: 2.6.9
The client machines run simple GET and SET operations using redis-benchmark:
1.1 single-instance test
1. Value size: 10 bytes to 1390 bytes
Processing speed: about 75,000 ops/s, limited by the server's single-threaded capacity
2. Value size: around 1400 bytes
Processing speed drops to about 50,000 ops/s even though the NIC is not saturated: the request packet exceeds the MTU, causing TCP segmentation, so the server handles twice as many interrupts per request and throughput drops sharply.
3. Value size: > 1.5 KB
The 1000M NIC is saturated and speed is limited by NIC bandwidth
Processing speed falls off roughly with packet size (the original article charted this relationship; chart omitted here).
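The drop near 1400-byte values can be checked with back-of-envelope arithmetic. Assumptions in this sketch: a standard 1500-byte Ethernet MTU, roughly 40 bytes of IP+TCP headers, and RESP framing for a benchmark-style `SET key:000000000001 <value>` command (the exact key name is illustrative):

```python
# Rough arithmetic behind the throughput drop at value size ~1400 bytes.
# Assumptions: 1500-byte Ethernet MTU, ~40 bytes IP+TCP headers, RESP
# framing for a SET with a 16-byte key and a 1400-byte value.
MTU = 1500
IP_TCP_HEADERS = 40
resp_overhead = len("*3\r\n$3\r\nSET\r\n$16\r\nkey:000000000001\r\n$1400\r\n\r\n")

def segments_needed(value_size):
    payload = value_size + resp_overhead
    max_payload = MTU - IP_TCP_HEADERS      # usable bytes per TCP segment
    return -(-payload // max_payload)       # ceiling division

print(segments_needed(1390))  # 1 segment: the request still fits in one MTU
print(segments_needed(1430))  # 2 segments: TCP splits it, interrupts double
```

Once a request crosses into two segments, the server processes two interrupts per request, which matches the sharp drop observed above.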
1.2 Multi-instance testing
The prerequisite is that NIC soft interrupts are balanced across CPU cores; the test machine's NIC has RSS enabled with 16 queues.
Operation: SETs with 10-byte values. The server ran 8 instances, and four client machines each ran two redis-benchmark processes. Each client reached nearly 40,000 ops/s, for a server total of around 300,000 ops/s.
NIC traffic was also recorded (chart omitted here).
All 8 physical cores were saturated (hyper-threading was not used); since the test had already achieved a good result, it was not continued. A single instance saturates one core at 75,000 ops/s, while 8 instances saturate 8 cores at 300,000 ops/s, so CPU usage and throughput do not scale proportionally. With RSS, the redis-server thread migrates between cores on nearly every request, and soft interrupts consume too much CPU. RPS/RFS may be more appropriate here: map RSS to only one or two cores, then forward soft interrupts dynamically based on the redis-server port, so that each Redis process stays on one core and unnecessary migrations are reduced.
Running multiple instances makes full use of the system's CPU and the NIC's packet-processing capacity. Consider the business scenario: average packet size, CPU cost per operation, and traffic volume. If multiple instances are used to raise throughput, NIC soft-interrupt balancing must be configured, otherwise throughput will not improve.
2. Redis Persistence
Test strategy: AOF + periodic BGREWRITEAOF
1. Data set prepared:
100 million entries, key: 12 bytes, value: 15 bytes, stored as strings; the process consumes 12 GB of memory
2. Dump (RDB):
File size 2.8 GB, execution time 95 s, restart load time 112 s
3. BGREWRITEAOF:
File size 5.1 GB, execution time 95 s, restart load time 165 s
4. Performance impact after enabling AOF (fsync once per second):
At 8,000 SET ops/s: CPU rose from 20% to 40%
5. After modifying 10 million entries:
File size 5.6 GB, restart load time 194 s
6. After modifying 20 million entries:
File size 6.1 GB, restart load time 200 s
Note: Redis 2.4 made many fsync optimizations, so BGREWRITEAOF and BGSAVE no longer affect Redis's ability to serve requests while they run.
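Simple arithmetic on the figures above yields rough restart-load rates, which is useful when estimating recovery time for larger data sets:

```python
# Rough load-rate arithmetic from the persistence test figures above.
MB_PER_GB = 1024

rdb_size_mb, rdb_load_s = 2.8 * MB_PER_GB, 112   # dump file / restart load time
aof_size_mb, aof_load_s = 5.1 * MB_PER_GB, 165   # rewritten AOF / restart load time

rdb_rate = rdb_size_mb / rdb_load_s
aof_rate = aof_size_mb / aof_load_s
print("RDB load rate: %.1f MB/s" % rdb_rate)   # ~25.6 MB/s
print("AOF load rate: %.1f MB/s" % aof_rate)   # ~31.7 MB/s
```

The AOF loads at a slightly higher byte rate, but because the file is nearly twice as large, total recovery still takes about 50% longer than from the RDB.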
3. Redis Master-slave replication
Because the current version, unlike MySQL master-slave replication, has no incremental resync, network stability requirements are very high: every reconnect triggers a full sync, which is a heavy burden on both servers and network.
In the current production environment, master and slave are deployed in the same rack, and no resync has been needed for several months.
4. Keepalived Introduction
Reference, the official document: http://keepalived.org/pdf/sery-lvs-cluster.pdf
keepalived is routing software written in C. Used together with IPVS it provides load balancing, and through the VRRP protocol it provides high availability. The latest version is currently 1.2.7. Machines running keepalived use VRRP to fail a VIP over between them, with switchover at the second level and no split-brain problem.
It can implement one master with multiple backups: when the master dies a new one is elected automatically and the VIP floats over within seconds, and a specified script can be run on each state change to adjust the service.
For example, with two hosts A and B, switching proceeds as follows:
1. A and B start; A becomes master, B becomes backup
2. A dies; B takes over the traffic as the new master
3. A comes back up and becomes a slave of B (slaveof B)
4. B dies; A switches back to master
Both machines can act as masters: with plain master-slave you can do read/write splitting, or with multiple VIPs each machine can be half master and half slave, backing each other up. Both machines then carry part of the traffic, and when one goes down all traffic is concentrated on the other.
Installation and configuration are fairly simple:
Required dependency packages: openssl-devel (libssl-dev on Ubuntu) and popt-devel (libpopt-dev on Ubuntu).
Default configuration path: /etc/keepalived/keepalived.conf. You can also specify a path manually, but note that it must be an absolute path. Take care that the configuration is correct: keepalived does not check whether it conforms to the rules.
Run keepalived -d to start three daemons: a parent process, a health checker, and a VRRP process. -d writes the log to /var/log/message, where state transitions can be observed.
Note the problem:
1. VRRP is a multicast protocol, so the master, the backups, and the VIP must all be in the same VLAN
2. Different VIPs need different vrids; a vrid within one VLAN must not conflict with other groups
3. keepalived has two roles: master (one) and backup (several). If one node is configured as master, then after it fails and comes back the traffic will inevitably switch again, which is unacceptable for stateful services. The solution is to configure both machines as backup, and give the higher-priority one the nopreempt option so it does not preempt.
5. High-availability scenarios through keepalived
Switchover process:
1. When the master dies, the VIP floats to the slave; keepalived on the slave notifies Redis to execute slaveof no one, and it starts serving traffic
2. When the old master comes back, the VIP stays where it is; its keepalived notifies Redis to execute slaveof <new master IP> <port>, and it starts syncing data as a slave
3. And vice versa
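The switchover sequence above can be sketched as a small state machine. This is a hypothetical simulation for illustration only; real keepalived drives these transitions via VRRP priorities and notify scripts:

```python
# Hypothetical simulation of the keepalived + Redis failover sequence
# described above. Real keepalived does this via VRRP and notify scripts.
class Node:
    def __init__(self, name):
        self.name, self.alive, self.role = name, True, "slave"

def elect(nodes, vip_holder):
    """Give the VIP to a live node; promote it, demote the other live nodes."""
    if vip_holder is None or not vip_holder.alive:
        vip_holder = next(n for n in nodes if n.alive)
    for n in nodes:
        if not n.alive:
            continue
        # VIP holder runs "slaveof no one"; the rest run "slaveof <holder>"
        n.role = "master" if n is vip_holder else "slave"
    return vip_holder

a, b = Node("A"), Node("B")
vip = elect([a, b], None)      # both up: A holds the VIP as master
a.alive = False
vip = elect([a, b], vip)       # A dies: VIP floats to B, slaveof no one
a.alive = True
vip = elect([a, b], vip)       # A returns: nopreempt, B keeps the VIP
print("VIP holder:", vip.name, "| A:", a.role, "| B:", b.role)
```

With both nodes configured as backup plus nopreempt, the returning node A stays a slave, so traffic switches only once per actual failure.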
When master and slave go down at the same time:
1. Unplanned outage: not considered; in practice this generally does not happen
2. Planned restart: before restarting, have operations dump the master's data; note the order:
1. Shut down all Redis instances on one of the machines, cut all masters over to the other machine (with multi-instance deployment, one machine carries both masters and slaves), then shut that machine down
2. Dump the data on the remaining master Redis
3. Shut down the master
4. Start the master and wait for the data to finish loading
5. Start the slaves
6. Delete the dump file (to avoid a slow load on later restarts)
6. Implementing cluster scenarios using Twemproxy
twemproxy is an open-source proxy written in C that supports memcached and Redis; the latest version is currently 0.2.4 and development is ongoing: https://github.com/twitter/twemproxy. Twitter uses it primarily to reduce the number of network connections between front-end services and caches.
Features: fast and lightweight; reduces the number of connections to backend cache servers; easy to configure; supports the common ketama, modula, and random hash shard algorithms.
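Of those shard algorithms, modula is the simplest: hash the key and take it modulo the number of servers. Its drawback is that adding a server remaps most keys, which is why ketama-style consistent hashing is usually preferred. A quick illustration (md5 stands in here for whatever hash function is configured):

```python
import hashlib

def modula_shard(key, n_servers):
    # twemproxy-style "modula" distribution: hash the key, take it mod n.
    # (md5 is an illustrative stand-in for the configured hash function.)
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return h % n_servers

keys = ["user:%d" % i for i in range(1000)]
before = [modula_shard(k, 4) for k in keys]
after = [modula_shard(k, 5) for k in keys]   # grow the pool by one server
moved = sum(b != a for b, a in zip(before, after))
print("keys remapped when growing 4 -> 5 servers: %d / 1000" % moved)
```

Roughly 4 out of 5 keys land on a different server after the change, so nearly the whole cache goes cold; a consistent-hash scheme would move only about 1/5 of the keys.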
Here keepalived is used to build a highly available master-backup pair of proxies, solving the proxy single-point problem.
Advantages:
1. The Redis cluster is transparent to clients; clients stay simple, and the cluster can be scaled dynamically
2. The proxy is the single place that handles consistent hashing and cluster-node availability detection, so there is no split-brain problem
3. It is high-performance and CPU-bound; since Redis node clusters usually have spare CPU, the proxy can be deployed on the Redis nodes themselves, so no extra hardware is required
7. Consistent Hash
Use ZooKeeper to implement consistent hashing.
When a Redis service starts, it writes its own routing information to ZK through an ephemeral node, and clients read the available routing information through the ZK client.
For the concrete implementation, see my other article: Redis consistent hashing
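The routing side of this approach can be sketched as a minimal consistent-hash ring with virtual nodes. This is illustrative only; in the scheme above the node list would come from ZooKeeper ephemeral nodes rather than being hard-coded, and the node addresses reuse the example IPs from earlier:

```python
import bisect
import hashlib

class HashRing:
    """Minimal consistent-hash ring with virtual nodes (illustrative sketch)."""

    def __init__(self, nodes, replicas=100):
        self.replicas = replicas
        self.ring = []                  # sorted list of (hash, node)
        for node in nodes:
            self.add(node)

    def _hash(self, key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add(self, node):
        # Each node gets `replicas` virtual points for smoother distribution.
        for i in range(self.replicas):
            self.ring.append((self._hash("%s#%d" % (node, i)), node))
        self.ring.sort()

    def get(self, key):
        # Walk clockwise to the first virtual point at or after the key's hash.
        h = self._hash(key)
        idx = bisect.bisect(self.ring, (h,)) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["192.168.77.211:12002", "192.168.77.197:12002"])
owner = ring.get("name")
print("key 'name' routes to", owner)
```

When a node's ephemeral znode disappears, clients rebuild the ring without that node, and only the keys that hashed to its virtual points move elsewhere.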
8. Monitoring Tools
Query historical Redis metrics: CPU, memory, hit ratio, request volume, master-slave switchovers, etc.
Real-time monitoring curves
SMS alerts
A modified version of the open-source Redis Live tool is used to monitor instances in batches; the basic functionality is in place and the details will be improved gradually.
The source address is as follows:
https://github.com/LittlePeng/redis-monitor