Redis Cluster Scenario


Based on a number of tests, the following plan was put together:

1. Redis Performance

Some simple Redis tests, for reference only:

Test environment: RedHat 6.2, 2 × Xeon E5520 (4 cores each), 8 GB RAM, 1000 Mb NIC

Redis version: 2.6.9

The client machines run redis-benchmark with simple GET and SET operations:
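A representative invocation might look like this (host, client count, and request count are illustrative assumptions, not the original test parameters):

    # SET/GET test with 10-byte values against the server under test:
    redis-benchmark -h 10.0.0.11 -p 6379 -t set,get -d 10 -n 1000000 -c 50
    # -d sets the value size in bytes; rerunning with -d 1400 and -d 1500
    # reproduces the size-dependent behavior measured below.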

1.1 Single-instance test

1. Value size: 10 to 1390 bytes

Processing speed: 75,000 ops/s; throughput is limited by the single-threaded processing capability

2. Value size: around 1400 bytes

Processing speed drops to about 50,000 ops/s and the NIC is not saturated: because the request packet is larger than the MTU, TCP splits it into two packets, the server handles twice as many interrupts per request, and throughput falls sharply.

3. Value size: over 1.5 KB

The 1000 Mb NIC runs full; speed is limited by NIC bandwidth

(The original post includes a chart relating processing speed to packet size; it is not reproduced here.)

1.2 Multi-instance test

The prerequisite is that the system's NIC soft interrupts are balanced across multiple CPU cores. The test machine's NIC has RSS enabled, with 16 queues:

Operation: SET with 10-byte values. The server ran 8 instances, and four client machines each ran two redis-benchmark processes. Each client reached close to 40,000 ops/s, for a server total of about 300,000 ops/s.

(NIC traffic chart not reproduced here.)

All 8 physical cores were exhausted (hyper-threading was not used). The test had already achieved a good result, so we did not push further. A single instance drives one core to 75,000 ops/s, yet 8 instances saturating 8 cores reached only 300,000 ops/s: CPU usage and throughput do not scale proportionally. With RSS, the redis-server thread switches CPU cores on almost every request, and soft interrupts consume too much CPU. RPS/RFS may suit this situation better: map RSS to only one or two cores, then forward soft interrupts dynamically based on the redis-server port, so that each Redis process stays on one core and unnecessary switching is reduced.

Running multiple instances makes full use of the system's CPU and the NIC's packet-processing capacity. Weigh the business scenario: average packet size, CPU cost of processing, and traffic volume. If multiple instances are used to raise processing power, take care to configure NIC soft-interrupt balancing, otherwise processing power will not improve. A sketch of such a setup follows.
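A minimal sketch of the RPS/RFS arrangement described above, assuming the NIC is eth0 on an 8-core machine (IRQ layout, CPU masks, and queue counts are assumptions to adapt to the hardware):

    # Pin the NIC's RSS interrupts to cores 0-1 only (CPU mask 0x3):
    for irq in $(grep eth0 /proc/interrupts | cut -d: -f1 | tr -d ' '); do
        echo 3 > /proc/irq/$irq/smp_affinity
    done

    # Enable RPS/RFS so each flow's packet processing sticks to one core:
    echo 32768 > /proc/sys/net/core/rps_sock_flow_entries
    for rxq in /sys/class/net/eth0/queues/rx-*; do
        echo fc > $rxq/rps_cpus        # spread flows over cores 2-7 (mask 0xfc)
        echo 2048 > $rxq/rps_flow_cnt  # 32768 entries / 16 queues
    done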

2. Redis Persistence

Test strategy: AOF + periodically triggered BGREWRITEAOF
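A sketch of the corresponding settings (values are assumptions, not the exact test configuration):

    # redis.conf:
    appendonly yes
    appendfsync everysec            # fsync once per second, as tested below
    auto-aof-rewrite-percentage 0   # 0 disables size-triggered rewrites;
                                    # a scheduled job drives them instead

    # crontab entry on the server, rewriting during off-peak hours:
    0 4 * * * redis-cli -p 6379 bgrewriteaof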

1. Data set prepared:

100 million entries with 12-byte keys and 15-byte values, stored as strings; the process consumes 12 GB of memory

2. Dump

File size: 2.8 GB; execution time: 95 s; load time on restart: 112 s

3. BGREWRITEAOF

File size: 5.1 GB; execution time: 95 s; load time on restart: 165 s

4. Performance impact after enabling AOF (fsync once per second):

At 8,000 SET ops/s, CPU usage rose from 20% to 40%

5. After modifying 10 million entries:

File size: 5.6 GB; load time on restart: 194 s

6. After modifying 20 million entries:

File size: 6.1 GB; load time on restart: 200 s

Note: Redis 2.4 already heavily optimized fsync handling; while BGREWRITEAOF or BGSAVE runs, the service Redis provides externally is unaffected.
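For reference, the dump and rewrite runs above can be triggered and watched by hand (host and port are placeholders):

    # Kick off a background RDB dump and an AOF rewrite:
    redis-cli -h 10.0.0.11 -p 6379 bgsave
    redis-cli -h 10.0.0.11 -p 6379 bgrewriteaof

    # Poll until rdb_bgsave_in_progress / aof_rewrite_in_progress return to 0:
    redis-cli -h 10.0.0.11 -p 6379 info persistence | grep in_progress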

3. Redis Master-slave replication

Because current versions have no MySQL-style incremental master-slave synchronization (every reconnect triggers a full resync), the requirements on network stability are very high; frequent TCP reconnections are a heavy burden on both the server and the network.

In the current production environment the master and slave are deployed in the same rack, and over several months no reconnection has occurred.
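For reference, replication here is the standard SLAVEOF mechanism (addresses are placeholders):

    # Point the slave at the master, then verify the link:
    redis-cli -h 10.0.0.12 -p 6379 slaveof 10.0.0.11 6379
    redis-cli -h 10.0.0.12 -p 6379 info replication | grep master_link_status
    # master_link_status:up means the initial full sync has completed.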

4. Keepalived Introduction

Reference, official document: http://keepalived.org/pdf/sery-lvs-cluster.pdf

Keepalived is routing software written in C, used with IPVS for load balancing, and it provides high availability via the VRRP protocol. The latest version is currently 1.2.7. Keepalived machines use VRRP to switch a VIP between themselves; switching takes on the order of a second, and there is no split-brain problem.

It can implement one master with multiple standbys: when the master fails, a new master is elected automatically and the VIP drifts over, again within about a second, and a specified script can be run at switchover to change the state of the business service.

For example, with two hosts A and B, switchover proceeds as follows:

1. A and B start; A becomes master and B becomes backup

2. Master A fails; B takes over the business as the new master

3. A comes back up and runs as a slave of B (SLAVEOF B)

4. B fails; A switches back to master

With one machine as master for everything, this gives master-slave with read/write separation. Alternatively, with several VIPs and multiple instances per machine, each machine can run half masters and half slaves so that the two back each other up: both machines carry part of the business, and if one goes down the whole business is concentrated on the other.

Installation configuration is relatively simple:

Required packages: openssl-devel (libssl-dev on Ubuntu), popt-devel (libpopt-dev on Ubuntu).

The default configuration file path is /etc/keepalived/keepalived.conf. You can also specify the path manually, but note that a manually specified path must be absolute. Take particular care that the configuration file is correct: keepalived does not check whether the configuration conforms to the rules.

Running keepalived -d starts three processes: a parent process, a health-check process, and a VRRP process. -d writes logs to /var/log/messages, which can be inspected to follow state switches.

Points to note:

1. VRRP is a multicast protocol; the master, the backups, and the VIP must be in the same VLAN

2. Different VIPs need different VRIDs, and a VRID within a VLAN must not conflict with other groups

3. Keepalived has two roles: MASTER (one) and BACKUP (several). If a node is configured as MASTER, then after it fails and recovers the business will inevitably switch yet again, which is unacceptable for stateful services. The solution is to configure both machines as BACKUP and set nopreempt on the higher-priority one so that it does not preempt.
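A minimal keepalived.conf sketch of this both-BACKUP + nopreempt arrangement (interface, VRID, VIP, priorities, and script paths are assumptions):

    vrrp_instance redis_vip {
        state BACKUP            # both machines are BACKUP; priority decides the master
        nopreempt               # set on the higher-priority node only
        interface eth0
        virtual_router_id 51    # unique per VIP within the VLAN (see item 2)
        priority 100            # the peer uses a lower value, e.g. 90
        advert_int 1
        virtual_ipaddress {
            10.0.0.100
        }
        notify_master /etc/keepalived/redis_master.sh
        notify_backup /etc/keepalived/redis_backup.sh
    }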

5. High-availability scheme using keepalived

Switching process:

1. When the master fails, the VIP drifts to the slave; keepalived on the slave notifies Redis to execute SLAVEOF NO ONE, and it starts serving the business

2. When the old master comes back up, the VIP stays where it is; keepalived on that machine notifies Redis to execute SLAVEOF <new master IP> <port>, and it starts syncing data as a slave (see the notify-script sketch after this list)

3. And so on, in turn
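A sketch of the notify scripts driving steps 1 and 2 (paths, address, and port are assumptions):

    # --- /etc/keepalived/redis_master.sh (runs when this node takes the VIP) ---
    #!/bin/sh
    redis-cli -p 6379 slaveof no one

    # --- /etc/keepalived/redis_backup.sh (runs when this node becomes backup) ---
    #!/bin/sh
    redis-cli -p 6379 slaveof 10.0.0.12 6379   # 10.0.0.12 = the current master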

Handling both master and slave going down:

1. Unplanned: not considered; in practice this generally does not happen

2. Planned restart: before restarting, preserve the master's data with a dump via an operational procedure. The order matters (a command sketch follows the list):

1. Shut down all Redis instances on one machine so that all masters are cut over to the other machine (in a multi-instance deployment, one machine carries both masters and slaves), then take that machine down

2. Dump the data of the master Redis instances

3. Shut down the masters

4. Start the master and wait for the data load to complete

5. Start the slaves

6. Delete the dump files (to avoid slow loading on a later restart)
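The order above as a command sketch for one instance (addresses and paths are placeholders; machine A holds the masters after step 1):

    # 2. Dump the master's data:
    redis-cli -h 10.0.0.11 -p 6379 save      # blocking SAVE (or bgsave + wait)

    # 3-4. Shut the master down, restart redis-server, wait for the load:
    redis-cli -h 10.0.0.11 -p 6379 shutdown
    # ...restart redis-server on the machine, then poll until it answers:
    redis-cli -h 10.0.0.11 -p 6379 ping      # PONG once loading has finished

    # 5-6. Start the slaves, then remove the dump file:
    rm /var/lib/redis/dump.rdb               # path is an assumption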

6. Implementing cluster scenarios using Twemproxy

An open-source proxy written in C that supports memcached and Redis; the latest version is currently 0.2.4 and development is ongoing: https://github.com/twitter/twemproxy. Twitter uses it mainly to reduce the number of network connections between its front ends and cache services.

Features: fast, lightweight, reduces the number of connections to back-end cache servers, easy to configure, and supports the commonly used ketama, modula, and random hash sharding algorithms.

Here keepalived is used in a master/backup high-availability scheme to remove the proxy's single point of failure. A configuration sketch:
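A minimal nutcracker.yml sketch for the Redis pool (pool name, addresses, and timeouts are assumptions):

    redis_pool:
      listen: 0.0.0.0:22121
      redis: true
      hash: fnv1a_64
      distribution: ketama          # or modula / random
      auto_eject_hosts: true        # drop unreachable nodes from the ring
      server_retry_timeout: 30000
      server_failure_limit: 2
      servers:
        - 10.0.0.11:6379:1
        - 10.0.0.12:6379:1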

Advantages:

1. The Redis cluster is transparent to clients, client code stays simple, and the cluster can be expanded dynamically

2. The proxy is the single point that handles consistent hashing and cluster node availability detection, so there is no split-brain problem

3. High performance but CPU-intensive; since a Redis node cluster has spare CPU capacity, the proxy can be deployed on the Redis nodes themselves, with no extra machines required

7. Consistent Hash

Use ZooKeeper to implement consistent hashing.

When a Redis service starts, it writes its own routing information to ZK as an ephemeral node, and clients read the available routing information through the ZK client. A sketch of the idea:
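Illustrated with the stock zkCli.sh (paths and addresses are assumptions; in practice the service itself must hold the ZK session open, since an ephemeral node vanishes when its session ends):

    # On Redis startup, register an ephemeral node carrying this node's address:
    zkCli.sh -server zk1:2181 create -e /redis/route/node-01 "10.0.0.11:6379"

    # Clients list the live nodes and build the hash ring from the result:
    zkCli.sh -server zk1:2181 ls /redis/route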

For a concrete implementation, see my other article: Redis consistent hash.

8. Monitoring Tools

Querying historical Redis runtime data: CPU, memory, hit ratio, request volume, master-slave switchovers, etc.

Real-time monitoring curve

SMS Alarm
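The hit ratio and request volume, at least, can be sampled straight from INFO; a sketch (host is a placeholder, field names per Redis 2.6):

    # Hit ratio = keyspace_hits / (keyspace_hits + keyspace_misses):
    redis-cli -h 10.0.0.11 info stats | \
        grep -E 'keyspace_hits|keyspace_misses|instantaneous_ops_per_sec'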

A modified version of the open-source Redis Live tool is used to monitor instances in batches; the basic functionality is in place, and the details will be improved gradually.

The source addresses are as follows:

https://github.com/LittlePeng/redis-monitor

http://www.cnblogs.com/lulu/archive/2013/06/10/3130878.html
