Redis Sentinel-based Redis cluster (master-slave & sharding) high-availability scheme (reprint)

Source: Internet
Author: User
Tags: redis cluster

This article introduces a high-availability scheme for a Redis cluster built with Jedis and Sentinel. The scheme requires Jedis 2.2.2 or above (mandatory) and Redis 2.8 or above (recommended; Sentinel first appeared in Redis 2.4, and the Sentinel shipped with Redis 2.8 is more stable). The Redis cluster is built with sharding plus master-slave replication to meet scalability requirements.

About Redis Sentinel

Redis Sentinel is the official high-availability management tool for Redis, with three main features:
Monitoring: it continuously checks whether the Redis master and slave instances are working as expected;
Notification: when a monitored Redis instance has a problem, Sentinel can notify the system administrator or another program through an API;
Automatic failover: if the master instance stops working, Sentinel starts a failover that promotes one of the slaves to master, reconfigures the remaining slaves to replicate from the new master, and informs the applications of the new address.
Redis Sentinel is a distributed system: multiple Sentinel instances can be deployed to monitor the same set of Redis instances. The Sentinels use a gossip protocol to agree that a master is down, and an agreement protocol to authorize the failover and the configuration change. In production, multiple Sentinel instances are usually deployed to improve availability: as long as one Sentinel instance keeps working properly, monitoring of the Redis instances keeps working (similar to deploying multiple ZooKeeper nodes to improve availability);
This article does not cover Sentinel's implementation details or working principles; readers can refer to other articles for those.
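For orientation only, here is a minimal sketch of what the Sentinel side of such a setup can look like; the master name, addresses, and timing values below are illustrative placeholders, not configuration taken from this article:

```
# sentinel.conf (illustrative placeholders)
port 26379

# monitor a master named "shard-1" at 127.0.0.1:6379; 2 Sentinels must agree it is down
sentinel monitor shard-1 127.0.0.1 6379 2

# consider the master subjectively down after 30 seconds without a valid reply
sentinel down-after-milliseconds shard-1 30000

# number of slaves that may resync with the new master simultaneously after a failover
sentinel parallel-syncs shard-1 1

# give up on a failover attempt after 3 minutes
sentinel failover-timeout shard-1 180000
```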

Redis HA Schemes

The key to HA is avoiding a single point of failure and recovering from failures. Before Redis Cluster was released, Redis was generally deployed in master/slave mode (the applications discussed here use the slave instance mainly as a backup while the master serves both reads and writes; many applications separate reads from writes and route them to different Redis instances, and the scheme works for them as well; the principle is the same, the only difference is how the data-access layer is encapsulated). The main ways to achieve HA are the following:
1. keepalived: keepalived manages a virtual IP that provides a single access point for the master and slave. When the master fails, a script run by keepalived promotes the slave to master; after the original master recovers and resynchronizes, it automatically becomes the master again. The advantage of this scheme is that the master-slave switch is transparent to the application, because the virtual IP it accesses never changes; the disadvantage is that introducing keepalived increases deployment complexity;
2. ZooKeeper: ZooKeeper monitors the master and slave instances and maintains the latest valid IP; the application obtains that IP from ZooKeeper and uses it to access Redis;
3. Sentinel: Sentinel monitors the master and slave instances and performs automatic failover. On its own this scheme has a flaw: because the master and slave addresses (IP & port) differ, the application cannot learn the new address after a master-slave switch. Support for Sentinel was therefore added in Jedis 2.2.2: Jedis instances obtained through redis.clients.jedis.JedisSentinelPool.getResource() are promptly updated to the new master address (a minimal usage sketch appears after this list).
The author's company used scheme 1 for a period of time and found that keepalived could cause data loss in some cases; keepalived performs the master-slave switch through a shell script, its configuration is complex, and keepalived itself becomes a new single point of failure. We therefore switched to scheme 3, the official Redis solution. (Scheme 2 requires writing a fair amount of code and is not as simple as scheme 3; readers who want to use scheme 2 in production can look into it on their own.)
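For the non-sharded case, a minimal sketch of scheme 3 using Jedis's built-in JedisSentinelPool might look like the following; the master name "mymaster" and the Sentinel addresses are placeholders:

```java
import java.util.HashSet;
import java.util.Set;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPoolConfig;
import redis.clients.jedis.JedisSentinelPool;

public class SentinelPoolExample {
    public static void main(String[] args) {
        // Sentinel addresses as "host:port" strings (placeholder values)
        Set<String> sentinels = new HashSet<String>();
        sentinels.add("192.168.1.10:26379");
        sentinels.add("192.168.1.11:26379");
        sentinels.add("192.168.1.12:26379");

        // "mymaster" must match the master name configured in sentinel.conf
        JedisSentinelPool pool = new JedisSentinelPool("mymaster", sentinels, new JedisPoolConfig());

        Jedis jedis = pool.getResource();   // always points at the current master
        try {
            jedis.set("foo", "bar");
            System.out.println(jedis.get("foo"));
        } finally {
            pool.returnResource(jedis);     // return the connection to the pool
        }
        pool.destroy();
    }
}
```

The pool asks the Sentinels for the current master address and re-points newly obtained connections after a failover, which is exactly the behavior the sharded pool below extends to multiple shards.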


Problems with the Sentinel Scheme

Sentinel plus Jedis looks like a perfect solution, but that is only half true: it works in the non-sharded case. Our application, however, shards its data (sharding), distributing it evenly across 4 different instances, each deployed as a master-slave pair. Jedis does not provide a Sentinel-based ShardedJedisPool, which means that if one of the 4 shards undergoes a master-slave switch, the ShardedJedisPool used by the application is not notified, and all operations on that shard fail.
This article provides a Sentinel-based ShardedJedisPool that promptly detects master-slave switches on every shard and rebuilds the connection pool; for the source code, see ShardedJedisSentinelPool.java.

ShardedJedisSentinelPool Implementation Analysis

The constructor


Similar to the earlier Jedis pool constructors, the constructor requires a pool configuration (PoolConfig) among its parameters.
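The exact signature is in ShardedJedisSentinelPool.java in the repository linked at the end of this article; as a rough sketch only (parameter names, order, and types are assumptions, not copied from that source), the constructor shape could be:

```java
import java.util.List;
import java.util.Set;

import redis.clients.jedis.JedisPoolConfig;

// Sketch of the constructor shape only; see ShardedJedisSentinelPool.java
// in the linked repository for the real signature and implementation.
public class ShardedJedisSentinelPoolSketch {

    // one master name per shard, in shard order
    private List<String> masters;
    // Sentinel addresses as "host:port" strings
    private Set<String> sentinels;

    public ShardedJedisSentinelPoolSketch(List<String> masters, Set<String> sentinels,
                                          JedisPoolConfig poolConfig) {
        this.masters = masters;
        this.sentinels = sentinels;
        // 1. ask the Sentinels for the current master address of every shard
        // 2. build the sharded connection pool from those addresses
        // 3. start one listener thread per Sentinel to watch for +switch-master events
    }
}
```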


The constructor first obtains the current master address (IP & port) of every shard. For each shard, it connects to the Sentinel instances in turn and asks for that shard's master address; if none of the Sentinels can be reached, it sleeps for 1 second and retries, until the master addresses of all shards have been obtained.
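A minimal sketch of this look-up-and-retry loop, built on Jedis's SENTINEL get-master-addr-by-name command (sentinelGetMasterAddrByName in Jedis); class and variable names here are illustrative, not the project's actual code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.Jedis;

public class MasterLookup {

    // For each shard (identified by its master name), ask the Sentinels for the
    // current master address; if no Sentinel answers, sleep 1 second and retry.
    public static List<HostAndPort> resolveMasters(List<String> masterNames, Set<String> sentinels) {
        List<HostAndPort> masters = new ArrayList<HostAndPort>();
        for (String masterName : masterNames) {
            HostAndPort master = null;
            while (master == null) {
                for (String sentinel : sentinels) {
                    String[] parts = sentinel.split(":");
                    Jedis jedis = null;
                    try {
                        jedis = new Jedis(parts[0], Integer.parseInt(parts[1]));
                        // SENTINEL get-master-addr-by-name <master-name> -> [ip, port]
                        List<String> addr = jedis.sentinelGetMasterAddrByName(masterName);
                        if (addr != null && addr.size() == 2) {
                            master = new HostAndPort(addr.get(0), Integer.parseInt(addr.get(1)));
                            break;
                        }
                    } catch (Exception e) {
                        // this Sentinel is unreachable; try the next one
                    } finally {
                        if (jedis != null) {
                            jedis.disconnect();
                        }
                    }
                }
                if (master == null) {
                    try {
                        Thread.sleep(1000);   // no Sentinel reachable: wait 1 second and retry
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return masters;
                    }
                }
            }
            masters.add(master);
        }
        return masters;
    }
}
```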



It then initializes the connection pool; all connections in this pool point to the shard masters obtained above.
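A sketch of that initialization step, building a standard ShardedJedisPool from the resolved master addresses (again illustrative; the project's initPool rebuilds the pool held inside ShardedJedisSentinelPool, as described below, rather than returning a new pool object):

```java
import java.util.ArrayList;
import java.util.List;

import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisPoolConfig;
import redis.clients.jedis.JedisShardInfo;
import redis.clients.jedis.ShardedJedisPool;

public class PoolInit {

    // Build (or rebuild) a sharded pool whose shards all point at the current masters.
    public static ShardedJedisPool initPool(List<HostAndPort> masters, JedisPoolConfig poolConfig) {
        List<JedisShardInfo> shards = new ArrayList<JedisShardInfo>();
        for (HostAndPort master : masters) {
            shards.add(new JedisShardInfo(master.getHost(), master.getPort()));
        }
        // every connection handed out by this pool is routed to one of the shard masters
        return new ShardedJedisPool(poolConfig, shards);
    }
}
```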


Monitor each Sentinel

At the end of the same initialization method, a listener thread is started for each Sentinel to monitor the changes that Sentinel reports:

The thread's run method subscribes to the "+switch-master" channel of its Sentinel instance via the Jedis pub/sub API (it implements the JedisPubSub interface and subscribes through Jedis.subscribe). When Sentinel performs a master-slave switch, the thread receives a notification containing the new master address, determines which shard was switched by its master name, replaces that shard's old address with the new master address, and calls initPool(List masters) to rebuild the Jedis connection pool. All subsequent connections obtained from the pool then point to the new master address, transparently to the application.
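A sketch of such a listener thread (one per Sentinel): it subscribes to the +switch-master channel and hands the parsed result to a callback; the SwitchHandler interface here stands in for the pool's own rebuild logic and is an assumption, not part of the project's API:

```java
import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPubSub;

// One listener thread per Sentinel; illustrative sketch only.
public class MasterListener extends Thread {

    public interface SwitchHandler {
        // called with the shard's master name and its new master address
        void onSwitch(String masterName, HostAndPort newMaster);
    }

    private final String sentinelHost;
    private final int sentinelPort;
    private final SwitchHandler handler;

    public MasterListener(String sentinelHost, int sentinelPort, SwitchHandler handler) {
        this.sentinelHost = sentinelHost;
        this.sentinelPort = sentinelPort;
        this.handler = handler;
    }

    @Override
    public void run() {
        Jedis jedis = new Jedis(sentinelHost, sentinelPort);
        // subscribe() blocks; notifications arrive through onMessage()
        jedis.subscribe(new JedisPubSub() {
            @Override
            public void onMessage(String channel, String message) {
                // +switch-master payload: "<master-name> <old-ip> <old-port> <new-ip> <new-port>"
                String[] parts = message.split(" ");
                if (parts.length == 5) {
                    String masterName = parts[0];
                    HostAndPort newMaster = new HostAndPort(parts[3], Integer.parseInt(parts[4]));
                    // find which shard this master name belongs to and rebuild the pool
                    handler.onSwitch(masterName, newMaster);
                }
            }
            @Override public void onPMessage(String pattern, String channel, String message) { }
            @Override public void onSubscribe(String channel, int subscribedChannels) { }
            @Override public void onUnsubscribe(String channel, int subscribedChannels) { }
            @Override public void onPSubscribe(String pattern, int subscribedChannels) { }
            @Override public void onPUnsubscribe(String pattern, int subscribedChannels) { }
        }, "+switch-master");
    }
}
```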


Application examples
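The original example accompanies the project; the following is only a sketch of how an application might use such a pool, with the constructor arguments and method names following the sketches above (check the repository for the exact API):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

import redis.clients.jedis.JedisPoolConfig;
import redis.clients.jedis.ShardedJedis;

public class AppExample {
    public static void main(String[] args) {
        // one master name per shard, as configured in the Sentinels (placeholders)
        List<String> masters = Arrays.asList("shard-1", "shard-2", "shard-3", "shard-4");

        // Sentinel addresses (placeholders)
        Set<String> sentinels = new HashSet<String>(Arrays.asList(
                "192.168.1.10:26379", "192.168.1.11:26379", "192.168.1.12:26379"));

        // ShardedJedisSentinelPool comes from the linked project; constructor shape assumed as above
        ShardedJedisSentinelPool pool =
                new ShardedJedisSentinelPool(masters, sentinels, new JedisPoolConfig());

        ShardedJedis jedis = pool.getResource();
        try {
            jedis.set("user:1001", "alice");   // the key is hashed to one of the 4 shards
            System.out.println(jedis.get("user:1001"));
        } finally {
            pool.returnResource(jedis);        // return the connection to the pool
        }
    }
}
```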
Summary


This article starts from a real problem: when Redis data is sharded and Sentinel is used for HA, how can master-slave switches be made transparent to the application? Through the Jedis pub/sub functionality, the pool can monitor master-slave switches on multiple shards simultaneously, rebuild the connection pool from the newly announced addresses, and ensure that all subsequent connections obtained from the pool point to the new addresses. The keys to this scheme are: use Sentinel for HA, use Jedis 2.2.2 or above, and obtain every connection to a Redis instance from the connection pool.


GitHub home of the project: https://github.com/warmbreeze/sharded-jedis-sentinel-pool

Original article: http://www.07net01.com/linux/jiyuRedis_SentineldeRedisjiqun_zhucong_amp_amp_Sharding_gaokeyongfangan_720296_1393002463.html

