Redis Sentinel First Experience


Since Redis added the Sentinel cluster tool, I had never tried it. Recently, while looking into the current mainstream Redis cluster deployment schemes, I read the official Sentinel introduction in detail and deployed three Redis instances plus three Sentinel instances on my own desktop. Here is a brief summary.

First, download and install Redis. The Sentinel version currently shipped with Redis 2.8 is what Antirez calls Sentinel 2, a rewrite of Sentinel 1. Because Sentinel 1 is obsolete and has too many bugs, Antirez strongly recommends upgrading Redis and Sentinel to version 2.8. The latest version at the time, which I installed, was 2.8.17.

Second, configure and start the Redis instances. Start three Redis instances on local ports 6379, 6380, and 6381, where 6379 is the master and the other two are slaves. I will not cover Redis master-slave configuration here, but note how the two slaves differ in the configuration parameter slave-priority: the 6380 instance is set to 50 and the 6381 instance to 100. When the master goes down, Sentinel prefers the slave with the smaller slave-priority value as the new master.

Finally, configure and launch the Sentinel instances. Start three Sentinel instances on local ports 26379, 26380, and 26381; together they monitor the three Redis instances started above. Below are the contents of the configuration file for the Sentinel instance on 26379. Following the official documentation, only a few main parameters are configured; the configuration files of the other two instances differ only in port number and data directory.

port 26379
dir /home/liangzhichao/data/redis/sentinels/26379
sentinel monitor mymaster 127.0.0.1 6379 2
sentinel down-after-milliseconds mymaster 30000
sentinel parallel-syncs mymaster 1
sentinel failover-timeout mymaster 180000

When launching a Sentinel instance, I wanted its log printed to a file but could not find a way to set a log file in the configuration file, so I started it directly as follows:

./redis-sentinel /home/liangzhichao/data/redis/confs/sentinel.26379.conf >> /home/liangzhichao/data/redis/logs/26379.log 2>&1 &
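With the three Sentinels up, you can sanity-check them from the command line before reading any logs. This is a minimal sketch, assuming the redis-cli binary from the same 2.8 build sits next to redis-sentinel; SENTINEL masters, SENTINEL slaves, and SENTINEL get-master-addr-by-name are standard Sentinel commands:

# list the masters this Sentinel monitors, with their flags and quorum
./redis-cli -p 26379 sentinel masters
# list the slaves Sentinel has discovered for mymaster
./redis-cli -p 26379 sentinel slaves mymaster
# the address clients should currently use for mymaster
./redis-cli -p 26379 sentinel get-master-addr-by-name mymaster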
After the Sentinel instances are launched, we might as well look at Sentinel's log to see what exactly it does after startup. Below is the log of the 26379 instance. From it you can see that Sentinel does at least four things after starting:

1) generates a runid for itself to uniquely identify this instance;
2) starts monitoring the master Redis instance running on port 6379;
3) obtains information about all slaves of the master Redis instance, so that a new master can be selected from among the slave Redis instances after the master goes down;
4) publishes its own presence to the other Sentinel instances that monitor the same master Redis instance, so that all Sentinel instances recognize and remember each other.

[8229] 11:18:46.677 # You requested maxclients of 10000 requiring at least 10032 max file descriptors.
[8229] 11:18:46.677 # Redis can't set maximum open files to 10032 because of OS error: Operation not permitted.
[8229] 11:18:46.677 # Current maximum open files is 1024. maxclients has been reduced to 992 to compensate for low ulimit. If you need higher maxclients increase 'ulimit -n'.
[8229] 11:18:46.679 # Sentinel runid is 2262ed911e9414208af4b1c48ad2b449fd4e0b89
[8229] 11:18:46.679 # +monitor master mymaster 127.0.0.1 6379 quorum 2
[8229] 11:18:46.679 * +slave slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6379
[8229] 11:18:46.679 * +slave slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster 127.0.0.1 6379
[8229] 11:19:27.260 * +sentinel sentinel 127.0.0.1:26380 127.0.0.1 26380 @ mymaster 127.0.0.1 6379
[8229] 11:19:36.069 * +sentinel sentinel 127.0.0.1:26381 127.0.0.1 26381 @ mymaster 127.0.0.1 6379

It is important to note that, while we observe the log above, each Sentinel instance also updates its own configuration file to record the current configuration; at this point the file's contents differ considerably from before startup. The main contents of the 26379 instance's configuration file are listed below: information about the slave Redis instances and the other Sentinel instances has been added, and the configuration now carries a version (epoch) number of 0.

sentinel monitor mymaster 127.0.0.1 6379 2
sentinel known-slave mymaster 127.0.0.1 6380
sentinel known-slave mymaster 127.0.0.1 6381
sentinel known-sentinel mymaster 127.0.0.1 26381 22b65a4796e6ece6b76284558a071cc83df71098
sentinel known-sentinel mymaster 127.0.0.1 26380 59616326f3c539ff3301098e1bf708350e6dd45d
sentinel current-epoch 0

At this point, a one-master-two-slave Redis cluster and a three-instance Sentinel cluster are all started and working properly. Next we can verify the correctness of the Sentinel cluster through the Jedis client. The test code below is simple: first establish a connection to the Sentinel cluster, then obtain the current master Redis instance's information through the Sentinel cluster, and finally write a piece of data to the master Redis instance and query it back to make sure the write succeeded.
package redis.clients.mytest;

import java.util.HashSet;
import java.util.Set;

import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisSentinelPool;

public class MyJedisSentinelTest {
    public static void main(String[] args) {
        // Addresses of the three Sentinel instances started above.
        Set<String> sentinels = new HashSet<String>();
        sentinels.add(new HostAndPort("localhost", 26379).toString());
        sentinels.add(new HostAndPort("localhost", 26380).toString());
        sentinels.add(new HostAndPort("localhost", 26381).toString());

        // The master name must match the one in the Sentinel configuration.
        JedisSentinelPool sentinelPool = new JedisSentinelPool("mymaster", sentinels);
        System.out.println("Current master: " + sentinelPool.getCurrentHostMaster().toString());

        // Write a value through the pool ...
        Jedis master = sentinelPool.getResource();
        master.set("username", "liangzhichao");
        sentinelPool.returnResource(master);

        // ... and read it back to confirm the write succeeded.
        Jedis master2 = sentinelPool.getResource();
        String value = master2.get("username");
        System.out.println("Username: " + value);
        master2.close();

        sentinelPool.destroy();
    }
}
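For completeness, the master address can also be asked of a single Sentinel directly, without building a pool. This is a minimal sketch using the same Jedis 2.x client as above; sentinelGetMasterAddrByName is Jedis's wrapper for the SENTINEL get-master-addr-by-name command:

import java.util.List;

import redis.clients.jedis.Jedis;

public class MySentinelQueryTest {
    public static void main(String[] args) {
        // Connect to any one Sentinel; they all agree on the current master.
        Jedis sentinel = new Jedis("localhost", 26379);
        List<String> addr = sentinel.sentinelGetMasterAddrByName("mymaster");
        System.out.println("Master according to Sentinel: " + addr.get(0) + ":" + addr.get(1));
        sentinel.close();
    }
}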

Executing the above code, we get the following output, which shows that the client successfully obtained the master Redis instance's information from the Sentinel cluster and that read and write requests to the master are handled normally.

2014-11-20 16:39:00 redis.clients.jedis.JedisSentinelPool initSentinels
INFO: Trying to find master from available Sentinels...
2014-11-20 16:39:00 redis.clients.jedis.JedisSentinelPool initSentinels
INFO: Redis master running at 127.0.0.1:6379, starting Sentinel listeners...
2014-11-20 16:39:00 redis.clients.jedis.JedisSentinelPool initPool
INFO: Created JedisPool to master at 127.0.0.1:6379
Current master: 127.0.0.1:6379
Username: liangzhichao

So far the Sentinel cluster is working, so let's look at how it handles the master Redis instance going down. We trigger this by killing the Redis instance process running on port 6379 and then observe the log of each instance in the Sentinel cluster as it processes the master's failure.

26379 instance:

[8229] 14:41:32.033 # +sdown master mymaster 127.0.0.1 6379
[8229] 14:41:32.116 # +odown master mymaster 127.0.0.1 6379 #quorum 2/2
[8229] 14:41:32.116 # +new-epoch 1
[8229] 14:41:32.116 # +try-failover master mymaster 127.0.0.1 6379
[8229] 14:41:32.286 # +vote-for-leader 2262ed911e9414208af4b1c48ad2b449fd4e0b89 1
[8229] 14:41:32.286 # 127.0.0.1:26381 voted for 22b65a4796e6ece6b76284558a071cc83df71098 1
[8229] 14:41:32.387 # 127.0.0.1:26380 voted for 22b65a4796e6ece6b76284558a071cc83df71098 1
[8229] 14:41:33.326 # +config-update-from sentinel 127.0.0.1:26381 127.0.0.1 26381 @ mymaster 127.0.0.1 6379
[8229] 14:41:33.326 # +switch-master mymaster 127.0.0.1 6379 127.0.0.1 6380
[8229] 14:41:33.326 * +slave slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster 127.0.0.1 6380
[8229] 14:41:33.430 * +slave slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6380
[8229] 14:42:03.507 # +sdown slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6380

26380 instance:

[8243] 14:41:32.023 # +sdown master mymaster 127.0.0.1 6379
[8243] 14:41:32.336 # +new-epoch 1
[8243] 14:41:32.386 # +vote-for-leader 22b65a4796e6ece6b76284558a071cc83df71098 1
[8243] 14:41:33.151 # +odown master mymaster 127.0.0.1 6379 #quorum 3/2
[8243] 14:41:33.151 # Next failover delay: I will not start a failover before Wed Nov 14:47:32 2014
[8243] 14:41:33.327 # +config-update-from sentinel 127.0.0.1:26381 127.0.0.1 26381 @ mymaster 127.0.0.1 6379
[8243] 14:41:33.328 # +switch-master mymaster 127.0.0.1 6379 127.0.0.1 6380
[8243] 14:41:33.328 * +slave slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster 127.0.0.1 6380
[8243] 14:41:33.558 * +slave slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6380
[8243] 14:42:03.616 # +sdown slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6380

26381 instance:

[8247] 14:41:32.042 # +sdown master mymaster 127.0.0.1 6379
[8247] 14:41:32.094 # +odown master mymaster 127.0.0.1 6379 #quorum 3/2
[8247] 14:41:32.094 # +new-epoch 1
[8247] 14:41:32.094 # +try-failover master mymaster 127.0.0.1 6379
[8247] 14:41:32.194 # +vote-for-leader 22b65a4796e6ece6b76284558a071cc83df71098 1
[8247] 14:41:32.286 # 127.0.0.1:26379 voted for 2262ed911e9414208af4b1c48ad2b449fd4e0b89 1
[8247] 14:41:32.387 # 127.0.0.1:26380 voted for 22b65a4796e6ece6b76284558a071cc83df71098 1
[8247] 14:41:32.396 # +elected-leader master mymaster 127.0.0.1 6379
[8247] 14:41:32.396 # +failover-state-select-slave master mymaster 127.0.0.1 6379
[8247] 14:41:32.459 # +selected-slave slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6379
[8247] 14:41:32.459 * +failover-state-send-slaveof-noone slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6379
[8247] 14:41:32.522 * +failover-state-wait-promotion slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6379
[8247] 14:41:33.307 # +promoted-slave slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6379
[8247] 14:41:33.307 # +failover-state-reconf-slaves master mymaster 127.0.0.1 6379
[8247] 14:41:33.326 * +slave-reconf-sent slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster 127.0.0.1 6379
[8247] 14:41:33.851 # -odown master mymaster 127.0.0.1 6379
[8247] 14:41:34.356 * +slave-reconf-inprog slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster 127.0.0.1 6379
[8247] 14:41:34.356 * +slave-reconf-done slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster 127.0.0.1 6379
[8247] 14:41:34.426 # +failover-end master mymaster 127.0.0.1 6379
[8247] 14:41:34.426 # +switch-master mymaster 127.0.0.1 6379 127.0.0.1 6380
[8247] 14:41:34.427 * +slave slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster 127.0.0.1 6380
[8247] 14:41:34.479 * +slave slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6380
[8247] 14:42:04.531 # +sdown slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6380

From the log contents above we can roughly see how the Sentinel cluster handles the master Redis instance going down:

1) Each Sentinel instance discovers through its own monitoring that the master Redis instance on port 6379 is no longer working, and marks the instance as sdown (subjectively down).
2) The Sentinel instances communicate with each other and confirm that a majority of them consider the master Redis instance down, then mark the instance as odown (objectively down).
3) To prepare the failover of the master Redis instance, one Sentinel instance is elected to perform the failover operation.
4) The elected Sentinel instance picks one of the slave Redis instances to become the new master Redis instance.
5) After the master Redis instance has been switched, the latest configuration information is synchronized among all Sentinel instances.
6) The slaves that were not promoted are pointed at the new master Redis instance and start synchronizing data from it.

In our environment specifically, the Sentinel instance running on port 26381 won the election to perform this failover, and it chose the slave Redis instance running on port 6380 as the new master, since the slave-priority of the 6380 instance (50) is smaller than that of the 6381 instance (100). After the switch, the 6381 instance, which was not promoted, began replicating the data of the 6380 instance.

Now let's look at a Sentinel instance's configuration file to confirm that the configuration information really was updated. Below are again the main contents of the 26379 instance's configuration file. Compared with its earlier contents, we can see that the master Redis instance has indeed switched and the configuration version (epoch) has changed to 1.

sentinel monitor mymaster 127.0.0.1 6380 2
sentinel known-slave mymaster 127.0.0.1 6381
sentinel known-slave mymaster 127.0.0.1 6379
sentinel known-sentinel mymaster 127.0.0.1 26381 22b65a4796e6ece6b76284558a071cc83df71098
sentinel known-sentinel mymaster 127.0.0.1 26380 59616326f3c539ff3301098e1bf708350e6dd45d
sentinel current-epoch 1
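Before rerunning the Jedis test, the switch can also be confirmed from the command line by asking any Sentinel for the current master address. A minimal sketch, again assuming the redis-cli from the same build; after the failover it should report 127.0.0.1 6380:

# any of the three Sentinels will do; they have all agreed on epoch 1
./redis-cli -p 26379 sentinel get-master-addr-by-name mymaster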
We now execute the above Jedis test program again and get the following results; the Sentinel cluster does indeed return the new master Redis instance!

2014-11-20 16:39:00 redis.clients.jedis.JedisSentinelPool initSentinels
INFO: Trying to find master from available Sentinels...
2014-11-20 16:39:00 redis.clients.jedis.JedisSentinelPool initSentinels
INFO: Redis master running at 127.0.0.1:6380, starting Sentinel listeners...
2014-11-20 16:39:00 redis.clients.jedis.JedisSentinelPool initPool
INFO: Created JedisPool to master at 127.0.0.1:6380
Current master: 127.0.0.1:6380
Username: liangzhichao
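One closing, practical note: the logs above show roughly a 1.5-second gap between +sdown and +switch-master, on top of the 30-second down-after-milliseconds detection delay, and during that window writes to the old master fail. Client code should therefore be prepared to retry. The sketch below illustrates the idea; it assumes the JedisSentinelPool from the test program above, and the retry helper itself is hypothetical, not part of Jedis:

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisSentinelPool;
import redis.clients.jedis.exceptions.JedisConnectionException;

public class MyFailoverRetryTest {
    // Illustrative only: retry a write a few times so a Sentinel failover
    // (during which the old master is unreachable) does not bubble up immediately.
    static void setWithRetry(JedisSentinelPool pool, String key, String value) {
        for (int attempt = 0; attempt < 3; attempt++) {
            Jedis jedis = pool.getResource();
            try {
                jedis.set(key, value);
                return; // success
            } catch (JedisConnectionException e) {
                // JedisSentinelPool listens for +switch-master messages from the
                // Sentinels and re-resolves the master address; back off briefly,
                // then retry with a fresh connection from the pool.
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    return;
                }
            } finally {
                jedis.close(); // hand the connection back to the pool
            }
        }
    }
}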
