Redis Redundancy Scheme (keepalived, HAProxy, Redis Sentinel)



If you're looking for a Redis redundancy solution, I found the following scheme and decided to try it out:
"Highly Available Redis Cluster | Simplicity is the keynote of all true elegance"
Initially I considered building it with Pacemaker, but since the Redis author has released Sentinel, this article uses the Sentinel-based scheme.



Software list and versions:
OS: CentOS 6.6
Redis: 2.8.19
HAProxy: 1.5.11
keepalived: 1.2.13



Sentinel handles automatic failover between the Redis master and slave.
Sentinel's monitoring and management of the master/slave pair works very well, but when the master moves, clients can no longer reach it at the same IP address. HAProxy is therefore used to provide a single stable entry point.
HAProxy registers all of the Redis servers (in our example, one master and one slave) as backend members and uses health checks to route traffic only to the master.

The slave here serves purely as the master's standby; it is not used to distribute reads.
Clients could query Sentinel themselves for the master's address, but Sentinel cannot push master-address changes to clients directly, hence the proxy.
After a slave is promoted to master, if the old master suddenly comes back up, it could briefly receive traffic through HAProxy. Disabling the Redis server's auto-start is a good idea if you want Sentinel to demote the old master to slave before it can serve requests.
In front of HAProxy, keepalived uses the VRRP protocol to provide a virtual IP (VIP) through which clients access Redis.
Now let's get to the setup.

Redis Sentinel



redis-sentinel.conf




port 26379
logfile /var/log/redis/sentinel.log
dir /tmp
sentinel monitor mymaster 192.168.1.20 6379 2
sentinel auth-pass mymaster my-redis-password
sentinel down-after-milliseconds mymaster 30000
sentinel parallel-syncs mymaster 1
sentinel failover-timeout mymaster 180000
# sentinel notification-script mymaster /var/redis/notify.sh
# sentinel client-reconfig-script mymaster /var/redis/reconfig.sh


The master's name here is mymaster, which can be any string you like. You can also define multiple master names, such as mymaster1 and mymaster2, which lets a single group of Sentinels manage multiple Redis clusters.
You only need to specify the master; Sentinel discovers the slaves from the master automatically.
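A client can ask Sentinel directly for the current master's address with SENTINEL get-master-addr-by-name mymaster. As an illustration (a sketch, not part of the original article) of what travels over the wire, here is how that query and its reply look in Redis's RESP protocol:

```python
def encode_resp(*args):
    """Encode a command as a RESP array of bulk strings,
    the format every Redis/Sentinel client sends on the wire."""
    parts = [f"*{len(args)}\r\n".encode()]
    for arg in args:
        data = arg.encode()
        parts.append(f"${len(data)}\r\n".encode() + data + b"\r\n")
    return b"".join(parts)

def parse_addr_reply(raw):
    """Parse Sentinel's two-element bulk-string reply into (host, port)."""
    lines = raw.split(b"\r\n")
    return lines[2].decode(), int(lines[4])

# What a client would send to Sentinel (port 26379) to locate the master:
query = encode_resp("SENTINEL", "get-master-addr-by-name", "mymaster")

# A reply for the master configured above would decode to:
reply = b"*2\r\n$12\r\n192.168.1.20\r\n$4\r\n6379\r\n"
print(parse_addr_reply(reply))  # ('192.168.1.20', 6379)
```

In practice clients use a Redis library's Sentinel support rather than raw sockets; this only shows why Sentinel alone cannot give clients a fixed address to connect to.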

HAProxy



haproxy.cfg




global
    log 127.0.0.1 local2 notice
    maxconn 4096
    chroot /var/lib/haproxy
    user nobody
    group nobody
    daemon

defaults
    log global
    mode tcp
    retries 3
    option redispatch
    maxconn 2000
    timeout connect 2s
    timeout client 120s
    timeout server 120s

frontend redis
    bind :6379
    default_backend redis_backend

backend redis_backend
    option tcp-check
    tcp-check send AUTH\ my-redis-password\r\n
    tcp-check expect string +OK
    tcp-check send PING\r\n
    tcp-check expect string +PONG
    tcp-check send INFO\ REPLICATION\r\n
    tcp-check expect string role:master
    tcp-check send QUIT\r\n
    tcp-check expect string +OK
    server redis1 192.168.1.21:6379 check inter 1s
    server redis2 192.168.1.22:6379 check inter 1s


tcp-check send transmits a string, and tcp-check expect matches the response against it. The checks above are executed in order within a single TCP session; note that spaces inside a sent string must be escaped with a backslash. AUTH sends the password and expects +OK; PING expects +PONG; INFO replication is expected to contain role:master, so only a live master passes the check and receives traffic. If you don't use authentication you can drop the AUTH step, and the PING step is likewise optional.
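The role:master check is what keeps traffic off the slave. As a small sketch (not from the article), the same logic HAProxy applies is just a substring match on the INFO replication output:

```python
def is_master(info_reply: str) -> bool:
    """Mirror HAProxy's `tcp-check expect string role:master`:
    a plain substring match on the INFO replication reply."""
    return "role:master" in info_reply

# Sample INFO replication replies from a master and a slave:
master_info = "# Replication\r\nrole:master\r\nconnected_slaves:1\r\n"
slave_info = "# Replication\r\nrole:slave\r\nmaster_host:192.168.1.21\r\n"

print(is_master(master_info))  # True
print(is_master(slave_info))   # False
```

Because both backends run the same check every second (check inter 1s), a failover flips which server answers role:master, and HAProxy follows within a couple of check intervals.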

keepalived



Unicast is configured here instead of multicast because of the packet traffic problems multicast can cause.



keepalived.conf


global_defs {
    notification_email {
        admin@example.com
    }
    notification_email_from keepalived@example.com
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id hostname
}

vrrp_script check_haproxy {
    script "pkill -0 -x haproxy"  # check that an haproxy process exists
    interval 1                    # run every second
    fall 2                        # mark as failed after 2 consecutive failures
    rise 2                        # mark as healthy after 2 consecutive successes
}

vrrp_instance REDIS {
    state BACKUP           # with nopreempt, both nodes are set to BACKUP
    interface eth0
    smtp_alert
    virtual_router_id 51   # must be unique within the subnet
    priority 101           # the higher number becomes MASTER
    advert_int 1           # VRRP advertisement interval in seconds
    nopreempt              # do not fail back automatically
    unicast_peer {         # unicast instead of multicast
        192.168.1.32       # IP address of the peer node
    }
    authentication {
        auth_type PASS
        auth_pass hogehoge # up to 8 characters
    }
    virtual_ipaddress {
        192.168.1.30       # the VIP
    }
    track_script {
        check_haproxy
    }
    # Script executed on promotion to MASTER (nothing needs to be set here)
    # notify_master /some/where/notify_master.sh
}
Does this all look a bit over the top?


I could provide an Ansible playbook to build all of this, but it does feel somewhat heavyweight for what is, in the end, just an active/standby configuration.
The setup pays off when you use Sentinel to spread load across a larger number of Redis replicas, or use one HAProxy pair to front several Redis clusters (a single Sentinel group can monitor them all); for a simple active/standby pair it is overly complex.
In addition, for something as fast as Redis, HAProxy adds real overhead: a quick redis-benchmark test showed roughly half the original throughput. (Redis itself is very fast, and I'd like to keep it that way.)
You could consider removing HAProxy, but keepalived remains necessary: the VIP provided by keepalived must always follow the Redis master, whose failover is managed by Redis Sentinel.
When the Redis master switches, keepalived moves the VIP so that the host holding it is the new master.

This scheme is based on the HA setup recommended in Fujiwara's article "Redis using keepalived."
Fujiwara wrote that keepalived did not yet support VRRP unicast at the time, but it does now, so the scheme can also be used on cloud services such as EC2, where only unicast is available.
In keepalived's notify_master, call redis-cli's SLAVEOF NO ONE to promote the local Redis to master, then use CONFIG REWRITE to persist the change to redis.conf.
CONFIG REWRITE was added in Redis 2.8 and was not yet available when Fujiwara wrote his article.
Likewise, notify_backup can be used to switch the local Redis back to slave when the node becomes BACKUP.



keepalived.conf


global_defs {
    notification_email {
        admin@example.com
    }
    notification_email_from keepalived@example.com
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id hostname
}

vrrp_script check_redis {
    script "/some/where/check_redis.sh"  # check that Redis is alive
    interval 2   # run every 2 seconds
    fall 2       # mark as failed after 2 consecutive failures
    rise 2       # mark as healthy after 2 consecutive successes
}

vrrp_instance REDIS {
    state BACKUP           # with nopreempt, both nodes are set to BACKUP
    interface eth0
    smtp_alert
    virtual_router_id 51   # must be unique within the subnet
    priority 101           # the higher number becomes MASTER
    advert_int 1           # VRRP advertisement interval in seconds
    nopreempt              # do not fail back automatically
    unicast_peer {         # unicast instead of multicast
        192.168.1.32       # IP address of the peer node
    }
    authentication {
        auth_type PASS
        auth_pass hogehoge # up to 8 characters
    }
    virtual_ipaddress {
        192.168.1.30       # the VIP
    }
    track_script {
        check_redis
    }
    # Script executed on promotion to MASTER
    notify_master /some/where/notify_master.sh
    # notify_backup /some/where/notify_backup.sh
}
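The notify scripts are only named in the configuration (/some/where/notify_master.sh), so here is a sketch of the logic they would run; the peer address, password, and redis-cli arguments are assumptions taken from the examples above, not from the original article:

```python
import subprocess

def failover_commands(role, peer="192.168.1.32", port="6379"):
    """redis-cli argument lists for keepalived's notify transitions:
    promote the local Redis on MASTER, follow the peer on BACKUP,
    then persist the change with CONFIG REWRITE (Redis >= 2.8)."""
    if role == "master":
        cmds = [["SLAVEOF", "NO", "ONE"]]
    else:
        cmds = [["SLAVEOF", peer, port]]
    cmds.append(["CONFIG", "REWRITE"])
    return cmds

def run_notify(role):
    # What notify_master.sh / notify_backup.sh would execute.
    for cmd in failover_commands(role):
        subprocess.check_call(["redis-cli", "-a", "my-redis-password"] + cmd)

print(failover_commands("master"))
# [['SLAVEOF', 'NO', 'ONE'], ['CONFIG', 'REWRITE']]
```

On a node without CONFIG REWRITE (Redis < 2.8) the script would instead have to edit redis.conf itself, which is exactly the limitation Fujiwara hit.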


This version is much simpler; if all you need is active/standby, these configurations are a good reference.



Original:
http://blog.1q77.com/2015/02/redis-ha/

