Use LVS + keepalived to configure Redis high availability and load balancing

Requirement

We use Elasticsearch for our log service. The architecture is: upstream data sources => Redis => Logstash => ES.

Redis is still a single point with no high availability, and data volume keeps growing. As long as downstream consumption keeps up, data merely passes through Redis; but once the downstream has a problem, the memory allocated to Redis fills up within half an hour.

Redis 3.0 beta already provides a cluster feature, but clients must connect in cluster mode, so asking that many upstream users to change their code is unrealistic.

The company also has a hardware load balancer, used by colleagues at Company E, but we would still have to apply for access; moreover, the Redis architecture is not yet finalized and will change significantly later, and the company's process for that is quite painful... Besides, I wanted to build it myself, so I chose the LVS solution.

Target
  1. High availability. Each server runs one (or more) redis-server instances; if an instance or a whole server fails, traffic is handed over seamlessly to another instance/server. Some data may be lost. If data reliability requirements grow later, we can add RDB dumps and master-slave replication.
  2. Load balancing. If high availability were the only goal, keepalived alone would do: when a redis-server or a server crashes, the VIP moves to the other server. But then the backup server sits idle, and our company is too small to waste a machine like that.
  3. Clients need no changes and no restarts; they should not notice anything. Asking them to change things... forget it. Besides, it is hard even to count how many clients there are; I blame myself for not keeping records.
Design

Two real servers, 192.168.81.51 and 192.168.81.234

One VIP 192.168.81.229

Each server runs a redis-server instance listening on port 6379.

The VIP sits on the master, which round-robins connections to one of the servers. (Each client's data volume differs, and Redis connections are basically persistent; unlike HTTP, this is not perfectly balanced.)

Master-slave replication can be considered in the future. For example, run a second instance on port 17379 on server B as a slave of port 6379 on server A, and vice versa.
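As a sketch of that future cross-replication (hypothetical; assuming Redis 2.6+, where config directives can be passed on the command line):

# on server B (192.168.81.234): extra instance on 17379 replicating A's 6379
redis-server --port 17379 --slaveof 192.168.81.51 6379
# and symmetrically on server A (192.168.81.51):
redis-server --port 17379 --slaveof 192.168.81.234 6379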

Environment
# uname -a
Linux VMS02564 2.6.18-308.el5 #1 SMP Tue Feb 21 20:06:06 EST 2012 x86_64 x86_64 x86_64 GNU/Linux
# cat /etc/*release
CentOS release 5.8 (Final)
Prepare the software. First, the LVS kernel module (ip_vs), which is already present by default:
# modprobe -l | grep -i ipvs
/lib/modules/2.6.18-308.el5/kernel/net/ipv4/ipvs/ip_vs.ko
/lib/modules/2.6.18-308.el5/kernel/net/ipv4/ipvs/ip_vs_dh.ko
/lib/modules/2.6.18-308.el5/kernel/net/ipv4/ipvs/ip_vs_ftp.ko
/lib/modules/2.6.18-308.el5/kernel/net/ipv4/ipvs/ip_vs_lblc.ko
/lib/modules/2.6.18-308.el5/kernel/net/ipv4/ipvs/ip_vs_lblcr.ko
/lib/modules/2.6.18-308.el5/kernel/net/ipv4/ipvs/ip_vs_lc.ko
/lib/modules/2.6.18-308.el5/kernel/net/ipv4/ipvs/ip_vs_nq.ko
/lib/modules/2.6.18-308.el5/kernel/net/ipv4/ipvs/ip_vs_rr.ko
/lib/modules/2.6.18-308.el5/kernel/net/ipv4/ipvs/ip_vs_sed.ko
/lib/modules/2.6.18-308.el5/kernel/net/ipv4/ipvs/ip_vs_sh.ko
/lib/modules/2.6.18-308.el5/kernel/net/ipv4/ipvs/ip_vs_wlc.ko
/lib/modules/2.6.18-308.el5/kernel/net/ipv4/ipvs/ip_vs_wrr.ko
Install ipvsadm.

It can be installed with yum if it is not already present. ipvsadm is the tool for managing LVS; I have not studied its usage in detail yet and will come back to it later.
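For reference, installing and sanity-checking it on CentOS 5 looks like:

yum install -y ipvsadm
ipvsadm -L -n    # lists the (currently empty) virtual server table, confirming it talks to the kernel module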

Install keepalived.

The latest version is 1.2.13, but building it from source failed due to missing dependencies; rather than chase them down, I fell back to 1.2.8. keepalived has a pitfall: if the configuration file is wrong or missing, no error is reported at startup. The default configuration file is /etc/keepalived/keepalived.conf; if yours is installed elsewhere, test carefully.
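The build itself is the standard autotools flow; a minimal sketch, assuming the usual tarball location on keepalived.org and typical CentOS 5 build dependencies:

yum install -y gcc openssl-devel popt-devel kernel-devel   # assumed typical build deps
wget http://www.keepalived.org/software/keepalived-1.2.8.tar.gz
tar xzf keepalived-1.2.8.tar.gz && cd keepalived-1.2.8
./configure && make && make install
# place the config where keepalived looks for it by default:
mkdir -p /etc/keepalived
# (then write /etc/keepalived/keepalived.conf as in the Configure section below)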

Configure

Before configuring anything, you must understand the underlying principle; this is the most important lesson here. If you understand the principle, you can quickly diagnose and solve problems when you hit difficulties. Otherwise you are guessing at a black box, relying on luck and walking into pitfalls.

Keepalived Configuration
vrrp_instance VI_1 {
    state MASTER              # start as MASTER; becomes BACKUP if another node has higher priority
    interface eth0
    virtual_router_id 51      # must be identical on all nodes
    priority 100              # highest priority wins the master role
    advert_int 1
    authentication {
        auth_type PASS        # authentication method between nodes
        auth_pass 1111        # must match on all nodes
    }
    virtual_ipaddress {
        192.168.81.229
    }
}
# virtual server configuration
virtual_server 192.168.81.229 6379 {   # VIP and port
    delay_loop 6              # interval for checking real_server health
    lb_algo rr                # LVS scheduling algorithm, round robin here: rr|wrr|lc|wlc|lblc|sh|dh
    lb_kind DR                # LVS forwarding mode: NAT|DR|TUN
    # persistence_timeout 60  # session persistence time
    protocol TCP              # TCP or UDP
    real_server 192.168.81.51 6379 {
        weight 50
        TCP_CHECK {                    # TCP health check
            # connect_timeout 3        # connection timeout
            # nb_get_retry 2           # retry count
            # delay_before_retry 3     # interval between retries
            connect_port 6379          # health check port
        }
    }
    real_server 192.168.81.234 6379 {
        weight 50
        TCP_CHECK {                    # TCP health check
            connect_port 6379          # health check port
        }
    }
}

The config has two parts. The upper part creates a VRRP instance (VRRP is the Virtual Router Redundancy Protocol). If you leave out the virtual_server section, Redis clients simply connect to whichever node currently holds the VIP; when keepalived on the master goes down, the backup becomes master and the VIP moves over. However, that alone cannot provide load balancing.
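You can check which node currently holds the VIP; keepalived adds it as a secondary address, so use ip rather than ifconfig:

ip addr show eth0 | grep 192.168.81.229    # prints a line only on the node holding the VIP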

Manually configure virtual IP addresses
Configure:
ifconfig eth0:1 VIP netmask 255.255.255.0
Delete:
ifconfig eth0:1 down

The lower part is the load-balancing configuration. keepalived programs LVS according to it; you can inspect the result with ipvsadm:

# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.81.229:6379 rr
  -> 192.168.81.234:6379          Route   1      0          0
  -> VMS02245:6379                Local   1      0          0

Is keepalived on the two nodes enough? If you only need HA, not load balancing, yes: configure just the upper part on both nodes and run it. But for load balancing, the following system configuration is also required.

System Configuration

Before doing the system configuration, you must first understand how LVS works. If you are eager to configure and skip this, you are headed for a pitfall.

DR forwarding principle

I use the direct routing (DR) forwarding method in LVS.

When a client sends a request to the VIP, the LVS server selects the real-server pool for that VIP, picks a real-server from the pool according to the scheduling algorithm, records the connection in a hash table, and forwards the client's packet to the chosen real-server, modifying only the packet's destination MAC address. The chosen real-server then sends its response directly back to the client. When the client sends further packets on the same connection, LVS looks them up in the hash table and forwards them to the same real-server. When the connection terminates or times out, its record is removed from the hash table.
Details of the three LVS modes are from "Jason Wu's thoughts and writings".
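The connection hash table mentioned above can be inspected on the director; for example:

ipvsadm -L -n -c    # lists each tracked connection: client address, VIP, and the chosen real server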

Because DR forwarding changes only the destination MAC address, the destination IP address remains the VIP. If the VIP is not configured on the real server, the packet is simply discarded. Therefore you must configure the VIP with a /32 mask on the real server, as shown below:

ifconfig lo:1 VIP netmask 255.255.255.255 up

However, now when anyone asks via ARP which machine has 192.168.81.229, both network adapters answer "I do". Which reply wins? The first one to arrive. See the capture:

# tcpdump -e -nn host 192.168.81.229
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
22:27:50.720431 00:50:56:92:05:b9 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 42: arp who-has 192.168.81.229 tell 192.168.81.156
22:27:50.720858 00:50:56:92:4d:6d > 00:50:56:92:05:b9, ethertype ARP (0x0806), length 60: arp reply 192.168.81.229 is-at 00:50:56:92:4d:6d
22:27:50.720881 00:50:56:92:05:b9 > 00:50:56:92:4d:6d, ethertype IPv4 (0x0800), length 98: 192.168.81.156 > 192.168.81.229: ICMP echo request, id 31307, seq 1, length 64
22:27:50.721040 00:50:56:92:36:44 > 00:50:56:92:05:b9, ethertype ARP (0x0806), length 60: arp reply 192.168.81.229 is-at 00:50:56:92:36:44
22:27:50.721130 00:50:56:92:4d:6d > 00:50:56:92:05:b9, ethertype IPv4 (0x0800), length 98: 192.168.81.229 > 192.168.81.156: ICMP echo reply, id 31307, seq 1, length 64

When we ping 192.168.81.229 from host C, both nodes claim .229 is theirs, and host C sends the ICMP packet to whichever host replied first. That is far too unreliable; we must make sure packets reach the right host.

Fortunately, Linux has kernel settings that control how ARP requests are answered and announced:

        echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore        echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce        echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore        echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce  

The configuration and its meaning are as follows:

arp_announce - INTEGER
    Define different restriction levels for announcing the local
    source IP address from IP packets in ARP requests sent on
    interface:
    0 - (default) Use any local address, configured on any interface
    1 - Try to avoid local addresses that are not in the target's
    subnet for this interface. This mode is useful when target
    hosts reachable via this interface require the source IP
    address in ARP requests to be part of their logical network
    configured on the receiving interface. When we generate the
    request we will check all our subnets that include the
    target IP and will preserve the source address if it is from
    such subnet. If there is no such subnet we select source
    address according to the rules for level 2.
    2 - Always use the best local address for this target.
    In this mode we ignore the source address in the IP packet
    and try to select local address that we prefer for talks with
    the target host. Such local address is selected by looking
    for primary IP addresses on all our subnets on the outgoing
    interface that include the target IP address. If no suitable
    local address is found we select the first local address
    we have on the outgoing interface or on all other interfaces,
    with the hope we will receive reply for our request and
    even sometimes no matter the source IP address we announce.
    The max value from conf/{all,interface}/arp_announce is used.
    Increasing the restriction level gives more chance for
    receiving answer from the resolved target while decreasing
    the level announces more valid sender's information.

arp_ignore - INTEGER
    Define different modes for sending replies in response to
    received ARP requests that resolve local target IP addresses:
    0 - (default): reply for any local target IP address, configured
    on any interface
    1 - reply only if the target IP address is local address
    configured on the incoming interface
    2 - reply only if the target IP address is local address
    configured on the incoming interface and both with the
    sender's IP address are part from same subnet on this interface
    3 - do not reply for local addresses configured with scope host,
    only resolutions for global and link addresses are replied
    4-7 - reserved
    8 - do not reply for all local addresses
    The max value from conf/{all,interface}/arp_ignore is used
    when ARP request is received on the {interface}

From "Using arp announce/arp ignore to disable ARP" on the LVS Knowledge Base (LVSKB).
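Once these settings are applied on both real servers (as done below), you can verify from another host on the subnet that only the director answers ARP for the VIP, assuming the iputils arping tool is available:

arping -I eth0 -c 3 192.168.81.229    # all replies should now come from a single, stable MAC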

Configure lo

ifconfig lo:1 VIP netmask 255.255.255.255 up

Configure ARP

Echo "1">/proc/sys/NET/IPv4/CONF/LO/arp_ignore

Echo "2">/proc/sys/NET/IPv4/CONF/LO/arp_announce

Echo "1">/proc/sys/NET/IPv4/CONF/All/arp_ignore

Echo "2">/proc/sys/NET/IPv4/CONF/All/arp_announce

Only one node may run LVS forwarding at a time

If keepalived ran with the same virtual_server configuration on both nodes, a packet arriving at A could be forwarded to B, which would forward it back to A, which would forward it to B again... an endless loop. Therefore LVS forwarding may run on only one host at a time.
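(If a node that should no longer forward still has stale LVS rules, for example after experiments, you can flush its table manually:)

ipvsadm -C    # clear the whole virtual server table on this node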

I originally assumed keepalived supported this natively, i.e. activating parts of the configuration (such as virtual_server) only when the node becomes master, but that does not seem to work. So I fell back to a cruder method: the notify scripts swap in the appropriate configuration file and reload keepalived. Without further ado, the final configuration:

# cat master.conf

global_defs {
   router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.81.229/24
    }
    notify_master "/etc/keepalived/notify_master.sh"
    notify_backup "/etc/keepalived/notify_backup.sh"
}
virtual_server 192.168.81.229 6379 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 0
    protocol TCP
    real_server 192.168.81.51 6379 {
        weight 1
        TCP_CHECK {
          connect_port    6379
          connect_timeout 3
        }
    }
    real_server 192.168.81.234 6379 {
        weight 1
        TCP_CHECK {
          connect_port    6379
          connect_timeout 3
        }
    }
}

# cat notify_master.sh

#!/bin/sh
echo "0" > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo "0" > /proc/sys/net/ipv4/conf/lo/arp_announce
echo "0" > /proc/sys/net/ipv4/conf/all/arp_ignore
echo "0" > /proc/sys/net/ipv4/conf/all/arp_announce

diff /etc/keepalived/keepalived.conf /etc/keepalived/master.conf
if test "$?" != "0"; then
    cp /etc/keepalived/master.conf /etc/keepalived/keepalived.conf
    killall -HUP keepalived
fi

# cat notify_backup.sh

#!/bin/shecho "1" >/proc/sys/net/ipv4/conf/lo/arp_ignoreecho "2" >/proc/sys/net/ipv4/conf/lo/arp_announceecho "1" >/proc/sys/net/ipv4/conf/all/arp_ignoreecho "2" >/proc/sys/net/ipv4/conf/all/arp_announce  diff /etc/keepalived/keepalived.conf /etc/keepalived/backup.confif test "$?" != "0"; then    cp /etc/keepalived/backup.conf /etc/keepalived/keepalived.conf    killall -HUP keepalivedfi