Use LVS + keepalived to configure Redis high availability and load balancing

Requirement

We use Elasticsearch for our log service. The architecture is: upstream data sources => Redis => Logstash => ES.

Redis is still a single point of failure with no high availability, and data volume keeps growing. As long as downstream consumption keeps up, data flowing into Redis is not a problem, but once the downstream has an issue, the memory allocated to Redis fills up within half an hour.

Redis 3.0 beta already provides a cluster feature, but clients must connect in cluster mode, so it is unrealistic to ask that many upstream users to change their clients.

The company also has a hardware LB, used by colleagues in Company E; however, access still has to be applied for, the Redis architecture is not yet settled and is likely to change, and the company's process is quite troublesome... Besides, I wanted to do it myself, so I chose the LVS solution.

Target Design

Two real servers, 192.168.81.51 and 192.168.81.234

One VIP 192.168.81.229

A redis-server instance listens on port 6379 on each of the above servers.

The VIP sits on the master, and round robin distributes connections to one of the servers. (Each client's data volume differs, and Redis connections are basically persistent; unlike HTTP, this is not perfect load balancing.)

Master/slave replication can be considered in the future. For example, run an instance on port 17379 on server B as a slave of the 6379 instance on server A, and vice versa.
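If we do add that later, it could look like the following sketch (ports and addresses taken from the design above; passing --port and --slaveof on the command line is my assumption about how the instances would be started, not part of the current setup):

    # on server B (192.168.81.234): a second instance replicating A's 6379
    redis-server --port 17379 --slaveof 192.168.81.51 6379
    # symmetrically on server A (192.168.81.51): replicate B's 6379
    redis-server --port 17379 --slaveof 192.168.81.234 6379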

Environment
# uname -a
Linux VMS02564 2.6.18-308.el5 #1 SMP Tue Feb 21 20:06:06 EST 2012 x86_64 x86_64 x86_64 GNU/Linux
# cat /etc/*release
CentOS release 5.8 (Final)
Prepare the software. First, the LVS kernel module; it is already installed by default:
modprobe -l | grep -i ipvs
/lib/modules/2.6.18-308.el5/kernel/net/ipv4/ipvs/ip_vs.ko
/lib/modules/2.6.18-308.el5/kernel/net/ipv4/ipvs/ip_vs_dh.ko
/lib/modules/2.6.18-308.el5/kernel/net/ipv4/ipvs/ip_vs_ftp.ko
/lib/modules/2.6.18-308.el5/kernel/net/ipv4/ipvs/ip_vs_lblc.ko
/lib/modules/2.6.18-308.el5/kernel/net/ipv4/ipvs/ip_vs_lblcr.ko
/lib/modules/2.6.18-308.el5/kernel/net/ipv4/ipvs/ip_vs_lc.ko
/lib/modules/2.6.18-308.el5/kernel/net/ipv4/ipvs/ip_vs_nq.ko
/lib/modules/2.6.18-308.el5/kernel/net/ipv4/ipvs/ip_vs_rr.ko
/lib/modules/2.6.18-308.el5/kernel/net/ipv4/ipvs/ip_vs_sed.ko
/lib/modules/2.6.18-308.el5/kernel/net/ipv4/ipvs/ip_vs_sh.ko
/lib/modules/2.6.18-308.el5/kernel/net/ipv4/ipvs/ip_vs_wlc.ko
/lib/modules/2.6.18-308.el5/kernel/net/ipv4/ipvs/ip_vs_wrr.ko
Install ipvsadm.

It can be installed with yum if it is not already present. It is the userspace tool for managing LVS; I have not studied its usage carefully yet and will look at it later.
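A minimal sketch of the install and a first look (the package name is as shipped in the CentOS base repos):

    yum -y install ipvsadm
    ipvsadm -L -n    # list the current virtual server table (empty for now)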

Install keepalived.

The latest version at the time of writing is 1.2.13, but building it from source fails due to missing dependencies, so I fell back to 1.2.8. keepalived has a pitfall: if the configuration file is wrong or missing, no error is reported at startup. The default configuration file is /etc/keepalived/keepalived.conf; if yours is installed elsewhere, test carefully.
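A build-from-source sketch (the download URL and the exact steps are assumptions; adjust to wherever you fetch the tarball from):

    wget http://www.keepalived.org/software/keepalived-1.2.8.tar.gz
    tar xzf keepalived-1.2.8.tar.gz
    cd keepalived-1.2.8
    ./configure && make && make install
    # since startup will not complain about a bad config, sanity-check the path:
    ls -l /etc/keepalived/keepalived.conf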

Configure

Before configuring, you must understand the principle; it is the most valuable thing to learn here. If you understand the principle, you can quickly diagnose and solve problems when you hit them. Otherwise you are guessing inside a black box, relying on luck and stepping into pitfalls.

Keepalived Configuration
vrrp_instance VI_1 {
    state MASTER                # start as master; if another node has a higher priority, convert to backup
    interface eth0
    virtual_router_id 51        # must be the same on all nodes
    priority 100                # highest on the master
    advert_int 1
    authentication {
        auth_type PASS          # authentication method between nodes
        auth_pass 1111          # must be consistent between nodes
    }
    virtual_ipaddress {
        192.168.81.229
    }
}

# virtual server configuration
virtual_server 192.168.81.229 6379 {   # VIP and port
    delay_loop 6                       # interval for checking real_server status
    lb_algo rr                         # LVS scheduling algorithm, here round robin: rr | wrr | lc | wlc | lblc | sh | dh
    lb_kind DR                         # forwarding mode: NAT | DR | TUN
    # persistence_timeout 60           # session persistence time
    protocol TCP                       # TCP or UDP

    real_server 192.168.81.51 6379 {
        weight 50
        TCP_CHECK {                    # TCP health check
            # connect_timeout 3        # connection timeout
            # nb_get_retry 2           # retry count
            # delay_before_retry 3     # interval between retries
            connect_port 6379          # health check port
        }
    }

    real_server 192.168.81.234 6379 {
        weight 50
        TCP_CHECK {                    # TCP health check
            connect_port 6379          # health check port
        }
    }
}

The configuration has two parts. The upper part creates a VRRP instance (VRRP is the Virtual Router Redundancy Protocol). Even without the virtual server configuration below, Redis clients can simply connect to the node currently holding the VIP; when keepalived on the master goes down, the backup becomes the master and the VIP switches to it. However, this alone cannot provide load balancing.
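To check which node currently holds the VIP after starting keepalived (a sketch, assuming the init script is installed; note that keepalived adds the VIP as a secondary address, which ip addr shows but plain ifconfig may not):

    service keepalived start
    ip addr show eth0 | grep 192.168.81.229   # present only on the current master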

Manually configuring a virtual IP address
Add:
ifconfig eth0:1 VIP netmask 255.255.255.0
Delete:
ifconfig eth0:1 down

The lower part is the load-balancing configuration; keepalived configures LVS according to it, which you can see with ipvsadm:

# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.81.229:6379 rr
  -> 192.168.81.234:6379          Route   1      0          0
  -> VMS02245:6379                Local   1      0          0

Is keepalived on the two nodes enough? If you only need HA without load balancing, yes: configure just the upper part and run it. But for load balancing, the following system configuration is also required.

System Configuration

Before doing the system configuration, you must first understand the principles of LVS. If you are in a hurry to configure and skip this, you will step into pitfalls.

DR Forwarding Principle

I use the direct routing (DR) forwarding method in lvs.

When a client sends a request to the VIP, the LVS server selects the real-server pool corresponding to that VIP, picks a real server from the pool according to the scheduling algorithm, records the connection in a hash table, and forwards the client's packet to the selected real server (only the destination MAC address of the packet is modified). The selected real server then sends its response packet directly to the client. When the client sends further packets on this connection, LVS looks them up in the hash table and forwards them straight to the same real server. When the connection terminates or times out, its record is removed from the hash table.
For details about the three LVS modes, see Jason Wu's Thoughts and Writings.
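One way to convince yourself of the MAC-only rewrite (a sketch, run on a real server): captured packets arrive with the destination IP still equal to the VIP, but the destination MAC equal to the real server's own NIC.

    tcpdump -e -nn -i eth0 host 192.168.81.229 and port 6379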

Because DR forwarding only changes the destination MAC address, the destination IP address is still the VIP. If the VIP is not configured on the real server, the packet will simply be discarded. Therefore you must configure the VIP with a 32-bit mask on the real server, as shown below:

ifconfig lo:1 VIP netmask 255.255.255.255 up

But now, when someone asks who has IP address 192.168.81.229, both machines' network adapters answer "that's me". Which one does the client use? The first reply it receives. See the capture:

# tcpdump -e -nn host 192.168.81.229
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
22:27:50.720431 00:50:56:92:05:b9 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 42: arp who-has 192.168.81.229 tell 192.168.81.156
22:27:50.720858 00:50:56:92:4d:6d > 00:50:56:92:05:b9, ethertype ARP (0x0806), length 60: arp reply 192.168.81.229 is-at 00:50:56:92:4d:6d
22:27:50.720881 00:50:56:92:05:b9 > 00:50:56:92:4d:6d, ethertype IPv4 (0x0800), length 98: 192.168.81.156 > 192.168.81.229: ICMP echo request, id 31307, seq 1, length 64
22:27:50.721040 00:50:56:92:36:44 > 00:50:56:92:05:b9, ethertype ARP (0x0806), length 60: arp reply 192.168.81.229 is-at 00:50:56:92:36:44
22:27:50.721130 00:50:56:92:4d:6d > 00:50:56:92:05:b9, ethertype IPv4 (0x0800), length 98: 192.168.81.229 > 192.168.81.156: ICMP echo reply, id 31307, seq 1, length 64

When we ping 192.168.81.229 from host C, both nodes claim .229 is theirs, and host C sends the ICMP packet to whichever replied first. That is far too unreliable; we must make sure packets go to the right host.

Fortunately, Linux has kernel parameters that control ARP request responses:

        echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore        echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce        echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore        echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce  

The configuration and its meaning are as follows:

arp_announce - INTEGER
    Define different restriction levels for announcing the local
    source IP address from IP packets in ARP requests sent on
    interface:
    0 - (default) Use any local address, configured on any interface
    1 - Try to avoid local addresses that are not in the target's
    subnet for this interface. This mode is useful when target
    hosts reachable via this interface require the source IP
    address in ARP requests to be part of their logical network
    configured on the receiving interface. When we generate the
    request we will check all our subnets that include the
    target IP and will preserve the source address if it is from
    such subnet. If there is no such subnet we select source
    address according to the rules for level 2.
    2 - Always use the best local address for this target.
    In this mode we ignore the source address in the IP packet
    and try to select local address that we prefer for talks with
    the target host. Such local address is selected by looking
    for primary IP addresses on all our subnets on the outgoing
    interface that include the target IP address. If no suitable
    local address is found we select the first local address
    we have on the outgoing interface or on all other interfaces,
    with the hope we will receive reply for our request and
    even sometimes no matter the source IP address we announce.
    The max value from conf/{all,interface}/arp_announce is used.
    Increasing the restriction level gives more chance for
    receiving answer from the resolved target while decreasing
    the level announces more valid sender's information.

arp_ignore - INTEGER
    Define different modes for sending replies in response to
    received ARP requests that resolve local target IP addresses:
    0 - (default): reply for any local target IP address, configured
    on any interface
    1 - reply only if the target IP address is local address
    configured on the incoming interface
    2 - reply only if the target IP address is local address
    configured on the incoming interface and both with the
    sender's IP address are part from same subnet on this interface
    3 - do not reply for local addresses configured with scope host,
    only resolutions for global and link addresses are replied
    4-7 - reserved
    8 - do not reply for all local addresses
    The max value from conf/{all,interface}/arp_ignore is used
    when ARP request is received on the {interface}

From "Using arp announce/arp ignore to disable ARP" - LVSKB

Configure lo:

ifconfig lo:1 VIP netmask 255.255.255.255 up

Configure arp:

echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce

Only one node can run LVS forwarding at a time

If both nodes run keepalived with the same virtual_server configuration, a packet forwarded from A to B is forwarded by B back to A, then from A to B again, and so on in an endless loop. Therefore only one host may have the LVS forwarding rules active.

I originally expected keepalived to support this natively, i.e. activating certain configuration (such as virtual_server) only when the node becomes master, but it does not seem to work that way. So I resort to a comparison trick: notify scripts swap the configuration file depending on the node's state. Without further ado, the final configuration is as follows:

# cat master.conf

global_defs {
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.81.229/24
    }
    notify_master "/etc/keepalived/notify_master.sh"
    notify_backup "/etc/keepalived/notify_backup.sh"
}

virtual_server 192.168.81.229 6379 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 0
    protocol TCP

    real_server 192.168.81.51 6379 {
        weight 1
        TCP_CHECK {
          connect_port    6379
          connect_timeout 3
        }
    }

    real_server 192.168.81.234 6379 {
        weight 1
        TCP_CHECK {
          connect_port    6379
          connect_timeout 3
        }
    }
}

# cat notify_master.sh

#!/bin/sh
echo "0" > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo "0" > /proc/sys/net/ipv4/conf/lo/arp_announce
echo "0" > /proc/sys/net/ipv4/conf/all/arp_ignore
echo "0" > /proc/sys/net/ipv4/conf/all/arp_announce

diff /etc/keepalived/keepalived.conf /etc/keepalived/master.conf
if test "$?" != "0"; then
    cp /etc/keepalived/master.conf /etc/keepalived/keepalived.conf
    killall -HUP keepalived
fi

# cat notify_backup.sh

#!/bin/sh
echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce

diff /etc/keepalived/keepalived.conf /etc/keepalived/backup.conf
if test "$?" != "0"; then
    cp /etc/keepalived/backup.conf /etc/keepalived/keepalived.conf
    killall -HUP keepalived
fi
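Remember to make the notify scripts executable, and it is worth verifying failover end to end. A sketch (using run_id from INFO is just one convenient way to tell the two Redis instances apart):

    chmod +x /etc/keepalived/notify_master.sh /etc/keepalived/notify_backup.sh
    # connect through the VIP and note which instance answers
    redis-cli -h 192.168.81.229 -p 6379 info server | grep run_id
    # stop keepalived on the master; the VIP should move and clients reconnect
    service keepalived stop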

Reference: man 5 keepalived.conf
