Implementing LVS High-Availability Clusters with Keepalived

Source: Internet
Author: User

1. Introduction to keepalived

1. The core function of keepalived is to provide high availability for LVS on Linux through the VRRP protocol.

2. VRRP (Virtual Router Redundancy Protocol) lets multiple gateways be virtualized into one: a group of routers shares a virtual IP address (the VIP) and, along with it, a virtual MAC address.

3. keepalived implements failover through the VRRP protocol to avoid a single point of failure. When the master node fails, a backup node takes over and continues providing service; when the failed node recovers, it automatically rejoins the service.

4. VRRP protocol state machine

(Figure: VRRP state transition diagram — image unavailable)

5. Install the keepalived service. This article uses a CentOS 6.4 lab environment, where keepalived 1.2.7 can be installed directly from the RPM package.

6. Main keepalived configuration file: /etc/keepalived/keepalived.conf

Keepalived service script: /etc/rc.d/init.d/keepalived
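The failover behavior described in point 3 above boils down to a priority-based election: every live router advertises its priority, and the highest wins MASTER (a tie is broken by the higher IP address, per the VRRP specification). A minimal, hypothetical Python model of that election — not keepalived's actual implementation:

```python
def elect_master(routers):
    """Pick the VRRP master: highest priority wins; a tie is broken
    by the numerically higher IP address (compared as a tuple)."""
    return max(routers, key=lambda r: (r["priority"], r["ip"]))

nodes = [
    {"name": "node1", "priority": 100, "ip": (172, 16, 200, 8)},
    {"name": "node2", "priority": 99,  "ip": (172, 16, 200, 9)},
]
print(elect_master(nodes)["name"])   # node1 while it is alive

# node1 fails (stops sending advertisements) -> node2 takes over
alive = [r for r in nodes if r["name"] != "node1"]
print(elect_master(alive)["name"])   # node2

# node1 recovers with its higher priority -> it rejoins and preempts
print(elect_master(nodes)["name"])   # node1 again
```

In real VRRP the backup only takes over after missing several advertisement intervals (advert_int); the sketch skips timing entirely and shows only the election rule.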

2. The keepalived configuration file

1. Global configuration section

GLOBAL CONFIGURATION


global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

Defines email notification settings (recipients, sender, SMTP server), the router ID, and optionally static routes.

2. The keepalived VRRP instance configuration segment

VRRPD CONFIGURATION


vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.200.16
        192.168.200.17
        192.168.200.18
    }
}

The VRRP instance is the core configuration segment.

3. keepalived LVS virtual server configuration segment

LVS CONFIGURATION


virtual_server 192.168.200.100 443 {
    delay_loop 6
    lb_algo rr
    lb_kind NAT
    nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP

    real_server 192.168.201.100 443 {
        weight 1
        SSL_GET {
            url {
              path /
              digest ff20ad2481f97b1754ef3e12ecd3a9cc
            }
            url {
              path /mrtg/
              digest 9b3a0c85a887a256d6939da88aabd8cd
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
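The digest values in the SSL_GET check are MD5 hashes of the page content; keepalived ships a genhash tool to compute them against a live server (e.g. `genhash -s 192.168.201.100 -p 443 -u /`, adding `-S` for HTTPS — the address comes from the config above). The same idea, sketched locally over a fixed string instead of a fetched page:

```shell
# MD5 over fixed content -- conceptually what genhash hashes for a URL's body
body='hello'
digest=$(printf '%s' "$body" | md5sum | awk '{print $1}')
echo "$digest"   # 5d41402abc4b2a76b9719d911017c592
```

If the hash of the fetched page ever differs from the configured digest, keepalived marks the real server as failed and removes it from the pool.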

3. Preparing the environment for LVS high availability with keepalived

1. Prepare three nodes: ms, node1, and node2.

2. Install ansible on node ms and set up SSH key trust from ms to node1 and node2.

[root@ms ~]# yum -y install ansible
[root@ms ~]# ssh-keygen -t rsa -P ''
[root@ms ~]# ssh-copy-id -i .ssh/id_rsa.pub root@node1.xiaozheng.com
[root@ms ~]# ssh-copy-id -i .ssh/id_rsa.pub root@node2.xiaozheng.com

3. Install the keepalived service on node1/node2.

[root@ms ~]# ansible all -m shell -a "yum -y install keepalived"

4. Go to node node1/node2 to view the keepalived configuration.

[root@node1 ~]# cd /etc/keepalived
[root@node1 keepalived]# vim keepalived.conf
[root@node2 ~]# cd /etc/keepalived
[root@node2 keepalived]# vim keepalived.conf

5. On node1/node2, tail the system log in a separate terminal to watch keepalived state changes as they happen.

[root@node1 ~]# tail -f /var/log/messages
[root@node2 ~]# tail -f /var/log/messages

4. How does keepalived implement notification during state transition?

1. Notification location

Notifications can be defined in a synchronization group:

vrrp_sync_group {
}

or, most commonly, in a VRRP instance:

vrrp_instance {
}

1) First, define the global configuration segment

global_defs {
      notification_email {
           root@localhost
      }
      notification_email_from msadmin@localhost
      smtp_server 127.0.0.1
      smtp_connect_timeout 30
      router_id LVS_DEVEL
}

2) Define the control mechanism (a tracked script)

vrrp_script chk_main {
      script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
      interval 1
      weight -2
}
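The script's exit status drives the failover: exit 1 (the down file exists) marks the check as failed, and weight -2 subtracts 2 from the node's priority, dropping the master below the backup. A local sketch of the same file test, using a temporary directory in place of /etc/keepalived:

```shell
dir=$(mktemp -d)

check() {
    # The same test keepalived runs every `interval` seconds:
    # fail (return 1) when the down file exists, succeed otherwise.
    [[ -f "$dir/down" ]] && return 1 || return 0
}

check; before=$?     # no down file yet -> 0 (check passes, full priority)
touch "$dir/down"    # operator forces a failover
check; after=$?      # -> 1 (check fails, priority drops by |weight|)
echo "$before $after"

rm -rf "$dir"
```

Removing the file reverses the effect: the check passes again, the priority is restored, and the node can reclaim the VIP.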

3) Then define the VRRP instance segment

VRRP instance segment configuration of node node1

[root@node1 keepalived]# vim keepalived.conf
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 63
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.16.200.100
    }
    track_script {
        chk_main
    }
}

VRRP instance segment configuration of node node2

[root@node2 keepalived]# vim keepalived.conf
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 63
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.16.200.100
    }
    track_script {
        chk_main
    }
}

2. Notification Method

notify_master: notification run when the node becomes master

notify_backup: notification run when the node becomes backup

notify_fault: notification run when the node enters the fault state

4) In the instance, you can control how notifications are sent with a notify.sh script.

notify_master "/etc/keepalived/notify.sh master"
notify_backup "/etc/keepalived/notify.sh backup"
notify_fault "/etc/keepalived/notify.sh fault"

* Example notify.sh script

#!/bin/bash
# Author: MageEdu <linuxedu@foxmail.com>
# description: An example of notify script
vip=172.16.200.100
contact='root@localhost'

notify() {
    mailsubject="`hostname` to be $1: $vip floating"
    mailbody="`date '+%F %H:%M:%S'`: vrrp transition, `hostname` changed to be $1"
    echo $mailbody | mail -s "$mailsubject" $contact
}

case "$1" in
    master)
        notify master
        exit 0
    ;;
    backup)
        notify backup
        exit 0
    ;;
    fault)
        notify fault
        exit 0
    ;;
    *)
        echo "Usage: `basename $0` {master|backup|fault}"
        exit 1
    ;;
esac

5) From node ms, restart the keepalived service on node1/node2 and check which node holds the virtual_ipaddress.

[root@ms ~]# ansible all -a "service keepalived restart"
[root@ms ~]# ansible all -m shell -a "ip addr show | grep eth0"

6) Create the down file on the master node node1 to migrate the virtual_ipaddress from node1 to node2, then check the VIP transfer between nodes from node ms.

[root@node1 keepalived]# touch down
[root@ms ~]# ansible all -m shell -a "ip addr show | grep eth0"

7) Restore the master node node1 by removing the down file, and check the VIP transfer again.

[root@node1 keepalived]# rm -rf down
[root@ms ~]# ansible all -m shell -a "ip addr show | grep eth0"

5. How to configure ipvs

This core configuration section defines the virtual server (VIP) and its real servers.

1. virtual_server IP port defines the Virtual Host IP address and port

2. virtual_server fwmark int ipvs firewall tagging to implement firewall-based LVS

3. virtual_server group string

4. lb_algo {rr | wrr | lc | wlc | lblc | lblcr} defines the LVS scheduling algorithm.

5. lb_kind {NAT | DR | TUN} defines the LVS model.

6. persistence_timeout <INT> defines the duration of persistent connections.

7. protocol defines the protocol used by the ipvs rule (TCP or UDP).
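The scheduling algorithms selected by lb_algo can be illustrated with the two simplest ones. A minimal Python sketch (not the kernel's ipvs implementation): rr cycles through real servers evenly, while wrr repeats each server in proportion to its weight.

```python
from itertools import cycle

def rr(servers):
    """Round-robin: each server gets requests in turn."""
    return cycle(servers)

def wrr(servers):
    """Weighted round-robin, naively: expand each (address, weight)
    pair into `weight` slots, then cycle through the slots."""
    expanded = [addr for addr, w in servers for _ in range(w)]
    return cycle(expanded)

sched = rr(["172.16.200.8", "172.16.200.9"])
print([next(sched) for _ in range(4)])   # strictly alternates

wsched = wrr([("172.16.200.8", 2), ("172.16.200.9", 1)])
print([next(wsched) for _ in range(6)])  # .8 chosen twice as often as .9
```

The real wrr scheduler interleaves weights more smoothly, and lc/wlc additionally track live connection counts, but the proportions are the same idea.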

1) Configure the ipvs instance in the virtual_server segment

The ipvs virtual_server configuration on master node node1

[root@node1 keepalived]# vim keepalived.conf
virtual_server 172.16.200.100 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.0.0
    persistence_timeout 0
    protocol TCP
    real_server 172.16.200.8 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

The ipvs virtual_server configuration on backup node node2

[root@node2 keepalived]# vim keepalived.conf
virtual_server 172.16.200.100 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.0.0
    persistence_timeout 0
    protocol TCP
    real_server 172.16.200.9 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

2) From node ms, install ipvsadm on node1/node2 and start the httpd service on both nodes.

[root@ms ~]# ansible all -m shell -a "yum -y install ipvsadm"
[root@ms ~]# ansible all -a "service httpd start"

3) Go to node node1/node2 to view the corresponding ipvs rules.

[root@node1 keepalived]# ipvsadm -L -n
[root@node2 keepalived]# ipvsadm -L -n

6. High Availability for specific services

1. Monitor the service with a script:

vrrp_script {

}

2. Track that script inside the VRRP instance:

track_script {

}

7. Implementing a dual-master model with multiple virtual routers

To implement the master/master model with multiple virtual routers, you need to define multiple vrrp_instance segments, one per virtual router.

1. Configure the vrrp_instance segments on node node1, defining two instances


[root@node1 keepalived]# vim keepalived.conf
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 63
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.16.200.100
    }
    track_script {
        chk_main
    }
}

vrrp_instance VI_2 {
    state BACKUP
    interface eth0
    virtual_router_id 65
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 21112
    }
    virtual_ipaddress {
       172.16.200.200
    }
    track_script {
       chk_main
    }
}

2. Configure the vrrp_instance segments on node node2, defining two instances

[root@node2 keepalived]# vim keepalived.conf
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 63
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.16.200.100
    }
    track_script {
        chk_main
    }
}

vrrp_instance VI_2 {
    state MASTER
    interface eth0
    virtual_router_id 65
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 21112
    }
    virtual_ipaddress {
       172.16.200.200
    }
    track_script {
       chk_main
    }
}

3. Stop the keepalived service on master node node1 and check the VIP transfer between the nodes from node ms. Likewise, stop keepalived on node2, start it again on node1, and check the VIP transfer from node ms.

[root@node1 keepalived]# service keepalived stop
[root@ms ~]# ansible all -m shell -a "ip addr show | grep eth0"
[root@node2 keepalived]# service keepalived stop
[root@node1 keepalived]# service keepalived start
[root@ms ~]# ansible all -m shell -a "ip addr show | grep eth0"











This article is from the author's blog "broad Sky Summit one peice". For more information, contact the author!
