Keepalived+LVS for High-Availability Web Load Balancing


Data flow diagram:

[Figure: overall data flow of the Keepalived+LVS setup]

I. Test environment

Host name    IP               VIP
lvs01        192.168.137.150  192.168.137.80
lvs02        192.168.137.130  192.168.137.80
web01        192.168.137.128  --
web02        192.168.137.134  --

II. Installing and configuring LVS and Keepalived

1. Install ipvsadm and keepalived on both the lvs01 and lvs02 hosts:

yum install ipvsadm keepalived -y

Installed:

ipvsadm.x86_64 0:1.27-7.el7 keepalived.x86_64 0:1.2.13-9.el7_3
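
Before configuring anything, a quick sanity check (my addition, not in the original post) confirms the tools are usable; ipvsadm -L -n also loads the ip_vs kernel module and prints an empty rule table:

[root@lvs01 ~]# keepalived -v
[root@lvs01 ~]# ipvsadm -L -n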

2. Edit the keepalived configuration file on lvs01 as follows, configuring lvs01 as the MASTER node and setting the LVS load-balancing mode to DR (direct routing):

[root@lvs01 ~]# vi /etc/keepalived/keepalived.conf

! Configuration file for keepalived

global_defs {
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.137.150
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER                  # MASTER node
    interface ens33
    virtual_router_id 52
    priority 100                  # must be higher than the BACKUP node's priority
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.137.80            # VIP
    }
}

virtual_server 192.168.137.80 80 {
    delay_loop 6
    lb_algo rr                    # round-robin scheduling
    lb_kind DR                    # direct routing (DR) mode
    #persistence_timeout 50
    protocol TCP

    real_server 192.168.137.128 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.137.134 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

3. Modify the keepalived configuration file on lvs02 as shown below. Only two settings are required to differ from the master node's file: state, which becomes BACKUP, and priority, which must be lower than the master's (the interface name also differs here because lvs02's NIC is eth0).

[root@lvs02 ~]# vi /etc/keepalived/keepalived.conf

! Configuration file for keepalived

global_defs {
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.137.130
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state BACKUP                  # BACKUP node
    interface eth0
    virtual_router_id 52
    priority 90                   # must be lower than the MASTER node's priority
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.137.80            # VIP
    }
}

virtual_server 192.168.137.80 80 {
    delay_loop 6
    lb_algo rr                    # round-robin scheduling
    lb_kind DR                    # direct routing (DR) mode
    #persistence_timeout 50
    protocol TCP

    real_server 192.168.137.128 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.137.134 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
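
Since the two files differ in only a handful of values, one way to derive lvs02's file from lvs01's is to copy it over and patch it with sed (a sketch, assuming the priority values and interface names used above):

[root@lvs02 ~]# scp root@192.168.137.150:/etc/keepalived/keepalived.conf /etc/keepalived/
[root@lvs02 ~]# sed -i -e 's/state MASTER/state BACKUP/' \
                       -e 's/priority 100/priority 90/' \
                       -e 's/interface ens33/interface eth0/' \
                       -e 's/smtp_server 192.168.137.150/smtp_server 192.168.137.130/' \
                       /etc/keepalived/keepalived.conf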

4. On both lvs01 and lvs02, enable keepalived to start at boot, then start the service:


[root@lvs01 ~]# systemctl enable keepalived

Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.


[root@lvs01 ~]# systemctl start keepalived
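
Optionally verify that the service came up (not shown in the original):

[root@lvs01 ~]# systemctl status keepalived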


Note: check /var/log/messages for the related log output:

systemd: Started LVS and VRRP High Availability Monitor.

Keepalived_vrrp[2416]: VRRP_Instance(VI_1) Transition to MASTER STATE

Keepalived_healthcheckers[2415]: Netlink reflector reports IP 192.168.137.80 added.

And on the backup node:

Jun 17:07:26 server2 Keepalived_vrrp[15654]: VRRP_Instance(VI_1) Entering BACKUP STATE


5. Check whether the VIP has been bound to the network interface:

[root@lvs01 ~]# ip a
    inet 192.168.137.150/24 brd 192.168.137.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.137.80/32 scope global ens33
       valid_lft forever preferred_lft forever


[root@lvs02 ~]# ip a    # note that the VIP is not on lvs02
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:a5:b4:85 brd ff:ff:ff:ff:ff:ff
    inet 192.168.137.130/24 brd 192.168.137.255 scope global eth0
    inet6 fe80::20c:29ff:fea5:b485/64 scope link
       valid_lft forever preferred_lft forever


6. Check the LVS status to see the VIP and the two real servers:

[root@lvs01 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.137.80:80 rr
  -> 192.168.137.128:80           Route   1      0          0
  -> 192.168.137.134:80           Route   1      0          0


7. In DR mode, the two back-end real servers send response packets directly to the client without passing back through the LVS director, which reduces the director's load and improves efficiency. However, the packets the director forwards to the real servers still carry the VIP as their destination address, so the VIP must be bound to the loopback interface lo on each real server; otherwise the real server would conclude the packets are not addressed to it and drop them instead of responding. In addition, because network interfaces normally answer ARP broadcasts, the real servers would conflict with the director over the VIP, so ARP responses for the VIP on the real servers must be suppressed. The following script binds the VIP to the lo interface and turns off those ARP responses.


[root@web01 ~]# vim /etc/init.d/lvsrs.sh

#!/bin/bash
#chkconfig: 2345 80 90
# Bind the VIP to the loopback interface and suppress ARP replies for it.
VIP=192.168.137.80
ifconfig lo:0 $VIP broadcast $VIP netmask 255.255.255.255 up
route add -host $VIP dev lo:0
# arp_ignore=1: only answer ARP requests for addresses configured on the
# receiving interface; arp_announce=2: use the best local source address
# in ARP announcements, never the VIP on lo.
echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
sysctl -p
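
The post doesn't show the commands used to run the script and arrange for it to run at boot; a minimal sketch (assuming /etc/rc.d/rc.local is used) is:

[root@web01 ~]# chmod +x /etc/init.d/lvsrs.sh
[root@web01 ~]# /etc/init.d/lvsrs.sh
[root@web01 ~]# echo '/etc/init.d/lvsrs.sh' >> /etc/rc.d/rc.local
[root@web01 ~]# chmod +x /etc/rc.d/rc.local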


Execute the script (and set it to run automatically at boot), then view the IP addresses: the lo interface is now bound to the VIP address on both web servers.

[root@web01 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet 192.168.137.80/32 brd 192.168.137.80 scope global lo:0
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever


[root@web02 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet 192.168.137.80/32 brd 192.168.137.80 scope global lo:0
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever


III. LVS load balancing test

1. [Screenshot: a browser request to http://192.168.137.80 is answered by one web server's test page]

2. [Screenshot: refreshing the page, the request is answered by the other web server's test page]
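
The same test can be run from a shell on any client in the 192.168.137.0/24 network instead of a browser (my sketch; the client prompt is hypothetical, and it assumes each test page identifies its server):

[root@client ~]# for i in 1 2 3 4; do curl -s http://192.168.137.80/; done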

3. Look at the LVS status. Each of the two real servers shows 2 inactive connections, confirming that the 1:1 weighted round-robin is taking effect. The connections are inactive rather than active because we only fetched a static page, so each connection finishes and goes inactive almost immediately.

[root@lvs01 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.137.80:80 rr
  -> 192.168.137.128:80           Route   1      0          2
  -> 192.168.137.134:80           Route   1      0          2


IV. Keepalived high-availability test

1. Stop the keepalived service on lvs01, then observe its log: the bound VIP is removed, and the two real-server entries are cleared from the LVS table.
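
The stop command itself is not shown in the original; on CentOS 7 it would be:

[root@lvs01 ~]# systemctl stop keepalived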

[root@lvs01 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:77:71:4e brd ff:ff:ff:ff:ff:ff
    inet 192.168.137.150/24 brd 192.168.137.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::1565:761b:d9a2:42e4/64 scope link
       valid_lft forever preferred_lft forever

[root@lvs01 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn


At this point the VIP and the real-server entries appear on lvs02, and the site remains reachable through the VIP, which shows the failover succeeded. If lvs01 later recovers, the VIP drifts back to lvs01, because lvs01's keepalived configuration sets state to MASTER with the higher priority:

[root@lvs02 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:a5:b4:85 brd ff:ff:ff:ff:ff:ff
    inet 192.168.137.130/24 brd 192.168.137.255 scope global eth0
    inet 192.168.137.80/32 scope global eth0
    inet6 fe80::20c:29ff:fea5:b485/64 scope link
       valid_lft forever preferred_lft forever

[root@lvs02 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.137.80:80 rr
  -> 192.168.137.128:80           Route   1      0          0
  -> 192.168.137.134:80           Route   1      0          0
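
As an aside to the failback behavior above: if you would rather the VIP stay on the surviving node instead of drifting back automatically, keepalived supports non-preemptive operation. The usual pattern (not used in this article) is to set state BACKUP on both nodes and add nopreempt to the preferred, higher-priority one:

vrrp_instance VI_1 {
    state BACKUP        # nopreempt is only honored when state is BACKUP
    nopreempt           # do not take the VIP back from a lower-priority node
    priority 100
    # ...remaining directives as before
}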


2. Stop the httpd service on web01 to simulate a failed web server, and verify that keepalived detects the failure promptly and removes web01 from the LVS pool, so that requests are no longer distributed to the dead server:

[root@web01 ~]# /usr/local/apache24/bin/httpd -k stop


[root@lvs02 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.137.80:80 rr
  -> 192.168.137.134:80           Route   1      0          0
[root@lvs02 ~]#


When the httpd service on web01 is started again, keepalived detects the recovery and automatically adds the real server back into the pool.
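
To watch the removal and re-addition happen live, one convenient option (my addition, not in the original) is to keep the LVS table on screen on whichever director currently holds the VIP:

[root@lvs02 ~]# watch -n 1 ipvsadm -L -n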


This article is from the "Gavin" blog; please keep this source: http://guopeng7216.blog.51cto.com/9377374/1934678
