Highly available web load balancing with keepalived + LVS
Tags: keepalived, lvs
Data-flow architecture diagram:
[Figure: data-flow architecture — https://s4.51cto.com/wyfs02/M02/98/93/wKiom1k-T9OhQHDDAACF_BZBj48079.png]
I. Test environment
| Hostname | IP              | VIP            |
| -------- | --------------- | -------------- |
| lvs01    | 192.168.137.150 | 192.168.137.80 |
| lvs02    | 192.168.137.130 | 192.168.137.80 |
| web01    | 192.168.137.128 | --             |
| web02    | 192.168.137.134 | --             |
II. Installing and configuring LVS and keepalived
1. Install ipvsadm and keepalived on both lvs01 and lvs02:
yum install ipvsadm keepalived -y
Installed:
ipvsadm.x86_64 0:1.27-7.el7 keepalived.x86_64 0:1.2.13-9.el7_3
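Optionally, confirm that the IPVS kernel module is available (a quick sanity check; ip_vs is loaded automatically the first time rules are installed):
lvs01 ~]# lsmod | grep ip_vs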
2. Edit the keepalived configuration file on lvs01 as follows. This makes lvs01 the MASTER node and sets the LVS forwarding mode to DR:
lvs01 ~]# vi /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
        [email protected]
        [email protected]
        [email protected]
    }
    notification_email_from [email protected]
    smtp_server 192.168.137.150
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state MASTER                 # this node is the MASTER
    interface ens33
    virtual_router_id 52
    priority 100                 # must be higher than the BACKUP's priority
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.137.80           # VIP
    }
}
virtual_server 192.168.137.80 80 {
    delay_loop 6
    lb_algo rr                   # round-robin scheduling
    lb_kind DR                   # DR (direct routing) mode
    #persistence_timeout 50
    protocol TCP
    real_server 192.168.137.128 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.137.134 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
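The TCP_CHECK health check above simply attempts a TCP connection to each real server and marks it down once the retries are exhausted. Conceptually it is equivalent to this bash sketch (an illustration only, not keepalived's actual code):
# try a TCP connect with a 3-second timeout, mirroring connect_timeout 3
timeout 3 bash -c '</dev/tcp/192.168.137.128/80' && echo "real server up" || echo "real server down"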
3. Edit the keepalived configuration file on lvs02. Apart from the interface name (eth0 here instead of ens33) and the smtp_server address, only two settings differ from the master node's file: state must be BACKUP, and priority must be lower than the MASTER's.
lvs02 ~]# vi /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
        [email protected]
        [email protected]
        [email protected]
    }
    notification_email_from [email protected]
    smtp_server 192.168.137.130
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state BACKUP                 # this node is the BACKUP
    interface eth0
    virtual_router_id 52
    priority 90                  # must be lower than the MASTER's priority
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.137.80           # VIP
    }
}
virtual_server 192.168.137.80 80 {
    delay_loop 6
    lb_algo rr                   # round-robin scheduling
    lb_kind DR                   # DR (direct routing) mode
    #persistence_timeout 50
    protocol TCP
    real_server 192.168.137.128 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.137.134 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
4. On both lvs01 and lvs02, enable keepalived at boot and start the service:
lvs01 ~]# systemctl enable keepalived
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.
lvs01 ~]# systemctl start keepalived
Note: check /var/log/messages for the corresponding entries. On lvs01:
systemd: Started LVS and VRRP High Availability Monitor.
Keepalived_vrrp[2416]: VRRP_Instance(VI_1) Transition to MASTER STATE
Keepalived_healthcheckers[2415]: Netlink reflector reports IP 192.168.137.80 added.
And on lvs02:
Jun 12 17:07:26 server2 Keepalived_vrrp[15654]: VRRP_Instance(VI_1) Entering BACKUP STATE
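You can also watch the VRRP advertisements that the MASTER multicasts every advert_int second (a sketch; VRRP is IP protocol 112, multicast to 224.0.0.18):
lvs01 ~]# tcpdump -i ens33 -nn ip proto 112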
5. Verify that the VIP is bound to the NIC:
lvs01 ~]# ip a
inet 192.168.137.150/24 brd 192.168.137.255 scope global ens33
valid_lft forever preferred_lft forever
inet 192.168.137.80/32 scope global ens33
valid_lft forever preferred_lft forever
lvs02 ~]# ip a    ## note that the VIP is not on lvs02
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:a5:b4:85 brd ff:ff:ff:ff:ff:ff
inet 192.168.137.130/24 brd 192.168.137.255 scope global eth0
inet6 fe80::20c:29ff:fea5:b485/64 scope link
valid_lft forever preferred_lft forever
6. Check the LVS status; the VIP and both real servers are listed:
lvs01 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.137.80:80 rr
  -> 192.168.137.128:80           Route   1      0          0
  -> 192.168.137.134:80           Route   1      0          0
7. In DR mode, the two real servers reply to clients directly; responses do not pass back through the LVS director, which lightens the director's load and improves efficiency. However, the packets that LVS forwards to the real servers are still addressed to the VIP, so the VIP must be bound to each real server's loopback interface lo; otherwise the real server would treat the packets as not addressed to itself and drop them without responding. Also, because network interfaces answer ARP broadcasts, an address conflict would occur if another machine also answered ARP for the VIP, so ARP responses for the VIP on the real servers must be suppressed. The following script binds the VIP to lo and disables those ARP responses:
web01 ~]# vim /etc/init.d/lvsrs.sh
#!/bin/bash
#chkconfig: 2345 80 90
vip=192.168.137.80
# bind the VIP to the loopback alias lo:0 with a host (/32) mask
ifconfig lo:0 $vip broadcast $vip netmask 255.255.255.255 up
# keep locally generated traffic to the VIP on lo
route add -host $vip lo:0
# arp_ignore=1: only answer ARP requests for addresses configured on the
# interface the request arrived on, so the VIP on lo is never advertised
echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
# arp_announce=2: always use the best local address in ARP announcements
echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
# reload /etc/sysctl.conf (optional here; the echoes above already took effect)
sysctl -p
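To run the script now and again at every boot, one approach on CentOS is rc.local (a sketch; any equivalent init mechanism works):
web01 ~]# chmod +x /etc/init.d/lvsrs.sh
web01 ~]# /etc/init.d/lvsrs.sh
web01 ~]# echo '/etc/init.d/lvsrs.sh' >> /etc/rc.d/rc.local
web01 ~]# chmod +x /etc/rc.d/rc.local    # rc.local is not executable by default on CentOS 7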
Run the script on both web servers and set it to run at boot (see the sketch above), then check the IP addresses: the lo interface now carries the VIP.
web01 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet 192.168.137.80/32 brd 192.168.137.80 scope global lo:0
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
web02 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet 192.168.137.80/32 brd 192.168.137.80 scope global lo:0
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
III. LVS load-balancing test
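1. Each real server serves a simple page that identifies itself, so the round-robin rotation is visible in a browser. A minimal sketch (the document root is an assumption based on the Apache prefix used later in this article):
web01 ~]# echo 'web01: 192.168.137.128' > /usr/local/apache24/htdocs/index.html
web02 ~]# echo 'web02: 192.168.137.134' > /usr/local/apache24/htdocs/index.html
2. Browse http://192.168.137.80 and refresh; the returned page alternates between the two real servers: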
[Screenshot: https://s4.51cto.com/wyfs02/M02/98/98/wKioL1k-caijkuw5AABGSnPzwps970.png]
[Screenshot: https://s1.51cto.com/wyfs02/M01/98/98/wKioL1k-ciGzXCdHAABE5QbnEJU855.png]
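The same rotation can be observed from any client shell on the 192.168.137.0/24 network (a sketch):
for i in 1 2 3 4; do curl -s http://192.168.137.80/; done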
3. Check the LVS status: each real server shows 2 inactive connections, confirming that the 1:1-weight round-robin took effect. The connections show as inactive because only a static page was requested, so each connection goes idle right after the response:
lvs01 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.137.80:80 rr
  -> 192.168.137.128:80           Route   1      0          2
  -> 192.168.137.134:80           Route   1      0          2
IV. Keepalived high-availability test
1. Stop the keepalived service on lvs01 and watch its log: the VIP is removed from the interface, and both real-server entries are removed from the IPVS table.
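The stop command is the systemd counterpart of the start command used earlier:
lvs01 ~]# systemctl stop keepalived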
lvs01 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:77:71:4e brd ff:ff:ff:ff:ff:ff
inet 192.168.137.150/24 brd 192.168.137.255 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::1565:761b:d9a2:42e4/64 scope link
valid_lft forever preferred_lft forever
lvs01 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
The VIP and the real-server entries now appear on lvs02, and the site is still reachable through the VIP, so the failover succeeded. Note that once lvs01 recovers, the VIP floats back to it, because lvs01's configuration declares state MASTER with the higher priority and keepalived preempts by default:
lvs02 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:a5:b4:85 brd ff:ff:ff:ff:ff:ff
inet 192.168.137.130/24 brd 192.168.137.255 scope global eth0
inet 192.168.137.80/32 scope global eth0
inet6 fe80::20c:29ff:fea5:b485/64 scope link
valid_lft forever preferred_lft forever
lvs02 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.137.80:80 rr
  -> 192.168.137.128:80           Route   1      0          0
  -> 192.168.137.134:80           Route   1      0          0
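To verify the failback described above, restart keepalived on lvs01 and confirm that the VIP returns (a sketch; assumes keepalived's default preemption behavior):
lvs01 ~]# systemctl start keepalived
lvs01 ~]# ip a | grep 192.168.137.80    # the VIP is back on the MASTER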
2. Stop the httpd service on web01 to simulate a failed web server, and verify that keepalived detects the failure promptly and removes web01 from LVS, so that no requests are dispatched to the broken server:
web01 ~]# /usr/local/apache24/bin/httpd -k stop
lvs02 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.137.80:80 rr
  -> 192.168.137.134:80           Route   1      0          0
Now start the httpd service on web01 again; the keepalived health checker detects that the real server has recovered and automatically adds it back into the pool, as sketched below.
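Mirroring the stop command used above:
web01 ~]# /usr/local/apache24/bin/httpd -k start
lvs02 ~]# ipvsadm -L -n    # web01 reappears after the next health-check cycle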
This article is from the "Gavin" blog. Please retain this attribution when reposting: http://guopeng7216.blog.51cto.com/9377374/1934678