Linux Learning Summary (54): Keepalived + LVS dual-machine hot-standby load balancing architecture

Source: Internet
Author: User

1. Introduction to LVS IP tunnel mode

IP tunneling (IP encapsulation) is a technique for encapsulating one IP packet inside another, so that a packet addressed to one IP address can be wrapped and forwarded to a different IP address. It is mainly used for mobile hosts and virtual private networks, where tunnels are established statically and each end of the tunnel has its own IP address. In LVS, the connection scheduling and management of VS/TUN is the same as in VS/NAT, but the packet forwarding method differs. Based on the load of each server, the scheduler dynamically chooses a real server, encapsulates the request packet inside another IP packet, and forwards the encapsulated packet to the chosen server. When the server receives it, it strips off the outer header and finds that the original destination address is the VIP. Because the VIP is configured on its local IP tunnel device, the server processes the request and then returns the response directly to the client according to its own routing table.
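As a rough illustration of how VS/TUN is set up, here is a minimal sketch using ipvsadm on the director and an IPIP tunnel device on a real server. The interface names are assumptions for illustration and are not part of the original article; the addresses reuse the ones used later in this post.

# --- on the director (sketch; NIC name assumed) ---
vip=192.168.226.100
ifconfig ens33:0 $vip netmask 255.255.255.255 up
ipvsadm -C
ipvsadm -A -t $vip:80 -s wrr
ipvsadm -a -t $vip:80 -r 192.168.226.130:80 -i -w 1   # -i selects tunneling (VS/TUN)
ipvsadm -a -t $vip:80 -r 192.168.226.131:80 -i -w 1

# --- on each real server ---
modprobe ipip                                          # load the IPIP tunnel module (creates tunl0)
ifconfig tunl0 $vip netmask 255.255.255.255 up         # bind the VIP on the tunnel device
echo 1 > /proc/sys/net/ipv4/conf/tunl0/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/tunl0/arp_announce
echo 0 > /proc/sys/net/ipv4/conf/tunl0/rp_filter       # relax reverse-path filtering for tunneled packets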

2. Introduction to LVS DR mode

DR mode implements the virtual server with direct routing. Its connection scheduling and management is the same as in VS/NAT and VS/TUN, but the packet forwarding method differs: VS/DR rewrites the destination MAC address of the request packet and sends it to the real server, and the real server returns the response directly to the client, eliminating the IP tunneling overhead of VS/TUN. This gives the best performance of the three load scheduling mechanisms, but it requires that the director server and the real servers each have a NIC on the same physical network segment, i.e. they must be connected through an uninterrupted LAN. Each real server binds the VIP on a non-ARP network device (such as lo or tunl0). The director's VIP address is visible externally, while the VIP on the real servers is not. The real servers' own addresses can be either internal addresses or real (public) addresses.

The specific implementation process is as follows:
In DR mode the packet is routed directly to the target real server. Based on the load of each real server, the dispatcher dynamically chooses one; it does not modify the destination IP address or destination port, and it does not encapsulate the IP packet. Instead, it rewrites the destination MAC address of the request's data frame to the MAC address of the chosen real server and sends the modified frame onto the LAN of the server group. Because the frame now carries the real server's MAC address and both machines are on the same LAN, the real server is certain to receive the packet. When the real server receives the request packet, it unpacks the IP header and sees that the destination IP is the VIP. (A host only accepts packets whose destination IP is configured locally, so we need to configure the VIP on the local loopback interface.) Also, because network interfaces respond to ARP broadcasts and every other machine in the cluster has the VIP on its lo interface, the ARP replies would conflict; therefore we need to disable the ARP response on the lo interface of the real servers. The real server then handles the request and sends the response packet back to the client according to its own routing information, with the VIP as the source IP address.
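One simple way to observe this behavior on the wire is to capture VIP traffic with link-level headers: only the destination MAC address should change as the packet goes from the director to the real server, while the IP header still carries client IP -> VIP. This is just a suggested check, not part of the original walkthrough; the interface name is an assumption.

# On the real server, capture VIP traffic with Ethernet headers (interface name assumed)
tcpdump -e -nn -i ens33 host 192.168.226.100 and tcp port 80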

3. Building LVS DR mode

The structure diagram is as follows


A request packet arrives from a client. We know that in DR mode both the LB and the RS are bound to the VIP, so how does the client's request reach the LB instead of going directly to an RS? IP packets are actually delivered at the data link layer, so it is enough to make the destination MAC address of the frame the LB's MAC address. This is done with ARP, the protocol that resolves an IP address to a MAC address: an ARP request for the VIP is broadcast, and the machine that owns that IP replies with its MAC address. But here both the LB and the RS have the VIP, so the ARP response for the VIP must be suppressed on the RS. This way, the VIP always resolves to the LB's unique MAC address.
Once the load balancer has received the IP packet, it can use its scheduling policy to choose one server from RS1, RS2 and RS3, for example RS1 (192.168.226.130). It leaves the IP datagram unchanged, encapsulates it in a new data link layer frame whose destination is RS1's MAC address, and forwards it directly, as shown in the figure below.


RS1 (192.168.226.130) receives the packet and inspects it: the destination IP is 192.168.226.100, which is configured locally, so it can process the request.
After processing, RS1 can respond directly to the client without going back through the load balancer at all, because the address 192.168.226.100 is also configured on RS1.
To the client, the peer is still just the VIP 192.168.226.100; it has no idea what is going on in the background.
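From another machine on the same LAN you can confirm that ARP for the VIP is answered only by the director. This is a hedged check, not part of the original walkthrough; the interface name is an assumption.

# Ask who owns the VIP; only the director should answer (interface name assumed)
arping -I ens33 -c 3 192.168.226.100
# Or inspect the neighbor cache after contacting the VIP
ip neigh show 192.168.226.100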
Because the load balancer does not modify the IP datagram at all, the TCP port number is naturally not modified either, which means the service port on RS1, RS2 and RS3 must be the same as the port on the load balancer.
Data flow: client -> Load Balancer -> RS -> client
The following concrete example walks through the whole process.
Prepare three machines:
DIR: ens33 192.168.226.129, plus a second NIC ens37
RS1: ens33 192.168.226.130
RS2: ens33 192.168.226.131
The VIP is 192.168.226.100 and is bound to ens37 on the DIR; alternatively you can bind it directly to a virtual sub-interface such as ens33:2 (see the sketch after this list).
Each of the machines above requires a public network IP.
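If you prefer the virtual sub-interface alternative mentioned above, a minimal sketch looks like this; the name ens33:2 follows the article's example and should be adjusted to your NIC.

# Bind the VIP to a virtual sub-interface of ens33 instead of ens37 (sketch)
ifconfig ens33:2 192.168.226.100 broadcast 192.168.226.100 netmask 255.255.255.255 up
route add -host 192.168.226.100 dev ens33:2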
On the DIR, create the script: vim /usr/local/sbin/lvs_dr.sh with the following content:

#! /bin/bash
echo 1 > /proc/sys/net/ipv4/ip_forward
ipv=/usr/sbin/ipvsadm
vip=192.168.226.100
rs1=192.168.226.130
rs2=192.168.226.131
# Note the NIC name here
ifconfig ens37 $vip broadcast $vip netmask 255.255.255.255 up
route add -host $vip dev ens37
$ipv -C
$ipv -A -t $vip:80 -s wrr
$ipv -a -t $vip:80 -r $rs1:80 -g -w 1
$ipv -a -t $vip:80 -r $rs2:80 -g -w 1
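The article does not show how the script is invoked; a typical way to run it and confirm the result (a hedged sketch) would be:

chmod +x /usr/local/sbin/lvs_dr.sh
sh /usr/local/sbin/lvs_dr.sh
ipvsadm -ln          # the virtual service 192.168.226.100:80 should list both real servers
ip addr show ens37   # the VIP should appear on ens37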

On both RS machines, create the script: vim /usr/local/sbin/lvs_rs.sh with the following content:

#!/bin/bash
vip=192.168.226.100
# Bind the VIP on lo so that the RS can return results directly to the client
ifconfig lo:0 $vip broadcast $vip netmask 255.255.255.255 up
route add -host $vip lo:0
# The following changes ARP kernel parameters so that the RS does not announce or
# answer ARP for the VIP and replies reach the client correctly
# Reference: www.cnblogs.com/lgfeng/archive/2012/10/16/2726308.html
echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
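After running it on each RS, a quick sanity check (a suggested sketch, not from the original article) is to confirm the VIP is on lo:0 and that the ARP parameters took effect:

ip addr show lo                        # lo:0 should carry 192.168.226.100/32
sysctl net.ipv4.conf.lo.arp_ignore
sysctl net.ipv4.conf.all.arp_announce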

Run the respective script on each machine, then test against the VIP in a browser.
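Instead of a browser you can also test from another machine on the LAN with curl; with the wrr scheduler, equal weights and no persistence, repeated requests should alternate between the two real servers. This is a hedged check that assumes each RS serves a page identifying itself.

for i in 1 2 3 4; do curl -s http://192.168.226.100/; done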

After refreshing the page, the response comes from the other real server.

4. Keepalived + LVS

Earlier we tried load balancing with the two LVS modes, NAT and DR. Keepalived not only provides high availability based on the VRRP (Virtual Router Redundancy Protocol) protocol, it also has the LVS functionality built in, so it can provide the load balancing as well. Below we use four machines for the experiment; the basic topology is as follows:

First prepare four servers: LB1, LB2, RS1, RS2.
LB1: two NICs, ens33 192.168.226.129 and ens37 192.168.199.200
LB2: two NICs, ens33 192.168.226.132 and ens37 192.168.199.201
RS1: one NIC, ens33 192.168.226.130
RS2: one NIC, ens33 192.168.226.131
Preparation:
Turn off SELinux and clear the firewall rules on all four machines.
Install keepalived on the two schedulers and install the nginx web service on the two RS machines (an install sketch follows this list).
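The article does not show the install commands; on a CentOS 7 style system (an assumption) they would typically be:

# On LB1 and LB2
yum install -y keepalived ipvsadm
# On RS1 and RS2 (nginx comes from the EPEL repository)
yum install -y epel-release
yum install -y nginx
systemctl start nginx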
1. Use LB1 as the main scheduler.
Edit the keepalived configuration file:
vim /etc/keepalived/keepalived.conf

vrrp_instance vi_1 {
    # On the standby server the state is BACKUP
    state MASTER
    # The NIC the VIP is bound to is ens37
    interface ens37
    virtual_router_id 51
    # On the standby server this is 90
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass lvlinux
    }
    virtual_ipaddress {
        192.168.226.100
    }
}
virtual_server 192.168.226.100 80 {
    # (query realserver status every 10 seconds)
    delay_loop 10
    # (LVS algorithm)
    lb_algo wrr
    # (DR mode)
    lb_kind DR
    # (connections from the same IP go to the same realserver within 60 seconds)
    persistence_timeout 60
    # (check realserver status with the TCP protocol)
    protocol TCP
    real_server 192.168.226.130 80 {
        # (weight)
        weight 100
        TCP_CHECK {
            # (time out after 10 seconds with no response)
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.226.131 80 {
        weight 100
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}

Note: above we define the VIP as 192.168.226.100 and bind it to the NIC ens37.

On LB2:
vim /etc/keepalived/keepalived.conf

vrrp_instance vi_1 {
    # This is the standby server, so the state is BACKUP
    state BACKUP
    # The NIC the VIP is bound to is ens37
    interface ens37
    virtual_router_id 51
    # Lower priority than the master
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass lvlinux
    }
    virtual_ipaddress {
        192.168.226.100
    }
}
virtual_server 192.168.226.100 80 {
    # (query realserver status every 10 seconds)
    delay_loop 10
    # (LVS algorithm)
    lb_algo wrr
    # (DR mode)
    lb_kind DR
    # (connections from the same IP go to the same realserver within 60 seconds)
    persistence_timeout 60
    # (check realserver status with the TCP protocol)
    protocol TCP
    real_server 192.168.226.130 80 {
        # (weight)
        weight 100
        TCP_CHECK {
            # (time out after 10 seconds with no response)
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.226.131 80 {
        weight 100
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}

In other words, compared with LB1, change state MASTER to BACKUP and priority 100 to 90.
Edit the script on RS1 and RS2:
vim /usr/local/sbin/lv_dr_rs.sh

#!/bin/bash
vip=192.168.226.100
# Bind the VIP on lo so that the RS can return results directly to the client
ifconfig lo:0 $vip broadcast $vip netmask 255.255.255.255 up
route add -host $vip lo:0
# The following changes ARP kernel parameters so that the RS does not announce or
# answer ARP for the VIP and replies reach the client correctly
# Reference: www.cnblogs.com/lgfeng/archive/2012/10/16/2726308.html
echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce

Run the script on both RS machines, then start keepalived on LB1 and on LB2.
Use ipvsadm -ln to view the RS connection status.
Use ip addr to check where the VIP is bound.
Stop keepalived on LB1 and you will find that LB2 binds the VIP and takes over the service; the VRRP master/backup transition can be seen in /var/log/messages.
On LB2, run ipvsadm -ln to view the RS connection status.
Stop nginx on one RS to test the health check and high availability; a command sketch follows below.
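The article does not list the exact commands for these checks; a hedged sketch of the verification steps might look like this:

# On LB1 and LB2: start keepalived
systemctl start keepalived

# On the current master: check the LVS table and the VIP
ipvsadm -ln
ip addr show ens37

# Fail over: stop keepalived on LB1, then watch LB2 take over the VIP
systemctl stop keepalived          # run on LB1
ip addr show ens37                 # run on LB2; the VIP 192.168.226.100 should appear
tail -f /var/log/messages          # shows the VRRP state transition

# Health check: stop nginx on one RS; the active LB should drop it from the table
systemctl stop nginx               # run on RS1
ipvsadm -ln                        # run on the active LB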
