Build an LVS load balancing test environment
There are many ways to implement server load balancing. If money is no object, an F5 hardware appliance gives the best performance, at the highest price.
On a budget you can use Apache or Nginx instead. These work at the application layer; their raw performance is only moderate, but they are flexible: for example, they can map port 80 on the front end to port 8080 on a real server.
Another option is LVS, which has good performance and high stability. However, in DR mode it cannot remap ports, because DR forwarding rewrites only the link-layer (MAC) header and never touches the port information in the packet.
The following experiment sets up an LVS load balancing test environment, using the DR method.
Client Access to LVS Front-End Server
The request arrives as follows:
Source MAC: client MAC | Destination MAC: DR MAC | Source IP: client IP | Destination IP: VIP (held by the DR)
The LVS director rewrites the packet and forwards it to the real server.
The rewritten headers are:
Source MAC: DR MAC | Destination MAC: real server MAC | Source IP: client IP | Destination IP: VIP
Because the real server has the VIP bound to its loopback interface, it accepts the request and generates a response.
The response is sent straight back to the client, with headers:
Source MAC: real server MAC | Destination MAC: client MAC | Source IP: VIP | Destination IP: client IP
Therefore, the essence of LVS-DR is MAC-level spoofing: the director answers ARP for the VIP and rewrites the destination MAC, while each real server silently holds the same VIP on its loopback interface and replies in the director's place.
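The three header stages above can be traced with a small shell sketch. This is purely illustrative: the MAC and IP values are made-up placeholders, not addresses from the experiment.

```shell
#!/bin/sh
# Illustrative only: print the L2/L3 headers at each hop of an LVS-DR
# request/response. All addresses below are hypothetical placeholders.
CLIENT_MAC="aa:aa:aa:aa:aa:aa"; CLIENT_IP="192.168.16.100"
DR_MAC="bb:bb:bb:bb:bb:bb";     VIP="192.168.16.199"
RS_MAC="cc:cc:cc:cc:cc:cc"

hdr() {  # hdr <label> <src_mac> <dst_mac> <src_ip> <dst_ip>
    printf '%s: srcMAC=%s dstMAC=%s srcIP=%s dstIP=%s\n' "$1" "$2" "$3" "$4" "$5"
}

hdr "client->DR" "$CLIENT_MAC" "$DR_MAC" "$CLIENT_IP" "$VIP"
# The director rewrites ONLY the MAC addresses; the IP header is untouched.
hdr "DR->realserver" "$DR_MAC" "$RS_MAC" "$CLIENT_IP" "$VIP"
# The real server answers from the VIP bound on lo, straight to the client.
hdr "realserver->client" "$RS_MAC" "$CLIENT_MAC" "$VIP" "$CLIENT_IP"
```

Note that the IP pair (client IP, VIP) never changes on the request path; only the frame addressing does, which is why all machines must sit on the same layer-2 segment.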
The experiment uses VirtualBox virtual machines on an internal network, with SELinux and the firewall disabled.
First, install the ipvsadm command on the LVS DR front-end machine:
yum install ipvsadm -y
Then set up the HTTP service on the two real servers:
yum install httpd -y
service httpd start
chkconfig httpd on
and write /var/www/html/index.html as "real server 1" and "real server 2" respectively.
Then create the following script on both real servers and run it:
vim lvs_real.sh
#!/bin/bash
# Description: configure the VIP on lo and suppress ARP for it
SNS_VIP=192.168.16.199
. /etc/rc.d/init.d/functions
case "$1" in
start)
    # Bind the VIP to a loopback alias with a host (/32) mask
    ifconfig lo:0 $SNS_VIP netmask 255.255.255.255 broadcast $SNS_VIP
    /sbin/route add -host $SNS_VIP dev lo:0
    # arp_ignore=1: answer ARP only if the target IP is configured on the
    # interface the request arrived on, so the VIP on lo stays silent
    echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    # arp_announce=2: always use the best local address in ARP announcements
    echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
    sysctl -p >/dev/null 2>&1
    echo "RealServer Start OK"
    ;;
stop)
    ifconfig lo:0 down
    route del $SNS_VIP >/dev/null 2>&1
    echo "0" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "0" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "0" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "0" >/proc/sys/net/ipv4/conf/all/arp_announce
    echo "RealServer Stopped"
    ;;
*)
    echo "Usage: $0 {start|stop}"
    exit 1
esac
exit 0
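After running lvs_real.sh start, it is worth verifying that the ARP settings actually took effect. The helper below is a hedged sketch: its root directory is parameterized so it can be exercised against a fake tree, and on a real server you would call it with no argument to read the default /proc paths.

```shell
#!/bin/sh
# Check the ARP-suppression values that lvs_real.sh is supposed to apply:
# arp_ignore=1 and arp_announce=2 on both "lo" and "all".
check_noarp() {  # check_noarp [root-dir], default /proc/sys/net/ipv4/conf
    root=${1:-/proc/sys/net/ipv4/conf}
    ok=1
    for dev in lo all; do
        [ "$(cat "$root/$dev/arp_ignore" 2>/dev/null)" = 1 ] || ok=0
        [ "$(cat "$root/$dev/arp_announce" 2>/dev/null)" = 2 ] || ok=0
    done
    [ "$ok" = 1 ] && echo "noarp OK" || echo "noarp NOT applied"
}
```

Run check_noarp on each real server; "noarp NOT applied" means the director and the real server would both answer ARP for the VIP, which breaks DR forwarding.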
Finally, create the following script on the DR front-end machine and run it:
vim lvs_dr.sh
#!/bin/bash
VIP1=192.168.16.199
RIP1=192.168.16.3
RIP2=192.168.16.4
case "$1" in
start)
    echo "start LVS of DirectorServer"
    # Bind the VIP to an alias of eth1
    /sbin/ifconfig eth1:0 $VIP1 broadcast $VIP1 netmask 255.255.255.255 up
    /sbin/route add -host $VIP1 dev eth1:0
    echo "1" >/proc/sys/net/ipv4/ip_forward
    # Flush old rules, then define the virtual service with round-robin
    /sbin/ipvsadm -C
    /sbin/ipvsadm -A -t $VIP1:80 -s rr
    # Add both real servers in DR (gatewaying, -g) mode with weight 1
    /sbin/ipvsadm -a -t $VIP1:80 -r $RIP1:80 -g -w 1
    /sbin/ipvsadm -a -t $VIP1:80 -r $RIP2:80 -g -w 1
    /sbin/ipvsadm
    ;;
stop)
    echo "close LVS DirectorServer"
    echo "0" >/proc/sys/net/ipv4/ip_forward
    /sbin/ipvsadm -C
    /sbin/ifconfig eth1:0 down
    ;;
*)
    echo "Usage: $0 {start|stop}"
    exit 1
esac
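The ipvsadm rules in the start branch are a fixed pair of lines; if more real servers are added later, generating them from a list is less error-prone. A minimal sketch (the generated strings mirror the commands above, but run nothing by themselves):

```shell
#!/bin/sh
# Sketch: emit the ipvsadm commands of the start branch for any number of
# real servers, instead of hard-coding one line per backend.
gen_rules() {  # gen_rules <vip> <rip>...
    vip=$1; shift
    echo "/sbin/ipvsadm -C"
    echo "/sbin/ipvsadm -A -t $vip:80 -s rr"
    for rip in "$@"; do
        echo "/sbin/ipvsadm -a -t $vip:80 -r $rip:80 -g -w 1"
    done
}
gen_rules 192.168.16.199 192.168.16.3 192.168.16.4
```

Piping the output through sh (after review) applies the rules, and a third real server becomes a one-argument change.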
Access the LVS front-end server (the VIP) from the client: successive requests return "real server 1" and "real server 2" in turn, showing that load balancing works.
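The alternation you should see can be sketched locally. This is only a simulation of the rr scheduler's behaviour using the two index.html strings from the setup above; on the real machines the equivalent check is running curl http://192.168.16.199/ several times from the client.

```shell
#!/bin/sh
# Simulate round-robin dispatch across the two real servers:
# odd-numbered requests go to server 1, even-numbered to server 2.
rr_pick() {  # rr_pick <request-number>  (1-based)
    if [ $(( $1 % 2 )) -eq 1 ]; then
        echo "real server 1"
    else
        echo "real server 2"
    fi
}
i=1
while [ "$i" -le 4 ]; do
    echo "request $i -> $(rr_pick "$i")"
    i=$((i + 1))
done
```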