Our LVS-DR cluster is built around a single LVS scheduler, so in a production environment the entire cluster is paralyzed if that scheduler fails. Running the LVS schedulers as a two-node hot-standby pair with Keepalived solves this problem well: Keepalived uses VRRP (the Virtual Router Redundancy Protocol) to implement multi-machine hot standby for Linux servers purely in software.
Case topology: this LVS + Keepalived cluster requires two Nginx web servers and two LVS load-balancing schedulers (see the topology diagram).
IP address planning: the two Nginx servers use 172.16.16.177 and 172.16.16.178, the two schedulers use 172.16.16.21 and 172.16.16.22, and the cluster VIP is 172.16.16.172.
Operation steps: 1. Deploy Nginx as the cluster's web servers
1) For installing and deploying Nginx, refer to this earlier post: http://blog.51cto.com/13434336/2102925
2) Configure the VIP on each Nginx server. This address is only used as the source address of outgoing web response packets and does not need to listen for client requests, so it can be configured on lo:0. Also add a host route so that traffic to the VIP stays local to the machine, avoiding routing confusion.
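On each Nginx node the lo:0 interface and the host route might look roughly like this (a sketch following common CentOS conventions; the original values may differ):

    # /etc/sysconfig/network-scripts/ifcfg-lo:0 on each Nginx real server
    DEVICE=lo:0
    IPADDR=172.16.16.172
    NETMASK=255.255.255.255    # 32-bit mask so the VIP is bound to this host only
    ONBOOT=yes

    # activate the interface and keep VIP traffic local
    ifup lo:0
    route add -host 172.16.16.172 dev lo:0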
3) Adjust the /proc kernel parameters: edit /etc/sysctl.conf with vim and add six lines, then apply them with sysctl -p.
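The six lines are normally the ARP suppression parameters used on LVS-DR real servers; a sketch of the additions to /etc/sysctl.conf:

    net.ipv4.conf.all.arp_ignore = 1        # do not answer ARP requests for the VIP
    net.ipv4.conf.all.arp_announce = 2      # do not announce the VIP in ARP
    net.ipv4.conf.default.arp_ignore = 1
    net.ipv4.conf.default.arp_announce = 2
    net.ipv4.conf.lo.arp_ignore = 1
    net.ipv4.conf.lo.arp_announce = 2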
4) To tell the two servers apart, I modified the web page on the second Nginx server.
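For example (the web root path assumes a source-built Nginx under /usr/local/nginx; adjust to your installation):

    echo "Nginx server 2 - 172.16.16.178" > /usr/local/nginx/html/index.html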
(Apart from the web page and the IP address, the configuration of the two Nginx servers is identical.)
2. Scheduler configuration
1) First configure the VIP (cluster address) 172.16.16.172 on the scheduler's eth0 NIC: edit /etc/sysconfig/network-scripts/ifcfg-eth0:0 with vim. This is the address that responds to cluster access, which is why it is configured on eth0:0.
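A sketch of the scheduler's ifcfg-eth0:0 (the netmask follows the /24 address plan above and is an assumption):

    # /etc/sysconfig/network-scripts/ifcfg-eth0:0 on the scheduler
    DEVICE=eth0:0
    IPADDR=172.16.16.172
    NETMASK=255.255.255.0
    ONBOOT=yes

    ifup eth0:0    # activate the cluster (VIP) address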
2) Adjust the /proc kernel parameters. Because the LVS scheduler and the nodes share the VIP address, the Linux kernel's ICMP redirect responses should be turned off: open /etc/sysctl.conf with vi, add three lines, and apply them with sysctl -p.
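The three lines are the ICMP redirect switches; a sketch of the additions to the scheduler's /etc/sysctl.conf:

    net.ipv4.conf.all.send_redirects = 0      # do not send ICMP redirects, since scheduler and nodes share the VIP
    net.ipv4.conf.default.send_redirects = 0
    net.ipv4.conf.eth0.send_redirects = 0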
3) Mount the system installation disk and install the ipvsadm cluster scheduling tool.
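On CentOS this can be done from the installation media, roughly like this (the mount point and package path are assumptions for a CentOS 6 style DVD):

    mkdir -p /media/cdrom
    mount /dev/cdrom /media/cdrom
    rpm -ivh /media/cdrom/Packages/ipvsadm-*.rpm    # install the cluster scheduling tool from the disc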
4) Configure the load-distribution policy.
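A round-robin DR policy matching the address plan above could be created roughly as follows (a sketch; the save step assumes the CentOS ipvsadm init script):

    ipvsadm -A -t 172.16.16.172:80 -s rr                    # virtual service on the VIP, round-robin scheduling
    ipvsadm -a -t 172.16.16.172:80 -r 172.16.16.177:80 -g   # first Nginx real server, -g = direct routing (DR)
    ipvsadm -a -t 172.16.16.172:80 -r 172.16.16.178:80 -g   # second Nginx real server
    service ipvsadm save                                    # persist the rules to /etc/sysconfig/ipvsadm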
3. Configure Keepalived on the schedulers
1) Install the support packages needed to build Keepalived.
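Assuming a yum repository (for example the mounted installation media) is available, the usual build dependencies are (sketch):

    yum -y install gcc kernel-devel openssl-devel popt-devel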
2) Compile and install keepalived
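A typical source build, with the version number as a placeholder and --prefix=/ so that the init script and /etc/keepalived land in the standard locations (sketch):

    tar zxf keepalived-1.2.13.tar.gz      # version is a placeholder
    cd keepalived-1.2.13
    ./configure --prefix=/
    make && make install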
3) Use the chkconfig command to set the keepalived service to start at boot.
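For example:

    chkconfig --add keepalived    # register the init script installed above
    chkconfig keepalived on       # start keepalived automatically at boot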
4) Configure Keepalived: edit /etc/keepalived/keepalived.conf with vim.
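A master-side keepalived.conf in the spirit of this setup might look like the following sketch; the VIP, interface and real-server addresses follow the plan above, while virtual_router_id, the authentication values and the health-check details are assumptions:

    # /etc/keepalived/keepalived.conf on the master scheduler (sketch)
    global_defs {
        router_id LVS1                  # this scheduler's name; the backup uses LVS2
    }

    vrrp_instance VI_1 {
        state MASTER                    # the backup scheduler uses BACKUP
        interface eth0
        virtual_router_id 51            # assumed value; must match on both schedulers
        priority 100                    # the backup scheduler uses 99
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 1111              # assumed password; must match on both schedulers
        }
        virtual_ipaddress {
            172.16.16.172               # the cluster VIP
        }
    }

    virtual_server 172.16.16.172 80 {
        delay_loop 6
        lb_algo rr                      # round-robin scheduling
        lb_kind DR                      # direct routing
        protocol TCP

        real_server 172.16.16.177 80 {
            weight 1
            TCP_CHECK {
                connect_timeout 3
            }
        }
        real_server 172.16.16.178 80 {
            weight 1
            TCP_CHECK {
                connect_timeout 3
            }
        }
    }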
After modifying the configuration, restart the keepalived service.
5) Configuration of the backup scheduler. Only three items differ from the master:
router_id LVS2
state BACKUP
priority 99
The remaining configuration items are the same; after modifying the configuration, restart the keepalived service.
4. Verifying the cluster
1) Access 172.16.16.172 from one computer, then access it again from a different computer. The two clients land on different Nginx pages, which verifies load balancing under the round-robin (rr) scheduling algorithm.
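The same thing can be checked from the command line on the active scheduler (sketch):

    ipvsadm -Ln           # list the virtual service and its two real servers
    ipvsadm -Ln --stats   # both 172.16.16.177 and 172.16.16.178 should accumulate connections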
2) Verify the LVS cluster itself: disconnect the first Nginx server's network card and access 172.16.16.172 again. The site is still reachable, but only the second Nginx server's page appears.
3) View Keepalived's floating (drift) address with ip addr show dev eth0 on the primary scheduler (priority 100). Then disconnect that scheduler's NIC and run the command again; also check the other scheduler. The VIP has drifted to the backup scheduler. Access 172.16.16.172 once more and the website is still reachable.
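A quick way to see which scheduler currently holds the VIP (sketch):

    ip addr show dev eth0 | grep 172.16.16.172   # the line appears only on the scheduler that owns the VIP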
Summary: in this case we used LVS-DR to build a load-balanced web cluster. To prevent a failure of the load-balancing scheduler from paralyzing the whole cluster, Keepalived provides a two-node hot standby for the schedulers, giving the web cluster both high availability and the capacity to handle high concurrency (more Nginx servers can be added to the LVS-DR pool as needed).
This document describes a test environment only.
LVS-DR + Keepalived: load balancing and high availability for web clusters