Building High Availability Load Balancing with LVS + Keepalived (Testing)

1. Starting the LVS High Availability Cluster Services

First, start the service on each real server node:
[root@localhost ~]# /etc/init.d/lvsrs start
Start LVS of Realserver
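The /etc/init.d/lvsrs script itself is not shown in this article. For reference, here is a minimal sketch of a typical LVS-DR real server script: the VIP 192.168.12.135 is taken from the logs later in the article, but the script body is an assumption, not the author's actual file:

#!/bin/bash
# /etc/init.d/lvsrs -- sketch of a typical LVS-DR real server script
VIP=192.168.12.135
case "$1" in
start)
    echo "Start LVS of RealServer"
    # Bind the VIP to a loopback alias so this host accepts packets
    # addressed to the VIP without answering ARP requests for it.
    /sbin/ifconfig lo:0 $VIP broadcast $VIP netmask 255.255.255.255 up
    /sbin/route add -host $VIP dev lo:0
    # Suppress ARP replies/announcements for the VIP on all interfaces.
    echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
    ;;
stop)
    echo "Stop LVS of RealServer"
    /sbin/ifconfig lo:0 down
    echo "0" > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "0" > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "0" > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo "0" > /proc/sys/net/ipv4/conf/all/arp_announce
    ;;
*)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac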
Then, start the keepalived service on both the master and the standby Director server:
[root@dr1 ~]# /etc/init.d/keepalived start
[root@dr1 ~]# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  bogon:http rr
  -> real-server1:http            Route   1      1          0
  -> real-server2:http            Route   1      1          0
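The "rr" scheduler and "Route" forwarding method in this output correspond to round-robin scheduling with LVS direct routing. To confirm that the virtual IP is bound on the master Director, you can inspect the interface with the ip tool (a sketch; the VIP 192.168.12.135 appears in the logs below, and note that addresses added by keepalived are visible to ip but do not appear as ifconfig aliases):

[root@dr1 ~]# ip addr show eth0 | grep 192.168.12.135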
At this point, the system log for the keepalived service shows the following:
[root@localhost ~]# tail -f /var/log/messages
Feb 10:01:56 localhost Keepalived: Starting Keepalived v1.1.19 (02/27,2011)
Feb 10:01:56 localhost Keepalived_healthcheckers: Netlink reflector reports IP 192.168.12.25 added
Feb 10:01:56 localhost Keepalived_healthcheckers: Opening file '/etc/keepalived/keepalived.conf'.
Feb 10:01:56 localhost Keepalived_healthcheckers: Configuration is using : 12063 Bytes
Feb 10:01:56 localhost Keepalived: Starting Healthcheck child process, pid=4623
Feb 10:01:56 localhost Keepalived_vrrp: Netlink reflector reports IP 192.168.12.25 added
Feb 10:01:56 localhost Keepalived: Starting VRRP child process, pid=4624
Feb 10:01:56 localhost Keepalived_healthcheckers: Activating healtchecker for service [192.168.12.246:80]
Feb 10:01:56 localhost Keepalived_vrrp: Opening file '/etc/keepalived/keepalived.conf'.
Feb 10:01:56 localhost Keepalived_healthcheckers: Activating healtchecker for service [192.168.12.237:80]
Feb 10:01:57 localhost Keepalived_vrrp: VRRP_Instance(VI_1) Transition to MASTER STATE
Feb 10:01:58 localhost Keepalived_vrrp: VRRP_Instance(VI_1) Entering MASTER STATE
Feb 10:01:58 localhost Keepalived_vrrp: VRRP_Instance(VI_1) setting protocol VIPs.
Feb 10:01:58 localhost Keepalived_healthcheckers: Netlink reflector reports IP 192.168.12.135 added
Feb 10:01:58 localhost avahi-daemon[2778]: Registering new address record for 192.168.12.135 on eth0.
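The behavior recorded in these logs is driven by /etc/keepalived/keepalived.conf. For orientation, here is a minimal sketch consistent with the addresses and the rr/Route settings shown above; the VIP, real server addresses, and SMTP server are taken from the logs and ipvsadm output, while every other value is an assumption, not the author's actual file:

global_defs {
   notification_email {
       admin@example.com          # hypothetical alert recipient
   }
   notification_email_from keepalived@localhost   # hypothetical sender
   smtp_server 192.168.12.1       # matches the SMTP server in the failover logs below
   smtp_connect_timeout 30
   router_id LVS_DR1              # hypothetical
}

vrrp_instance VI_1 {
    state MASTER                  # BACKUP on the standby Director
    interface eth0
    virtual_router_id 51          # hypothetical; must match on both Directors
    priority 100                  # use a lower value (e.g. 80) on the standby
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111            # hypothetical
    }
    virtual_ipaddress {
        192.168.12.135
    }
}

virtual_server 192.168.12.135 80 {
    delay_loop 6
    lb_algo rr                    # round-robin, as in the ipvsadm output
    lb_kind DR                    # direct routing ("Route" in ipvsadm)
    protocol TCP
    real_server 192.168.12.246 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
    real_server 192.168.12.237 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
}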

2. High Availability Functional Testing

In LVS, high availability is provided by the two Director servers. To simulate a failure, we first stop the keepalived service on the master Director server, and then watch the keepalived run log on the standby Director server, which shows the following:
Feb 10:08:52 lvs-backup Keepalived_vrrp: VRRP_Instance(VI_1) Transition to MASTER STATE
Feb 10:08:54 lvs-backup Keepalived_vrrp: VRRP_Instance(VI_1) Entering MASTER STATE
Feb 10:08:54 lvs-backup Keepalived_vrrp: VRRP_Instance(VI_1) setting protocol VIPs.
Feb 10:08:54 lvs-backup Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.12.135
Feb 10:08:54 lvs-backup Keepalived_vrrp: Netlink reflector reports IP 192.168.12.135 added
Feb 10:08:54 lvs-backup Keepalived_healthcheckers: Netlink reflector reports IP 192.168.12.135 added
Feb 10:08:54 lvs-backup avahi-daemon[3349]: Registering new address record for 192.168.12.135 on eth0.
Feb 10:08:59 lvs-backup Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.12.135
As the log shows, as soon as the master failed, the standby detected it immediately, took over the MASTER role along with the master's virtual IP resources, and bound the virtual IP to the eth0 device.
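To double-check the takeover, you can inspect the interface on the standby and watch the gratuitous ARPs on the wire (a sketch; as noted earlier, addresses added by keepalived show up in ip addr, not as ifconfig aliases):

[root@lvs-backup ~]# ip addr show eth0 | grep 192.168.12.135
[root@lvs-backup ~]# tcpdump -i eth0 -nn arp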
Next, restart the keepalived service on the master Director server and continue to observe the log on the standby Director server:
Feb 10:12:11 lvs-backup Keepalived_vrrp: VRRP_Instance(VI_1) Received higher prio advert
Feb 10:12:11 lvs-backup Keepalived_vrrp: VRRP_Instance(VI_1) Entering BACKUP STATE
Feb 10:12:11 lvs-backup Keepalived_vrrp: VRRP_Instance(VI_1) removing protocol VIPs.
Feb 10:12:11 lvs-backup Keepalived_vrrp: Netlink reflector reports IP 192.168.12.135 removed
Feb 10:12:11 lvs-backup Keepalived_healthcheckers: Netlink reflector reports IP 192.168.12.135 removed
Feb 10:12:11 lvs-backup avahi-daemon[3349]: Withdrawing address record for 192.168.12.135 on eth0.
As the log shows, once the standby detects that the master is back online, it returns to the BACKUP role and releases the virtual IP resources.
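This fail-back happens because the recovered master advertises a higher VRRP priority than the standby. If you would rather have the standby keep the VIP until it fails itself, keepalived supports a nopreempt option; a sketch of the relevant fragment (with nopreempt, both Directors must be configured with state BACKUP):

vrrp_instance VI_1 {
    state BACKUP     # required on both nodes when nopreempt is used
    nopreempt        # the higher-priority node will not reclaim the VIP
    ...
}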

3. Load Balancing Testing

Assume that the web document root for the WWW service on both real server nodes is the /webdata/www directory, then do the following:
On real server 1, execute:
echo "This is real server1" > /webdata/www/index.html
On real server 2, execute:
echo "This is real server2" > /webdata/www/index.html
Then open a browser and visit http://192.168.12.135, refreshing the page repeatedly. If you can see both "This is real server1" and "This is real server2", LVS is load balancing across the nodes.
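Instead of refreshing a browser, you can drive the same test from any client shell (a sketch; assumes curl is installed). With the rr scheduler, the two responses should alternate:

for i in $(seq 1 6); do curl -s http://192.168.12.135; done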

4. Failover Testing

The failover test verifies that when a node fails, the keepalived health-check module detects the failure in time, removes the failed node from scheduling, and leaves the service running on the remaining healthy nodes.
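In the test that follows, we stop the WWW service on real server 1; a sketch of the command (assuming the WWW service is Apache httpd; substitute your actual service script):

[root@real-server1 ~]# /etc/init.d/httpd stop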
After stopping the service on the real server 1 node to simulate a node failure, view the log on the master and standby Director servers; the relevant entries are as follows:
Feb 10:14:12 localhost Keepalived_healthcheckers: TCP connection to [192.168.12.246:80] failed !!!
Feb 10:14:12 localhost Keepalived_healthcheckers: Removing service [192.168.12.246:80] from VS [192.168.12.135:80]
Feb 10:14:12 localhost Keepalived_healthcheckers: Remote SMTP server [192.168.12.1:25] connected.
Feb 10:14:12 localhost Keepalived_healthcheckers: SMTP alert successfully sent.
As the log shows, the keepalived health-check module detected the failure of host 192.168.12.246 and removed the node from the cluster.
If you access http://192.168.12.135 at this point, you should only see "This is real server2", because node 1 has failed and the keepalived health-check module has removed it from the cluster.
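You can confirm the removal on the Director with ipvsadm (a sketch; the listing should now show only the surviving real server 192.168.12.237):

[root@dr1 ~]# ipvsadm -L -n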
Next, restart the service on the real server 1 node and look at the keepalived log again:
Feb 10:15:48 localhost Keepalived_healthcheckers: TCP connection to [192.168.12.246:80] success.
Feb 10:15:48 localhost Keepalived_healthcheckers: Adding service [192.168.12.246:80] to VS [192.168.12.135:80]
Feb 10:15:48 localhost Keepalived_healthcheckers: Remote SMTP server [192.168.12.1:25] connected.
Feb 10:15:48 localhost Keepalived_healthcheckers: SMTP alert successfully sent.
As the log shows, the keepalived health-check module detected that 192.168.12.246 was back to normal and added the node back into the cluster.
Access http://192.168.12.135 again and keep refreshing the page; you should once again see both the "This is real server1" and "This is real server2" pages, which shows that after the real server 1 node recovered, the keepalived health-check module rejoined it to the cluster.

This article comes from the "Technology Achievement Dream" blog
