Day 28: High-Availability Load Balancing Cluster Setup


A small note: I have been working hard, pausing occasionally to rest, then lifting my spirits and pressing on, doing my best to realize my goals while not daring to hope for too much.


Earlier we configured a high-availability cluster with heartbeat, and a load-balancing cluster with LVS. Now we can combine the two functions by introducing another open-source high-availability tool, keepalived, working together with the LVS management tool ipvsadm.




I. Introduction

Keepalived is software that works like a Layer 3/4/7 switch, i.e. it operates at what we usually call layers 3, 4 and 7. Its job is to monitor the state of the web servers: if a web server fails, keepalived removes it from the pool, and once the server works properly again keepalived adds it back; all of this is fully automatic.


IPVS (IP Virtual Server) is the kernel technology inside LVS that does the actual load balancing; the userspace tool for managing it is ipvsadm.
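As a quick sanity check (a minimal sketch, assuming a typical CentOS machine where the ip_vs module is available), you can confirm the kernel side is present and that the rule table starts out empty:

# load the IPVS kernel module and confirm it is there
modprobe ip_vs
lsmod | grep ip_vs

# list the current virtual server table (empty on a fresh machine)
ipvsadm -ln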



We have already built a load-balancing cluster, but nothing guarantees that every machine keeps working. If one of the real servers fails, client requests routed to it will keep showing connection errors. What should we do?

And what if the load balancer itself fails and clients cannot reach the service at all?

You might think the heartbeat high-availability setup covered earlier solves this, but heartbeat cannot provide the load-balancing function, and we cannot configure heartbeat hot standby between every pair of machines.

This is where keepalived comes in: it provides both hot-standby high availability and load-balancing scheduling, and it lets us add real servers seamlessly; we only need to change the keepalived configuration and the startup script.




II. Preparation

We need at least four machines to build this high-availability + load-balancing service, but since this is an experiment three will do: rs2 doubles as the standby director. In a real deployment the standby should be a separate machine.


Director (master): 192.168.11.190

Real Server 1 (rs1): 192.168.11.20

Real Server 2 (rs2, doubling as the standby director): 192.168.11.30

Virtual IP (VIP): 192.168.11.100 (DR mode is used, so the VIP must be configured: keepalived handles it on the director, and the real servers bind it on lo:0 via the script below)


Change the hostnames to dir, rs1 and rs2 respectively: hostname dir, then bash to reload the shell; do the same on rs1 and rs2.

Clear out any previous rules:

ipvsadm -C

iptables -t nat -F


On the dir host and the rs2 standby machine:

yum install -y keepalived

On all three machines:

yum install -y ipvsadm

yum install -y nginx    # any other service works too, even a source install, as long as its port is open
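Optionally, confirm the web service answers locally before wiring up the balancer (a quick check of my own, assuming the SysV init scripts used later in this post):

/etc/init.d/nginx start                 # start nginx on each real server
curl -s http://localhost/ | head -n 3   # should print the top of the default page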






III. File configuration


On the dir host:

vim /etc/keepalived/keepalived.conf    # add the following:

vrrp_instance VI_1 {
    state MASTER              # BACKUP on the standby server
    interface eth0            # NIC that carries the VIP
    virtual_router_id 51
    priority 100              # 90 on the standby; the larger value wins the election
    advert_int 1
    authentication {          # authentication for cluster communication
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {       # the VIP
        192.168.11.100
    }
}

virtual_server 192.168.11.100 80 {
    delay_loop 6              # check real server status every 6 seconds
    lb_algo wlc               # LVS scheduling algorithm
    lb_kind DR                # direct routing
    persistence_timeout 60    # requests from the same IP go to the same real server for 60 seconds
    protocol TCP              # check real server status over TCP

    real_server 192.168.11.20 80 {    # real server 1
        weight 100
        TCP_CHECK {
            connect_timeout 10        # no response within 10 seconds counts as a failure
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }

    real_server 192.168.11.30 80 {    # real server 2
        weight 100
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}


The master and standby configurations are almost identical; only a couple of options differ. This time, though, the standby is built on a real server, so it also needs everything a real server should have (the lvs_dr.sh script below).


On the standby machine only two settings need to change; both are marked in the file above. Just copy the file over from the master and edit them:

scp <file path> <user>@<standby IP>:<destination path>
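For this setup that would be something like the following (assuming root login on the standby at 192.168.11.30):

# copy the master's config to the standby
scp /etc/keepalived/keepalived.conf root@192.168.11.30:/etc/keepalived/

# then, on the standby, change the two marked lines:
#   state MASTER  ->  state BACKUP
#   priority 100  ->  priority 90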

After configuring keepalived, turn on port forwarding (on both master and standby):

echo 1 > /proc/sys/net/ipv4/ip_forward
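That setting does not survive a reboot; to make it permanent (a standard sysctl step, not in the original post):

# persist ip_forward across reboots
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sysctl -p    # reload and apply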


When we built the plain load-balancing cluster, port forwarding and the other rules were all written into the lvs_dr.sh script on the director; this time keepalived manages those rules, and the only thing left to do there by hand is enable port forwarding.

In effect, this high-availability load-balancing cluster simply has keepalived configure, on behalf of LVS, the rules we would otherwise configure with ipvsadm.
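To make the equivalence concrete, here is a sketch of the manual ipvsadm rules that correspond to the virtual_server block above; keepalived creates these for you, so they are shown for comparison only:

# virtual service on the VIP, wlc scheduling, 60-second persistence
ipvsadm -A -t 192.168.11.100:80 -s wlc -p 60

# the two real servers, DR mode (-g), weight 100
ipvsadm -a -t 192.168.11.100:80 -r 192.168.11.20:80 -g -w 100
ipvsadm -a -t 192.168.11.100:80 -r 192.168.11.30:80 -g -w 100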



rs1 and rs2 both act as real servers; their configuration is identical.


vim /usr/local/sbin/lvs_dr.sh

#!/bin/bash
vip=192.168.11.100
# bind the VIP to a loopback alias so this real server accepts DR traffic
ifconfig lo:0 $vip broadcast $vip netmask 255.255.255.255 up
# add a host route for the VIP (unchanged from the earlier LVS post)
route add -host $vip dev lo:0
# suppress ARP for the VIP so that only the director answers ARP requests
echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce


Run it: bash /usr/local/sbin/lvs_dr.sh
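A quick way to confirm the script took effect on each real server (my own check, not from the original post):

# the VIP should now be bound to the loopback alias
ifconfig lo:0

# and the host route for the VIP should exist
route -n | grep 192.168.11.100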



Start keepalived on both machines (master first, then the standby):

/etc/init.d/keepalived start

Also note that starting the keepalived service on the director automatically creates the VIP and the ipvsadm rules; there is no need to run the /usr/local/sbin/lvs_dr.sh script mentioned above on it.
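If you want everything to come back after a reboot (an extra step not in the original post, assuming the SysV tooling already used here):

# on dir and rs2: start keepalived at boot
chkconfig keepalived on

# on rs1 and rs2: re-run the DR script at boot
echo "bash /usr/local/sbin/lvs_dr.sh" >> /etc/rc.local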



Ways to check: you can inspect the state at any time.

Check the IP: ip addr (with ifconfig the virtual IP cannot be found, because keepalived configures it as a secondary address on eth0 rather than as an alias interface).

Check the cluster rules: ipvsadm -ln
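On the master, the output should look roughly like this (illustrative only, using the addresses and options from the configuration above):

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.11.100:80 wlc persistent 60
  -> 192.168.11.20:80             Route   100    0          0
  -> 192.168.11.30:80             Route   100    0          0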






IV. Testing

To make it easy to observe, change the default nginx page, using the method covered twice before:

rs1: echo "rs1rs1" > /usr/share/nginx/html/index.html

rs2: echo "rs2rs2" > /usr/share/nginx/html/index.html


You can view the result in a browser, but that always causes problems (caching), so it is not recommended; go to the command line on Windows, or find another server, and send requests to 192.168.11.100.
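For example, from another Linux machine (a sketch; remember that persistence_timeout 60 pins one client to the same real server for 60 seconds, so rs1rs1 and rs2rs2 only alternate across different clients or after the timeout):

# send a few requests to the VIP and see which real server answers
for i in 1 2 3 4 5; do
    curl -s http://192.168.11.100/
done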


Take rs1 down: ifdown eth0, or /etc/init.d/nginx stop

Check on dir: ipvsadm -ln (or ip addr); you will find 192.168.11.20 has been removed.

Requests now always land on rs2; rs1 no longer answers.

When rs1 recovers, everything returns to normal.

Take dir down: ifdown eth0, or /etc/init.d/keepalived stop

Check rs2: the dir host now shows nothing, but on rs2 (the standby) you can find what was originally on dir, namely the VIP and the ipvsadm rules.

At this point, requests from another machine are still answered normally.





V. Extensions: haproxy + keepalived for high-availability load balancing

http://www.cnblogs.com/dkblog/archive/2011/07/06/2098949.html

A relatively clear writeup: http://s2t148.blog.51cto.com/3858027/888263






