Building a Load Balancer Cluster with LVS on Linux


Common open-source load balancing software: Nginx, LVS, Keepalived

Commercial hardware load balancers: F5, NetScaler

1. Introduction to LB and LVS

An LB cluster is short for load balance cluster, that is, a load balancing cluster;

LVS is an open-source software project that implements load balancing clusters;

Logically, the LVS architecture can be divided into a scheduling layer (Director), a server cluster layer (real servers), and a shared storage layer.


LVS has three modes of operation:

NAT (the scheduler rewrites the destination IP of the request from the VIP to a real server's IP; return packets also pass through the scheduler, which rewrites the source address back to the VIP)

TUN (the scheduler encapsulates the request packet and sends it over an IP tunnel to a back-end real server, and the real server returns the data directly to the client, not through the scheduler)

DR (the scheduler rewrites the destination MAC address of the request packet to a real server's MAC address, and the reply is returned to the client without going through the scheduler)
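For reference, the forwarding mode is chosen per real server when the LVS rules are added with ipvsadm; a minimal sketch (the addresses below are placeholders, not the ones used later in this article):

ipvsadm -A -t 10.0.0.1:80 -s rr                  # define a virtual service on the VIP
ipvsadm -a -t 10.0.0.1:80 -r 10.0.0.11:80 -m     # -m = NAT (masquerading)
ipvsadm -a -t 10.0.0.1:80 -r 10.0.0.12:80 -g     # -g = DR (gatewaying, the default)
ipvsadm -a -t 10.0.0.1:80 -r 10.0.0.13:80 -i     # -i = TUN (IP-IP tunneling)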


LVS scheduling algorithms: Round Robin (rr), Weighted Round Robin (wrr), Least Connection (lc), Weighted Least Connections (wlc), and others;
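The algorithm is selected with ipvsadm's -s option when the virtual service is defined; a minimal sketch (again with a placeholder VIP):

ipvsadm -A -t 10.0.0.1:80 -s wlc     # any of rr, wrr, lc, wlc, etc. may be given here
ipvsadm -ln                          # list the current services and their schedulers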


2. LVS/NAT Configuration

Preparatory work:

Prepare three machines with a clean CentOS 6.6 system; the Director machine needs two network cards.

Of the three servers, one acts as the Director and two as real servers.

The Director has an external IP (192.168.22.11) and an intranet IP (192.168.11.11).

The two real servers only have intranet IPs, 192.168.11.100 and 192.168.11.101, and their gateway must be set to the Director's intranet IP, e.g. in /etc/sysconfig/network-scripts/ifcfg-eth1:

DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.11.100
GATEWAY=192.168.11.11

After changing the gateway, the network interface must be restarted. Bring it down and back up in a single command; if you run only ifdown over SSH, the session will be cut off:

# ifdown eth1 && ifup eth1

Install ipvsadm on the Director: # yum install -y ipvsadm
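Optionally, confirm that the ip_vs kernel module is available before going further (it is normally loaded automatically the first time ipvsadm adds a rule):

modprobe ip_vs          # load the IPVS module if it is not already loaded
lsmod | grep ip_vs      # verify that it is present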


Install Nginx on the two real servers; the EPEL repository needs to be installed first.

yum install -y epel-release

yum install -y nginx

When the installation is complete, start Nginx: /etc/init.d/nginx start
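As a quick sanity check on each RS (assuming Nginx listens on its default port 80):

curl -I 127.0.0.1        # should return an HTTP 200 response from Nginx
chkconfig nginx on       # optionally, have Nginx start on boot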

Change the hostnames of the three servers to dr, rs1, and rs2.


On the Director, vi /usr/local/sbin/lvs_nat.sh and add the following:

#! /bin/bash
# enable packet forwarding and disable ICMP redirects
echo 1 > /proc/sys/net/ipv4/ip_forward
echo 0 > /proc/sys/net/ipv4/conf/all/send_redirects
echo 0 > /proc/sys/net/ipv4/conf/default/send_redirects
echo 0 > /proc/sys/net/ipv4/conf/eth0/send_redirects
echo 0 > /proc/sys/net/ipv4/conf/eth1/send_redirects
# masquerade traffic from the intranet subnet going out through the Director
iptables -t nat -F
iptables -t nat -X
iptables -t nat -A POSTROUTING -s 192.168.11.0/24 -j MASQUERADE
# LVS/NAT rules: VIP 192.168.22.11:80, wlc scheduling, -m = NAT forwarding
IPVSADM='/sbin/ipvsadm'
$IPVSADM -C
$IPVSADM -A -t 192.168.22.11:80 -s wlc
$IPVSADM -a -t 192.168.22.11:80 -r 192.168.11.100:80 -m -w 2
$IPVSADM -a -t 192.168.22.11:80 -r 192.168.11.101:80 -m -w 1

Run this script directly to complete the LVS/NAT configuration:

/bin/bash /usr/local/sbin/lvs_nat.sh


On dr, check the NAT rules in iptables:

[root@dr ~]# iptables -t nat -nvL
Chain POSTROUTING (policy ACCEPT 1 packets, 124 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 MASQUERADE all  --  *      *       192.168.11.0/24      0.0.0.0/0


ipvsadm -ln shows the ipvsadm rules.

Opening 192.168.11.100 and 192.168.11.101 in a browser displays the Nginx welcome page.


Modify the HTML files on rs1 and rs2 so the two machines can be told apart;

[root@rs1 ~]# cat /usr/share/nginx/html/index.html
rs1rs1rs1
[root@rs2 ~]# cat /usr/share/nginx/html/index.html
rs2rs2rs2

Test the content of the two machines through a browser.

Opening 192.168.22.11 in a browser displays the HTML content of rs1 or rs2;


Change the scheduling algorithm to wlc, with rs1's weight set to 2, and test.
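After editing the algorithm and weights in /usr/local/sbin/lvs_nat.sh, re-run it so the new rules take effect (the script clears the old rules with $IPVSADM -C before adding the new ones); a quick way to re-apply and confirm:

/bin/bash /usr/local/sbin/lvs_nat.sh
ipvsadm -ln        # the output should now show wlc and the updated weights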


Test with curl from another Linux machine; rs1 appears twice for every rs2, alternating back and forth, which shows the weighting is working;

# curl 192.168.22.11
rs1rs1rs1
# curl 192.168.22.11
rs1rs1rs1
# curl 192.168.22.11
rs2rs2rs2

On the dr machine, ipvsadm -ln shows that the connection counts roughly follow the weight ratio;

  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.22.11:80 wlc
  -> 192.168.11.100:80            Masq    2      0
  -> 192.168.11.101:80            Masq    1      0          13


3. LVS/DR Configuration

In DR mode, the Director is only responsible for distributing requests and only sees incoming traffic, so throughput can be very high; the real servers send data directly to the users, which reduces security somewhat.

In DR mode the machines need to be configured with public-facing IPs, and the virtual IP must be configured on every machine; users send requests to the virtual IP, and responses are provided by whichever RS the rotation selects.

Three machines are used, and each needs only one IP configured; the VIP appears once the scripts below are executed and is not set manually.

Director (eth1: 192.168.11.11, VIP on eth1:0: 192.168.11.110)

Real Server 1 (eth1: 192.168.11.100, VIP on lo:0: 192.168.11.110)

Real Server 2 (eth1: 192.168.11.101, VIP on lo:0: 192.168.11.110)


On the Director, vim /usr/local/sbin/lvs_dr.sh and add the following content:

#! /bin/bash
echo 1 > /proc/sys/net/ipv4/ip_forward
ipv=/sbin/ipvsadm
vip=192.168.11.110
rs1=192.168.11.100
rs2=192.168.11.101
# bind the VIP to eth1:0 on the Director and route it there
ifconfig eth1:0 $vip broadcast $vip netmask 255.255.255.255 up
route add -host $vip dev eth1:0
# LVS/DR rules: round robin scheduling, -g = direct routing
$ipv -C
$ipv -A -t $vip:80 -s rr
$ipv -a -t $vip:80 -r $rs1:80 -g -w 1
$ipv -a -t $vip:80 -r $rs2:80 -g -w 1


On both RS machines: vim /usr/local/sbin/lvs_dr_rs.sh and add the following (arp_ignore=1 and arp_announce=2 stop the real servers from answering or advertising ARP for the VIP, so ARP requests for the VIP are answered only by the Director):

#! /bin/bash
vip=192.168.11.110
# bind the VIP to the loopback alias lo:0 so the RS accepts traffic addressed to it
ifconfig lo:0 $vip broadcast $vip netmask 255.255.255.255 up
route add -host $vip lo:0
echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce


Then execute on the Director: bash /usr/local/sbin/lvs_dr.sh

Execute on both RS machines: bash /usr/local/sbin/lvs_dr_rs.sh


After execution, ifconfig shows the virtual IP address: the Director shows it on eth1:0, and rs1 and rs2 show it on lo:0;

eth1:0    Link encap:Ethernet  HWaddr 00:0C:29:70:4E:58
          inet addr:192.168.11.110  Bcast:192.168.11.110  Mask:255.255.255.255
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          Interrupt:18 Base address:0x2080

lo:0      Link encap:Local Loopback
          inet addr:192.168.11.110  Mask:255.255.255.255
          UP LOOPBACK RUNNING  MTU:65536  Metric:1

ipvsadm -ln lists the rules:

[root@dr ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.11.110:80 rr
  -> 192.168.11.100:80            Route   1      0          3
  -> 192.168.11.101:80            Route   1      0          3
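Optionally, from another machine on the 192.168.11.0/24 network you can verify that only the Director answers ARP for the VIP (the client-side interface name eth0 below is an assumption; adjust it to your setup):

arping -c 3 -I eth0 192.168.11.110   # every reply should carry the Director's MAC (its eth1 address)
ip neigh show 192.168.11.110         # the cached ARP entry should also show the Director's MAC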

Use a separate Linux machine for testing; testing in a browser is unreliable because of caching.

Testing with curl 192.168.11.110, each RS appears once in turn, which shows the rr scheduling rule is working;

# curl 192.168.11.110
rs1rs1rs1
# curl 192.168.11.110
rs2rs2rs2
# curl 192.168.11.110
rs1rs1rs1
# curl 192.168.11.110
rs2rs2rs2

Change the scheduling algorithm to wrr with rs1's weight set to 2, then run the script again. It reports that the file already exists, because the previous run of the /usr/local/sbin/lvs_dr.sh script already brought up eth1:0 and added its host route; add ifconfig eth1:0 down to the script and the error no longer occurs;

$ipv -A -t $vip:80 -s wrr
$ipv -a -t $vip:80 -r $rs1:80 -g -w 2
$ipv -a -t $vip:80 -r $rs2:80 -g -w 1

[root@dr ~]# bash /usr/local/sbin/lvs_dr.sh
SIOCADDRT: File already exists
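One way to arrange the revised /usr/local/sbin/lvs_dr.sh with the fix above; a sketch of the full script (the added line is commented):

#! /bin/bash
echo 1 > /proc/sys/net/ipv4/ip_forward
ipv=/sbin/ipvsadm
vip=192.168.11.110
rs1=192.168.11.100
rs2=192.168.11.101
ifconfig eth1:0 down      # added: remove the old VIP alias so re-running does not hit "File already exists"
ifconfig eth1:0 $vip broadcast $vip netmask 255.255.255.255 up
route add -host $vip dev eth1:0
$ipv -C
$ipv -A -t $vip:80 -s wrr
$ipv -a -t $vip:80 -r $rs1:80 -g -w 2
$ipv -a -t $vip:80 -r $rs2:80 -g -w 1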

If one of the RS goes down, LVS still sends requests to it in the rotation, so some requests will fail intermittently.

To simulate this, stop Nginx on rs2: /etc/init.d/nginx stop

Testing with curl, requests are still sent to rs2, but they fail with a message that the host cannot be reached;

# curl 192.168.11.110
rs1rs1rs1
# curl 192.168.11.110
rs1rs1rs1
# curl 192.168.11.110
curl: (7) couldn't connect to host

LVS itself does not remove dead real servers from the pool, so it needs to be combined with Keepalived;
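For reference, a minimal sketch of the relevant part of keepalived.conf for this DR setup (the health-check values are illustrative; Keepalived removes a real server from the IPVS table when its check fails and adds it back once it recovers):

virtual_server 192.168.11.110 80 {
    delay_loop 6          # health-check interval in seconds
    lb_algo wrr           # same scheduler as above
    lb_kind DR            # direct routing
    protocol TCP

    real_server 192.168.11.100 80 {
        weight 2
        TCP_CHECK {
            connect_timeout 3
        }
    }
    real_server 192.168.11.101 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
}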



This article is from the "Model Student's Learning Blog"; please be sure to keep this source: http://8802265.blog.51cto.com/8792265/1659585

