Build a Server Load Balancer Cluster with LVS on Linux


Common open-source load balancing software: Nginx, LVS, and Keepalived.
Common commercial hardware load balancers: F5, NetScaler.
1. Introduction to LB and LVS
An LB cluster is short for load balancing cluster: a group of servers that share incoming requests;
LVS (Linux Virtual Server) is an open-source software project that implements load balancing clusters;
The LVS architecture is logically divided into the scheduling layer (Director), the server cluster layer (real servers), and the shared storage layer;


LVS has three working modes:
NAT (the scheduler rewrites the destination IP of the request, the VIP, to the IP of a real server; the reply also passes through the scheduler, which rewrites the source address back to the VIP)
TUN (the scheduler encapsulates the request packet and forwards it to a backend real server through an IP tunnel; the real server replies to the client directly, without passing through the scheduler)
DR (the scheduler rewrites the destination MAC address of the request packet to the MAC address of a real server; the real server replies to the client directly, without passing through the scheduler)

LVS scheduling algorithms include Round Robin (rr), Weighted Round Robin (wrr), Least Connections (lc), Weighted Least Connections (wlc), and others;
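Both choices map directly onto ipvsadm flags: the scheduling algorithm is set per virtual service with -s, and the working mode is set per real server with a forwarding flag. A minimal sketch, using the same addresses as the NAT setup below:

# scheduler per virtual service: -s rr|wrr|lc|wlc
ipvsadm -A -t 192.168.22.11:80 -s wlc
# working mode per real server: -m NAT, -i TUN, -g DR (the default)
ipvsadm -a -t 192.168.22.11:80 -r 192.168.11.100:80 -m -w 2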

2. LVS/NAT configuration
Preparations:
Prepare three machines running CentOS 6.6; the Director machine needs two NICs.
One of the three servers acts as the Director, and the other two serve as real servers.

The Director has a public IP address (192.168.22.11) and an intranet IP address (192.168.11.11).
The two real servers have only intranet IP addresses (192.168.11.100 and 192.168.11.101), and their intranet gateway must be set to the Director's intranet IP: 192.168.11.11.
Configure each real server's intranet NIC (on CentOS this is /etc/sysconfig/network-scripts/ifcfg-eth1); for rs1:

DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.11.100
GATEWAY=192.168.11.11

After changing GATEWAY, restart the NIC. Bring it down and back up in a single command: if you run ifdown by itself over ssh, the session is cut off before you can bring the NIC back up:

# ifdown eth1 && ifup eth1

On the Director, install ipvsadm:
# yum install -y ipvsadm

To install nginx on the two real servers, first install the epel extension source:
# yum install -y epel-release
# yum install -y nginx
After installation, start nginx: /etc/init.d/nginx start
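Before wiring LVS in, it is worth confirming that nginx answers locally on each real server (a quick check, assuming nginx's default listener on port 80):

# on rs1 and rs2: expect HTTP/1.1 200 OK on the first line
curl -sI http://127.0.0.1/ | head -n 1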
Change the hostnames of the three servers to dr, rs1, and rs2 respectively.


On the Director, create /usr/local/sbin/lvs_nat.sh (vi /usr/local/sbin/lvs_nat.sh) with the following content:
#!/bin/bash
# Enable packet forwarding so the Director can route between the two networks
echo 1 > /proc/sys/net/ipv4/ip_forward

# Disable ICMP redirects on both NICs
echo 0 > /proc/sys/net/ipv4/conf/all/send_redirects
echo 0 > /proc/sys/net/ipv4/conf/default/send_redirects
echo 0 > /proc/sys/net/ipv4/conf/eth0/send_redirects
echo 0 > /proc/sys/net/ipv4/conf/eth1/send_redirects

# Flush NAT rules and masquerade outbound traffic from the intranet
iptables -t nat -F
iptables -t nat -X
iptables -t nat -A POSTROUTING -s 192.168.11.0/24 -j MASQUERADE

# Clear old ipvsadm rules, then define the virtual service and real servers (-m = NAT mode)
IPVSADM='/sbin/ipvsadm'
$IPVSADM -C
$IPVSADM -A -t 192.168.22.11:80 -s wlc
$IPVSADM -a -t 192.168.22.11:80 -r 192.168.11.100:80 -m -w 2
$IPVSADM -a -t 192.168.22.11:80 -r 192.168.11.101:80 -m -w 1

Run this script to complete the LVS/NAT configuration:

/bin/bash /usr/local/sbin/lvs_nat.sh
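Note that neither the iptables nor the ipvsadm rules survive a reboot. One way to make them persistent on CentOS 6 (an assumption about your setup, not part of the original steps) is to re-run the script from rc.local:

# reapply the LVS/NAT rules at boot
echo '/bin/bash /usr/local/sbin/lvs_nat.sh' >> /etc/rc.d/rc.local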

On dr, view the NAT iptables rules:
[root@dr ~]# iptables -t nat -nvL
Chain POSTROUTING (policy ACCEPT 1 packets, 124 bytes)
 pkts bytes target     prot opt in     out    source            destination
    0     0 MASQUERADE all  --  *      *      192.168.11.0/24   0.0.0.0/0
View the ipvsadm rules with ipvsadm -ln.
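While testing, it also helps to watch where requests actually land; ipvsadm keeps per-server counters (the one-second interval here is just a suggestion):

# refresh the LVS packet/byte counters every second
watch -n 1 'ipvsadm -ln --stats'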

Open 192.168.11.100 and 192.168.11.101 in a browser; each should display the nginx welcome page.

Modify the html files on rs1 and rs2 to tell them apart:
[root@rs1 ~]# cat /usr/share/nginx/html/index.html
rs1rs1rs1
[root@rs2 ~]# cat /usr/share/nginx/html/index.html
rs2rs2rs2

Open 192.168.22.11 in a browser: the html content of rs1 or rs2 is displayed, and it alternates between the two, which shows the setup works.

The script above set the scheduler to wlc with rs1 weighted 2 and rs2 weighted 1. Test with curl from another Linux machine: two responses from rs1 for every one from rs2 means the switching is OK;
[root@localhost ~]# curl 192.168.22.11
rs1rs1rs1
[root@localhost ~]# curl 192.168.22.11
rs1rs1rs1
[root@localhost ~]# curl 192.168.22.11
rs2rs2rs2

On the dr machine, ipvsadm -ln shows that the connection counts match the weight ratio;
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.22.11:80 wlc
  -> 192.168.11.100:80            Masq    2      0          26
  -> 192.168.11.101:80            Masq    1      0          13
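Rather than typing curl repeatedly, a short loop from the test machine shows the 2:1 weighting at a glance (a sketch; any client that can reach the Director's public IP works):

# six requests: expect rs1 to answer roughly twice as often as rs2
for i in $(seq 1 6); do curl -s http://192.168.22.11/; done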

3. LVS/DR Configuration
In DR mode, the Director handles only the distribution of incoming traffic, so throughput is very high; the real servers return data to clients directly, which also exposes them directly instead of hiding them behind the Director;
In DR mode every machine must have an IP on the same public segment, and every machine is also configured with the virtual IP (VIP); when a user requests the VIP, the reply comes straight from whichever real server the round robin selects;
Each of the three machines needs only one real IP address. The VIP appears after the scripts are executed; do not set it manually;
Director     (eth1: 192.168.11.11,  VIP on eth1:0: 192.168.11.110)
Real server1 (eth1: 192.168.11.100, VIP on lo:0: 192.168.11.110)
Real server2 (eth1: 192.168.11.101, VIP on lo:0: 192.168.11.110)

On the Director, create /usr/local/sbin/lvs_dr.sh (vim /usr/local/sbin/lvs_dr.sh) with the following content:
#!/bin/bash
echo 1 > /proc/sys/net/ipv4/ip_forward
ipv=/sbin/ipvsadm
vip=192.168.11.110
rs1=192.168.11.100
rs2=192.168.11.101
# Bind the VIP to eth1:0 on the Director and route it there
ifconfig eth1:0 $vip broadcast $vip netmask 255.255.255.255 up
route add -host $vip dev eth1:0
# Clear old rules, then define the virtual service and real servers (-g = DR mode)
$ipv -C
$ipv -A -t $vip:80 -s rr
$ipv -a -t $vip:80 -r $rs1:80 -g -w 1
$ipv -a -t $vip:80 -r $rs2:80 -g -w 1

On both real servers, create /usr/local/sbin/lvs_dr_rs.sh (vim /usr/local/sbin/lvs_dr_rs.sh):
#!/bin/bash
vip=192.168.11.110
# Bind the VIP to a loopback alias so the real server accepts packets addressed to it
ifconfig lo:0 $vip broadcast $vip netmask 255.255.255.255 up
route add -host $vip lo:0
# Keep the real servers from answering ARP for the VIP; only the Director should
echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce

Run bash /usr/local/sbin/lvs_dr.sh on the Director.
Run bash /usr/local/sbin/lvs_dr_rs.sh on the two real servers.
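To double-check that the ARP suppression is working, you can probe the VIP from another host on the 192.168.11.0/24 segment (an extra verification, not in the original walkthrough; eth0 here stands for that host's interface name):

# every reply should carry the Director's MAC address, never a real server's
arping -I eth0 -c 3 192.168.11.110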

After execution, ifconfig shows the virtual IP: as eth1:0 on dr, and as lo:0 on rs1 and rs2;
eth1:0    Link encap:Ethernet  HWaddr 00:0C:29:70:4E:58
          inet addr:192.168.11.110  Bcast:192.168.11.110  Mask:255.255.255.255
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          Interrupt:18 Base address:0x2080

lo:0      Link encap:Local Loopback
          inet addr:192.168.11.110  Mask:255.255.255.255
          UP LOOPBACK RUNNING  MTU:65536  Metric:1

List the rules with ipvsadm -ln:
[root@dr ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.11.110:80 rr
  -> 192.168.11.100:80            Route   1      0          3
  -> 192.168.11.101:80            Route   1      0          3

Start a separate Linux machine for testing; testing in a browser is inconclusive because of caching.
Test with curl 192.168.11.110 instead; with the rr rule, the two real servers alternate, one request each:
[root@localhost ~]# curl 192.168.11.110
rs1rs1rs1
[root@localhost ~]# curl 192.168.11.110
rs2rs2rs2
[root@localhost ~]# curl 192.168.11.110
rs1rs1rs1
[root@localhost ~]# curl 192.168.11.110
rs2rs2rs2

Change the scheduling algorithm to wrr and rs1's weight to 2, then run the script again:
$ipv -A -t $vip:80 -s wrr
$ipv -a -t $vip:80 -r $rs1:80 -g -w 2
$ipv -a -t $vip:80 -r $rs2:80 -g -w 1

[root@dr ~]# bash /usr/local/sbin/lvs_dr.sh
SIOCADDRT: File exists

The run reports that the file already exists because eth1:0 was already brought up by the previous run of /usr/local/sbin/lvs_dr.sh. Add ifconfig eth1:0 down to the script and the error no longer appears.
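A sketch of the fix in context, showing the relevant lines of the revised lvs_dr.sh (the 2>/dev/null is an addition that silences the very first run, when there is no alias to remove):

# tear down the alias left by a previous run before re-binding the VIP
ifconfig eth1:0 down 2>/dev/null
ifconfig eth1:0 $vip broadcast $vip netmask 255.255.255.255 up
route add -host $vip dev eth1:0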
Even with that fix, LVS keeps polling a real server that has died, so the site alternates between loading and failing. Simulate this by stopping nginx on rs2: /etc/init.d/nginx stop
Testing with curl, requests are still sent to rs2, which now cannot be reached:
[root@localhost ~]# curl 192.168.11.110
rs1rs1rs1
[root@localhost ~]# curl 192.168.11.110
rs1rs1rs1
[root@localhost ~]# curl 192.168.11.110
curl: (7) couldn't connect to host

LVS itself does not remove a dead real server from the pool, so it needs to be combined with Keepalived;
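For reference, a minimal sketch of the virtual_server block Keepalived would use to health-check these two real servers in DR mode (illustrative only, reusing the addresses above; a complete /etc/keepalived/keepalived.conf also needs a vrrp_instance block to manage the VIP):

virtual_server 192.168.11.110 80 {
    delay_loop 6                  # seconds between health checks
    lb_algo wrr                   # same scheduler as above
    lb_kind DR                    # direct routing mode
    protocol TCP
    real_server 192.168.11.100 80 {
        weight 2
        TCP_CHECK {
            connect_timeout 3     # evict the server when port 80 stops answering
        }
    }
    real_server 192.168.11.101 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
}

With this in place, Keepalived removes a failed real server from the ipvsadm table automatically and restores it when its health check passes again.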
