How to Build a Web Cluster with the LVS Load Balancer: Installation and Configuration


I. Basic Introduction to LVS Load Balancing

LVS is the abbreviation of Linux Virtual Server. It is an open source project started by Dr. Wensong Zhang, with its official site at http://www.linuxvirtualserver.org, and it is now part of the standard Linux kernel. LVS implements a load-balancing cluster: combined with the Linux operating system, its load-balancing technology builds a high-performance, highly available Linux server cluster with good reliability, scalability and manageability. Logically, an LVS architecture is divided into three layers: the scheduling layer, the server cluster layer and the shared storage layer. In effect, LVS is an IP-address-based virtualized application: many real servers appear to clients as a single virtual server.

II. Composition of LVS

LVS consists of two parts: ipvs and ipvsadm.

    1. ipvs (IP Virtual Server): works in kernel space; it is the code that actually performs the scheduling.
    2. ipvsadm: works in user space; it writes rules for the ipvs kernel framework, defining which addresses form a cluster service and which back-end real servers answer for it.
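
To confirm that both parts are present on a host, the kernel module can be loaded and queried by hand. A quick sanity check (on most distributions ip_vs also loads on demand the first time ipvsadm runs):

modprobe ip_vs           # load the ipvs kernel module
cat /proc/net/ip_vs      # shows the IPVS version and the current virtual server table
ipvsadm --version        # confirms the user-space tool is installed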
III. Related Terms of LVS
    1. DS: Director Server, the front-end load balancer node.
    2. RS: Real Server, a back-end server that does the real work.
    3. VIP: Virtual IP, the externally visible IP address that user requests are sent to.
    4. DIP: Director Server IP, the front-end load balancer's internal address, used mainly to communicate with the internal hosts.
    5. RIP: Real Server IP, the IP address of a back-end server.
    6. CIP: Client IP, the IP address of the accessing client.
IV. Working Modes of LVS

LVS load balancing has three common working modes: network address translation (NAT mode), IP tunneling (TUN mode) and direct routing (DR mode). DR is the mode most commonly used in enterprises, while NAT is comparatively simple and convenient to configure. The principles and characteristics of DR and NAT are summarized below:

1. LVS-NAT mode

(1) LVS-NAT principle

Similar to a firewall in front of a private network, the Director Server acts as the gateway for all server nodes: it is both the entry point for client access and the exit point through which responses return to clients. Its external address serves as the VIP of the whole cluster, while its internal address sits on the same physical network as the back-end real servers. The real servers must use private IP addresses.

Packet Flow Analysis

    • The user sends a request to the Director Server; the request packet (source IP = CIP, destination IP = VIP) reaches kernel space.

    • The kernel determines that the packet's destination IP is local. ipvs then checks whether the requested service is a cluster service; if so, it rewrites the packet's destination IP to a back-end server's IP, re-encapsulates the packet (source IP = CIP, destination IP = RIP) and forwards it to the chosen Real Server.

    • The Real Server finds that the destination IP is its own, processes the request, and re-encapsulates the response (source IP = RIP, destination IP = CIP), sending it back to the Director Server.

    • The Director Server rewrites the packet's source IP to its own VIP address and responds to the client. At this point the response's source IP is the VIP and its destination IP is the CIP.
(2) Characteristics of the LVS-NAT model
    • The RS must use private IP addresses, with their gateway pointing to the DIP.

    • The DIP and RIP must be in the same network segment.

    • The DS acts as the gateway for all server nodes, meaning both request and response traffic pass through the Director Server.

    • Port mapping is supported.

    • Under high load the Director Server is under great pressure and easily becomes the performance bottleneck.
2. LVS-DR mode

(1) LVS-DR principle

The Director Server is the access entry point for the cluster but is not used as a gateway. The real servers in the back-end pool are on the same physical network as the Director Server, and packets sent back to clients do not pass through the Director Server. To answer for the whole cluster, both the DS and the RS must be configured with the VIP address.

Packet Flow Analysis

    • The user sends a request to the Director Server; the request packet (source IP = CIP, destination IP = VIP) reaches kernel space.

    • Because the DS and RS are on the same network, the packet is transmitted at layer 2, the data link layer.
    • The kernel determines that the packet's destination IP is local. ipvs then checks whether the requested service is a cluster service; if so, it re-encapsulates the packet, rewriting the source MAC address to the DIP interface's MAC address and the destination MAC address to the chosen RIP's MAC address. The source and destination IP addresses are unchanged. The packet is then sent to the Real Server.
    • The RS finds that the destination MAC address of the request is its own MAC address, accepts the packet, processes it, and re-encapsulates the response (source IP = VIP, destination IP = CIP), sending it out through the lo interface to the eth0 NIC and onward.
    • The RS delivers the response directly to the client.
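
To observe the MAC rewriting described above, a packet capture that prints link-layer headers can be run on a real server. This is an optional check; the interface name ens33 is taken from the lab environment later in this post:

# -e prints Ethernet headers: incoming requests should carry this RS's MAC as the
# destination MAC while the destination IP is still the VIP
tcpdump -e -n -i ens33 port 80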
(2) Characteristics of the LVS-DR model
    • The RS and DS must be on the same physical network.
    • The RS may use private or public addresses; with public addresses, the RIP can be reached directly over the Internet.
    • All request traffic goes through the Director Server, but response traffic must not pass through it.
    • RS gateways must never point to the DIP (packets must not pass through the Director).
    • The lo interface on each RS is configured with the VIP address.

Note for the LVS-DR mode:
Ensure that the front-end router sends packets destined for the VIP to the Director Server rather than to an RS.

The solution is to adjust kernel parameters (arp_ignore and arp_announce) on the RS: the VIP is configured on an alias of the lo interface, and the RS is restricted from answering ARP resolution requests for the VIP.

    • arp_ignore=1 means the system answers ARP requests only when the target IP is a local address configured on the incoming interface.

    • arp_announce=2 means the system does not use the source address of the IP packet to set the source address of ARP requests, but instead chooses the IP address of the sending interface.
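
As a quick illustration, the same two parameters can be set on each RS with sysctl; a minimal sketch (the rs.sh script later in this post writes the same values through /proc):

# Restrict ARP behaviour so the RS never answers or advertises for the VIP
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
sysctl -w net.ipv4.conf.lo.arp_ignore=1
sysctl -w net.ipv4.conf.lo.arp_announce=2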
V. Load Scheduling Algorithms of LVS

The four most commonly used are round robin (rr), weighted round robin (wrr), least connection (lc) and weighted least connection (wlc).

    • Round robin (rr): distributes incoming requests to the servers in turn, regardless of each back-end real server's actual connection count and system load.
    • Weighted round robin (wrr): assigns each RS a weight in the range 0–100; the higher the weight, the more requests it receives. Weights are set according to each server's capacity: if RS1 has weight 1 and RS2 has weight 2, RS2 is dispatched twice as many requests as RS1. This algorithm is an optimization of and supplement to rr; see the sketch after this list.
    • Least connection (lc): distributes each new request to the RS with the fewest active connections; if RS1 currently has fewer connections than RS2, the request goes to RS1 first.
    • Weighted least connection (wlc): decides where to send each request from both the weight and the connection count of the back-end RS; an RS with a higher weight and fewer connections is served first.
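
For illustration, the algorithm is chosen with the -s option when a virtual service is created (or changed with -E), and weights are set per real server with -w. A sketch reusing the addresses from the NAT experiment below:

ipvsadm -A -t 12.0.0.1:80 -s rr                          # create the service with round robin
ipvsadm -E -t 12.0.0.1:80 -s wrr                         # switch the existing service to weighted round robin
ipvsadm -a -t 12.0.0.1:80 -r 192.168.10.51:80 -m -w 1    # RS1, weight 1
ipvsadm -a -t 12.0.0.1:80 -r 192.168.10.52:80 -m -w 2    # RS2, weight 2: gets twice the requests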
VI. Shared Storage Server for LVS

Shared storage provides stable, consistent file access for the back-end real servers. It can be a NAS device or a dedicated server offering NFS (Network File System) shares, and is typically placed on the private network.
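
A minimal /etc/exports sketch matching the directories used in the experiments below (the export options rw,sync are assumptions; adjust to your environment):

# /etc/exports on the NFS server
/opt/wwwroot1 192.168.10.0/24(rw,sync)
/opt/wwwroot2 192.168.10.0/24(rw,sync)

After editing the file, exportfs -rv re-publishes the shares, as shown in the configuration steps below.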

VII. Implementing LVS in NAT Mode

1. Experimental environment

IP address planning

VIP (the address clients request): 12.0.0.1

Server             IP                     System
Director Server    DIP 192.168.10.1       CentOS 7
NFS Server         192.168.10.50          RedHat 6
Real Server 1      RIP 192.168.10.51      CentOS 7
Real Server 2      RIP 192.168.10.52      CentOS 7

When configuring the Director Server, add two network cards (detailed steps can be found in the earlier blog post on split-horizon DNS resolution). Set the external NIC ens37 to the VIP and the internal NIC ens33 to the DIP, and point the gateway of both real servers at the Director's internal IP, the DIP.

In the virtual machine settings, the DIP network connection is set to host-only mode, and the Real Servers are likewise set to host-only mode.

2. Installation and Configuration

Installing the software with yum

Method one: if the virtual machine network is in host-only mode there is no Internet access, so create a local yum repository and install from it.

Method two: if the virtual machine network is in NAT mode there is Internet access, so install online with yum.

(1) Configure the NFS server

yum install nfs-utils -y    # CentOS/RHEL 7 systems need the NFS utility package installed
service rpcbind start
service nfs restart
# Publish the shares
exportfs -rv
# Stop the firewall
service iptables stop
(2) Configure the two Real Servers

# Install the NFS client
yum install nfs-utils -y
systemctl start rpcbind.service
systemctl start nfs.service
# View the NFS shares
showmount -e 192.168.10.50
# Mount NFS on Real Server 1
# Method one: mount directly
mount.nfs 192.168.10.50:/opt/wwwroot1 /var/www/html
# Method two: mount via the fstab file
vim /etc/fstab
  192.168.10.50:/opt/wwwroot1 /var/www/html nfs defaults,_netdev 0 0
# Real Server 2 mounts NFS the same way, changing the shared directory /opt/wwwroot1 to /opt/wwwroot2; everything else is identical.
# Install httpd
yum install httpd -y
# Create a test page on Real Server 1
echo "Server 192.168.10.51" > /var/www/html/index.html
# Create a test page on Real Server 2
echo "Server 192.168.10.52" > /var/www/html/index.html
# Stop the firewall and relax the security policy
systemctl stop firewalld.service
systemctl disable firewalld.service
setenforce 0
# Test that the page opens normally
firefox http://127.0.0.1/


(3) Configure the Director Server

# Install the ipvsadm management tool
yum install ipvsadm -y
# Load the LVS kernel module
modprobe ip_vs
# View the ip_vs version information
cat /proc/net/ip_vs

# Enable IP forwarding
# Method one: edit sysctl.conf for permanent forwarding
vim /etc/sysctl.conf
  net.ipv4.ip_forward=1
sysctl -p
# Method two: write the value directly for temporary forwarding
echo "1" > /proc/sys/net/ipv4/ip_forward
# Configure the SNAT forwarding rule (the NAT firewall)
iptables -F -t nat      # flush the nat table
iptables -t nat -A POSTROUTING -s 192.168.10.0/24 -o ens37 -j SNAT --to-source 12.0.0.1
# On the Director, write a nat script that sets up the load distribution
vim nat.sh
#!/bin/bash
ipvsadm-save > /etc/sysconfig/ipvsadm    # save the policy
service ipvsadm start
ipvsadm -C                               # clear all records from the kernel virtual server table
ipvsadm -A -t 12.0.0.1:80 -s rr          # create the virtual server
ipvsadm -a -t 12.0.0.1:80 -r 192.168.10.51:80 -m
ipvsadm -a -t 12.0.0.1:80 -r 192.168.10.52:80 -m
ipvsadm

Options used by the ipvsadm management tool:

  • -A: add a virtual server
  • -t: specify the VIP address and TCP port
  • -s: specify the load-balancing scheduling algorithm
  • -a: add a real server
  • -r: specify the RIP address and TCP port
  • -m: use NAT (masquerading) cluster mode
  • -g: use DR cluster mode
  • -i: use TUN cluster mode
  • -w: set the weight
# After saving the nat script, make it executable and run it
chmod +x nat.sh
./nat.sh

# View the rules set by ipvsadm
ipvsadm -ln
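
To also confirm that the SNAT rule added earlier is in place alongside the ipvsadm table, list the POSTROUTING chain of the nat table (a quick check):

iptables -t nat -nL POSTROUTING    # the SNAT rule to 12.0.0.1 should be listed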

3. Test the LVS Cluster

Accessing http://12.0.0.1 directly from a Windows client, you can see the web page content served by the real servers.
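
From a Linux client the round-robin behaviour can also be verified on the command line; a quick sketch, assuming the VIP is reachable from the client:

# Four consecutive requests should alternate between the two test pages
for i in 1 2 3 4; do curl -s http://12.0.0.1/; done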

(Screenshots omitted.) The first visit returns one real server's test page, and ipvsadm -ln on the Director shows the corresponding connection; after a refresh the other real server's page is returned, with a new connection shown, confirming round-robin scheduling.

VIII. Implementing LVS in DR Mode

1. Experimental environment

Four machines:

Director node: ens33 192.168.10.53, VIP on ens33:0 192.168.10.80

Real Server 1: ens33 192.168.10.51, VIP on lo:0 192.168.10.80

Real Server 2: ens33 192.168.10.52, VIP on lo:0 192.168.10.80

NFS Server: 192.168.10.50

2. Installation and configuration

(1) Configure the NFS server

Same steps as above.

(2) Configure the two Real Servers

# Configure the virtual IP address on lo:0
cd /etc/sysconfig/network-scripts/
cp ifcfg-lo ifcfg-lo:0
vim ifcfg-lo:0
  DEVICE=lo:0
  IPADDR=192.168.10.80
  NETMASK=255.255.255.255

# Install the NFS client
yum install nfs-utils -y
service rpcbind start
service nfs restart
# View the NFS shares
showmount -e 192.168.10.50
# Mount NFS on Real Server 1
# Method one: mount directly
mount.nfs 192.168.10.50:/opt/wwwroot1 /var/www/html
# Method two: mount via the fstab file
vim /etc/fstab
  192.168.10.50:/opt/wwwroot1 /var/www/html nfs defaults,_netdev 0 0
# Real Server 2 mounts NFS the same way, changing the shared directory /opt/wwwroot1 to /opt/wwwroot2; everything else is identical.
# Install httpd
yum install httpd -y
# Create a test page on Real Server 1
echo "Server 192.168.10.51" > /var/www/html/index.html
# Create a test page on Real Server 2
echo "Server 192.168.10.52" > /var/www/html/index.html
# Stop the firewall and relax the security policy
systemctl stop firewalld.service
systemctl disable firewalld.service
setenforce 0
# Configure the startup script on both real servers
vim /etc/init.d/rs.sh
#!/bin/bash
VIP=192.168.10.80
case "$1" in
start)
    ifconfig lo:0 $VIP netmask 255.255.255.255 broadcast $VIP
    /sbin/route add -host $VIP dev lo:0              # add a host route for the VIP
    echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
    sysctl -p > /dev/null 2>&1
    echo "RealServer started OK"
    ;;
stop)
    ifconfig lo:0 down
    route del $VIP > /dev/null 2>&1
    echo "0" > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "0" > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "0" > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo "0" > /proc/sys/net/ipv4/conf/all/arp_announce
    echo "RealServer stopped"
    ;;
*)
    echo "Usage: $0 {start|stop}"
    exit 1
esac
exit 0
# Run the script after saving
chmod +x /etc/init.d/rs.sh
service rs.sh start
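
After the script runs, the VIP binding and the ARP settings can be verified on each RS (a quick check; the values should match those set by the script):

ifconfig lo:0                                   # should show 192.168.10.80
cat /proc/sys/net/ipv4/conf/all/arp_ignore      # expect 1
cat /proc/sys/net/ipv4/conf/all/arp_announce    # expect 2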

First test that the web page is reachable locally on each real server:

(3) Configure the Director Server

# Install ipvsadm
yum install ipvsadm -y
vim /etc/sysctl.conf
  # Enable IP forwarding
  net.ipv4.ip_forward=1
  # Adjust /proc response parameters: disable the Linux kernel's redirect responses
  net.ipv4.conf.all.send_redirects = 0
  net.ipv4.conf.default.send_redirects = 0
# Configure the startup script
vim /etc/init.d/dr.sh
#!/bin/bash
GW=192.168.10.1
VIP=192.168.10.80
RIP1=192.168.10.51
RIP2=192.168.10.52
case "$1" in
start)
    /sbin/ipvsadm --save > /etc/sysconfig/ipvsadm
    systemctl start ipvsadm
    /sbin/ifconfig ens33:0 $VIP broadcast $VIP netmask 255.255.255.255 up
    /sbin/route add -host $VIP dev ens33:0
    /sbin/ipvsadm -A -t $VIP:80 -s rr
    /sbin/ipvsadm -a -t $VIP:80 -r $RIP1:80 -g
    /sbin/ipvsadm -a -t $VIP:80 -r $RIP2:80 -g
    echo "ipvsadm starting --------------------[OK]"
    ;;
stop)
    /sbin/ipvsadm -C
    systemctl stop ipvsadm
    ifconfig ens33:0 down
    route del $VIP
    echo "ipvsadm stopped ---------------------[OK]"
    ;;
status)
    if [ ! -e /var/lock/subsys/ipvsadm ]; then
        echo "ipvsadm stopped ---------------"
        exit 1
    else
        echo "ipvsadm running ---------[OK]"
    fi
    ;;
*)
    echo "Usage: $0 {start|stop|status}"
    exit 1
esac
exit 0
# Run the script after saving
chmod +x /etc/init.d/dr.sh
service dr.sh start
# Stop the firewall and relax the security policy
systemctl stop firewalld.service
systemctl disable firewalld.service
setenforce 0
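
Before testing, the VIP binding and the DR forwarding rules on the Director can be verified (a quick check):

ifconfig ens33:0    # should show the VIP 192.168.10.80
ipvsadm -ln         # both real servers should be listed with Route (DR) forwarding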
3. Test the LVS Cluster

Accessing http://192.168.10.80/ directly from a Windows client, you can see the web page content served by the real servers.

(Screenshots omitted.) The first visit returns one real server's test page, and ipvsadm -ln on the Director shows the corresponding connection; after a refresh the other real server's page is returned, with a new connection shown.

IX. LVS Combined with Keepalived

LVS provides load balancing but not failover or health checks: when an RS fails, LVS still forwards requests to the failed server, resulting in failed requests. The keepalived software solves the LVS single point of failure and at the same time gives LVS high availability. The example below uses the LVS-DR mode.

1. Experimental environment

Five machines:

    • keepalived1 + LVS1 (Director1): 192.168.10.53 (MASTER)
    • keepalived2 + LVS2 (Director2): 192.168.10.54 (BACKUP)
    • Real Server 1: 192.168.10.51
    • Real Server 2: 192.168.10.52
    • NFS Server: 192.168.10.55
    • VIP: 192.168.10.80
2. Installation Configuration

The keepalived service is deployed on the two Director Server nodes, building on the LVS-DR setup configured above.

# Install the keepalived software
yum install keepalived -y

Configuration of the primary keepalived node (LVS1); a few values not recoverable from the original are marked as assumed in the comments:

# Master node (MASTER) configuration file
vim /etc/keepalived/keepalived.conf
global_defs {
    ...                              # part omitted
    smtp_server 127.0.0.1            # point to the local host
    router_id LVS_01                 # node name; the backup server uses a different name
    ...                              # part omitted
}
vrrp_instance VI_1 {                 # define a VRRP hot-standby instance
    state MASTER                     # MASTER marks the primary scheduler
    interface ens33                  # physical interface carrying the VIP
    virtual_router_id 51             # virtual router ID (51 is an assumed value); must match across the hot-standby group
    priority 100                     # priority of the primary scheduler
    advert_int 1                     # advertisement interval in seconds
    authentication {                 # authentication information
        auth_type PASS               # authentication type
        auth_pass 1111               # password string
    }
    virtual_ipaddress {              # cluster VIP, i.e. the floating address
        192.168.10.80
    }
}
virtual_server 192.168.10.80 80 {    # virtual server VIP address and port
    delay_loop 6                     # health check interval
    lb_algo rr                       # round-robin scheduling algorithm
    lb_kind DR                       # direct routing working mode
    persistence_timeout 0            # connection persistence time
    protocol TCP                     # the application service uses TCP
    real_server 192.168.10.51 80 {   # address and port of the first web node
        weight 1
        TCP_CHECK {
            connect_timeout 10       # timeout value assumed; the original omits it
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.10.52 80 {   # address and port of the second web node
        weight 1
        TCP_CHECK {
            connect_timeout 10       # timeout value assumed; the original omits it
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}

Configure the backup keepalived node (LVS2)
Copy the master node's keepalived.conf, then modify the following:

router_id LVS_01 -> router_id LVS_02    # name of the backup scheduler
state MASTER -> state BACKUP            # backup role
priority 100 -> priority 90             # lower priority on the backup

Start keepalived

# Start keepalived on the master first, then on the backup
systemctl start keepalived.service
systemctl status keepalived.service
3. Test the HA characteristics of keepalived

(1) Virtual IP address drift

First run the command ip addr on the master (LVS1); the VIP can be seen on the master node.

If systemctl stop keepalived.service is then executed on the master, the VIP is no longer on the master, and running ip addr on the slave node shows that the VIP has correctly floated over to it. Client access to http://192.168.10.80 still works normally at this point.

systemctl stop keepalived.service     # stop the keepalived service on the LVS1 primary scheduler
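
To watch the drift, check for the VIP on each node; the interface name follows the DR setup above:

# On the backup node, the VIP should appear here once the master is stopped
ip addr show ens33 | grep 192.168.10.80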


(2) Connectivity
The client runs "ping 192.168.10.80 -t" and can ping the VIP normally.
Disable the master's (LVS1) ens33 network card; the ping still succeeds.

(3) Web access testing

With the master's (LVS1) ens33 network card disabled, access the web service above again; the page still displays normally.

