Linux LVS (Linux Virtual Server) v1.26 Load Balancer Detailed Configuration Tutorial

Source: Internet
Author: User

2 Linux LVS (Linux Virtual Server) v1.26 Load Balancer

Configuration ideas:

    1. On the LVS server: install ipvsadm, link the kernel source files, and manually bind the VIP
    2. On the Realserver backend: manually execute the VIP binding (and ARP suppression) script

2.1 LVS Concept

LVS is short for Linux Virtual Server, a free software project started by Dr. Wensong Zhang; its official site is www.linuxvirtualserver.org. LVS is now part of the standard Linux kernel. Before Linux 2.4, the kernel had to be recompiled to support the LVS function module; since Linux 2.4 the LVS functions are built in, so they can be used directly without patching the kernel.

LVS is a kernel module that can be used on its own, but it is more convenient to manage together with keepalived.

Version: select the version based on the kernel

Branch               Version  Release Date  Status  License
IPVS for kernel 2.6  1.2.1    24-Dec-2004   Stable  GNU General Public License (GPL)
IPVS for kernel 2.5  1.1.7    5-Jul-2003    Devel   GNU General Public License (GPL)
IPVS for kernel 2.4  1.0.12   17-Nov-2004   Stable  GNU General Public License (GPL)
IPVS for kernel 2.2  1.0.8    14-May-2001   Stable  GNU General Public License (GPL)

LVS Load Balancing Scheme

    1. Monitor LVS with self-written scripts
    2. Heartbeat + LVS + ldirectord: relatively complicated and hard to control
    3. Configure LVS with piranha, the tool provided by Red Hat
    4. Keepalived + LVS (recommended)

2.2 LVS Load Balancing mode

VIP  Virtual IP address
RIP  Real server IP: the real IP address of a backend server
DIP  Director IP: mainly the IP of the NIC that connects the backend servers to the external network
CIP  Client IP address

2.2.1 LVS DR Mode: Direct Routing (common in enterprises)

    1. The user requests the VIP and the packet reaches the LVS server (found via ARP broadcast); the packet's source IP is the CIP and its destination IP is the VIP
    2. After the LVS receives the request packet, it forwards it to one of the RS servers according to the scheduling algorithm and forwarding mode, rewriting the packet's destination MAC address to the MAC address of that RS
    3. When the RS receives the packet, it unpacks it, finds that the destination MAC and destination IP are local, and accepts and processes it (the lo interface on the RS is bound to the VIP, so the RS considers the packet its own)
    4. The RS builds the response with destination IP = CIP and source IP = the local VIP, and returns it directly to the client through its own gateway (resolved via ARP broadcast), bypassing the LVS

Advantages and Disadvantages

    1. Both the LVS and the RS must configure the VIP and must be on the same network segment, because forwarding works by rewriting the MAC address
    2. DR mode only modifies the destination MAC address, so the destination port cannot be changed; the RS and LVS ports must be identical
    3. The RS should preferably have an external-network IP; if responses go out through an intranet gateway server, that gateway becomes a serious bottleneck

2.2.2 LVS NAT mode: Network address translation

NAT (Network Address Translation) is a technique for mapping between external and internal network addresses. In NAT mode, all network datagrams must pass through the LVS, so the LVS must act as the gateway of the RS (real server). When a packet arrives at the LVS, the LVS performs destination address translation (DNAT), changing the destination IP to the IP of an RS. The RS receives the packet as if the client had sent it directly. When the RS finishes processing, its response carries the RS IP as the source and the client IP as the destination. That response packet is relayed through its gateway (the LVS), which performs source address translation (SNAT), rewriting the packet's source address to the VIP.

Note: NAT mode requires routing/IP forwarding to be turned on:

vi /etc/sysctl.conf
net.ipv4.ip_forward = 1

Then run sysctl -p for it to take effect.

Characteristics:

    1. Backend RS nodes do not require an external-network IP or access to the external network
    2. The gateway of each RS node must point to the LVS
    3. The LVS itself is a large bottleneck
    4. NAT mode allows the LVS and RS ports to differ
    5. None of the RS nodes need to configure the VIP
    6. The LVS needs to turn on routing/IP forwarding
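The characteristics above can be sketched as a minimal NAT-mode setup. This is a hedged illustration, not from the original: the VIP 10.0.0.10, the real-server IPs 192.168.1.11/12, and the RS port 8080 are assumed values.

```shell
# Minimal NAT-mode director sketch (all addresses are hypothetical).
# The real servers' default gateway must point at this director.
echo 1 > /proc/sys/net/ipv4/ip_forward        # NAT mode needs IP forwarding on
ipvsadm -C                                    # start from an empty table
ipvsadm -A -t 10.0.0.10:80 -s wlc             # virtual service on the VIP
ipvsadm -a -t 10.0.0.10:80 -r 192.168.1.11:8080 -m -w 1   # -m = NAT; RS port may differ
ipvsadm -a -t 10.0.0.10:80 -r 192.168.1.12:8080 -m -w 1
```

Note how `-m` selects NAT forwarding and the RS port (8080) differs from the VIP port (80), which only NAT mode allows.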
2.2.3 LVS TUN Tunnel

For some applications, TUN mode can achieve high availability across data centers.

2.2.4 LVS Full NAT

Implements multiple LB schedulers processing requests in parallel (OSPF + multiple LVS).

2.3 LVS Scheduling algorithm

Fixed scheduling algorithms: RR, WRR, DH, SH
Dynamic scheduling algorithms: WLC, LC, LBLC, SED, NQ
Common algorithms: RR, WRR, WLC
RR, round-robin: assigns requests to the RS nodes in turn, i.e. spreads requests evenly across them. It is simple, but only suitable when the performance differences among the RS nodes are small.

WRR, weighted round-robin: assigns tasks according to the different weights of the RS nodes. RS nodes with higher weights receive tasks first and are assigned more connections than lower-weight nodes; nodes with the same weight receive the same number of connections.
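The proportionality can be sanity-checked with a quick sketch; the weights 1/2/3 and the batch of 12 requests are illustrative values, not from the original:

```shell
# WRR hands out connections in proportion to weight.
# With weights A=1, B=2, C=3, every 6 requests split 1:2:3.
requests=12
total_weight=$((1 + 2 + 3))
a=$(( requests * 1 / total_weight ))   # share for weight-1 server A
b=$(( requests * 2 / total_weight ))   # share for weight-2 server B
c=$(( requests * 3 / total_weight ))   # share for weight-3 server C
echo "A=$a B=$b C=$c"                  # A=2 B=4 C=6
```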

DH, destination address hashing: looks up a static hash table keyed by the destination address to obtain the required RS.

SH, source address hashing: looks up a static hash table keyed by the source address to obtain the required RS.

WLC, weighted least-connection: suppose each RS has weight Wi (i = 1..n) and current TCP connection count Ti (i = 1..n); the RS with the smallest Ti/Wi is selected as the next RS to receive a request.

LC, least-connection: the IPVS table stores all active connections, and new connection requests are sent to the RS with the smallest current connection count.

Production Environment Selection:

    1. For general network services such as HTTP, MAIL and MySQL, the commonly used LVS scheduling algorithms are:
      A. Basic round-robin scheduling (RR)
      B. Weighted least-connection scheduling (WLC)
      C. Weighted round-robin scheduling (WRR)

Static (fixed) scheduling methods

RR # round-robin
# The scheduler distributes external requests sequentially to the real servers in the cluster via the "round-robin" algorithm, treating every server equally regardless of its actual connection count and system load.

WRR # weighted round-robin
# The scheduler dispatches access requests according to the different processing capacities of the real servers, ensuring that more capable servers handle more traffic. The scheduler can automatically query the load of each real server and adjust its weight dynamically.

DH # destination address hash
# This algorithm also load-balances on the destination IP address, but it is a static mapping algorithm that maps a destination IP address to a server through a hash function.
# Destination address hashing first uses the requested destination IP address as the hash key to look up the corresponding server in a statically allocated hash table; if that server is available and not overloaded, the request is sent to it, otherwise empty is returned.

SH # source address hash
# The exact opposite of destination address hashing: it uses the requested source IP address as the hash key to find the corresponding server in a statically allocated hash table; if that server is available and not overloaded, the request is sent to it, otherwise NULL is returned.
# It uses the same hash function as destination address hashing. Apart from keying on the source IP address instead of the destination IP address, its flow is basically the same. In practice, source address hashing and destination address hashing can be used together in a firewall cluster to guarantee a unique entry point for the whole system.

Dynamic scheduling methods

LC # least connection
# The scheduler dynamically dispatches network requests to the server with the fewest established connections. If the real servers in the cluster have similar performance, the "least connection" algorithm balances load well.

WLC # weighted least connection
# When the real servers in the cluster differ considerably in performance, the scheduler uses the "weighted least connection" algorithm to optimize load balancing; servers with higher weights bear a larger share of the active connections. The scheduler can automatically query the load of each real server and adjust its weight dynamically.

SED # shortest expected delay
# Based on the WLC algorithm. Example: machines A, B, C have weights 1, 2, 3 and current connection counts 1, 2, 3. With WLC a new request could be assigned to any of A, B, C; SED instead computes (connections + 1) / weight:
# A: (1+1)/1 = 2
# B: (2+1)/2 = 1.5
# C: (3+1)/3 ≈ 1.33
# Based on the results, the connection is given to C.
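The same arithmetic can be scripted as a sanity check; a small sketch of the SED selection using the worked numbers above (the server names, weights and counts are the example's illustrative values):

```shell
# SED picks the RS minimizing (active_connections + 1) / weight.
# Servers A, B, C with weights 1, 2, 3 and connection counts 1, 2, 3.
pick=$(awk 'BEGIN {
    n = split("A B C", name); split("1 2 3", w); split("1 2 3", c)
    min = -1
    for (i = 1; i <= n; i++) {
        d = (c[i] + 1) / w[i]                 # expected delay for server i
        if (min < 0 || d < min) { min = d; best = name[i] }
    }
    print best
}')
echo "$pick"   # C
```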

NQ # never queue
# No queuing: if any realserver's connection count is 0, the request is assigned to it directly, without performing the SED calculation.

LBLC # locality-based least connection

The "locality-based least connection" algorithm load-balances on the destination IP address and is mainly used in cache cluster systems.

# The algorithm finds the server most recently used for the requested destination IP address; if that server is available and not overloaded, the request is sent to it;
# if that server does not exist, or it is overloaded while some server is at half its workload, the "least connection" principle is used to select an available server and the request is sent there.

LBLCR # locality-based least connection with replication
# The "locality-based least connection with replication" algorithm also load-balances on the destination IP address and is mainly used in cache cluster systems.
# It differs from LBLC in that it maintains a mapping from a destination IP address to a set of servers, while LBLC maintains a mapping from a destination IP address to a single server.
# The algorithm finds the server set corresponding to the requested destination IP address and selects one server from the set by the "least connection" principle.
# If that server is not overloaded, the request is sent to it; if it is overloaded, a server is selected from the whole cluster by the "least connection" principle, added to the server set, and given the request. When the server set has not been modified for some time, the busiest server is removed from the set to reduce the degree of replication.

2.4 LVS Installation

Official site: http://www.linuxvirtualserver.org/software/
Versions:
1.27-7.el7 (latest version)
ipvsadm-1.26-1.src.rpm (for kernel 2.6.28-rc3 or later) - February 8, 2011
ipvsadm-1.26.tar.gz (for kernel 2.6.28-rc3 or later) - February 8, 2011

Note: select the version based on the kernel.

2.4.1 Yum Installation

Linux kernels from 2.4 onward basically support LVS; to use LVS you only need to install the management tool ipvsadm:

yum install ipvsadm

The yum installation automatically loads the ip_vs module:
# lsmod | grep ip_vs
ip_vs 140944 0

2.4.2 Source Installation

    1. Install dependencies:
      yum install popt-static popt-devel kernel-devel kernel libnl-devel libnl3-devel
      ln -s /usr/src/kernels/2.6.* /usr/src/linux   # link the kernel source
    2. Download the required ipvsadm-1.26.tar.gz, then compile and install:
      wget -c http://www.linuxvirtualserver.org/software/kernel-2.6/ipvsadm-1.26.tar.gz
      tar xzvf ipvsadm-1.26.tar.gz && cd ipvsadm-1.26 && make && make install

./configure --sysconf=/etc --with-kernel-dir=/usr/src/kernels/2.6.32-358.el6.x86_64/

If you did not create the soft link above, you need to specify the kernel directory as shown.
    3. Load the module:
      Run modprobe ip_vs, or run /usr/sbin/ipvsadm, which auto-loads it
      lsmod | grep ip_vs

Installation Summary

    1. On CentOS 5.x, install LVS version 1.24; do not use 1.26
    2. On CentOS 6.4, install LVS version 1.26, and run yum install libnl popt -y
    3. After installing LVS, run ipvsadm once to load the ip_vs module into the kernel

2.5 LVS IPVSADM Configuration

LVS itself is very similar to iptables; even the command formats are similar, since both are built on the kernel's netfilter framework. LVS is divided into two parts:

    1. The first part is the ipvs module working in kernel space; the actual LVS functionality is implemented by ipvs.
    2. The second part is ipvsadm, a user-space tool for defining cluster services. Its main job is to pass the administrator-defined list of cluster services to the ipvs module working in kernel space. The following is a brief introduction to the ipvsadm command.

2.5.1 Ipvsadm Parameters Detailed

Parameter description:
-A  add a virtual server (an instance, similar to a backend in haproxy), named by its VIP
-a  add a backend RS real server to a virtual server
-D  delete a virtual server (remove the VIP instance)
-d  delete an RS server
-t  specify the TCP service port provided by the virtual server
-s  the scheduling algorithm to use
-r  specify the RS real server address
-m  set the forwarding mode to NAT
-g  set the forwarding mode to DR (direct routing)
-i  set the forwarding mode to TUN (tunnel)
-w  the weight of the backend real server
-C --clear  # clear all configuration
--set tcp tcpfin udp  # set connection timeout values

The command to view the LVS forwarding list is: ipvsadm -L -n

# virtual-service-address: the IP address of the virtual server (typically the VIP; multiple virtual servers have multiple VIPs)
# real-service-address: the IP address of a real (backend) server
# scheduler: the scheduling method

The usage and format of ipvsadm are as follows:

ipvsadm -A|E -t|u|f virtual-service-address:port [-s scheduler] [-p [timeout]] [-M netmask]
ipvsadm -D -t|u|f virtual-service-address
ipvsadm -C
ipvsadm -R
ipvsadm -S [-n]
ipvsadm -a|e -t|u|f service-address:port -r real-server-address:port [-g|i|m] [-w weight]
ipvsadm -d -t|u|f service-address -r server-address
ipvsadm -L|l [options]
ipvsadm -Z [-t|u|f service-address]
ipvsadm --set tcp tcpfin udp
ipvsadm --start-daemon state [--mcast-interface interface]
ipvsadm --stop-daemon
ipvsadm -h

-A --add-service  # add a new virtual server record to the kernel's virtual server table, i.e. add a new virtual server
-E --edit-service  # edit a virtual server record in the kernel's virtual server table
-D --delete-service  # delete a virtual server record from the kernel's virtual server table
-C --clear  # clear all records from the kernel's virtual server table
-R --restore  # restore virtual server rules
-S --save  # save virtual server rules, output in a format readable by -R
-a --add-server  # add a new real server record to a record in the kernel's virtual server table, i.e. add a new real server to a virtual server
-e --edit-server  # edit a real server record within a virtual server record
-d --delete-server  # delete a real server record within a virtual server record
-L|-l --list  # display the kernel's virtual server table
-Z --zero  # zero the virtual service table counters (clear the current connection counts, etc.)
--set tcp tcpfin udp  # set connection timeout values
--start-daemon  # start the synchronization daemon, followed by master or backup to indicate whether this LVS router is the master or the backup; keepalived's VRRP can also provide this function
--stop-daemon  # stop the synchronization daemon
-h --help  # display help information

# Other options:
-t --tcp-service service-address  # the virtual server provides a TCP service [vip:port] or [real-server-ip:port]
-u --udp-service service-address  # the virtual server provides a UDP service [vip:port] or [real-server-ip:port]
-f --fwmark-service fwmark  # a service type marked via iptables
-s --scheduler scheduler  # the scheduling algorithm to use: rr|wrr|lc|wlc|lblc|lblcr|dh|sh|sed|nq; the default is wlc
-p --persistent [timeout]  # persistent service: multiple requests from the same client are handled by the same real server; the default timeout is 300 seconds
-M --netmask  # netmask
-r --real-server server-address  # the real server [real-server:port]
-g --gatewaying  # set the LVS operating mode to direct routing (the LVS default)
-i --ipip  # set the LVS operating mode to tunnel mode
-m --masquerading  # set the LVS operating mode to NAT mode
-w --weight weight  # the weight of the real server
--mcast-interface interface  # specify the multicast synchronization interface
-c --connection  # display current LVS connections, e.g. ipvsadm -L -c
--timeout  # display tcp, tcpfin and udp timeout values, e.g. ipvsadm -L --timeout
--daemon  # display the synchronization daemon status
--stats  # display statistics
--rate  # display rate information
--sort  # sort the virtual server and real server output
-n --numeric  # output IP addresses and ports in numeric form

2.5.2 Configuring the LVS Server

    1. Configure the VIP for the LVS:
      ifconfig eth0:0 10.204.3.250 up

    2. Add a virtual host (instance named by the VIP):
      ipvsadm -C                            # clear the configuration
      ipvsadm --set 5 5 5                   # set the tcp / tcpfin / udp timeout values
      ipvsadm -A -t 10.204.3.250:80 -s rr   # add the virtual host

    3. Add the real servers under the virtual server:
      ipvsadm -a -t 10.204.3.250:80 -r 10.204.3.21:80 -g -w 1   # add an RS; -g = DR mode, -w = weight
      ipvsadm -a -t 10.204.3.250:80 -r 10.204.3.22:80 -g -w 1

    4. View the configuration with ipvsadm -L -n

2.5.3 Configuring the LVS Realserver

    1. Bind the VIP on the RS:
      ifconfig lo:0 10.0.0.10/32
    2. Configure the RS side to suppress ARP responses:
      echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
      echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
      echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
      echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce

Monitoring RS access:
watch -n 1 ipvsadm -L -n   # view the ipvsadm table every second
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port          Forward Weight ActiveConn InActConn
TCP  10.204.3.250:80 rr
  -> 10.204.3.23:80              Route   1      2          20
  -> 10.204.3.24:80              Route   1      2          20
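The forwarding table can also be checked from a script. A small sketch that parses captured `ipvsadm -L -n` output (the here-doc sample mirrors the listing above, so no live director is needed to test the logic):

```shell
# Count the real servers behind the VIP by parsing ipvsadm -L -n output.
# "->" marks a real-server line in the listing.
sample='TCP  10.204.3.250:80 rr
  -> 10.204.3.23:80              Route   1      2          20
  -> 10.204.3.24:80              Route   1      2          20'
rs_count=$(printf '%s\n' "$sample" | grep -c '^ *->')
echo "$rs_count"   # 2
```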

2.5.4 ipvsadm Deletion and Modification

Deletion:
ipvsadm -D -t 10.0.0.10:80                      # delete the virtual (LVS) server
ipvsadm -d -t 10.0.0.10:80 -r 10.0.0.12:80      # remove an RS

2.1 Managing ipvsadm through Scripts

2.1.1 LVS Server Management Script

#!/bin/bash
# lvs_server: manage the LVS director's ipvsadm rules

VIP=10.204.1.250
PORT=80
RIP=(
10.204.3.23
10.204.3.24
)

start() {
ipvsadm -C
ifconfig eth0:0 $VIP up
ipvsadm -A -t $VIP:$PORT -s rr
for ((i=0;i<${#RIP[*]};i++))   # loop over the values in the array
do
ipvsadm -a -t $VIP:$PORT -r ${RIP[$i]}:$PORT -g
done
}

stop() {
ipvsadm -C
}

restart() {
stop
start
}

status() {
ipvsadm -ln
}

case $1 in
start)
start
;;
stop)
stop
;;
restart)
restart
;;
status)
status
;;
*)
echo "usage: $0 start|stop|restart|status"
esac
2.1.2 LVS Realserver Management Script:

#!/bin/sh
# LVS client (realserver) VIP binding script
VIP=192.168.33.188
case $1 in
start)
ifconfig lo:0 $VIP netmask 255.255.255.255 broadcast $VIP
/sbin/route add -host $VIP dev lo:0
echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
sysctl -p >/dev/null 2>&1
echo "realserver start OK"
exit 0
;;
stop)
ifconfig lo:0 down
route del $VIP >/dev/null 2>&1
echo "0" >/proc/sys/net/ipv4/conf/lo/arp_ignore
echo "0" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "0" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "0" >/proc/sys/net/ipv4/conf/all/arp_announce
echo "realserver stopped OK"
exit 1
;;
*)
echo "Usage: $0 {start|stop}"
;;
esac

2.1.3 LVS Process Monitoring and Recovery Script
#!/bin/bash
# Remove a failed realserver from the LVS table and re-add it once it recovers.
RS_1=192.168.136.129
RS_2=192.168.136.130
VIP=192.168.136.127
. /etc/init.d/functions

web_result() {
rs=$(curl -I -s $1 | awk 'NR==1 {print $2}')   # HTTP status code
return $rs
}

lvs_result() {
rs=$(ipvsadm -ln | grep $1:80 | wc -l)         # is this RS in the LVS table?
return $rs
}

auto_lvs() {
web_result $1
a=$?

lvs_result $1
b=$?

if [ $a -ne 200 ] && [ $b -ge 1 ]
then
ipvsadm -d -t $VIP:80 -r $1
action "kill $1" /bin/true
fi

if [ $a -eq 200 ] && [ $b -lt 1 ]
then
ipvsadm -a -t $VIP:80 -r $1 -g -w 1
action "add $1" /bin/true
fi
}

while true
do
auto_lvs $RS_1
auto_lvs $RS_2
sleep 2
done

2.2 Managing ipvsadm through Keepalived

Configuration Summary:

    1. In DR mode, the Realserver still needs to bind the VIP
    2. The LVS itself does not need to bind the VIP manually; it is specified in the configuration file
    3. For high availability, write a daemon script that checks whether the realserver service is healthy
    4. The connect_port in each real_server block must actually be open, otherwise the RS is misjudged as faulty and automatically removed
    5. LVS cannot translate ports; Nginx can

The LVS load balancer is a single point of failure; to solve this, keepalived runs on two LVS machines to provide high availability.

Write the LVS configuration into keepalived.conf; keepalived will then take over starting the LVS service:

global_defs {
    notification_email {
        [email protected]
    }
    notification_email_from [email protected]
    smtp_server 192.168.80.1
    smtp_connection_timeout 30
    router_id LVS_DEVEL          # the ID of this LVS; should be unique within the network
}
vrrp_instance VI_1 {
    state MASTER                 # the keepalived role: MASTER for the primary, BACKUP for the standby
    interface eth1               # the interface VRRP runs on
    virtual_router_id            # virtual router ID; must match between master and backup
    priority                     # priority; the higher the number, the higher the precedence; the master must be higher than the backup
    advert_int 1                 # check interval, default 1s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.80.200           # the virtual IP (VIP); for multiple VIPs, one per line
    }
}

Define the VIP and port that the LVS serves externally:

virtual_server 192.168.80.200 80 {
    delay_loop 6                 # health check interval, in seconds
    lb_algo wrr                  # load scheduling algorithm
    lb_kind DR                   # LVS forwarding mode: NAT, TUN or DR
    nat_mask 255.255.255.0
    persistence_timeout 60
    protocol TCP

    # the statements above correspond to:
    # ipvsadm -A -t 192.168.80.200:80 -s wrr -p 20

    real_server 192.168.80.102 80 {   # the IP address of real server 1
        weight 3                      # node weight; the larger the number, the higher the weight
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }

    real_server 192.168.80.103 80 {   # the IP address of real server 2
        weight 3                      # node weight; the larger the number, the higher the weight
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}

# the real_server statements correspond to:
# ipvsadm -a -t 192.168.80.200:80 -r 192.168.80.102:80 -g -w 1
# ipvsadm -a -t 192.168.80.200:80 -r 192.168.80.103:80 -g -w 1

2.3 LVS + Switch OSPF to Implement Multi-Master

2.4 LVS Troubleshooting

tcpdump -nn port 80                          # capture packets on port 80
tcpdump -nn port 80 and host 192.168.1.100

2.4.1 LVS Troubleshooting Process

1. Ping the site's domain name; if it resolves, DNS is fine
2. Log in to the LVS server and check ipvsadm -ln

    1. Check the log: tail -fn 100 /var/log/messages
    2. Check port 80 on the realservers
    3. Check the Zabbix monitoring
    4. Check the keepalived configuration file and process
    5. Check whether the realserver.sh script is running properly
    6. Capture packets with tcpdump

    7. When access to the site through the LVS is slow or impossible, check whether the VIP of the backend site has been stopped

2.4.2 LVS Distributes Requests Unevenly Across RS (Fault)

In production, ipvsadm -L -n showed two RS nodes with unbalanced load: one received many requests and the other none. The idle RS tested healthy and lo:VIP was bound, yet it still received no requests.

Cause of the problem:
persistent 10 was configured, so persistent sessions are maintained: when client A visits the website and LVS distributes its request to RS2, all subsequent requests and clicks from client A are also sent to RS2.

Workaround:
Comment out persistent 10 in the keepalived configuration and run /etc/init.d/keepalived reload; after that, requests are load-balanced across both sides.

Other reasons:

    1. LVS's own session persistence setting (-p, persistent 300); optimization: large sites should use cookies instead of sessions where possible
    2. The LVS scheduling algorithm setting, e.g. RR, WRR, WLC, LC
    3. Session persistence settings on the backend RS nodes, e.g. Apache's keepalive parameter
    4. With little traffic, the imbalance is more pronounced
    5. The duration of user requests and the size of the requested resources

Solutions for implementing session persistence:
http://oldboy/blog.51cto.com/2561410/1331316
http://oldboy/blog.51cto.com/2561410/1332468

2.4.3 LVS Fault Debugging Ideas

    1. Check the correctness of the LVS scheduling rules and IPs on the scheduler
    2. Check the VIP binding and ARP suppression on the RS nodes
      Production handling ideas:
      A. Monitor the bound VIP in real time; alarm or handle automatically when a problem occurs
      B. Keep a configuration file for the bound VIP
      Ideas for configuring ARP suppression:
      A. With a single VIP, the stop action can reset the parameters to 0
      B. If multiple VIPs are bound on the RS side, do not reset to 0 even when stopping one VIP binding
      if [ ${#VIP[@]} -le 1 ];then
      echo "0" >/proc/sys/net/ipv4/conf/lo/arp_ignore
      echo "0" >/proc/sys/net/ipv4/conf/lo/arp_announce
      echo "0" >/proc/sys/net/ipv4/conf/all/arp_ignore
      echo "0" >/proc/sys/net/ipv4/conf/all/arp_announce
      fi

    3. Check the service provided by the RS node itself
    4. Auxiliary troubleshooting tools: tcpdump, ping
    5. Check the load balancing and reverse proxy cluster as a whole

2.4.4 LVS make Error

Running make in the ipvsadm-1.23 source directory:
# make
make -C libipvs
make[1]: Entering directory '/opt/lvs/ipvsadm-1.23/libipvs'
gcc -Wall -Wunused -Wstrict-prototypes -g -O2 -I/usr/src/linux/include -DHAVE_NET_IP_VS_H -c -o libipvs.o libipvs.c
In file included from libipvs.c:23:
libipvs.h:14:23: error: net/ip_vs.h: No such file or directory

ln -s /usr/src/kernels/2.6.18-194.11.3.el5-i686/ /usr/src/linux

Sometimes, however, the kernel path cannot be found: many solutions online give the symlink above, but a freshly installed system may not have the kernels directory at all.
Workaround: yum install kernel-devel
Then compile and link again, and everything is OK.

