LVS/DR Mode in Practice

Source: Internet
Author: User
Tags: inotify, rsync, haproxy

The production servers are mostly cleaned up, so it is time to move to a layered server architecture: split the LNMP stack apart and put a load-balancing (LB) cluster in front. First, some theory.

An LB cluster is mainly about concurrent processing capacity. The commonly used load balancers are LVS, Nginx, and HAProxy.

LVS is a layer-4 load balancer. It cannot balance on application-layer features, but its forwarding efficiency is somewhat higher than that of layer-7 balancers.

Nginx and HAProxy are layer-7 load balancers; they can balance on application-layer characteristics and are therefore more flexible.

This article mainly covers the principles of LVS and how to configure it.

LVS works by hooking into the iptables INPUT chain: it listens there, intercepts packets destined for a cluster service, rewrites the packet headers (the rewrite differs by LVS type), and forwards the packets to a back-end real server (RS); the RS processes the request, and the response is returned to the user.

LVS (Linux Virtual Server) and ordinary iptables filtering rules cannot be used simultaneously on the director.
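Before configuring anything, it helps to confirm that the IPVS kernel module and the ipvsadm userland tool are present. A minimal check might look like the following (commands assume a RHEL/CentOS-era box run as root):

```shell
# Sketch: verify IPVS is available before configuring the director.
modprobe ip_vs          # load the IPVS kernel module
lsmod | grep ip_vs      # confirm the module is loaded
ipvsadm -L -n           # list current virtual services (empty on a fresh box)
```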


Types of LVS:

NAT: network address translation; works like DNAT, but with multiple targets

DR: direct routing

TUN: tunneling


LVS scheduling methods:

There are 10 in total, as follows.

Fixed (also called static) scheduling, of which there are four kinds:

RR: round robin (polling)

WRR: weighted round robin

SH: source address hash

DH: destination address hash

Dynamic scheduling, of which there are six kinds:

LC: least connection

overhead = active*256 + inactive

whichever RS has the smallest value is picked

WLC: weighted least connection

overhead = (active*256 + inactive)/weight

SED: shortest expected delay

overhead = (active+1)*256/weight

NQ: never queue

LBLC: locality-based least connection

LBLCR: locality-based least connection with replication


The default scheduling method is WLC.
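The overhead formulas above are easy to sanity-check by hand. The following toy shell functions (not part of ipvsadm, and the connection counts are made-up numbers) just evaluate the three formulas:

```shell
#!/bin/sh
# Toy calculation of the LC/WLC/SED overhead formulas (illustration only).
# Hypothetical numbers: RS1 has 10 active / 200 inactive connections, weight 2;
# RS2 has 8 active / 100 inactive connections, weight 1.
lc()   { echo $(( $1 * 256 + $2 )); }           # LC:  active*256 + inactive
wlc()  { echo $(( ($1 * 256 + $2) / $3 )); }    # WLC: (active*256 + inactive)/weight
sed_() { echo $(( ($1 + 1) * 256 / $2 )); }     # SED: (active+1)*256/weight

echo "RS1 LC=$(lc 10 200) WLC=$(wlc 10 200 2) SED=$(sed_ 10 2)"
echo "RS2 LC=$(lc 8 100) WLC=$(wlc 8 100 1) SED=$(sed_ 8 1)"
# prints: RS1 LC=2760 WLC=1380 SED=1408
#         RS2 LC=2148 WLC=2148 SED=2304
```

Under LC alone, RS2 (2148) would win despite RS1's higher weight; WLC and SED both pick RS1, which is why a weighted dynamic method is normally preferred.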

The detailed LVS workflow is well documented online, so it can be looked up there. Here we pick the most commonly used mode, DR, for configuration. A simple architecture diagram follows; the case is a BBS forum.

[Architecture diagram: lvs-dr.png]


The Director configuration script is as follows:

#!/bin/bash
#
# LVS script for VS/DR
# chkconfig: - 90 10
. /etc/rc.d/init.d/functions
#
VIP=192.168.8.230
DIP=192.168.8.226
RIP1=192.168.8.224
RIP2=192.168.8.225
PORT=80
RSWEIGHT1=2
RSWEIGHT2=5
#
case "$1" in
start)
  /sbin/ifconfig eth0:0 $VIP broadcast $VIP netmask 255.255.255.255 up
  /sbin/route add -host $VIP dev eth0:0
  # Since this is the director, we must be able to forward packets
  echo 1 > /proc/sys/net/ipv4/ip_forward
  # Clear all iptables rules
  /sbin/iptables -F
  # Reset iptables counters
  /sbin/iptables -Z
  # Clear all ipvsadm rules/services
  /sbin/ipvsadm -C
  # Add an IP virtual service for VIP 192.168.8.230 port 80,
  # using the weighted, dynamic WLC scheduling method
  /sbin/ipvsadm -A -t $VIP:80 -s wlc
  # Now direct packets for this VIP to the real server IPs (RIPs) inside the cluster
  /sbin/ipvsadm -a -t $VIP:80 -r $RIP1 -g -w $RSWEIGHT1
  /sbin/ipvsadm -a -t $VIP:80 -r $RIP2 -g -w $RSWEIGHT2
  /bin/touch /var/lock/subsys/ipvsadm &> /dev/null
  ;;
stop)
  # Stop forwarding packets
  echo 0 > /proc/sys/net/ipv4/ip_forward
  # Reset ipvsadm
  /sbin/ipvsadm -C
  # Bring down the VIP interface
  /sbin/route del $VIP
  /sbin/ifconfig eth0:0 down
  /bin/rm -f /var/lock/subsys/ipvsadm
  echo "ipvs is stopped..."
  ;;
status)
  if [ ! -e /var/lock/subsys/ipvsadm ]; then
    echo "ipvsadm is stopped..."
  else
    echo "ipvsadm is running..."
    ipvsadm -L -n
  fi
  ;;
*)
  echo "Usage: $0 {start|stop|status}"
  ;;
esac


The configuration script for the back-end RS is as follows:

#!/bin/bash
#
# Script to start an LVS DR real server.
# chkconfig: - 90 10
# description: LVS DR real server
. /etc/rc.d/init.d/functions
VIP=192.168.8.230
host=`/bin/hostname`
case "$1" in
start)
  # Start LVS-DR real server on this machine
  /sbin/ifconfig lo down
  /sbin/ifconfig lo up
  echo 1 > /proc/sys/net/ipv4/conf/eth0/arp_ignore
  echo 2 > /proc/sys/net/ipv4/conf/eth0/arp_announce
  echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
  echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
  /sbin/ifconfig lo:0 $VIP broadcast $VIP netmask 255.255.255.255 up
  /sbin/route add -host $VIP dev lo:0
  ;;
stop)
  # Stop LVS-DR real server loopback device(s)
  /sbin/ifconfig lo:0 down
  echo 0 > /proc/sys/net/ipv4/conf/eth0/arp_ignore
  echo 0 > /proc/sys/net/ipv4/conf/eth0/arp_announce
  echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
  echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
  ;;
status)
  # Status of LVS-DR real server
  islothere=`/sbin/ifconfig lo:0 | grep -i $VIP`
  isrothere=`netstat -rn | grep -i "lo" | grep -i $VIP`
  if [ -z "$islothere" -o -z "$isrothere" ]; then
    # Either the route or the lo:0 device was not found
    echo "LVS-DR real server stopped."
  else
    echo "LVS-DR real server running."
  fi
  ;;
*)
  # Invalid entry
  echo "$0: Usage: $0 {start|status|stop}"
  exit 1
  ;;
esac

Start the ipvsadm service on the director and the real-server script on each RS. Then a health-check script for the LVS back ends is needed; run the following script on the director:

#!/bin/bash
#
# Used for LVS real-server health checks.
VIP=192.168.8.230
CPORT=80
FAIL_BACK=127.0.0.1
RS=("192.168.8.224" "192.168.8.225")
declare -a RSSTATUS
RW=("2" "1")
RPORT=80
TYPE=g
CHKLOOP=3
LOG=/var/log/ipvsmonitor.log

addrs() {
  ipvsadm -a -t $VIP:$CPORT -r $1:$RPORT -$TYPE -w $2
  [ $? -eq 0 ] && return 0 || return 1
}

delrs() {
  ipvsadm -d -t $VIP:$CPORT -r $1:$RPORT
  [ $? -eq 0 ] && return 0 || return 1
}

checkrs() {
  local I=1
  while [ $I -le $CHKLOOP ]; do
    if curl --connect-timeout 1 http://$1 &> /dev/null; then
      return 0
    fi
    let I++
  done
  return 1
}

initstatus() {
  local I
  local COUNT=0
  for I in ${RS[*]}; do
    if ipvsadm -L -n | grep "$I:$RPORT" &> /dev/null; then
      RSSTATUS[$COUNT]=1
    else
      RSSTATUS[$COUNT]=0
    fi
    let COUNT++
  done
}

initstatus

while :; do
  let COUNT=0
  for I in ${RS[*]}; do
    if checkrs $I; then
      # RS is healthy: re-add it if it was marked down
      if [ ${RSSTATUS[$COUNT]} -eq 0 ]; then
        addrs $I ${RW[$COUNT]}
        [ $? -eq 0 ] && RSSTATUS[$COUNT]=1 && echo "`date +'%F %H:%M:%S'`, $I is back." >> $LOG
      fi
    else
      # RS failed the check: remove it if it was marked up
      if [ ${RSSTATUS[$COUNT]} -eq 1 ]; then
        delrs $I
        [ $? -eq 0 ] && RSSTATUS[$COUNT]=0 && echo "`date +'%F %H:%M:%S'`, $I is gone." >> $LOG
      fi
    fi
    let COUNT++
  done
  sleep 5
done

With that, the basic LVS configuration is complete. The BBS user-session problem has many possible solutions; to solve it with LVS alone, use LVS persistent connections, configured as follows:

ipvsadm -E -t 192.168.8.230:80 -s wlc -p 3600

This way, once a user has been sent to a particular RS, all of that user's requests within 3600 seconds go to the same RS. Obviously this weakens the load-balancing effect, but it does solve the session problem.
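Persistence can be verified from the director. A quick check might look like this (the -E edit assumes the virtual service from the director script above already exists):

```shell
# Sketch: enable persistence on the existing virtual service and inspect it.
# -p 3600 keeps each client on the same RS for 3600 seconds.
ipvsadm -E -t 192.168.8.230:80 -s wlc -p 3600
ipvsadm -L -n       # the service line should now show "persistent 3600"
ipvsadm -L -n -c    # dump the connection table, including persistence templates
```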


Next, a brief look at the rsync+inotify setup used to synchronize the BBS web files.

There is not much to say about rsync itself: install and configure rsync on each RS and start the rsync service. Then install inotify-tools on the director and run the following watcher script:
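For completeness, the RS-side rsync daemon setup could be sketched as follows. The module name "web" and the /home/www/ path are chosen to match the inotify script; adjust them to your layout, and note this is a minimal example, not a hardened configuration:

```shell
# Sketch: minimal rsync daemon setup on each RS.
cat > /etc/rsyncd.conf <<'EOF'
uid = root
gid = root
use chroot = no
max connections = 10
pid file = /var/run/rsyncd.pid
log file = /var/log/rsyncd.log

[web]
path = /home/www/
read only = no
hosts allow = 192.168.8.226
EOF
rsync --daemon    # start the rsync daemon (or run it via xinetd)
```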

#!/bin/bash
# Created by lixiang, 2014/12/31
# Used to rsync the web data.
src=/home/www/
des1=web
des2=web
host1=192.168.8.224
host2=192.168.8.225
user1=root
user2=root
/usr/local/bin/inotifywait -mrq --timefmt '%d/%m/%y %H:%M' --format '%T %w%f' \
  -e modify,delete,create,attrib $src | while read file
do
  /usr/bin/rsync -avrp --delete --ignore-errors --progress $src $user1@$host1::$des1
  /usr/bin/rsync -avrp --delete --ignore-errors --progress $src $user2@$host2::$des2
  echo "${file} was rsynced" >> /var/log/rsync.log 2>&1
done

Also write a monitor script that watches the inotify process, and put it in crontab to run every 5 minutes, as follows:

#!/bin/bash
# Used to monitor the inotify process.
# (pgrep searches the process table itself, so no ps pipeline is needed.)
pgrep -f inotifywait &> /dev/null
if [ $? -eq 0 ]; then
  echo "inotify is running..."
else
  /bin/sh /root/inotify.sh &> /dev/null &
  echo "inotify was restarted..."
fi
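The crontab entry itself might look like the following (assuming the monitor script was saved as /root/monitor_inotify.sh; the path is illustrative):

```shell
# Sketch: run the inotify monitor every 5 minutes via cron.
chmod +x /root/monitor_inotify.sh
( crontab -l 2>/dev/null; echo '*/5 * * * * /bin/sh /root/monitor_inotify.sh' ) | crontab -
crontab -l    # confirm the entry was installed
```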


There is not much to say about the NFS configuration either. Note that iptables must be off on the director, and each back-end RS mounts the user-upload directory; remember to add the mount to /etc/fstab.
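As a rough sketch of that step (the export path /home/www/upload is hypothetical; the director IP matches the scripts above):

```shell
# Sketch: export the upload directory from the director and mount it on each RS.
# On the director (192.168.8.226):
echo '/home/www/upload 192.168.8.0/24(rw,sync,no_root_squash)' >> /etc/exports
exportfs -ra
# On each RS:
mount -t nfs 192.168.8.226:/home/www/upload /home/www/upload
# Persist the mount across reboots:
echo '192.168.8.226:/home/www/upload /home/www/upload nfs defaults 0 0' >> /etc/fstab
```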


For the shared MySQL, it is enough to point the BBS application's MySQL setting at a single MySQL server.



Dual-ISP configuration:

Many servers today have dual-ISP uplinks. To give users a better experience, the corresponding LVS configuration is also simple: just configure two virtual service instances, one VIP on the Unicom line and one VIP on the Telecom line, and apply the health-check script to the two RSs for both.
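A sketch of the two-instance setup (the two VIP addresses are made-up placeholders; the RS addresses and weights match the director script above):

```shell
# Sketch: one virtual service per ISP VIP, both backed by the same two RSs.
VIP_CNC=1.2.3.4    # hypothetical China Unicom VIP
VIP_CT=5.6.7.8     # hypothetical China Telecom VIP
for vip in $VIP_CNC $VIP_CT; do
  ipvsadm -A -t $vip:80 -s wlc
  ipvsadm -a -t $vip:80 -r 192.168.8.224 -g -w 2
  ipvsadm -a -t $vip:80 -r 192.168.8.225 -g -w 5
done
```

Each RS would also need both VIPs configured on lo:0 with the same arp_ignore/arp_announce settings as in the real-server script.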


With that, this simple LB cluster is built. One problem actually remains: the director is a single point of failure. That will be addressed in a follow-up post.


This article is from the "Operation and Maintenance Road" blog; reprinting is declined!

