Comparison of the three LVS modes and their advantages and disadvantages

Contents:

Configuration of the three LVS modes

Comparison of the advantages and disadvantages of the three LVS working modes

Configuration of the three LVS modes

Brief configuration of the three LVS modes (LVS-DR, LVS-NAT, LVS-TUN)

What is LVS? See the official Linux Virtual Server documentation:

http://www.linuxvirtualserver.org/VS-NAT.html

http://www.linuxvirtualserver.org/VS-IPTunneling.html

http://www.linuxvirtualserver.org/VS-DRouting.html

First, install the ipvsadm management program.

Download: http://www.linuxvirtualserver.org/software/

Note: choose the ipvsadm version that matches your kernel version.

tar zxvf ipvsadm-1.24.tar.gz
cd ipvsadm-1.24
make
make install
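After installation, a quick sanity check (a minimal sketch; it assumes the ip_vs kernel module was built for your kernel):

  modprobe ip_vs
  ipvsadm -L -n    # an empty virtual server table confirms the kernel side is working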

1: LVS-DR mode (the director and the real servers each have a NIC attached to the same physical network segment)
(Brief network structure diagram omitted.)

Configure LVS Server

Reference script:

  #!/bin/sh
  # LVS-DR director start/stop script
  VIP=192.168.0.210
  RIP1=192.168.0.175
  RIP2=192.168.0.145
  . /etc/rc.d/init.d/functions
  case "$1" in
  start)
      echo "Start LVS of DirectorServer"
      # Bind the virtual IP to an alias interface with a host netmask
      /sbin/ifconfig eth0:1 $VIP broadcast $VIP netmask 255.255.255.255 up
      /sbin/route add -host $VIP dev eth0:1
      # Clear the IPVS table
      /sbin/ipvsadm -C
      # Define the virtual service (round-robin), then its real servers (-g = DR mode)
      /sbin/ipvsadm -A -t $VIP:80 -s rr
      /sbin/ipvsadm -a -t $VIP:80 -r $RIP1:80 -g
      /sbin/ipvsadm -a -t $VIP:80 -r $RIP2:80 -g
      # Show the resulting IPVS table
      /sbin/ipvsadm
      ;;
  stop)
      echo "Close LVS DirectorServer"
      /sbin/ipvsadm -C
      /sbin/ifconfig eth0:1 down
      ;;
  *)
      echo "Usage: $0 {start|stop}"
      exit 1
      ;;
  esac
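To bring the director up and check the result (the install path /etc/init.d/lvs-dr is illustrative, not from the original):

  /etc/init.d/lvs-dr start
  /sbin/ipvsadm -L -n    # the VIP:80 service should list both real servers with Route forwarding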

Configure the real servers (RIPs)

Reference script:

  #!/bin/bash
  # LVS-DR real server start/stop script
  VIP=192.168.0.210
  BROADCAST=192.168.0.255   # broadcast address of the VIP's subnet
  . /etc/rc.d/init.d/functions
  case "$1" in
  start)
      echo "Preparing for Real Server"
      # Suppress ARP for the VIP so only the director answers ARP requests
      echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
      echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
      echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
      echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
      # Bind the VIP to a loopback alias with a host netmask
      ifconfig lo:0 $VIP netmask 255.255.255.255 broadcast $BROADCAST up
      /sbin/route add -host $VIP dev lo:0
      ;;
  stop)
      ifconfig lo:0 down
      echo "0" > /proc/sys/net/ipv4/conf/lo/arp_ignore
      echo "0" > /proc/sys/net/ipv4/conf/lo/arp_announce
      echo "0" > /proc/sys/net/ipv4/conf/all/arp_ignore
      echo "0" > /proc/sys/net/ipv4/conf/all/arp_announce
      ;;
  *)
      echo "Usage: $0 {start|stop}"
      exit 1
      ;;
  esac
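On each real server, a quick read-only check that the VIP binding and ARP suppression took effect (values per the script above):

  ifconfig lo:0                                  # should show the VIP 192.168.0.210
  cat /proc/sys/net/ipv4/conf/all/arp_ignore     # should print 1
  cat /proc/sys/net/ipv4/conf/all/arp_announce   # should print 2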

2: LVS-TUN Mode

(Brief network architecture diagram omitted.)

Configure LVS Server

Reference script:

  #!/bin/sh
  # Description: Start LVS of DirectorServer (tunneling mode)
  # Note: the director has two addresses: the VIP below and its own
  # real address (for example, 192.168.25.42).
  VIP=192.168.25.41
  RIP1=192.168.25.44
  RIP2=192.168.25.45
  # RIPn=192.168.25.n
  GW=192.168.25.254
  . /etc/rc.d/init.d/functions
  case "$1" in
  start)
      echo "Start LVS of DirectorServer"
      # Bind the virtual IP to the tunnel interface with a host netmask
      /sbin/ifconfig tunl0 $VIP broadcast $VIP netmask 255.255.255.255 up
      /sbin/route add -host $VIP dev tunl0
      # Clear the IPVS table
      /sbin/ipvsadm -C
      # Define the virtual service (round-robin), then its real servers (-i = TUN mode)
      /sbin/ipvsadm -A -t $VIP:80 -s rr
      /sbin/ipvsadm -a -t $VIP:80 -r $RIP1:80 -i
      /sbin/ipvsadm -a -t $VIP:80 -r $RIP2:80 -i
      # /sbin/ipvsadm -a -t $VIP:80 -r $RIPn:80 -i
      # Show the resulting IPVS table
      /sbin/ipvsadm
      ;;
  stop)
      echo "Close LVS DirectorServer"
      ifconfig tunl0 down
      /sbin/ipvsadm -C
      ;;
  *)
      echo "Usage: $0 {start|stop}"
      exit 1
      ;;
  esac
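If tunl0 does not exist when the script runs, the IP-in-IP module usually has to be loaded first (whether it is built in or modular depends on the kernel configuration):

  modprobe ipip      # provides the tunl0 interface on most kernels
  ifconfig tunl0     # should show the VIP once the script has run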

Configure Real Server

Reference script:

  #!/bin/sh
  # GHB in 20060812
  # Description: configure the real server's tunl interface and apply the ARP fix
  VIP=192.168.25.41   # must match the director's VIP
  . /etc/rc.d/init.d/functions
  case "$1" in
  start)
      echo "tunl port starting"
      # Bind the VIP to the tunnel interface with a host netmask
      ifconfig tunl0 $VIP netmask 255.255.255.255 broadcast $VIP up
      /sbin/route add -host $VIP dev tunl0
      # Suppress ARP for the VIP so only the director answers ARP requests
      echo "1" > /proc/sys/net/ipv4/conf/tunl0/arp_ignore
      echo "2" > /proc/sys/net/ipv4/conf/tunl0/arp_announce
      echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
      echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
      sysctl -p
      ;;
  stop)
      echo "tunl port closing"
      ifconfig tunl0 down
      echo "0" > /proc/sys/net/ipv4/conf/all/arp_ignore
      echo "0" > /proc/sys/net/ipv4/conf/all/arp_announce
      ;;
  *)
      echo "Usage: $0 {start|stop}"
      exit 1
      ;;
  esac
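A simple end-to-end test from a client machine (it assumes an HTTP service is running on each real server; the URL uses the VIP from the scripts above):

  wget -O - http://192.168.25.41/    # repeated requests should rotate across the real servers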

3: LVS-NAT Mode

(Brief network architecture diagram omitted.)

Configure LVS Server

Reference script:

  #!/bin/sh
  # Description: Start LVS in NAT mode
  VIP=202.99.59.110   # public address on the external NIC
  RIP1=10.1.1.2
  RIP2=10.1.1.3
  # RIPn=10.1.1.n
  GW=10.1.1.1
  . /etc/rc.d/init.d/functions
  case "$1" in
  start)
      echo "Start LVS of NAT server"
      # Enable forwarding and disable ICMP redirects (eth1 is the intranet NIC)
      echo "1" > /proc/sys/net/ipv4/ip_forward
      echo "0" > /proc/sys/net/ipv4/conf/all/send_redirects
      echo "0" > /proc/sys/net/ipv4/conf/default/send_redirects
      echo "0" > /proc/sys/net/ipv4/conf/eth0/send_redirects
      echo "0" > /proc/sys/net/ipv4/conf/eth1/send_redirects
      # Clear the IPVS table
      /sbin/ipvsadm -C
      # Define the virtual service first (scheduler assumed round-robin, as in
      # the DR/TUN scripts), then its real servers (-m = NAT mode, weight 1)
      /sbin/ipvsadm -A -t $VIP:80 -s rr
      /sbin/ipvsadm -a -t $VIP:80 -r $RIP1:80 -m -w 1
      /sbin/ipvsadm -a -t $VIP:80 -r $RIP2:80 -m -w 1
      # Show the resulting IPVS table
      /sbin/ipvsadm
      ;;
  stop)
      echo "Close LVS NAT server"
      echo "0" > /proc/sys/net/ipv4/ip_forward
      echo "1" > /proc/sys/net/ipv4/conf/all/send_redirects
      echo "1" > /proc/sys/net/ipv4/conf/default/send_redirects
      echo "1" > /proc/sys/net/ipv4/conf/eth0/send_redirects
      echo "1" > /proc/sys/net/ipv4/conf/eth1/send_redirects
      /sbin/ipvsadm -C
      ;;
  *)
      echo "Usage: $0 {start|stop}"
      exit 1
      ;;
  esac
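After starting, the IPVS table should show both real servers in masquerading mode (a read-only check; the exact column layout varies with the ipvsadm version):

  /sbin/ipvsadm -L -n    # expect 202.99.59.110:80 with 10.1.1.2 and 10.1.1.3 listed as Masq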

Configure Real Server

The back-end real servers need no LVS-specific configuration in LVS-NAT mode; the only requirement is that each real server's default gateway points at the director's internal address (10.1.1.1 in the script above), as shown below.
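A minimal sketch of that gateway setting on a real server, in net-tools syntax:

  route add default gw 10.1.1.1    # replies must return through the director so it can rewrite them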

Tips: in ipvsadm, -g selects DR (direct routing) mode, -m selects NAT (masquerading) mode, and -i selects TUN (IP tunneling) mode.
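Side by side, registering the same real server differs only in that flag ($VIP and $RIP stand in for the addresses used earlier):

  ipvsadm -a -t $VIP:80 -r $RIP:80 -g    # DR: forwarded by MAC rewriting on the shared segment
  ipvsadm -a -t $VIP:80 -r $RIP:80 -m    # NAT: request and reply rewritten by the director
  ipvsadm -a -t $VIP:80 -r $RIP:80 -i    # TUN: encapsulated IP-in-IP and sent to the real server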

Comparison of the advantages and disadvantages of the three LVS working modes

1. Virtual Server via NAT (VS-NAT)

Advantage: the physical servers in the cluster can run any operating system that supports TCP/IP, and they can use private IP addresses; only the load balancer needs a legal (public) IP address.

Disadvantage: limited scalability. When the number of server nodes (ordinary PC servers) grows to about 20 or more, the load balancer becomes the bottleneck of the whole system, because every request packet and every response packet must be rewritten by it. Assuming an average TCP packet length of 536 bytes and an average packet-rewriting delay of about 60 us (measured on a Pentium processor; faster processors shorten it), the maximum throughput of the load balancer is about 8.93 MB/s. If the average throughput of each physical server is 400 KB/s, the load balancer can drive about 22 physical servers.
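The 22-server figure follows directly from those numbers; a back-of-the-envelope check with bc:

  echo "scale=2; 536 / 0.000060 / 1000000" | bc    # ~8.93 MB/s rewriting ceiling
  echo "scale=0; 8930 / 400" | bc                  # ~22 real servers at 400 KB/s each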

Solution: if the load balancer does become the bottleneck of the whole system, there are two ways around it. One is a hybrid approach: run several VS-NAT clusters and publish them under one round-robin DNS domain. The other is to use virtual server via IP tunneling or virtual server via direct routing, which scale much better. Load balancers can also be nested, with a VS-Tunneling or VS-DRouting balancer at the front end followed by VS-NAT balancers.

2. Virtual Server via IP Tunneling (VS-TUN)

We found that many Internet services (such as Web servers) have very short request packets, while the response packets are usually very large.

Advantage: the load balancer only distributes request packets to the physical servers, and the physical servers send response packets directly to the users. The load balancer can therefore handle a huge volume of requests: a single load balancer can serve more than 100 physical servers without itself becoming the bottleneck of the system. In VS-TUN mode, if the load balancer has a 100 Mbps full-duplex NIC, the whole virtual server can reach about 1 Gbps of throughput.
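The 1 Gbps figure rests on that request/response asymmetry; a rough sketch of the arithmetic (the 10:1 response-to-request ratio is an illustrative assumption, not from the original):

  echo "100 * 10" | bc    # 100 Mbps of requests in can drive ~1000 Mbps of responses out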

Disadvantage: this method requires all servers to support the "IP tunneling" (IP encapsulation) protocol. So far it has only been implemented on Linux; whether servers running other operating systems can be supported is still being explored.

3. Virtual Server via Direct Routing (VS-DR)

Advantage: like VS-TUN, the load balancer only distributes requests, while response packets return to the client over their own route. Compared with VS-TUN, VS-DR requires no tunnel support, so most operating systems can act as physical servers, including Linux 2.0.36, 2.2.9, 2.2.10, and 2.2.12; Solaris 2.5.1, 2.6, and 2.7; FreeBSD 3.1, 3.2, and 3.3; NT 4.0 without patching; IRIX 6.5; HP-UX 11; and so on.

Disadvantage: the load balancer's NIC must be on the same physical network segment as the real servers' NICs.
