Linux Notes: Web Cluster LVS-DR in Practice

Source: Internet
Author: User
Tags: naming convention, node, server

One. Introduction to Load Balancing
Load balancing means distributing load (work tasks, access requests) evenly across multiple operating units (servers, components) for execution. It is the classic answer to high performance, single points of failure (high availability), and scalability (horizontal scaling).
Two. The Need for Load Balancing
Load-balancing clusters provide an inexpensive, efficient, and transparent way to extend the bandwidth of network devices and servers, increase throughput, enhance network data-processing capability, and improve network flexibility and availability.
1) Concurrency or data traffic that a single computer cannot withstand is spread across multiple node devices for processing, reducing user wait time and improving the user experience.
2) A single heavy computation is split across multiple node devices for parallel processing; after each node finishes, the results are aggregated and returned to the user, greatly improving the system's processing capability.
3) 24x7 service is guaranteed: the failure of any one or several node devices does not affect the business.

Three. About LVS

LVS is short for Linux Virtual Server, a virtual server cluster system. It is open-source software, founded in May 1998 by Dr. Zhang Wensong of the National University of Defense Technology, and one of the earliest free-software projects in China. It implements simple load balancing on the Linux platform with low cost, high performance, high reliability, and high availability.

Four. Naming Conventions for LVS Terminology

The abbreviations used throughout this article:

LB / DS - Load balancer (Director Server), the scheduler running LVS
RS      - Real Server, a backend node that actually handles requests
VIP     - Virtual IP address, the address clients connect to
DIP     - Director IP address, the scheduler's own address
RIP     - Real server IP address
CIP     - Client IP address

Five. The Three Operating Modes of LVS Load-Balancing Clusters

The three operating modes of LVS:

1) VS/NAT mode (Network Address Translation)
2) VS/TUN mode (IP tunneling)
3) VS/DR mode (Direct Routing)

1. NAT mode (Network Address Translation)
Virtual Server via Network Address Translation (VS/NAT)
Scheduling is achieved through network address translation. The scheduler (LB) receives the client's request packet (whose destination IP is the VIP) and, according to the scheduling algorithm, decides which backend real server (RS) should receive it. The scheduler then rewrites the destination IP address and port of the client's request packet to the IP address (RIP) of the chosen real server, so the RS receives the client's request. After the RS processes the request, it consults its default route (in NAT mode the RS default route must point at the LB server) and sends the response packet back to the LB; the LB rewrites the packet's source address to the virtual address (VIP) and returns it to the client.

(Figure: IP packet flow during NAT-mode scheduling)

Process description:
1) The client sends a request; the destination IP is the VIP.
2) The request reaches the LB server. The LB rewrites the destination address to an RIP address and the corresponding port (which RIP is chosen depends on the scheduling algorithm) and records the connection in its connection hash table.
3) The packet travels from the LB server to the RS web server, which processes it and responds. The web server's gateway must be the LB, so the response data returns to the LB server.
4) On receiving the returned data from the RS, the LB looks up the connection hash table, rewrites the source address to the VIP and the destination address to the CIP with the corresponding port (e.g. 80), and sends the data from the LB to the client.
5) The client only ever sees the VIP; the RIP of a real server is never exposed.

NAT mode pros and cons:
1) NAT rewrites addresses on both the request and the response, so every packet passes through the LB. When site traffic is heavy, the load-balancing scheduler becomes a serious bottleneck; in general only 10-20 nodes can be supported.
2) Only one public IP address needs to be configured, on the LB.
3) The gateway of every backend node server must be the intranet address of the scheduler (LB).
4) NAT mode supports translation of both IP address and port, i.e. the port the user requests and the port on the real server may differ.
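
As a minimal sketch of a NAT-mode setup (all addresses below are hypothetical, and the commands assume the ipvsadm tool is installed):

# On the director: enable forwarding, define the virtual service, add real servers in NAT (-m) mode
echo 1 > /proc/sys/net/ipv4/ip_forward                     # NAT mode requires IP forwarding on the LB
ipvsadm -C                                                 # clear existing rules
ipvsadm -A -t 192.168.1.100:80 -s rr                       # virtual service on the VIP, round-robin
ipvsadm -a -t 192.168.1.100:80 -r 10.0.0.11:8080 -m -w 1   # -m = masquerading/NAT; ports may differ
ipvsadm -a -t 192.168.1.100:80 -r 10.0.0.12:8080 -m -w 1
# On each real server: the default gateway must be the director's intranet address (here 10.0.0.1)
route add default gw 10.0.0.1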

2. TUN mode (IP tunneling)
Virtual Server via IP Tunneling (VS/TUN): in NAT mode, both request and response packets must be rewritten by the scheduler, so as client requests grow, the scheduler's processing capacity becomes a bottleneck. To solve this, in TUN mode the scheduler forwards the request packet to the real server over an IP tunnel, and the real server returns the response directly to the client. The dispatcher then handles only inbound request packets; since response data is generally much larger than the request, VS/TUN can raise the maximum throughput of the cluster system by up to a factor of ten.
The VS/TUN workflow differs from NAT mode in that no IP address is rewritten between LB and RS. Instead, the client's request packet is encapsulated inside an IP tunnel packet and sent to the RS node server; the node server decapsulates it, processes the request, and sends the response directly to the client through its own external address, without passing back through the LB server.

(Figure: TUN-mode tunneling flow)

Process summary:
1) The client sends a request packet with destination address VIP; it arrives at the LB.
2) The LB receives the client request packet and encapsulates it in an IP tunnel, i.e. it prepends an IP tunnel header to the original packet header, then sends it out.
3) The RS node server receives the request packet via the tunnel header information (a logical, invisible tunnel exists only between LB and RS), strips the tunnel header to recover the client's original request, and processes it.
4) After processing, the RS sends the response to the client over its own public network line; the source IP address is still the VIP. (The RS node server must have the VIP configured on its local loopback interface.)
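
A minimal TUN-mode sketch with the same hypothetical addresses as before; -i selects tunneling on the director, and each RS binds the VIP to its tunl0 interface:

# On the director
ipvsadm -A -t 192.168.1.100:80 -s rr
ipvsadm -a -t 192.168.1.100:80 -r 10.0.0.11:80 -i -w 1   # -i = IP-in-IP tunneling
ipvsadm -a -t 192.168.1.100:80 -r 10.0.0.12:80 -i -w 1
# On each real server: load the ipip module and put the VIP on the tunnel interface
modprobe ipip
ifconfig tunl0 192.168.1.100 netmask 255.255.255.255 up
echo 0 > /proc/sys/net/ipv4/conf/tunl0/rp_filter         # accept decapsulated packets addressed to the VIP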

3. DR mode (Direct Routing)
Virtual Server via Direct Routing (VS/DR)
DR mode delivers the request to the real server by rewriting the destination MAC address of the request packet; the real server returns its response directly to the client. Like TUN mode, DR mode greatly improves the scalability of the cluster system, but without the overhead of IP tunneling and without requiring the real servers to support a tunneling protocol. It does, however, require that the scheduler (LB) and the real servers (RS) each have a NIC on the same physical network segment, i.e. they must be in the same LAN. DR mode is the most widely used mode on the Internet.

(Figure: DR-mode schematic)

DR mode process summary:

Connection scheduling and management in VS/DR work as in NAT and TUN; only the packet-forwarding method differs. DR mode routes the packet directly to the target real server: based on the load of each real server, the dispatcher dynamically selects a server and, without modifying the destination IP address or port and without encapsulating the packet, rewrites the destination MAC address of the request frame to the MAC address of the chosen real server, then sends the modified frame on the LAN of the server group. Because the frame's MAC address is that of the real server, and both machines are on the same LAN, the real server is guaranteed to receive the packet sent by the LB. When the real server unpacks the IP header, it finds the target IP is the VIP (a host only accepts a packet whose target IP matches one of its own addresses, which is why we must configure the VIP on the real server's local loopback interface). One complication: network interfaces answer ARP broadcasts, and since every machine in the cluster carries the VIP on its lo interface, the ARP responses would conflict. We therefore must suppress ARP responses on the lo interface of every real server. The real server then processes the request and, using its own routing information, sends the response packet straight back to the client, with the VIP as the source IP address.

DR mode summary:
1) Forwarding is implemented by rewriting the destination MAC address of the packet on the scheduler (LB). Note that the source address is still the CIP and the destination address is still the VIP.
2) Request packets pass through the scheduler, but the RS's response packets do not, so concurrent access efficiency is very high (compared with NAT mode).
3) Because DR mode forwards via MAC-address rewriting, all RS nodes and the scheduler (LB) must be on the same LAN.
4) Each RS host must bind the VIP on its lo interface and must configure ARP suppression.
5) The RS node's default gateway does not need to be the LB; it can point directly at the upstream router, letting the RS reach the network directly.
6) Because the DR-mode scheduler only rewrites MAC addresses, it cannot rewrite the destination port, so the RS server must serve on the same port as the VIP service.

A comparison of the three load-balancing modes, summarized from the official LVS documentation:

                    VS/NAT           VS/TUN              VS/DR
Server OS           any              must support        must allow VIP on lo
                                     IP tunneling        and ARP suppression
Server network      private          LAN/WAN             LAN
Number of servers   low (10-20)      high (~100)         high (~100)
Server gateway      load balancer    own router          own router

Six. LVS Scheduling Algorithms
The LVS scheduling algorithm determines how the workload is distributed among the nodes.
When the director receives an inbound request from a client for a cluster service on its VIP, it must decide which cluster node should get the request.
For connection scheduling in the kernel, IPVS implements the following eight algorithms:
• Round-robin scheduling
• Weighted round-robin scheduling
• Least-connection scheduling
• Weighted least-connection scheduling
• Locality-based least-connection scheduling
• Locality-based least-connection with replication scheduling
• Destination hashing scheduling
• Source hashing scheduling
Please refer to http://www.linuxvirtualserver.org/zh/lvs4.html#2 for details.
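
The algorithm is selected per virtual service with the -s option of ipvsadm. As a hedged illustration (the VIP, server addresses, and timeout below are hypothetical), weighted least-connection with session persistence would look like:

ipvsadm -A -t 192.168.1.100:80 -s wlc -p 600             # wlc + 600 s persistence per client IP
ipvsadm -a -t 192.168.1.100:80 -r 10.0.0.11:80 -g -w 2   # this RS gets twice the share
ipvsadm -a -t 192.168.1.100:80 -r 10.0.0.12:80 -g -w 1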

Seven. Manually Setting Up a Simple LVS Experiment

(Figure: experimental environment topology)


Part 1. Load balancer (LVS-36) configuration
1. Install the LVS software
I chose yum installation here; you can also download the software from the official site and install it yourself. (The userspace tool ships in the ipvsadm package.)
[root@lvs-36 ~]# yum install ipvsadm -y

After installation, load LVS into the kernel by running ipvsadm once:
[root@lvs-36 ~]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port          Forward Weight ActiveConn InActConn

Check whether the module loaded; the following output indicates success:
[root@lvs-36 ~]# lsmod | grep ip_vs
ip_vs_wrr              12697  1
ip_vs                 140944  3 ip_vs_wrr
nf_conntrack          105745  1 ip_vs
libcrc32c              12644  2 xfs,ip_vs

2. Configure the VIP address
View the local IP:
[root@lvs-36 ~]# ifconfig
Configure a virtual IP on your network card to serve as the VIP. My NIC here is ens34, which is bridged to the physical machine's network, and I use its address as the VIP; adapt the address to your own environment.

[root@lvs-36 ~]# ifconfig

ens34: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.4.102.36  netmask 255.255.255.0  broadcast 10.4.102.255
        inet6 fe80::20c:29ff:fe94:8327  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:94:83:27  txqueuelen 1000  (Ethernet)
        RX packets 20634  bytes 2075900 (1.9 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets …  bytes 283161 (276.5 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 98  bytes 6876 (6.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 98  bytes 6876 (6.7 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

3. Manually add the LVS service and the two RS servers (RS-35 and RS-37)
[root@lvs-36 ~]# ipvsadm -C        # clear all kernel virtual-server records
[root@lvs-36 ~]# ipvsadm -L        # list the virtual-server records
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port          Forward Weight ActiveConn InActConn

No records yet; nothing has been configured.

[root@lvs-36 ~]# ipvsadm -A -t 10.4.102.36:80 -s rr
[root@lvs-36 ~]# ipvsadm -a -t 10.4.102.36:80 -r 10.4.102.35:80 -g -w 1
[root@lvs-36 ~]# ipvsadm -a -t 10.4.102.36:80 -r 10.4.102.37:80 -g -w 1

Check the configuration:
[root@lvs-36 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port          Forward Weight ActiveConn InActConn
TCP  10.4.102.36:80 rr
  -> 10.4.102.35:80              Route   1      0          0
  -> 10.4.102.37:80              Route   1      0          0

With that, the LVS load balancer is fully configured.
For reference, ipvsadm usage is as follows:

ipvsadm -A|E -t|u|f virtual-service-address:port [-s scheduler] [-p [timeout]] [-M netmask]
ipvsadm -D -t|u|f virtual-service-address
ipvsadm -C
ipvsadm -R
ipvsadm -S [-n]
ipvsadm -a|e -t|u|f virtual-service-address:port -r real-server-address:port
        [-g|i|m] [-w weight]
ipvsadm -d -t|u|f virtual-service-address -r real-server-address
ipvsadm -L|l [options]
ipvsadm -Z [-t|u|f virtual-service-address]
ipvsadm --set tcp tcpfin udp
ipvsadm -h
-A --add-service      Add a new virtual-IP record to the kernel's virtual server table, i.e. add a new virtual server
-E --edit-service     Edit a virtual server record in the kernel's virtual server table
-D --delete-service   Delete a virtual server record from the kernel's virtual server table
-C --clear            Clear all records in the kernel's virtual server table
-R --restore          Restore virtual server rules
-S --save             Save virtual server rules, output in a format readable by the -R option
-a --add-server       Add a new real server record to a record in the kernel's virtual server table, i.e. add a new real server to a virtual server
-e --edit-server      Edit a real server record within a virtual server record
-d --delete-server    Delete a real server record within a virtual server record
-L --list             Display the kernel's virtual server table
-L --timeout          Display the timeout values for tcp tcpfin udp, e.g.: ipvsadm -L --timeout
-L --daemon           Display the status of the synchronization daemon, e.g.: ipvsadm -L --daemon
-L --stats            Display statistics, e.g.: ipvsadm -L --stats
-L --rate             Display rate information, e.g.: ipvsadm -L --rate
-L --sort             Sort the output by virtual server and real server, e.g.: ipvsadm -L --sort
-Z --zero             Zero the counters in the virtual server table (clear the current connection counts)
--set tcp tcpfin udp  Set connection timeout values
-t                    The virtual server provides a TCP service; followed by [virtual-service-address:port] or [real-server-ip:port]
-u                    The virtual server provides a UDP service; followed by [virtual-service-address:port] or [real-server-ip:port]
-f fwmark             A service type marked by iptables (firewall mark)
-s                    The scheduling algorithm LVS uses; the options are rr|wrr|lc|wlc|lblc|lblcr|dh|sh, default wlc
-p timeout            Persistent service time on a real server, meaning multiple requests from the same client are handled by the same real server. Typically used for dynamic requests; the default timeout is 300 seconds. For example, -p 600 gives a persistence window of 600 seconds.
-r                    The real server's address, in the form [real-server-ip:port]
-g --gatewaying       Set the LVS operating mode to direct routing (DR); this is the default mode
-i --ipip             Set the LVS operating mode to tunnel mode (TUN)
-m --masquerading     Set the LVS operating mode to NAT
-w --weight           Specify the real server's weight
-c --connection       Display current LVS connection information, e.g.: ipvsadm -lnc
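
Rules typed by hand disappear on reboot; the -S and -R options pair up to persist them. A small sketch (the file path is only a common convention, not mandated by ipvsadm):

ipvsadm -S -n > /etc/sysconfig/ipvsadm    # save current rules in a format -R can read
ipvsadm -R < /etc/sysconfig/ipvsadm       # restore them later, e.g. from a boot script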

Part 2. Real server (RS-35 and RS-37) configuration
1. Install a simple HTTP service on RS-35 and RS-37
[root@rs-35 ~]# yum install httpd -y
[root@rs-37 ~]# yum install httpd -y

After installation succeeds, change httpd's default index file so the two servers can be told apart when testing:
[root@rs-35 ~]# echo "35" > /var/www/html/index.html
[root@rs-37 ~]# echo "37" > /var/www/html/index.html

2. Test from the physical machine that httpd was installed successfully.
The following screen shows the installation succeeded:

(Figure: browser access to the two real servers)
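
If you prefer the command line to a browser, an equivalent check from any machine on the segment (using the addresses of this topology) would be:

curl http://10.4.102.35/    # expect: 35
curl http://10.4.102.37/    # expect: 37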

3. Manually bind the VIP
[root@rs-35 ~]# ifconfig lo 10.4.102.36 netmask 255.255.255.255 up
[root@rs-37 ~]# ifconfig lo 10.4.102.36 netmask 255.255.255.255 up

Check the results:
[root@rs-35 ~]# ifconfig

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.4.102.35  netmask 255.255.255.0  broadcast 10.4.102.255
        inet6 fe80::20c:29ff:fe11:7528  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:11:75:28  txqueuelen 1000  (Ethernet)
        RX packets 25603  bytes 2678861 (2.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1938  bytes 204986 (200.1 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 10.4.102.36  netmask 255.255.255.255
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 7155  bytes 493322 (481.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7155  bytes 493322 (481.7 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

[root@rs-37 ~]# ifconfig

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.4.102.37  netmask 255.255.255.0  broadcast 10.4.102.255
        inet6 fe80::20c:29ff:fe3a:345a  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:3a:34:5a  txqueuelen 1000  (Ethernet)
        RX packets 20218  bytes 1958240 (1.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 605  bytes 63965 (62.4 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 10.4.102.36  netmask 255.255.255.255
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 2220  bytes 152714 (149.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2220  bytes 152714 (149.1 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

4. Suppress ARP responses on the lo loopback interface

[root@rs-35 ~]# echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
[root@rs-35 ~]# echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
[root@rs-35 ~]# echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
[root@rs-35 ~]# echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce

[root@rs-37 ~]# echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
[root@rs-37 ~]# echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
[root@rs-37 ~]# echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
[root@rs-37 ~]# echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
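
Neither the VIP binding nor these ARP settings survive a reboot. A convenience script (rs-vip.sh is my own name for it, not part of LVS) that bundles the RS-side steps of this experiment so they can be rerun at boot:

#!/bin/bash
# rs-vip.sh - bind the VIP on lo and suppress ARP for an LVS-DR real server
VIP=10.4.102.36                                # must match the director's virtual service address
ifconfig lo $VIP netmask 255.255.255.255 up    # host-only mask so only the VIP is added
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce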

5. Open a browser on the physical machine and test that the cluster responds on the VIP.

(Figure: browser test against http://10.4.102.36/)


To verify the experiment, check whether the 10.4.102.36 machine has port 80 open:
[root@lvs-36 ~]# netstat -lnt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:139             0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:445             0.0.0.0:*               LISTEN
tcp6       0      0 :::139                  :::*                    LISTEN
tcp6       0      0 :::22                   :::*                    LISTEN
tcp6       0      0 ::1:25                  :::*                    LISTEN
tcp6       0      0 :::445                  :::*                    LISTEN

You can see that the director does not listen on port 80 at all: IPVS forwards traffic at the kernel level, and it is the two RS servers that actually serve the load-balanced requests.
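
Two quick ways to watch the scheduling happen (run curl from a client machine, not from the director or a real server):

for i in 1 2 3 4; do curl -s http://10.4.102.36/; done
# with rr scheduling the output should alternate between 35 and 37

ipvsadm -L -n --stats    # on the director: per-real-server packet and byte counters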
