Application Load Balancing with LVS (I): Basic Concepts and the Three Modes
Directory:
1. LVS Introduction
2. How LVS-ipvs works
2.1 VS/NAT Mode
2.2 VS/TUN Mode
2.3 VS/DR Mode
2.4 Comparison of Three lvs-ipvs Modes
3. ARP problems in VS/TUN and VS/DR Modes
4. LVS Load Balancing Scheduling Algorithm
In website architecture, load balancing is one of the main means of achieving scalability. "Scalability" means that new servers can be added to the cluster to improve performance and relieve the growing pressure of concurrent user access. In layman's terms, if one ox cannot pull the cart, harness two, three, or more.
There are several load balancing methods: HTTP URL redirection, DNS A-record load balancing, reverse proxy load balancing, IP load balancing, and link-layer load balancing. This article describes LVS. Its VS/NAT and VS/TUN modes are excellent representatives of IP load balancing, while its VS/DR mode is an excellent representative of link-layer load balancing.
1. LVS Introduction
The LVS Chinese official manual is helpful for understanding the background of LVS.
The LVS English official manual is comprehensive and helpful for understanding and learning the principles and configuration of LVS.
LVS is open-source load balancing software developed by Zhang Wensong. It started out as a hobby project during his university years; as more and more users adopted it, LVS matured and was eventually integrated into the Linux kernel. Many open-source developers have written auxiliary tools and components for LVS, the most famous being Keepalived, written by Alexandre Cassen. It was originally used to monitor LVS; later, high availability via VRRP was added.
LVS is short for Linux Virtual Server. It is called a virtual server because LVS itself is a load balancer (the director): instead of processing requests itself, it forwards them to the real servers at the backend.
LVS covers both layer-4 (transport layer, TCP/UDP) and layer-7 (application layer) load balancing. In practice, however, most people only use its layer-4 load balancing component, ipvs; its layer-7 content-based scheduling component, ktcpvs (kernel TCP virtual server), is not mature and has few users.
ipvs is a framework integrated into the kernel; ipvsadm is the userspace tool used to define the rules that ipvs applies inside the kernel, just like the relationship between iptables and netfilter.
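As a minimal sketch of this division of labor (the VIP and RIP addresses below are placeholders, not taken from the article), rules are written with ipvsadm in user space and carried out by ipvs in the kernel:

```bash
# Define a virtual service on the VIP (placeholder address), using round-robin scheduling.
ipvsadm -A -t 192.168.10.100:80 -s rr

# Attach two real servers (placeholder RIPs) to that virtual service.
ipvsadm -a -t 192.168.10.100:80 -r 192.168.10.11:80
ipvsadm -a -t 192.168.10.100:80 -r 192.168.10.12:80

# List the rules currently loaded into the kernel's ipvs table.
ipvsadm -L -n
```

The per-real-server forwarding flag (-m for NAT, -i for tunneling, -g for direct routing) is what selects one of the three modes described below.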
2. How LVS-ipvs works
First, several IP addresses involved in LVS need to be explained:
- VIP: Virtual IP, the IP address on the LVS server's NIC that receives packets from the Internet.
- DIP: Director IP, the IP address on the LVS server's NIC that forwards packets to the real servers.
- RIP: Real server IP, the IP address on the real server (RS) that receives packets forwarded by the director, i.e. the address of the server that actually provides the service.
- CIP: Client IP, the IP address of the client.
LVS has three working modes. Using Network Address Translation (NAT) to turn a group of servers into a high-performance, highly available virtual server is the VS/NAT technique. Based on an analysis of the disadvantages of VS/NAT and the asymmetry of network services (responses are typically much larger than requests), two further modes were proposed: Virtual Server via IP Tunneling (VS/TUN) and Virtual Server via Direct Routing (VS/DR), both of which greatly improve system scalability.
2.1 VS/NAT Mode
After a request sent by the client reaches the director, the director modifies the destination address to the RIP of one backend host (chosen from the web server pool by the load balancing algorithm) and forwards it to that host, just like NAT. After the backend host processes the request, it hands the response back to the director, which rewrites the source address to the VIP and then transmits it to the client. Most commercial IP load balancing hardware works this way, for example Cisco's LocalDirector and F5's BIG-IP.
In this mode:
- RIP and DIP are generally in the same private network segment, but this is not required as long as they can reach each other.
- The default gateway of each real server points to the DIP, which ensures that response data is handed back to the director.
- The biggest disadvantage of VS/NAT mode is that the director handles all traffic in both directions: not only the requests initiated by clients, but also the responses returned to them. Since responses are generally much larger than requests, the director easily becomes a bottleneck.
- This mode is the easiest to configure; a configuration sketch follows this list.
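A minimal VS/NAT configuration sketch on the director (all addresses and weights are placeholders, not from the article); the -m flag selects masquerading, i.e. NAT forwarding:

```bash
# Allow the director to relay packets between the VIP and the RIPs.
sysctl -w net.ipv4.ip_forward=1

# Virtual service on the VIP, weighted round-robin scheduling.
ipvsadm -A -t 192.168.10.100:80 -s wrr

# Real servers added with -m (masquerading / NAT); weights are illustrative.
ipvsadm -a -t 192.168.10.100:80 -r 10.0.0.11:80 -m -w 2
ipvsadm -a -t 192.168.10.100:80 -r 10.0.0.12:80 -m -w 1
```

On each real server, the default gateway must point to the DIP so that responses flow back through the director.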
2.2 VS/TUN Mode
With NAT, both request and response packets must be rewritten by the scheduler, so the scheduler's processing capacity becomes a bottleneck as client requests grow. To solve this, in VS/TUN the scheduler forwards request packets to the real servers through an IP tunnel, and the real servers return responses directly to the clients; the scheduler therefore only processes request packets. Since a network service's response is generally much larger than its request, adopting VS/TUN greatly relieves the scheduler and can increase the cluster's maximum throughput by up to a factor of ten.
Working principle of VS/TUN mode:
- (1) IP tunneling, also known as IP encapsulation, wraps a packet that already carries source and destination IP addresses in a second IP header with new source and destination addresses, so that the packet can be delivered to a designated target host.
- (2) In VS/TUN mode, an IP tunnel is used between the scheduler and the backend server group. After the client's request (CIP --> VIP) is received by the director, the director encapsulates the packet, adding the two tunnel endpoints as the new source and destination addresses, and forwards the request to the selected backend server.
- (3) When the backend server receives the packet, it first decapsulates it and recovers the original CIP --> VIP packet. The backend server accepts this packet because the VIP is configured on its tun interface.
- (4) After the request is processed, the result is not handed back to the director but returned directly to the client. When the backend server replies through an ordinary NIC, the usual routing entries would make the source IP the address on that NIC, for example the RIP. Therefore, to make the source IP of the response packet the VIP, a special route entry must be added that specifies the VIP as the route's source address.
Basic Attributes and requirements for VS/TUN mode:
- The RIPs of the real servers and the DIP of the director do not need to be in the same physical network, and the RIPs must be able to communicate with the public network. In other words, cluster nodes can be spread across the Internet.
- The VIP must be configured on the tun interface of each real server, so that it can accept packets forwarded by the director and use the VIP as the source IP of its responses.
- When the director forwards a packet to a real server, a tunnel is required: the source IP in the outer tunnel header is the DIP and the destination IP is the RIP. The headers of the packet the real server returns to the client are taken from the inner header of the tunnel: the source IP is the VIP and the destination IP is the CIP. The client therefore cannot tell whether the VIP lives on the director or on a member of the server group.
- A special route entry must be added so that the source IP of packets the backend server returns to the client is the VIP.
- The director only processes inbound requests; responses are sent by the real servers themselves.
In general, VS/TUN mode is used to load-balance groups of cache servers. These cache servers are usually placed in different network environments and can return data to clients from a nearby location. When a requested object misses the local cache, the cache server fetches it from the origin server and then returns the result to the client.
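A minimal VS/TUN sketch (addresses are placeholders; the interface commands on the real server are assumptions about a typical ipip setup, not quotes from the article):

```bash
## On the director: real servers are added with -i (IP-in-IP tunneling).
ipvsadm -A -t 192.168.10.100:80 -s wlc
ipvsadm -a -t 192.168.10.100:80 -r 203.0.113.11:80 -i
ipvsadm -a -t 192.168.10.100:80 -r 198.51.100.22:80 -i

## On each real server: load the ipip module and put the VIP on the tunnel interface.
modprobe ipip
ip addr add 192.168.10.100/32 dev tunl0
ip link set tunl0 up
# Reverse-path filtering on tunl0 usually also needs to be relaxed; that detail
# is outside the scope of this article.
```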
2.3 VS/DR Mode
In VS/TUN mode, the scheduler processes packets by re-encapsulating them with IP tunneling. VS/DR mode is similar, except that the scheduler processes packets by rewriting the destination MAC address of the data frame, implementing load balancing at the link layer.
VS/DR rewrites the destination MAC address of the request frame to deliver the request to a real server, and the real server returns the response directly to the client. Like VS/TUN, VS/DR can greatly improve the scalability of a cluster. It avoids the overhead of IP tunneling and does not require the real servers in the cluster to support a tunneling protocol; however, it does require that the scheduler and the real servers have NICs attached to the same physical network segment, so that packets can be forwarded by MAC address.
Working principle of VS/DR mode:
- (1) After the client's request is received by the director, the director rewrites the destination MAC address of the frame to the MAC address of a backend RS chosen by the load balancing algorithm and forwards the frame (it is actually sent onto the LAN, but only the RS with that MAC address keeps it instead of discarding it).
- (2) When the RS receives the packet, it finds that the packet's destination IP is the VIP, and since the RS has the VIP configured on one of its interfaces, it accepts and processes the packet.
- (3) After processing, the RS sends the response directly to the client. When responding through an ordinary interface, the source IP would be the address on that interface, such as the RIP. Therefore, to make the source IP of the response packet the VIP, a special route entry must be added that specifies the VIP as the route's source address.
In other words, the client's request is sent to the LB with source and destination IPs CIP:VIP, and the LB holds both the VIP and the DIP. The LB rewrites the MAC address and sends the request out through the DIP-side interface to a real server, say RS1. The source and destination IPs are still CIP:VIP, but the destination MAC has been changed to RS1_MAC, the MAC address of the NIC holding RIP1. RS1 finds that it owns the VIP, so it accepts the packet (which is why the VIP must be configured on the RS). RS1 then replies directly to the client according to its routing table; the source and destination IPs of the reply are VIP:CIP.
Basic Attributes and requirements for VS/DR mode:
- The RIPs of the real servers and the DIP of the director must be in the same network segment, because forwarding relies on MAC addresses.
- The VIP must be configured on each real server, so that it can accept packets forwarded by the director and use the VIP as the source IP of its responses.
- The source and destination IPs of the packets a real server returns to the client are VIP --> CIP.
- A special route entry must be added so that the source IP of packets the backend server returns to the client is the VIP.
- The director only processes inbound requests; responses are sent by the real servers themselves. (A configuration sketch follows this list.)
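A minimal VS/DR sketch (addresses are placeholders; the lo:0 alias and ARP settings on the real servers anticipate section 3 and describe a typical setup rather than quoting the article):

```bash
## On the director: real servers are added with -g (gatewaying / direct routing).
ipvsadm -A -t 192.168.10.100:80 -s wlc
ipvsadm -a -t 192.168.10.100:80 -r 192.168.10.11:80 -g
ipvsadm -a -t 192.168.10.100:80 -r 192.168.10.12:80 -g

## On each real server: hide the VIP on a loopback alias so the RS accepts
## packets addressed to the VIP without answering ARP queries for it.
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
ip addr add 192.168.10.100/32 dev lo label lo:0
```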
2.4 Comparison of Three lvs-ipvs Modes
Comparison of the three modes:
In terms of performance, VS/DR and VS/TUN are much better than VS/NAT, because the scheduler sits only on the client-to-server half of the connection and migrates state according to the half-connection TCP finite state machine, which greatly reduces the load on the scheduler. VS/DR performs slightly better than VS/TUN because it avoids the tunneling overhead. The main practical difference is that VS/TUN can balance backend servers across different networks (or within a LAN), whereas VS/DR can only balance real servers that share a LAN with the director.
3. ARP problems in VS/TUN and VS/DR Modes
In [VS/TUN and VS/DR arp problems], the principles of ARP and the arp_ignore and arp_announce settings are analyzed in detail. Here is only a brief description of why ARP suppression is needed and how to configure it.
When a packet whose destination IP is the VIP reaches the router in front of the director, the router sends an ARP broadcast on the LAN to find the MAC address of the host that owns the VIP.
Both the director and the real servers have the VIP configured. When the router sends the ARP broadcast, the director and the real servers all receive it, all consider the query to be for themselves, and all reply to the router. As a result, the VIP <--> MAC entry in the router's ARP cache keeps being overwritten, and whichever host replied last wins: the router then delivers client packets to that host, which may be the director or any one of the real servers. For a while the router keeps sending packets destined for the VIP to that host, but since the router periodically re-sends the ARP broadcast, the MAC address associated with the VIP in its ARP cache may later switch to a different host.
Therefore, the router must be made to cache only the MAC address that the VIP has on the director; in other words, only the director may answer the router's ARP broadcasts for the VIP. That means the VIP on every RS must be hidden.
The usual approach is to configure the VIP on an alias of the loopback interface of each real server (for example lo:0) and to set arp_ignore=1 and arp_announce=2 to hide the RS's VIP. For details on how these two ARP parameters work on the lo interface, see [VS/TUN and VS/DR arp problems].
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
Or
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
You can also write the ARP parameters into the kernel parameter configuration file so that they take effect permanently:
Echo "net. ipv4.conf. all. arp_ignore = 1">/etc/sysctl. conf
Echo "net. ipv4.conf. all. arp_announce = 2">/etc/sysctl. conf
Sysctl-p
Almost all articles on the Internet also set the ARP parameters on the lo interface:
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
However, this is meaningless, because the lo interface is not affected by the ARP parameters.
The ARP parameters should be set before the VIP is configured, to prevent the VIP from being discovered by external hosts in the window between configuring the VIP and enabling ARP suppression.
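Putting the ordering together, here is a sketch of a real-server setup script for DR mode (the VIP is a placeholder used only for illustration; the lo:0 alias follows the convention above):

```bash
#!/bin/bash
# Hypothetical VIP used for illustration only.
VIP=192.168.10.100

# 1. Suppress ARP first, so the VIP is never announced on the LAN.
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2

# 2. Only then configure the VIP on the loopback alias.
ip addr add ${VIP}/32 dev lo label lo:0
# (Some guides also add an explicit host route, e.g. route add -host $VIP dev lo:0.)
```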
4. LVS Load Balancing Scheduling Algorithm
For more information about the LVS scheduling algorithms, see the official manual.
In the kernel, IPVS performs load-balancing scheduling at connection granularity. With (non-persistent) HTTP, each resource fetched from the web server requires a new TCP connection, so different requests from the same user may be scheduled to different servers. This fine-grained scheduling can, to some extent, avoid the sudden load imbalance caused by a single user's burst of accesses.
LVS supports static scheduling and dynamic feedback scheduling.
Static scheduling means that the scheduler decides who gets the next connection purely from the algorithm, regardless of how busy each RS is. For example, suppose two real servers each start out handling 500 connections; before the next request arrives, server1 has released only 10 connections while server2 has released 490. Round robin will still direct the next request to server1, even though it is the busier one.
Dynamic scheduling decides who gets the next connection based on feedback about how busy each RS is. A dynamic-feedback load balancing algorithm takes the servers' real-time load and responsiveness into account and continuously adjusts the proportion of requests each server handles, so that overloaded servers are not flooded with further requests, thereby improving the overall throughput of the system.
For connection scheduling, IPVS has implemented the following eight algorithms in the kernel (the default is wlc):
- Static scheduling:
  - Round-robin scheduling (Round-Robin Scheduling, rr).
  - Weighted round-robin scheduling (Weighted Round-Robin Scheduling, wrr): servers are polled in proportion to their weights.
  - Destination address hashing (Destination Hashing Scheduling, dh): requests for the same destination IP address are always sent to the same server.
  - Source address hashing (Source Hashing Scheduling, sh): requests from the same client are sent to the same real server within a given period.
- Dynamic feedback scheduling:
  - Least-connection scheduling (Least-Connection Scheduling, lc): the scheduler records the number of established connections on each server; scheduling a request to a server increments its count by 1, and when a connection terminates or times out the count is decremented by 1. This algorithm is not ideal when the servers' processing capacities differ.
  - Weighted least-connection scheduling (Weighted Least-Connection Scheduling, wlc): the default algorithm; see the ipvsadm sketch after this list.
  - Locality-based least connections (Locality-Based Least Connections Scheduling, lblc): mainly used in cache cluster systems.
  - Locality-based least connections with replication (Locality-Based Least Connections with Replication Scheduling, lblcr): also mainly used in cache cluster systems.
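As a sketch, the scheduling algorithm and the weights it uses are chosen when the virtual service and real servers are defined with ipvsadm (addresses, weights, and the forwarding flag are illustrative):

```bash
# Virtual service using weighted least-connection (wlc), the default algorithm.
ipvsadm -A -t 192.168.10.100:80 -s wlc

# Real servers with different weights; wlc favors the higher-weight, less-loaded server.
ipvsadm -a -t 192.168.10.100:80 -r 192.168.10.11:80 -g -w 3
ipvsadm -a -t 192.168.10.100:80 -r 192.168.10.12:80 -g -w 1

# Switch the existing service to source hashing (sh) without recreating it.
ipvsadm -E -t 192.168.10.100:80 -s sh
```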
https://www.bkjia.com/Linux/2018-02/150977.htm