LB cluster is short for load balancing cluster. Common open-source load balancing software includes Nginx, LVS, and Keepalived; common commercial hardware load balancers include F5 and NetScaler.
The structure and principle of an LB cluster are simple: when a user request arrives, it first goes to the distributor (Director Server), which then distributes it intelligently, according to a pre-set algorithm, to one of the back-end real servers. If different machines held different data, the content a user receives could vary between requests; to avoid this, shared storage is used so that every request sees the same data.
LVS is an open-source software project that implements a load-balanced cluster. Logically, an LVS architecture can be divided into the scheduling layer (Director), the server cluster layer (Real Server), and shared storage. In terms of implementation, LVS works in the following three modes.
The basic working process of LVS
(1) NAT: the scheduler rewrites the destination IP of the request from the VIP to the IP of the real server; the response packets also pass back through the scheduler, which rewrites their source address back to the VIP.
NAT mode (Network Address Translation)
Virtual Server via Network Address Translation (VS/NAT)
In this mode scheduling is achieved through network address translation. The scheduler (LB) first receives the client's request packet (whose destination IP is the VIP) and, according to the scheduling algorithm, decides which back-end real server (RS) should handle the request. The scheduler then changes the destination IP address and port of the client's packet to the IP address (RIP) and port of that real server, so the RS receives the request. After the RS has processed the request, it sends the response along its default route; in NAT mode the default route of every RS must point to the LB server, so the response packet comes back to the LB. The LB then changes the source address of the packet to the virtual address (VIP) and sends it on to the client.
Schematic Description:
1) The client sends a request; the destination IP is the VIP.
2) The request reaches the LB server, which, according to the scheduling algorithm, rewrites the destination address to the chosen RIP and the corresponding port, and records the connection in its connection hash table.
3) The packet travels from the LB server to the RS (web server), which processes it and responds. Because the web server's gateway must be the LB, the response returns to the LB server.
4) After receiving the response from the RS, the LB looks up the connection hash table and rewrites the source address to the VIP, the destination address to the CIP, and the corresponding port (e.g. 80). The data then goes from the LB to the client.
5) The client only ever sees the VIP/DIP information.
NAT mode pros and cons:
1. NAT rewrites the addresses of both request and response packets on the LB, so when site traffic is heavy the load balancer becomes a significant bottleneck; in general it scales to roughly 10-20 nodes at most.
2. Only the LB needs a public IP address.
3. The gateway of every back-end node server must be the intranet address of the scheduler LB.
4. NAT mode translates both IP address and port, so the port the user requests and the port the real server listens on may differ.
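To make the NAT workflow above concrete, here is a minimal, illustrative director-side sketch using ipvsadm. All addresses are assumptions for the example only: the VIP is 192.168.1.100 on the director's public NIC, the director's intranet address (DIP) is 10.0.0.1, and the real servers are 10.0.0.11 and 10.0.0.12 serving HTTP on port 80.

# Enable packet forwarding on the director (required for NAT mode)
sysctl -w net.ipv4.ip_forward=1

# Define the virtual service on the VIP, port 80, with round-robin scheduling
ipvsadm -A -t 192.168.1.100:80 -s rr

# Add the real servers in masquerading (NAT) mode: -m
ipvsadm -a -t 192.168.1.100:80 -r 10.0.0.11:80 -m
ipvsadm -a -t 192.168.1.100:80 -r 10.0.0.12:80 -m

# On each real server, the default gateway must be the director's intranet IP (DIP),
# e.g. on an RS:  route add default gw 10.0.0.1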
(2) TUN: the scheduler encapsulates the request packet and forwards it over an IP tunnel to the back-end real server, and the real server returns the response directly to the client without passing back through the scheduler.
TUN mode
Virtual Server via IP Tunneling (VS/TUN): in NAT mode, because both request and response packets must be rewritten by the scheduler, the scheduler's processing capacity becomes a bottleneck as client requests grow. To solve this problem, in TUN mode the scheduler forwards the request packet to the real server over an IP tunnel, and the real server returns the processed response directly to the client. The scheduler therefore handles only the inbound request packets; since the response data of a typical network service is much larger than the request, adopting VS/TUN can increase the maximum throughput of the cluster roughly tenfold.
The workflow of VS/TUN is as follows. Unlike NAT mode, forwarding between the LB and the RS does not rewrite the IP address. Instead, the client's request packet is encapsulated in an IP tunnel packet and sent to the RS node server; the node server strips the tunnel header, processes the request, and sends the response directly to the client through its own external address without going through the LB server.
Tunnel principle flowchart:
Schematic process in brief:
1) The client sends a request packet with destination address VIP, which is delivered to the LB.
2) The LB receives the client's request packet and performs IP tunnel encapsulation, i.e. it adds an IP tunnel header in front of the original packet header, then sends it out.
3) The RS node server receives the request packet according to the IP tunnel header (at this point there is a logical, invisible tunnel that exists only between the LB and the RS), removes the tunnel header to recover the original client request, and processes the response.
4) After processing the request, the RS sends the response data to the client over its own public network line; the source IP address is still the VIP.
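A minimal sketch of TUN mode under the same illustrative addressing (VIP 192.168.1.100 assumed to be configured on the director's public interface, real servers 10.0.0.11 and 10.0.0.12). On the director, the real servers are added with -i so that IPVS forwards packets through an IP-in-IP tunnel:

ipvsadm -A -t 192.168.1.100:80 -s rr
ipvsadm -a -t 192.168.1.100:80 -r 10.0.0.11:80 -i    # -i = IP-in-IP tunneling
ipvsadm -a -t 192.168.1.100:80 -r 10.0.0.12:80 -i

On each real server, the tunnel interface carries the VIP so the decapsulated packets are accepted. The exact sysctl values can vary by distribution, so treat this as a sketch:

modprobe ipip                                   # load the IP-in-IP tunnel module
ip addr add 192.168.1.100/32 dev tunl0          # bind the VIP to the tunnel interface
ip link set tunl0 up
sysctl -w net.ipv4.conf.tunl0.rp_filter=0       # relax reverse-path filtering for tunneled packets
sysctl -w net.ipv4.conf.all.rp_filter=0
sysctl -w net.ipv4.conf.tunl0.arp_ignore=1      # do not answer ARP for the VIP
sysctl -w net.ipv4.conf.tunl0.arp_announce=2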
(3) DR: the scheduler rewrites the destination MAC address of the request packet to the MAC address of the real server, and the response is returned directly to the client without passing through the scheduler.
DR mode (direct routing mode)
Virtual Server via Direct Routing (VS/DR)
DR mode delivers the request to the real server by rewriting the destination MAC address of the request packet, and the real server returns its response directly to the client. Like TUN mode, DR mode greatly improves the scalability of the cluster, but it avoids the overhead of IP tunneling and does not require the real servers to support a tunneling protocol. It does, however, require that the scheduler LB and the real servers RS each have a NIC on the same physical network segment, i.e. they must be in the same LAN.
DR mode is the mode most commonly used on the Internet.
Schematic diagram of DR mode:
DR mode process in brief:
The workflow of VS/DR is as shown. Its connection scheduling and management are the same as in NAT and TUN modes; only the packet forwarding method differs. In DR mode the packet is routed directly to the target real server: the dispatcher dynamically selects a server according to the load of each real server, and it neither modifies the destination IP address and port nor encapsulates the IP packet. Instead it rewrites the destination MAC address of the request frame to the MAC address of the chosen real server and sends the modified frame out on the LAN shared by the server group. Because the frame's destination MAC address belongs to the real server and both machines are on the same LAN, the real server is guaranteed to receive the packet sent by the LB. When the real server unwraps the IP header, it sees that the destination IP is the VIP. (A host only accepts packets whose destination IP is configured locally, so the VIP must be configured on the real server's loopback interface.) Furthermore, because network interfaces normally answer ARP broadcasts and every machine in the cluster has the VIP on its lo interface, the ARP responses would conflict; therefore ARP responses for the VIP must be suppressed on the real servers' lo interfaces. The real server then processes the request and sends the response back to the client according to its own routing information, with the VIP as the source IP address.
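A minimal sketch of the real-server-side configuration this paragraph describes, assuming an illustrative VIP of 192.168.1.100 (to be run on every RS in the DR cluster):

ip addr add 192.168.1.100/32 dev lo             # bind the VIP to the loopback interface
sysctl -w net.ipv4.conf.lo.arp_ignore=1         # answer ARP only for addresses on the receiving interface
sysctl -w net.ipv4.conf.lo.arp_announce=2       # use the best local address in outgoing ARP
sysctl -w net.ipv4.conf.all.arp_ignore=1        # apply the same ARP suppression globally
sysctl -w net.ipv4.conf.all.arp_announce=2

With the VIP on lo and ARP suppressed, the RS accepts frames whose destination IP is the VIP without ever advertising the VIP on the LAN, so only the LB answers ARP for it.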
DR mode summary:
1. Forwarding is implemented by modifying the destination MAC address of the packet on the scheduler LB. Note that the source IP is still the CIP and the destination IP is still the VIP.
2. Request packets pass through the scheduler, but the RS responses do not need to go back through the LB, so DR mode is far more efficient under high concurrency (compared with NAT mode).
3. Because DR mode forwards by rewriting MAC addresses, all RS nodes and the scheduler LB must be on the same LAN.
4. Each RS host needs to bind the VIP on its lo interface and configure ARP suppression.
5. The default gateway of the RS nodes does not need to point to the LB; it can be set directly to the upstream router, letting the RS reach the network directly.
6. Because the DR mode scheduler only rewrites the MAC address, it cannot rewrite the destination port, so the RS must serve on the same port as the VIP service.
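A matching director-side sketch for DR mode, again with illustrative values only (VIP 192.168.1.100 on the director's LAN-facing NIC eth0, real servers 10.0.0.11 and 10.0.0.12 on the same LAN):

ip addr add 192.168.1.100/32 dev eth0                     # bring the VIP up on the director
ipvsadm -A -t 192.168.1.100:80 -s wrr                     # virtual service, weighted round-robin
ipvsadm -a -t 192.168.1.100:80 -r 10.0.0.11:80 -g -w 3    # -g = direct routing (gatewaying)
ipvsadm -a -t 192.168.1.100:80 -r 10.0.0.12:80 -g -w 1
ipvsadm -ln                                               # list the current virtual server table

Note that, per point 6 above, the real-server port given here matches the VIP service port (80), since DR mode cannot rewrite ports.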
The official summary table comparing the three load balancing techniques:
A few IP terms need explaining. DIP (director IP) is the IP of the distributor; in NAT mode it must be a public IP so that the service can be reached from outside. VIP (virtual IP) is used in TUN and DR modes and must be configured on both the distributor and the back-end real servers. RIP (real IP) is the IP of a back-end real server; in TUN and DR modes the RIP is a public IP.
Reference http://www.it165.net/admin/html/201401/2248.html
LVS scheduling algorithms
Dispatching user requests to the back-end RS is done by a scheduling algorithm. So what scheduling algorithms does LVS provide?
(1) Round-robin scheduling (Round Robin, abbreviated RR): the simplest algorithm; requests are distributed evenly regardless of the back-end RS's configuration and processing power.
(2) Weighted round-robin (Weighted Round Robin, abbreviated WRR): adds a weight to the algorithm above. Each RS can be given a weight, and the higher the weight, the more requests it is assigned; the weight ranges from 0 to 100.
(3) Least connections (Least Connection, abbreviated LC): decides where to send a request based on the number of connections to each back-end RS; for example, if RS1 has fewer connections than RS2, the request is sent to RS1 first.
(4) Weighted least connections (Weighted Least Connections, abbreviated WLC): the third algorithm with weights added.
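For illustration, the scheduling algorithm is selected with ipvsadm's -s option when the virtual service is created (the addresses below are the same illustrative ones used earlier):

ipvsadm -A -t 192.168.1.100:80 -s rr        # round-robin
# alternatives: -s wrr (weighted round-robin), -s lc (least connections), -s wlc (weighted least connections)
ipvsadm -a -t 192.168.1.100:80 -r 10.0.0.11:80 -g -w 2    # weight is honoured by wrr/wlc
ipvsadm -a -t 192.168.1.100:80 -r 10.0.0.12:80 -g -w 1
ipvsadm -E -t 192.168.1.100:80 -s wlc       # -E edits an existing service to switch schedulers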
For more detail, it is best to refer to this article: http://www.linuxvirtualserver.org/zh/lvs4.html