Definition:
LVS is short for Linux Virtual Server, that is, a virtual server cluster system.
Structure:
In general, an LVS cluster uses a three-tier structure whose main components are:
A. Load scheduler (load balancer): the front-end machine of the whole cluster as seen from outside. It is responsible for dispatching client requests to a set of servers for execution, while the client believes the service comes from a single IP address (which we can call the virtual IP address).
B. Server pool: the set of servers that actually execute client requests, providing services such as Web, MAIL, FTP, and DNS.
C. Shared storage: provides a shared storage area for the server pool, making it easy for all servers in the pool to hold the same content and provide the same service.
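A minimal sketch of this structure as seen from the load scheduler, using ipvsadm; the virtual IP 10.0.0.100, the two back-end addresses, and the round-robin scheduler are illustrative assumptions only (forwarding-mode flags are shown in the Modes section below):

    # On the load scheduler: define the virtual service (virtual IP + port, round-robin)
    ipvsadm -A -t 10.0.0.100:80 -s rr
    # Register two members of the server pool as real servers behind that virtual IP
    ipvsadm -a -t 10.0.0.100:80 -r 192.168.10.11:80
    ipvsadm -a -t 10.0.0.100:80 -r 192.168.10.12:80
    # Show the resulting virtual server table
    ipvsadm -Ln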
Modes:
LVS-NAT: based on NAT (Network Address Translation). The traffic flow is: 1) After receiving a client request, the load balancer rewrites the destination IP address (and/or port number) to that of a back-end server and forwards the packet to it. 2) After the back-end server finishes processing, it replies to the load balancer. 3) The load balancer rewrites the source IP to the virtual IP and sends the reply to the client.
LVS-DR: after the load balancer dispatches a request, the back-end server's response traffic is returned directly to the client; the reply packets do not pass through the load balancer.
LVS-TUN: one of LVS's original forwarding modes; like LVS-DR, replies bypass the load balancer. The load balancer encapsulates the original packet (source = client IP, destination = virtual IP) in an IPIP packet whose destination address is the real IP of a back-end server, then sends it out through the OUTPUT chain and routes it to that server. The back-end server decapsulates the IPIP packet, processes the request, and responds directly to the client with source address = virtual IP and destination address = client IP.
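On the ipvsadm command line, these three modes correspond to different forwarding flags when a real server is added; a sketch with placeholder addresses:

    # LVS-NAT: -m (masquerading); replies return through the load balancer
    ipvsadm -a -t 10.0.0.100:80 -r 192.168.10.11:80 -m
    # LVS-DR: -g (gatewaying/direct routing, the default); replies bypass the load balancer
    ipvsadm -a -t 10.0.0.100:80 -r 192.168.10.12:80 -g
    # LVS-TUN: -i (IPIP tunneling); the request is encapsulated and sent to the real server
    ipvsadm -a -t 10.0.0.100:80 -r 192.168.10.13:80 -i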
Comparison and recommendation:
1) From the back-end server's point of view, LVS-NAT only requires the back-end server's gateway to point to the load balancer's address, with no other requirements; LVS-DR requires the back-end server to suppress ARP responses for the virtual IP, and its gateway does not point to the load balancer; LVS-TUN requires the back-end server to support IPIP decapsulation, which some operating systems do not (sketched below).
2) In terms of throughput, LVS-DR is the highest and LVS-NAT the lowest.
3) In terms of ease of configuration, LVS-NAT is the simplest, while LVS-DR and LVS-TUN are more complex.
LVS-DR mode is recommended in practice; it is also the layer-4 open-source load-balancing forwarding strategy applied in cloud micro-architectures.
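A sketch of the back-end requirements from point 1, with placeholder addresses (the DR-mode ARP settings are covered in the core tips further down):

    # LVS-NAT back end: the default gateway must point to the load balancer's internal address
    ip route replace default via 192.168.10.1
    # LVS-TUN back end: the OS must support IPIP decapsulation; load the module
    # and bind the virtual IP to the tunnel device
    modprobe ipip
    ip addr add 10.0.0.100/32 dev tunl0
    ip link set tunl0 up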
Usage Scenarios:
Single LVS cluster: 1) keepalived uses the VRRP protocol with multicast communication between load balancers; it is recommended to disable the firewall (or explicitly allow VRRP) beforehand, otherwise a split-brain condition will occur. 2) Back-end health checks can use a TCP connect check or an HTTP GET check; for web-site load-balancing schemes the HTTP GET check is recommended, because it performs an application-level check and catches the case where the port is open but the service behind it is not actually working. The digest value in the keepalived configuration file is generated with the genhash tool shipped with keepalived (see genhash --help).
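A minimal keepalived fragment for such an application-level check, assuming the placeholder virtual IP 10.0.0.100, a back end at 192.168.10.11, and a /healthcheck path; the digest value is a placeholder to be produced by genhash:

    # Append a virtual_server block with an HTTP_GET health check to keepalived.conf
    cat >> /etc/keepalived/keepalived.conf <<'EOF'
    virtual_server 10.0.0.100 80 {
        delay_loop 6
        lb_algo rr
        lb_kind DR
        protocol TCP
        real_server 192.168.10.11 80 {
            weight 1
            HTTP_GET {                 # a TCP_CHECK block would be the port-only alternative
                url {
                    path /healthcheck
                    # placeholder digest; generate with: genhash -s 192.168.10.11 -p 80 -u /healthcheck
                    digest 0123456789abcdef0123456789abcdef
                }
                connect_timeout 3
                nb_get_retry 3
                delay_before_retry 3
            }
        }
    }
    EOF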
3) Important parameters. For LVS forwarding behavior: expire_nodest_conn, expire_quiescent_template.
For LVS connection-state synchronization: sync_threshold.
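These are ipvs sysctls under net.ipv4.vs; a sketch of setting them, with illustrative values rather than recommendations:

    # Expire connections whose destination real server has been removed from the table
    sysctl -w net.ipv4.vs.expire_nodest_conn=1
    # Expire persistence templates that point at quiesced (weight 0) real servers
    sysctl -w net.ipv4.vs.expire_quiescent_template=1
    # Threshold and period controlling when a connection is synchronized to the backup director
    sysctl -w net.ipv4.vs.sync_threshold="3 50"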
4) LVS-DR mode core tips and optimization:
- In LVS-DR mode, the back-end servers are also configured with the virtual IP. If a back-end server answers the client's ARP request for the virtual IP with its own MAC address, load balancing no longer takes effect and the client connects directly to that back-end server.
- The virtual IP on the back-end server must be bound to lo:0 with a 255.255.255.255 subnet mask; otherwise the ARP suppression will not behave as expected (a setup sketch follows this list).
- Persistent connections: a persistent connection makes the same client keep going to the same back-end server within the timeout period (set with the ipvsadm -p option, or the persistence_timeout directive in keepalived). This is a layer-4 persistent connection, and each new connection from the client resets the timeout.
- For keepalived health checks on the back-end servers, application-layer checks are recommended; keepalived can also be configured to run administrator-defined scripts for health checks (the MISC check directive).
- When using the VRRP protocol for high availability between load balancers, disable iptables or firewalld, or explicitly allow the VRRP protocol.
- For the load balancer in an LVS cluster, 16 GB or more of memory and a multi-queue network card are recommended, to improve NIC throughput and reduce processing latency.
- For the back-end servers in an LVS cluster, distinguish between IO-intensive and CPU-intensive workloads; they can be optimized with RAID 10, SSDs, and high-frequency multi-core CPUs, respectively.
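A back-end setup sketch for the DR-mode points above, assuming the placeholder virtual IP 10.0.0.100; the arp_ignore/arp_announce values shown are the ones commonly used for this purpose:

    # On each back-end server: bind the virtual IP to lo:0 with a 255.255.255.255 mask
    ifconfig lo:0 10.0.0.100 netmask 255.255.255.255 broadcast 10.0.0.100 up
    # Suppress ARP replies/announcements for the virtual IP so only the load balancer answers ARP
    sysctl -w net.ipv4.conf.all.arp_ignore=1
    sysctl -w net.ipv4.conf.all.arp_announce=2
    sysctl -w net.ipv4.conf.lo.arp_ignore=1
    sysctl -w net.ipv4.conf.lo.arp_announce=2
    # On the load balancer: a 50-second layer-4 persistence timeout for the virtual service (ipvsadm -p)
    ipvsadm -A -t 10.0.0.100:80 -s rr -p 50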
Multiple LVS clusters, settings and precautions: 1) The virtual router ID must be set to the same value within one LVS cluster and must differ between different clusters. 2) Priority: the value on the node whose state is MASTER must be higher than the value on the node whose state is BACKUP. 3) The virtual IP address must differ between different LVS clusters. 4) The authentication key must be the same within one LVS cluster, and different values are recommended for different clusters. In operations there has been an incident where a new LVS cluster used the same virtual router ID as an existing LVS cluster on the same network segment, which broke the existing service.
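A keepalived vrrp_instance sketch for the MASTER node of one cluster; the ID, priority, key, interface, and virtual IP are placeholders. The BACKUP node would use state BACKUP and a lower priority, and a second cluster must use a different virtual_router_id, key, and virtual IP:

    cat > /etc/keepalived/keepalived.conf <<'EOF'
    vrrp_instance VI_1 {
        state MASTER              # BACKUP on the standby load balancer
        interface eth0
        virtual_router_id 51      # identical within this cluster, unique across clusters
        priority 150              # MASTER must be higher than BACKUP
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass cluster1key # same within the cluster, different across clusters
        }
        virtual_ipaddress {
            10.0.0.100            # each cluster uses its own virtual IP
        }
    }
    EOF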
Availability monitoring: 1) LVS availability monitoring: typically, monitor the services provided on the LVS virtual IP while also monitoring the availability of every back-end server. LVS should health-check the back-end servers; when a back-end server becomes unavailable LVS removes it from the pool, so clients are not affected, but the situation should still be reported to the system administrator for root-cause analysis and for assessing the load pressure on the remaining back-end servers. 2) Monitoring level: use application-layer checks wherever possible, for example the Nagios check_http plugin or Zabbix web scenarios.
Recommended LVS troubleshooting steps: 1) Ping the real IP and the virtual IP of the load balancer to confirm network connectivity. 2) On the load balancer, check the load and the status of the back-end servers. 3) If the LVS cluster has multiple back-end servers, use hosts bindings to test each back-end server individually and confirm the service itself is normal. 4) Check that the ARP settings on the back-end servers are in effect. 5) Check that the virtual IP is successfully bound on the back-end servers. 6) When a failover from the master to the slave load balancer does not take effect, first confirm on the switch that the MAC address learned for the virtual IP has been updated to the slave's MAC address.
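A command-line sketch matching these steps; addresses and interface names are placeholders:

    # 1) Connectivity to the load balancer's real IP and virtual IP
    ping -c 3 192.168.10.1
    ping -c 3 10.0.0.100
    # 2) Virtual server table, per-real-server statistics, and current connections
    ipvsadm -Ln
    ipvsadm -Ln --stats
    ipvsadm -Lnc
    # 4) / 5) On a back-end server: ARP suppression sysctls and virtual IP binding on lo
    sysctl net.ipv4.conf.all.arp_ignore net.ipv4.conf.all.arp_announce
    ip addr show dev lo
    # 6) From a host on the same segment, cross-check which MAC currently answers for the virtual IP
    arping -c 3 -I eth0 10.0.0.100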
LVS (Linux Virtual Server): load balancer + back-end servers