LVS working mode and scheduling algorithm

Source: Internet
Author: User

An introduction to the three working modes and ten scheduling algorithms of LVS.

Working Mode Introduction:
1. Virtual Server via NAT (VS-NAT)
Advantages: The real servers in the cluster can run any TCP/IP-capable operating system and can use reserved private addresses; only the load balancer needs a legitimate (public) IP address.
Disadvantages: Limited scalability. When the number of server nodes (ordinary PC servers) grows to around 20 or more, the load balancer becomes the bottleneck of the whole system, because every request packet and every reply packet must be rewritten by it. Assuming an average TCP packet length of 536 bytes and an average packet-rewriting delay of about 60 μs (measured on a Pentium processor; a faster processor shortens this delay), the maximum throughput of the load balancer is about 8.93 MB/s. If each real server can sustain 400 KB/s, one load balancer can drive about 22 real servers.
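The capacity figures above follow from simple arithmetic; the sketch below reproduces the calculation, using the 536-byte packet size, 60 μs delay, and 400 KB/s per-server figure as stated in the text:

```python
avg_packet_bytes = 536      # average TCP packet length (from the text)
rewrite_delay_s = 60e-6     # ~60 us per-packet rewrite delay (Pentium-era figure)
per_server_bps = 400e3      # assumed capacity of one real server, bytes/s

# Maximum bytes/s the balancer can rewrite: one packet per delay interval.
balancer_bps = avg_packet_bytes / rewrite_delay_s   # ~8.93 MB/s

# How many real servers that throughput can feed.
max_servers = balancer_bps / per_server_bps         # ~22
```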

Workaround: If the load balancer does become the bottleneck of the whole system, there are two ways to resolve it. One is hybrid processing; the other is virtual server via IP tunneling or virtual server via direct routing. With the hybrid approach, many identical single-RR-DNS domains are required. Virtual server via IP tunneling or via direct routing gives better scalability. You can also nest load balancers: a VS-TUN or VS-DR load balancer in front, followed by VS-NAT load balancers.

2. Virtual Server via IP Tunneling (VS-TUN)
Many Internet services, such as web services, have short request packets while the reply packets are usually large.
Advantages: The load balancer is only responsible for distributing request packets to the real servers, and the real servers send reply packets directly to the users. The load balancer can therefore handle a huge number of requests; in this way one load balancer can serve more than 100 real servers and is no longer the system bottleneck. With VS-TUN, if the load balancer has a 100 Mbps full-duplex network card, the whole virtual server can reach about 1 Gbps of throughput.
Disadvantages: This approach requires all servers to support the "IP tunneling" (IP encapsulation) protocol, which so far has been implemented only on Linux; whether other operating systems can be made to support it is still being explored.

3. Virtual Server via Direct Routing (VS-DR)
Advantages: As with VS-TUN, the load balancer only distributes requests, and reply packets are returned to the clients over a separate route. Compared with VS-TUN, VS-DR does not require tunneling support, so most operating systems can be used on the real servers, including Linux, Solaris, FreeBSD, Windows, IRIX 6.5, HP-UX 11, and so on.
Disadvantages: The load balancer's NIC must be on the same physical segment as the real servers' NICs.

Comparison of the three IP load balancing technologies:

                               VS/NAT           VS/TUN                   VS/DR
Server operating system        any              must support tunneling   most (must support non-ARP device)
Server network                 private network  LAN/WAN                  LAN
Number of servers (100M net)   10-20            100+                     100+
Server gateway                 load balancer    own route                own route
Efficiency                     general          high                     highest
Scheduling Algorithm Introduction:
1. Round Robin (RR)
The scheduler uses the "round-robin" algorithm to distribute external requests to the real servers in the cluster in turn, treating every server equally regardless of its actual number of connections or system load.
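A minimal sketch of the round-robin rotation (the server names are placeholders):

```python
from itertools import cycle

# Rotate through the real servers in order, ignoring their load.
servers = cycle(["rs1", "rs2", "rs3"])

picks = [next(servers) for _ in range(6)]
# Every server receives the same share: rs1, rs2, rs3, rs1, rs2, rs3
```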

2. Weighted Round Robin (WRR)
The scheduler uses the "weighted round-robin" algorithm to dispatch requests according to the differing processing capacities of the real servers, so that servers with higher weights handle more traffic. The scheduler can automatically query the load of each real server and adjust its weight dynamically.
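A simplified weighted round-robin sketch. The in-kernel IPVS implementation interleaves picks using the gcd of the weights, but the per-cycle proportions are the same; the weights here are made up:

```python
def weighted_round_robin(weights):
    """Yield each server `weight` times per cycle, forever."""
    while True:
        for server, weight in weights.items():
            for _ in range(weight):
                yield server

gen = weighted_round_robin({"rs1": 1, "rs2": 2, "rs3": 3})
one_cycle = [next(gen) for _ in range(6)]
# rs2 gets twice, and rs3 three times, the share of rs1
```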

3. Least Connections (LC)
The scheduler uses the "least connections" algorithm to dynamically dispatch requests to the server with the fewest established connections. If the real servers in the cluster have similar performance, this algorithm balances the load well.

4. Weighted minimum link (Weighted leastconnections) (WLC)
In the case of the server performance difference in the cluster system, the scheduler uses the "Weighted least link" scheduling algorithm to optimize the load balancing performance, and the server with higher weights will bear a large proportion of active connection load. The scheduler can automatically inquire about the load of the real server and adjust its weights dynamically.
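A sketch of the weighted least-connections choice (the in-kernel code avoids division by comparing cross-products, but the ranking is the same; the numbers here are made up):

```python
servers = [
    {"name": "rs1", "weight": 1, "active": 3},
    {"name": "rs2", "weight": 3, "active": 6},
]

# Pick the server minimizing active_connections / weight.
best = min(servers, key=lambda s: s["active"] / s["weight"])
# rs2: 6/3 = 2.0 beats rs1: 3/1 = 3.0, despite rs2 having more connections
```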

5. Locality-Based Least Connections (LBLC)
The "locality-based least connections" algorithm balances load by destination IP address and is mainly used in cache cluster systems. Based on the destination IP of a request, it looks up the server most recently used for that address; if that server is available and not overloaded, the request is sent to it. If the server does not exist, or it is overloaded while another server carries only half of its workload, an available server is chosen by the "least connections" principle and the request is sent there.

6. Locality-Based Least Connections with Replication (LBLCR)
The "locality-based least connections with replication" algorithm also balances load by destination IP address and is mainly used in cache cluster systems. It differs from LBLC in that it maintains a mapping from a destination IP address to a set of servers, whereas LBLC maintains a mapping from a destination IP address to a single server. Based on the destination IP of a request, the algorithm finds the corresponding server set and selects a server from it by the "least connections" principle; if that server is not overloaded, the request is sent to it. If it is overloaded, a server is selected from the whole cluster by the "least connections" principle, added to the server set, and the request is sent to it. In addition, when the server set has not been modified for some time, the busiest server is removed from the set to reduce the degree of replication.

7. Destination Hashing (DH)
The "destination hashing" algorithm uses the requested destination IP address as a hash key to look up the corresponding server in a statically allocated hash table. If that server is available and not overloaded, the request is sent to it; otherwise the lookup returns null.

8. Source Hashing (SH)
The "source hashing" algorithm uses the requested source IP address as a hash key to look up the corresponding server in a statically allocated hash table. If that server is available and not overloaded, the request is sent to it; otherwise the lookup returns null.

9. Shortest Expected Delay (SED)
SED is based on the WLC algorithm and is best shown with an example.
Suppose three servers A, B, and C have weights 1, 2, and 3 and currently hold 1, 2, and 3 connections respectively. Under WLC a new request could be assigned to any of them, since all three have the same connections-to-weight ratio. SED instead computes (connections + 1) / weight for each server:
A: (1 + 1) / 1 = 2
B: (2 + 1) / 2 = 1.5
C: (3 + 1) / 3 ≈ 1.33
Based on these results, the new connection is given to C.
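The example above as a short calculation:

```python
def sed_score(active, weight):
    """Shortest Expected Delay score: (active + 1) / weight."""
    return (active + 1) / weight

# (weight, active connections) for A, B, C, as in the example
state = {"A": (1, 1), "B": (2, 2), "C": (3, 3)}
scores = {name: sed_score(active, weight)
          for name, (weight, active) in state.items()}
# A -> 2.0, B -> 1.5, C -> 1.33...
winner = min(scores, key=scores.get)   # "C"
```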

10. Never Queue (NQ)
No queuing is needed: if any real server has zero connections, the request is assigned to it directly, without performing the SED calculation.
