(Repost) Detailed explanation of the three working modes of LVS load balancing and its 10 scheduling algorithms

Foreword: I have recently been working on high availability for our products, setting up the environment while studying the related background. I came across this blog post, which is well written and clearly organized, so I am reproducing it here for learning. Tags: detailed explanation of the three working modes of LVS load balancing and its 10 scheduling algorithms. Original work; reprinting is allowed, but please cite the original source as a hyperlink, together with the author information and this statement; otherwise legal liability may be pursued. http://linuxnx.blog.51cto.com/6676498/1195379

LVS Load Balancing: Principles and Algorithms in Detail

The rapid growth of the Internet has led to a rapid increase in the number of accesses to multimedia web servers, which must be able to serve very large numbers of concurrent requests, so CPU and I/O processing capacity can quickly become the bottleneck of a heavily loaded server. Because the performance of a single server is always limited, simply improving the hardware does not really solve the problem. For this reason, multi-server and load-balancing techniques must be used to meet the needs of large numbers of concurrent accesses. Linux Virtual Server (LVS) uses load-balancing technology to make multiple servers appear as one virtual server. It provides a cost-efficient solution whose load capacity is easy to scale, adapting to fast-growing demand for network access. LVS is open-source software that enables simple load balancing on the Linux platform; the name is an abbreviation of Linux Virtual Server.


Among scheduler implementation techniques, IP load balancing is the most efficient. In existing IP load-balancing technology, a group of servers is usually combined into one high-performance, highly available virtual server through Network Address Translation; we call this VS/NAT (Virtual Server via Network Address Translation), and most commercial IP load-balancing scheduler products use this method, such as Cisco's LocalDirector, F5's BIG-IP, and Alteon's ACEDirector. After analyzing the shortcomings of VS/NAT and the asymmetry of network services, two further methods were proposed: implementing the virtual server through IP tunneling, VS/TUN (Virtual Server via IP Tunneling), and implementing the virtual server through direct routing, VS/DR (Virtual Server via Direct Routing); both can greatly improve the scalability of the system. The IPVS software therefore implements these three IP load-balancing techniques, whose approximate principles are as follows.

1. Virtual Server via Network Address Translation (VS/NAT)

Through network address translation, the scheduler rewrites the destination address of the request packet and dispatches the request to a back-end real server according to the preset scheduling algorithm; the real server's response packet passes back through the scheduler, which rewrites its source address before returning it to the client, completing the whole load-scheduling process.
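
To make the address rewriting concrete, here is a minimal, purely illustrative Python sketch of the VS/NAT idea; the virtual IP, real-server addresses, and packet dictionaries are invented for the example and have nothing to do with real kernel packet handling.

```python
# Illustrative VS/NAT sketch: the scheduler rewrites the destination of the
# request to a real server, and rewrites the source of the response back to
# the virtual IP. All addresses below are hypothetical.

VIP = "10.0.0.100"                         # virtual IP exposed to clients
REAL_SERVERS = ["192.168.1.11", "192.168.1.12"]

def nat_forward_request(packet, real_server):
    """Scheduler -> real server: rewrite the destination address."""
    packet = dict(packet)
    packet["dst"] = real_server
    return packet

def nat_forward_response(packet):
    """Real server -> client: the scheduler rewrites the source back to the VIP."""
    packet = dict(packet)
    packet["src"] = VIP
    return packet

request = {"src": "203.0.113.7", "dst": VIP, "payload": "GET /"}
to_server = nat_forward_request(request, REAL_SERVERS[0])
response = nat_forward_response({"src": REAL_SERVERS[0], "dst": request["src"],
                                 "payload": "200 OK"})
print(to_server)
print(response)
```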

2. Virtual Server via Direct Routing (VS/DR)

VS/DR sends the request to the real server by rewriting the MAC address of the request frame, and the real server returns the response directly to the client. Like VS/TUN, VS/DR can greatly improve the scalability of the cluster. It avoids the overhead of IP tunneling and does not require the real servers in the cluster to support a tunneling protocol, but it does require that the scheduler and the real servers each have a network interface attached to the same physical network segment.
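
The following sketch, again with made-up addresses, only illustrates the key point of VS/DR: the scheduler changes nothing but the destination MAC of the frame, while the IP destination remains the virtual IP that the real server also holds on a non-ARP interface.

```python
# Illustrative VS/DR sketch: only the layer-2 destination is rewritten;
# the IP packet, still addressed to the VIP, is untouched, so the real
# server can reply to the client directly. Addresses are hypothetical.

VIP = "10.0.0.100"
REAL_SERVER_MACS = {"rs1": "02:00:00:00:00:11", "rs2": "02:00:00:00:00:12"}

def dr_forward(frame, real_server):
    """Rewrite only the destination MAC; leave the IP header alone."""
    frame = dict(frame)
    frame["dst_mac"] = REAL_SERVER_MACS[real_server]
    return frame

frame = {"dst_mac": "02:00:00:00:00:01",             # director's own MAC
         "ip": {"src": "203.0.113.7", "dst": VIP}}
print(dr_forward(frame, "rs1"))                      # IP dst is still the VIP
```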

3. Virtual Server via IP Tunneling (VS/TUN)

With NAT, both request and response packets must pass through the scheduler for address rewriting, so the scheduler's processing capacity becomes a bottleneck as client requests grow. To solve this problem, the scheduler forwards the request packet to the real server through an IP tunnel, and the real server returns the response directly to the client, so the scheduler only has to process request packets. Since responses for typical network services are much larger than requests, the maximum throughput of the cluster can be increased by a factor of about 10 with VS/TUN.
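
A hedged sketch of the encapsulation idea behind VS/TUN follows; the outer/inner dictionaries and all addresses are assumptions used for illustration, not a real IP-in-IP implementation.

```python
# Illustrative VS/TUN sketch: the scheduler wraps the original IP packet in an
# outer header addressed to the real server; the real server unwraps it and
# answers the client directly. Addresses are hypothetical.

VIP = "10.0.0.100"

def tunnel_encapsulate(packet, director_ip, real_server_ip):
    """Outer header carries director -> real server; inner packet is unchanged."""
    return {"outer": {"src": director_ip, "dst": real_server_ip},
            "inner": packet}

def tunnel_decapsulate(tunneled):
    """The real server strips the outer header and serves the inner request."""
    return tunneled["inner"]

request = {"src": "203.0.113.7", "dst": VIP, "payload": "GET /"}
wrapped = tunnel_encapsulate(request, "10.0.0.1", "172.16.5.20")
print(tunnel_decapsulate(wrapped) == request)        # True
```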

The structure and characteristics of the LVS system:

A server cluster built with LVS has a transparent architecture: the end user perceives only a single virtual server. The physical servers can be connected by a high-speed LAN or distributed across a WAN. At the very front is the load balancer, which distributes the incoming service requests to the physical servers behind it, making the whole cluster behave like one virtual server serving a single IP address.

The LVS cluster system has good scalability and high availability. Scalability means that once the LVS cluster is established, the number of physical servers can easily be increased or decreased according to actual needs. High availability means that when a server node or service process is detected to have failed, the cluster system automatically adjusts itself appropriately.

The LVS cluster uses IP load-balancing technology and content-based request-distribution technology. The scheduler has a high throughput rate and spreads requests evenly across the different servers for execution, and it automatically masks server failures, thereby turning a group of servers into a high-performance, highly available virtual server. The structure of the whole server cluster is transparent to the client, and neither client nor server programs need to be modified.

To do this, you need to consider system transparency, scalability, high availability, and manageability at design time.

The LVS cluster adopts a three-layer structure; its main components are:

A. Load balancer (load scheduler): the front-end machine of the whole cluster as seen from the outside. It is responsible for dispatching clients' requests to a group of servers for execution, while the client believes the service comes from a single IP address (which we can call the virtual IP address).

B. Server pool: the group of servers that actually execute client requests, running services such as web, mail, FTP, and DNS.

C. Shared storage: provides a shared storage area for the server pool, which makes it easy for the servers to hold the same content and provide the same services.

For different network service requirements and server configurations, the IPVS scheduler implements the following 10 load-scheduling algorithms:

1. Round-robin scheduling (Round Robin), abbreviated RR

The scheduler uses the "round-robin" scheduling algorithm to sequentially allocate external requests to real servers in the cluster, and treats each server equally, regardless of the actual number of connections and system load on the server.

2. Weighted round-robin scheduling (Weighted Round Robin), abbreviated WRR

The scheduler uses the "Weighted round call" scheduling algorithm to schedule access requests based on the different processing capabilities of the real server. This ensures that the processing capacity of the server handles more access traffic. The scheduler can automatically inquire about the load of the real server and adjust its weights dynamically.

3. Least-connection scheduling (Least Connections), abbreviated LC

The scheduler uses the least-connection algorithm to dynamically dispatch network requests to the server with the fewest established connections. If the real servers in the cluster have similar performance, the least-connection algorithm balances the load well.
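
The least-connection choice reduces to taking the minimum over the current connection counts, as in this sketch with made-up numbers.

```python
# Minimal least-connections sketch: pick the server with the fewest active
# connections. Connection counts are illustrative.
active = {"rs1": 12, "rs2": 4, "rs3": 9}

def least_connections(active_connections):
    return min(active_connections, key=active_connections.get)

print(least_connections(active))    # rs2
```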

4. Weighted least-connection scheduling (Weighted Least Connections), abbreviated WLC

When the performance of the servers in the cluster differs significantly, the scheduler uses the weighted least-connection algorithm to optimize load-balancing performance: servers with higher weights bear a larger share of the active connections. The scheduler can automatically query the load of the real servers and adjust their weights dynamically.
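
A sketch of the weighted variant follows: the server with the smallest ratio of active connections to weight is chosen. The weights and counts are illustrative assumptions, not IPVS internals.

```python
# Minimal weighted least-connections sketch: smallest active/weight ratio wins.
servers = {"rs1": {"weight": 1, "active": 3},
           "rs2": {"weight": 3, "active": 6},
           "rs3": {"weight": 2, "active": 5}}

def wlc(servers):
    return min(servers, key=lambda s: servers[s]["active"] / servers[s]["weight"])

print(wlc(servers))   # rs2: 6/3 = 2.0 is the smallest ratio
```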

5. Locality-based least-connection scheduling (Locality-Based Least Connections), abbreviated LBLC

The "least link based on locality" scheduling algorithm is a load balancing target IP address, which is mainly used in cache cluster system. According to the target IP address of the request, the algorithm finds the most recently used server, if the server is available and not overloaded, sends the request to the server, if the server does not exist, or if the server is overloaded and has half of the workload of the server, the principle of "least link" is used to select an available server. , the request is sent to the server.

6. Locality-based least-connection scheduling with replication (Locality-Based Least Connections with Replication), abbreviated LBLCR

The "least local link with replication" Scheduling algorithm is also a load balancer for the target IP address, which is mainly used in the cache cluster system. It differs from the LBLC algorithm in that it maintains a mapping from a destination IP address to a set of servers, while the LBLC algorithm maintains a mapping from a destination IP address to a server. According to the target IP address of the request, the algorithm finds the corresponding server group of the target IP address, selects a server from the server group according to the principle of "minimum connection", if the server is not overloaded, sends the request to the server, if the server is overloaded, select a server from this cluster according to the "minimum connection" principle. Join the server to the server group and send the request to the server. Also, when the server group has not been modified for some time, the busiest server is removed from the server group to reduce the degree of replication.

7. Destination-address hashing (Destination Hashing), abbreviated DH

This algorithm also balances load by destination IP address, but it is a static mapping algorithm that maps a destination IP address to a server through a hash function. The destination-address hashing algorithm uses the destination IP address of the request as the hash key to find the corresponding server in a statically assigned hash table; if that server is available and not overloaded, the request is sent to it, otherwise null is returned.
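
A minimal sketch of the static mapping follows, using an ordinary library hash purely for illustration (it is not the hash function IPVS itself uses).

```python
import hashlib

# Destination-hashing sketch: a static hash of the destination IP selects the
# server, so the same destination always lands on the same server.
servers = ["rs1", "rs2", "rs3"]

def dh(dest_ip):
    digest = hashlib.md5(dest_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

print(dh("198.51.100.5") == dh("198.51.100.5"))   # True: the mapping is static
```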

8. Source-address hashing (Source Hashing), abbreviated SH

This algorithm is the counterpart of destination-address hashing: it uses the source IP address of the request as the hash key to find the corresponding server in a statically assigned hash table; if that server is available and not overloaded, the request is sent to it, otherwise null is returned. It uses the same hash function as the destination-address hashing algorithm, and apart from replacing the destination IP address with the source IP address, its flow is essentially the same. In practice, source-address hashing and destination-address hashing can be used together in firewall clusters to guarantee a unique entry and exit point for the whole system.
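
The source-hashing counterpart differs only in the key, as this sketch (with the same illustrative hash as above) shows.

```python
import hashlib

# Source-hashing sketch: the hash key is the client's source IP, so requests
# from the same client always go to the same server.
servers = ["rs1", "rs2", "rs3"]

def sh(source_ip):
    digest = hashlib.md5(source_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

print(sh("203.0.113.7") == sh("203.0.113.7"))     # True: same client, same server
```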

9. Shortest expected delay scheduling (Shortest Expected Delay), abbreviated SED
SED builds on the WLC algorithm. As an example, suppose three servers A, B, and C have weights 1, 2, and 3 and currently hold 1, 2, and 3 connections respectively. Under WLC a new request might be assigned to any of A, B, or C; SED instead computes (number of connections + 1) / weight for each server:

A: (1+1)/1 = 2
B: (1+2)/2 = 1.5
C: (1+3)/3 ≈ 1.33

According to these results, the new connection is given to C.
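
The same arithmetic can be checked with a few lines of Python; the weights and connection counts mirror the example above.

```python
# SED sketch: score = (active connections + 1) / weight; the lowest score wins.
servers = {"A": {"weight": 1, "active": 1},
           "B": {"weight": 2, "active": 2},
           "C": {"weight": 3, "active": 3}}

scores = {name: (s["active"] + 1) / s["weight"] for name, s in servers.items()}
print(scores)                          # {'A': 2.0, 'B': 1.5, 'C': 1.333...}
print(min(scores, key=scores.get))     # C
```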

10. Never-queue scheduling (Never Queue), abbreviated NQ

There is no need to queue: if any real server has a connection count of 0, the request is assigned to it directly, without performing the SED calculation.
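
A sketch of the rule follows: an idle server is used immediately, and only when every server is busy does the SED score decide. The numbers are illustrative.

```python
# Never-queue sketch: prefer any idle server; otherwise fall back to the
# SED score (active + 1) / weight.
servers = {"A": {"weight": 1, "active": 1},
           "B": {"weight": 2, "active": 0},
           "C": {"weight": 3, "active": 3}}

def nq(servers):
    idle = [name for name, s in servers.items() if s["active"] == 0]
    if idle:
        return idle[0]
    return min(servers,
               key=lambda n: (servers[n]["active"] + 1) / servers[n]["weight"])

print(nq(servers))   # B: it is idle, so no SED calculation is needed
```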

This article is from the "Linux_ Summer" blog; please keep this source when reposting: http://linuxnx.blog.51cto.com/6676498/1195379
