LVS packet forwarding model and scheduling algorithm (RPM)

About LVS

The rapid growth of the Internet has brought a rapid increase in the number of concurrent accesses that multimedia web servers must handle, so CPU and I/O processing capacity can quickly become the bottleneck for heavily loaded servers. Because the performance of a single server is always limited, simply upgrading the hardware does not really solve the problem. Instead, multi-server and load-balancing techniques must be used to meet the demands of a large number of concurrent accesses. Linux Virtual Server (LVS) uses load-balancing technology to make multiple servers appear as one virtual server. It provides a cost-efficient solution whose capacity is easy to scale as network access requirements grow.

Structure and working principle of LVS

One. Structure of LVS

LVS consists of a front-end load balancer (Load Balancer, LB) and back-end real servers (Real Server, RS). The RS machines can be connected over a local area network or a wide area network. This structure is transparent to users: a user sees only one virtual server, the LB, while the RS group that actually provides the service is invisible. When a user's request reaches the virtual server, the LB forwards it to an RS according to the configured packet forwarding mode and load-balancing scheduling algorithm, and the RS returns the result of the request to the user.

Two. LVS Kernel model

1. When a client request reaches the load balancer's kernel space, it first arrives at the PREROUTING chain.

2. When the kernel finds that the destination address of the request packet is local, it sends the packet to the INPUT chain.

3. LVS consists of ipvsadm in user space and IPVS in kernel space. ipvsadm is used to define rules, and IPVS enforces the rules that ipvsadm defines. IPVS works on the INPUT chain: when a packet reaches INPUT, IPVS checks it first. If the destination address and port of the packet do not match any rule, the packet is released to user space as usual. (A minimal ipvsadm example is sketched after this list.)

4. If the destination address and port of the packet do match a rule, the packet's destination address is rewritten to a pre-defined back-end server and the packet is handed to the POSTROUTING chain.

5. Finally, the POSTROUTING chain sends the packet to the back-end server.
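
For illustration, a minimal sketch of how rules might be defined with ipvsadm; the VIP 192.168.0.100 and the real server addresses are assumptions, not taken from the article:

    # User space: define a virtual service on the VIP, port 80, round-robin scheduling
    ipvsadm -A -t 192.168.0.100:80 -s rr

    # Add two real servers to that virtual service (forwarding method here is DR, -g)
    ipvsadm -a -t 192.168.0.100:80 -r 10.0.0.11:80 -g
    ipvsadm -a -t 192.168.0.100:80 -r 10.0.0.12:80 -g

    # List the rules that the kernel-space IPVS is now enforcing
    ipvsadm -L -n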

Three. LVS packet forwarding models

1. NAT model:

①. The client sends a request to the front-end load balancer. The source address of the request packet is the client IP (hereafter CIP), and the destination address is the load balancer's front-end address (hereafter VIP).

②. When the load balancer receives the packet and finds that its destination address matches a configured rule, it rewrites the destination address of the request to the RIP of the back-end real server chosen by the scheduling algorithm and sends the packet on.

③. When the packet reaches the real server, the server sees that the destination address is its own, processes the request, and returns the response to the LVS.

④. The LVS then rewrites the source address of the response to the VIP and sends it to the client. Note: in NAT mode, the real server's gateway must point to the LVS, otherwise the response cannot reach the client.
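
A minimal NAT-mode configuration sketch; the VIP, DIP, and RIP values below are illustrative assumptions:

    # On the director: enable forwarding and define a NAT-mode (-m, masquerading) service
    echo 1 > /proc/sys/net/ipv4/ip_forward
    ipvsadm -A -t 192.168.0.100:80 -s rr
    ipvsadm -a -t 192.168.0.100:80 -r 10.0.0.11:80 -m
    ipvsadm -a -t 192.168.0.100:80 -r 10.0.0.12:80 -m

    # On each real server: the default gateway must be the director's DIP (assumed 10.0.0.1)
    ip route add default via 10.0.0.1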

2. DR model:

①. The client sends a request to the front-end load balancer. The source address of the request is CIP and the destination address is VIP.

②. When the load balancer receives the packet and finds that its destination address matches a configured rule, it rewrites the source MAC address to the MAC address of its own DIP interface and the destination MAC address to the MAC address of the chosen RIP, leaving the IP addresses unchanged, and sends the frame to the RS.

③. The RS sees that the destination MAC of the request is its own and accepts the packet. Because the VIP is configured on its lo interface, it processes the request and sends the response out through the eth0 NIC directly to the client. Note: the VIP on the lo interface must be configured so that it does not answer ARP requests on the local network.
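
A minimal DR-mode sketch with the same assumed addresses (VIP 192.168.0.100, real servers on the same LAN as the director):

    # On the director: add real servers in direct-routing mode (-g)
    ipvsadm -A -t 192.168.0.100:80 -s rr
    ipvsadm -a -t 192.168.0.100:80 -r 192.168.0.11:80 -g
    ipvsadm -a -t 192.168.0.100:80 -r 192.168.0.12:80 -g

    # On each real server: put the VIP on lo and keep it from answering ARP
    ip addr add 192.168.0.100/32 dev lo
    sysctl -w net.ipv4.conf.lo.arp_ignore=1
    sysctl -w net.ipv4.conf.lo.arp_announce=2
    sysctl -w net.ipv4.conf.all.arp_ignore=1
    sysctl -w net.ipv4.conf.all.arp_announce=2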

3. TUN model:

①. The client sends a request to the front-end load balancer. The source address of the request is CIP and the destination address is VIP.

②. When the load balancer receives the packet and finds that its destination address matches a configured rule, it encapsulates the client's request in an additional outer IP header whose source address is the DIP and whose destination address is the RIP, and sends the packet to the RS.

③. After the RS receives the packet, it strips the outer encapsulation and finds that the inner IP header's destination address is the VIP configured on its own lo interface, so it processes the request and sends the response out through the eth0 NIC directly to the client. Note: the VIP on the lo interface must not appear on the public network.
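
A minimal TUN-mode sketch; here the VIP is placed on the real server's tunnel interface tunl0, which is a common setup but an assumption relative to the text above:

    # On the director: add real servers in IP tunneling mode (-i)
    ipvsadm -A -t 192.168.0.100:80 -s rr
    ipvsadm -a -t 192.168.0.100:80 -r 10.0.1.11:80 -i

    # On each real server: load the IPIP module, put the VIP on tunl0, relax rp_filter
    modprobe ipip
    ip addr add 192.168.0.100/32 dev tunl0
    ip link set tunl0 up
    sysctl -w net.ipv4.conf.tunl0.rp_filter=0
    sysctl -w net.ipv4.conf.all.rp_filter=0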

Four. LVS scheduling algorithms

LVS scheduling algorithms are divided into two categories: static and dynamic.

1. Static algorithms (4 kinds): scheduling decisions follow the algorithm alone, regardless of the actual connection count and load on the back-end servers.

①. RR: Round-robin scheduling (Round Robin)
The scheduler distributes external requests to the real servers in the cluster in turn, treating every server equally regardless of its actual number of connections and system load.

②. WRR: Weighted round-robin (Weighted Round Robin)
The scheduler distributes requests according to the different processing capacities of the real servers, so that more capable servers handle a larger share of the traffic. The scheduler can also query the load of the real servers and adjust their weights dynamically.

③. DH: Destination hashing (Destination Hash)
Using the requested destination IP address as the hash key, the scheduler looks up the corresponding server in a statically assigned hash table. If that server is available and not overloaded, the request is sent to it; otherwise nothing is returned.

④. SH: Source hashing (Source Hash)
Using the requested source IP address as the hash key, the scheduler looks up the corresponding server in a statically assigned hash table. If that server is available and not overloaded, the request is sent to it; otherwise nothing is returned.
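
The scheduling algorithm is selected per virtual service with ipvsadm's -s option; a short sketch with illustrative addresses and weights:

    # Weighted round robin: the weight-3 server receives roughly three times
    # as many requests as the weight-1 server
    ipvsadm -A -t 192.168.0.100:80 -s wrr
    ipvsadm -a -t 192.168.0.100:80 -r 10.0.0.11:80 -g -w 3
    ipvsadm -a -t 192.168.0.100:80 -r 10.0.0.12:80 -g -w 1

    # Source hashing: requests from the same client IP keep hitting the same server
    ipvsadm -A -t 192.168.0.100:443 -s sh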

2. Dynamic algorithms (6 kinds): the front-end scheduler assigns requests according to the actual connection state of the back-end real servers.

①. LC: Least Connections
The scheduler dynamically dispatches network requests to the server with the fewest established connections. If the real servers in the cluster have similar performance, the least-connections algorithm balances the load well.

②. WLC: Weighted Least Connections (the default)
When the servers in the cluster differ in performance, the weighted-least-connections algorithm optimizes load balancing: servers with higher weights carry a larger share of the active connections. The scheduler can also query the load of the real servers and adjust their weights dynamically.

③. SED: Shortest Expected Delay
An improvement on WLC: Overhead = (active + 1) * 256 / weight, and inactive connections are no longer considered. The server with the smallest overhead accepts the next request. The +1 ensures the weight is still taken into account when a server has no active connections. Drawback: when the weights differ greatly, a server with a small weight may remain idle with no connections at all.

④. NQ: Never Queue
No queuing is needed: if some real server has zero connections, the request is assigned to it directly without performing the SED computation, which guarantees that no server sits completely idle. It builds on SED. Only NQ and SED consider active connections alone and ignore inactive ones; for a UDP service such as DNS, inactive connections need not be considered, whereas for an HTTP service the pressure that inactive connections put on the server should be taken into account.

⑤. LBLC: Locality-Based Least Connections
The locality-based least-connections algorithm balances load by destination IP address and is mainly used in cache cluster systems. Based on the destination IP address of a request, the algorithm finds the server most recently used for that address; if that server is available and not overloaded, the request is sent to it. If the server does not exist, or if it is overloaded and another server is at half of its workload, a server is chosen by the least-connections principle and the request is sent to that server.

⑥. LBLCR: Locality-Based Least Connections with Replication
The locality-based least-connections with replication algorithm also balances load by destination IP address and is mainly used in cache cluster systems. It differs from LBLC in that it maintains a mapping from a destination IP address to a set of servers, whereas LBLC maintains a mapping from a destination IP address to a single server. The algorithm finds the server group for the request's destination IP address and selects a server from that group by the least-connections principle; if the server is not overloaded, the request is sent to it. If the server is overloaded, a server is selected from the whole cluster by the least-connections principle, added to the server group, and the request is sent to it. In addition, when the server group has not been modified for some time, the busiest server is removed from the group to reduce the degree of replication.
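
As a sketch of how the dynamic schedulers are chosen, together with a worked example of the SED overhead formula above (addresses, weights, and connection counts are illustrative):

    # WLC is the default scheduler, so -s wlc could be omitted here
    ipvsadm -A -t 192.168.0.100:80 -s wlc
    ipvsadm -a -t 192.168.0.100:80 -r 10.0.0.11:80 -g -w 1
    ipvsadm -a -t 192.168.0.100:80 -r 10.0.0.12:80 -g -w 4

    # Switch the existing service to SED (or nq, lblc, lblcr) with -E
    ipvsadm -E -t 192.168.0.100:80 -s sed

    # SED overhead if the servers currently have 10 and 20 active connections:
    #   10.0.0.11: (10 + 1) * 256 / 1 = 2816
    #   10.0.0.12: (20 + 1) * 256 / 4 = 1344  -> receives the next request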
