Virtual Server via NAT

The advantage of VS/NAT is that the real servers can run any operating system that supports TCP/IP, only one IP address needs to be configured on the scheduler, and the server group can use private IP addresses. The disadvantage is its limited scalability: when the number of server nodes grows to around 20, the scheduler itself may become a new bottleneck in the system, because in VS/NAT both request and response packets must pass through the load scheduler. We measured an average packet-rewriting latency of 60 μs on a host with a Pentium 166 processor; the latency is shorter on higher-performance processors. Assuming an average TCP packet length of 536 bytes, the maximum throughput of the scheduler is 8.93 MBytes/s. If we further assume each server delivers 800 KBytes/s, one scheduler can drive about 10 servers. (Note: these figures were measured a long time ago.)
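As a quick sanity check, the sketch below reproduces that estimate in Python. The 60 μs latency, 536-byte packet size, and 800 KBytes/s per-server figure come from the text above; the rounding to roughly 10 servers is the article's own.

```python
# Figures quoted in the text (measured long ago on a Pentium 166).
rewrite_latency_s = 60e-6      # average packet-rewriting latency
avg_packet_bytes = 536         # assumed average TCP packet length
per_server_bytes_s = 800e3     # assumed throughput of one real server

# In VS/NAT every request *and* response packet is rewritten by the
# scheduler, so its peak rate is one packet per rewrite interval.
max_throughput = avg_packet_bytes / rewrite_latency_s
print(f"scheduler max throughput: {max_throughput / 1e6:.2f} MBytes/s")  # ~8.93

servers = max_throughput / per_server_bytes_s
print(f"servers one scheduler can drive: ~{servers:.0f}")  # ~11, i.e. about 10
```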
A cluster system based on VS/NAT can meet the performance requirements of many servers. If the load scheduler does become a new bottleneck in the system, there are three ways to solve the problem: the hybrid approach, VS/TUN, and VS/DR. In the DNS hybrid cluster system there are several VS/NAT load schedulers, each with its own server cluster, and these load schedulers are grouped under a single domain name through RR-DNS, as sketched below. VS/TUN and VS/DR, however, are better ways to improve system throughput.
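For illustration only, this toy sketch mimics how RR-DNS spreads clients across several independent VS/NAT schedulers by rotating through their virtual IPs on successive lookups; the domain name and VIPs are invented.

```python
from itertools import cycle

# Hypothetical virtual IPs of several independent VS/NAT schedulers,
# all registered as A records for the same domain name.
SCHEDULER_VIPS = cycle(["203.0.113.1", "203.0.113.2", "203.0.113.3"])

def rr_dns_lookup(domain: str) -> str:
    """Toy RR-DNS: each lookup of the shared name returns the next
    scheduler VIP, so each VS/NAT cluster serves a share of clients."""
    return next(SCHEDULER_VIPS)

for _ in range(4):
    print(rr_dns_lookup("www.example.com"))
# -> 203.0.113.1, 203.0.113.2, 203.0.113.3, 203.0.113.1
```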
For network services that carry IP addresses or port numbers in the message payload, a corresponding application module must be written to translate the IP address or port number inside the payload. This adds implementation work, and the cost of inspecting every packet in the application module also lowers the system's throughput.
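FTP is the classic example of such a service: the PORT command and passive-mode replies embed an IP address and port in the payload. Below is a minimal sketch of the kind of translation such an application module performs; the virtual and private addresses are made up for illustration.

```python
# Hypothetical addresses for illustration only.
PRIVATE_IP = "172.16.0.3"   # real server's private address
VIRTUAL_IP = "203.0.113.1"  # cluster's public virtual IP

def rewrite_ftp_address(payload: str) -> str:
    """Rewrite an FTP-style address tuple (h1,h2,h3,h4,p1,p2) so the
    private server address is replaced by the virtual IP."""
    private = PRIVATE_IP.replace(".", ",")
    public = VIRTUAL_IP.replace(".", ",")
    return payload.replace(private, public)

# e.g. a passive-mode reply coming back from the real server:
reply = "227 Entering Passive Mode (172,16,0,3,78,52)"
print(rewrite_ftp_address(reply))
# -> "227 Entering Passive Mode (203,0,113,1,78,52)"
```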
Virtual Server via IP Tunneling
In a VS/TUN cluster system, the load scheduler only dispatches requests to the different back-end servers, and the back-end servers return response data directly to the users. The load scheduler can therefore handle a huge volume of requests and schedule upwards of a hundred servers (of comparable capacity) without itself becoming the bottleneck of the system: the maximum throughput of the whole system can exceed 1 Gbps even if the load scheduler has only a 100 Mbps full-duplex NIC. VS/TUN thus greatly increases the number of servers a load scheduler can drive and can be used to build high-performance super servers.
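The reason a 100 Mbps scheduler can front more than 1 Gbps of service traffic is that only the small requests cross it. A back-of-the-envelope illustration, with assumed rather than measured request and response sizes:

```python
# Rough illustration of VS/TUN's asymmetry: only requests cross the
# scheduler, while the much larger responses go directly to clients.
# The sizes below are assumed typical web values, not measurements.
scheduler_nic_bps = 100e6       # 100 Mbps inbound capacity for requests
avg_request_bytes = 500         # assumed small HTTP request
avg_response_bytes = 10_000     # assumed ~20x larger HTTP response

requests_per_sec = scheduler_nic_bps / 8 / avg_request_bytes
response_bps = requests_per_sec * avg_response_bytes * 8

print(f"requests/s through scheduler: {requests_per_sec:,.0f}")   # 25,000
print(f"aggregate response traffic:  {response_bps / 1e9:.1f} Gbps")  # 2.0
# Under these assumptions the cluster emits ~2 Gbps of responses while
# the scheduler's 100 Mbps link carries only the requests.
```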
VS/TUN requires that all servers support the IP tunneling (IP encapsulation) protocol. At present the back-end servers of VS/TUN mainly run Linux, and we have not tested other operating systems. Since IP tunneling is becoming a standard protocol on every operating system, VS/TUN should also be applicable to back-end servers running other operating systems.
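Conceptually, IP-in-IP encapsulation just prepends a new outer IPv4 header (protocol number 4) addressed to the chosen real server, leaving the original datagram untouched. A minimal sketch of that idea, assuming plain IPv4 with no options or fragmentation:

```python
import socket
import struct

def ip_checksum(header: bytes) -> int:
    """16-bit one's-complement checksum over an IPv4 header."""
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def ipip_encapsulate(datagram: bytes, scheduler_ip: str, server_ip: str) -> bytes:
    """Wrap an existing IPv4 datagram, unmodified, in an outer IPv4
    header (protocol 4 = IP-in-IP) addressed to the real server, as a
    VS/TUN scheduler conceptually does before forwarding."""
    src = socket.inet_aton(scheduler_ip)
    dst = socket.inet_aton(server_ip)
    total_len = 20 + len(datagram)
    # version=4 IHL=5, TOS=0, total length, id=0, no flags/offset,
    # TTL=64, protocol=4 (IPIP), checksum placeholder=0, src, dst
    header = struct.pack("!BBHHHBBH4s4s",
                         0x45, 0, total_len, 0, 0, 64, 4, 0, src, dst)
    header = header[:10] + struct.pack("!H", ip_checksum(header)) + header[12:]
    return header + datagram

# The real server strips the outer header, sees the original datagram
# (still destined for the virtual IP), and replies directly to the client.
```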
Virtual Server via Direct Routing
Like VS/TUN, the VS/DR scheduler handles only the client-to-server half of each connection, and response data is returned to the client directly over a separate network route. This greatly improves the scalability of an LVS cluster system.
Compared with VS/TUN, this method avoids the overhead of IP tunneling, but it requires that the load scheduler and the real servers each have a NIC on the same physical network segment, and that the server's network device (or device alias) holding the virtual IP either does not answer ARP or can redirect packets to a local socket port.
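The same-segment requirement follows from how VS/DR forwards: it re-frames the unmodified IP datagram at layer 2 with the chosen real server's MAC address. A simplified sketch of that idea (Ethernet framing only; the MAC addresses are invented):

```python
def dr_forward(frame: bytes, scheduler_mac: bytes, real_server_mac: bytes) -> bytes:
    """Re-frame an incoming Ethernet frame so its destination MAC is the
    chosen real server's; the IP datagram inside (still addressed to
    the virtual IP) is left untouched. Because only the MAC changes,
    scheduler and real servers must share a physical segment."""
    ethertype = frame[12:14]   # 0x0800 for IPv4
    datagram = frame[14:]      # the unmodified IP datagram
    return real_server_mac + scheduler_mac + ethertype + datagram

# Hypothetical MAC addresses on the shared segment:
LB_MAC = bytes.fromhex("02004c4f0001")
RS1_MAC = bytes.fromhex("02004c4f0002")
```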
Comparison of advantages and disadvantages of three methods
The advantages and disadvantages of the three IP load-balancing technologies are summarized in the following table:

                  VS/NAT           VS/TUN        VS/DR
server            any              tunneling     non-ARP device
server network    private          LAN/WAN       LAN
server number     low (10~20)      high (100)    high (100)
server gateway    load balancer    own router    own router
Note: the estimates above of the maximum number of servers each method supports assume that the scheduler uses a 100M network adapter, that the scheduler's hardware configuration matches that of the back-end servers, and that the workload is ordinary web service. With a higher-end scheduler (such as a gigabit NIC and a faster processor), the number of servers it can schedule rises accordingly, and different applications shift the numbers as well. The estimates are therefore mainly a quantified comparison of the scalability of the three methods.
Reference: Linux Server Cluster System (III)