Detailed Installation of the Linux Load Balancing Software LVS



Installation (because the system was installed with the minimal package set, some required components must be installed first):

[root@lvs-gs001 ~]# uname -a

Linux lvs-gs001 2.6.32-358.el6.x86_64 #1 SMP Fri Feb 00:31:26 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
[root@lvs-gs001 ~]# yum install kernel kernel-devel gcc gcc-c++ wget -y
[root@lvs-gs001 ~]# reboot
[root@lvs-gs001 ~]# uname -r
2.6.32-431.23.3.el6.x86_64
[root@lvs-gs001 ~]# ln -s /usr/src/kernels/2.6.32-431.23.3.el6.x86_64/ /usr/src/linux
[root@lvs-gs001 ~]# mkdir -p /byrd/tools
[root@lvs-gs001 ~]# cd /byrd/tools/
[root@lvs-gs001 tools]# wget http://www.linuxvirtualserver.org/software/kernel-2.6/ipvsadm-1.24.tar.gz
[root@lvs-gs001 tools]# tar zxf ipvsadm-1.24.tar.gz
[root@lvs-gs001 tools]# cd ipvsadm-1.24
[root@lvs-gs001 ipvsadm-1.24]# make && make install
[root@lvs-gs001 ipvsadm-1.24]# echo $?
0
[root@lvs-gs001 ipvsadm-1.24]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
[root@lvs-gs001 ipvsadm-1.24]# lsmod | grep ip_vs
ip_vs                 125092  0
libcrc32c               1246  1 ip_vs
ipv6                  318183  ip_vs,ip6t_REJECT,nf_conntrack_ipv6,nf_defrag_ipv6
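
With the build complete, a quick sanity check (standard ipvsadm usage, shown here as an assumed continuation of the transcript rather than part of the original) is to load the ip_vs module explicitly and list the still-empty rule table:

[root@lvs-gs001 ipvsadm-1.24]# modprobe ip_vs    # running ipvsadm, as above, also auto-loads the module
[root@lvs-gs001 ipvsadm-1.24]# ipvsadm -Ln       # -L lists the rule table, -n keeps addresses numeric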

Characteristics of an LVS Cluster

3.1 IP Load Balancing and Load Scheduling Algorithms
1. IP Load Balancing Technology
Load balancing can be implemented in several ways: there are methods based on DNS round robin, methods based on client-side scheduling, application-layer methods that schedule on system load, and methods that schedule on the IP address. Of these, IP load balancing is the most efficient.
LVS implements IP load balancing through the IPVS module. IPVS is the core of the LVS cluster system; it is installed on the Director Server and creates a virtual IP address on it. Users must access the service through this virtual IP, which is generally called the VIP (Virtual IP) of LVS. An access request first reaches the load scheduler through the VIP, and the scheduler then picks a service node from the real-server list to respond to the user's request.
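
As a minimal sketch of the VIP concept (the address 10.0.0.10 and interface eth0 below are assumed for illustration, not taken from this article), the VIP is typically bound to the Director as an alias interface:

[root@lvs-gs001 ~]# ifconfig eth0:0 10.0.0.10 netmask 255.255.255.0 up
[root@lvs-gs001 ~]# ip addr show eth0    # the VIP now appears as a secondary address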
When a user's request arrives at the load scheduler, how the scheduler forwards the request to the real-server node that provides the service, and how that node returns data to the user, are the key techniques implemented by IPVS. IPVS implements three load balancing mechanisms: NAT, TUN, and DR. Details are as follows:
VS/NAT: Virtual Server via Network Address Translation
Here the virtual server is implemented with network address translation. When a user request reaches the scheduler, the scheduler rewrites the destination address of the request packet (the virtual IP address) to the address of the selected real server, rewrites the destination port to the corresponding port of that server, and then forwards the request to it. When the real server returns data to the user, the response must again pass through the load scheduler, which rewrites the packet's source address and source port back to the virtual IP address and corresponding port before sending the data on to the user, completing the whole scheduling process.
As can be seen, in NAT mode both request and response packets must be rewritten by the Director Server; as user requests grow, the scheduler's processing capacity becomes the bottleneck.
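
The following is a sketch of a NAT-mode service definition, assuming a hypothetical VIP of 10.0.0.10 and real servers 192.168.10.2 and 192.168.10.3 (none of these addresses appear in the original): -A adds a virtual service, -a adds a real server, and -m selects masquerading (NAT) forwarding. The Director must also be allowed to forward packets:

[root@lvs-gs001 ~]# echo 1 > /proc/sys/net/ipv4/ip_forward
[root@lvs-gs001 ~]# ipvsadm -A -t 10.0.0.10:80 -s rr
[root@lvs-gs001 ~]# ipvsadm -a -t 10.0.0.10:80 -r 192.168.10.2:80 -m
[root@lvs-gs001 ~]# ipvsadm -a -t 10.0.0.10:80 -r 192.168.10.3:80 -m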
VS/TUN: Virtual Server via IP Tunneling
Here the virtual server is implemented with IP tunneling. Connection scheduling and management are the same as in VS/NAT, but packet forwarding differs: in VS/TUN mode the scheduler forwards user requests to a real server through an IP tunnel, and that real server responds to the user directly, no longer passing through the front-end scheduler. In addition, there is no requirement that the real servers be on the same network segment as the Director Server; they can be geographically distributed. Because the scheduler in TUN mode handles only the users' request packets, the throughput of the cluster system is greatly improved.
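
A corresponding TUN-mode sketch under the same assumed addresses: the only change on the Director is the -i (IP-in-IP tunneling) flag; each real server must additionally bring up the VIP on its tunl0 interface so it can decapsulate and answer the tunneled requests:

[root@lvs-gs001 ~]# ipvsadm -A -t 10.0.0.10:80 -s rr
[root@lvs-gs001 ~]# ipvsadm -a -t 10.0.0.10:80 -r 192.168.10.2:80 -i
[root@lvs-gs001 ~]# ipvsadm -a -t 10.0.0.10:80 -r 192.168.10.3:80 -i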
VS/DR: Virtual Server via Direct Routing
Here the virtual server is implemented with direct routing. Connection scheduling and management are the same as in VS/NAT and VS/TUN, but packet forwarding again differs: VS/DR rewrites the MAC address of the request packet and sends it to the real server, and the real server responds directly to the client, eliminating the IP tunneling overhead of VS/TUN. This gives the best performance of the three load scheduling mechanisms, but it requires that the Director Server and the real servers each have a network card attached to the same physical network segment.
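
A DR-mode sketch with the same assumed addresses: -g selects gatewaying (direct routing) and -w sets a weight. On each real server the VIP must also be bound to the loopback interface and ARP for it suppressed (the usual arp_ignore=1 / arp_announce=2 sysctls); that real-server setup is standard DR practice, not something shown in the transcript above:

[root@lvs-gs001 ~]# ipvsadm -A -t 10.0.0.10:80 -s wlc
[root@lvs-gs001 ~]# ipvsadm -a -t 10.0.0.10:80 -r 192.168.10.2:80 -g -w 2
[root@lvs-gs001 ~]# ipvsadm -a -t 10.0.0.10:80 -r 192.168.10.3:80 -g -w 1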
2. Load Scheduling algorithm
As mentioned above, the load scheduler dynamically selects a real server to respond to each user request according to the servers' load. How is this dynamic selection achieved? Through the load scheduling algorithm. According to different network service requirements and server configurations, IPVS implements the following eight load scheduling algorithms. Here we describe the four most commonly used in detail; for the remaining four, please refer to other material.
• Round Robin Scheduling (Round Robin)
Round robin scheduling is also called 1:1 scheduling: the scheduler distributes external user requests in order, one by one, to each real server in the cluster. This algorithm treats every real server equally, regardless of the server's actual load or connection state.
• Weighted Round Robin Scheduling (Weighted Round Robin)
Weighted round robin dispatches access requests according to the different processing capacities of the real servers. Each real server can be assigned its own scheduling weight: a higher weight for a real server with better performance, a lower weight for one with less processing power. This ensures that the more capable servers handle more traffic, making full and reasonable use of server resources. The scheduler can also query the real servers' load automatically and adjust their weights dynamically.
• Least Connections Scheduling (Least Connections)
The least connections algorithm dynamically dispatches network requests to the server with the fewest established connections. If the real servers in the cluster have similar performance, this algorithm balances the load well.
• Weighted Least Connections Scheduling (Weighted Least Connections)
Weighted least connections is a superset of least connections. Each service node expresses its processing capacity with a weight, which the system administrator can set dynamically (the default weight is 1). Weighted least connections schedules new connection requests so that each node's connection count stays proportional to its weight.
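
As a sketch of how these four algorithms are chosen in practice (addresses assumed as in the earlier examples), ipvsadm's -s option takes rr, wrr, lc, or wlc, and -w sets a real server's weight; -E edits an existing virtual service and -e edits an existing real-server entry:

[root@lvs-gs001 ~]# ipvsadm -E -t 10.0.0.10:80 -s wrr                      # switch the service to weighted round robin
[root@lvs-gs001 ~]# ipvsadm -e -t 10.0.0.10:80 -r 192.168.10.2:80 -g -w 3  # triple this node's share of new connections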
The other four scheduling algorithms are: Locality-Based Least Connections, Locality-Based Least Connections with Replication, Destination Hashing, and Source Hashing. This article does not describe them further; if you want to learn more about these four scheduling strategies, you can visit the LVS Chinese site zh.linuxvirtualserver.org for more detailed information.
3.2 High Availability
LVS is kernel-level software and therefore has very high processing performance. A load balancing cluster built on the LVS framework has excellent processing capacity: the failure of any single service node does not affect normal use of the system as a whole, and the load is balanced reasonably, giving applications very high load-serving capacity; millions of concurrent connection requests can be supported. With 100 Mbit/s network cards and the VS/TUN or VS/DR scheduling technique, the throughput of the whole cluster system can reach 1 Gbit/s; with Gigabit NICs, the maximum throughput approaches 10 Gbit/s.
3.3 High Reliability
LVS load balancing software has been widely used in enterprises, schools, and other sectors, and many large, critical websites have adopted the LVS cluster software, so its reliability has been well confirmed in practice. Many LVS load balancing systems run for long periods and have never been restarted. All of this demonstrates the high stability and reliability of LVS.
3.4 Applicable environment
For the front-end Director Server, LVS currently supports only Linux and FreeBSD, but it supports most TCP and UDP protocols. Applications supported over TCP include HTTP, HTTPS, FTP, SMTP, POP3, IMAP4, proxy, LDAP, SSMTP, and so on. Applications supported over UDP include DNS, NTP, ICP, and video and audio streaming protocols.
LVS places no restrictions on the real servers' operating system: real servers can run any TCP/IP-capable operating system, including Linux, various flavors of Unix (such as FreeBSD, Sun Solaris, HP-UX), Mac OS, Windows, and so on.
3.5 Open Source Software
The LVS cluster software is free software released under the GPL (GNU General Public License), so users can obtain the source code and modify it to suit their own needs, but any modifications must also be released under the GPL.
