Common open source software for load balancing: Nginx, LVS, keepalived
Commercial hardware load-balancing devices: F5, NetScaler
1. LB, LVS introduction
An LB cluster is short for load balancing cluster.
LVS is an open source software project that implements load-balancing clusters.
The LVS architecture is logically divided into the scheduling layer (Director), the server cluster layer (real servers), and the shared storage layer.
LVS has three modes of operation (for DR mode, see http://os.51cto.com/art/201105/264303.htm; this article is very detailed: http://www.it165.net/admin/html/201401/2248.html):
NAT (the scheduler rewrites the request's destination IP, the VIP, to a real server's IP; returned packets also pass through the scheduler, which rewrites the source address back to the VIP)
TUN (the scheduler encapsulates the request packet in an IP tunnel and forwards it to a back-end real server; the real server returns data directly to the client without passing through the scheduler)
DR (the scheduler changes the destination MAC address of the request packet to the real server's MAC address; the response returns to the client without passing through the scheduler)
Three kinds of IP: DIP (Director IP), VIP (Virtual IP), RIP (Real IP). DIP and RIP are on the same network segment and are private IPs; the VIP is the externally facing service IP, and both the Director and the real servers are configured with the VIP.
LVS scheduling algorithms: Round Robin (RR), Weighted Round Robin (WRR), Least Connections (LC), Weighted Least Connections (WLC), and others (for the other algorithms, see http://www.aminglinux.com/bbs/thread-7407-1-1.html)
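For a concrete feel of how the algorithm is chosen (a hypothetical, standalone ipvsadm example; the VIP and RIPs here are made up and unrelated to the setup below): the -s flag picks the scheduler when the virtual service is defined, and -w sets each real server's weight:
# Hypothetical: weighted round robin, one server taking roughly twice the traffic of the other
ipvsadm -A -t 10.0.0.10:80 -s wrr
ipvsadm -a -t 10.0.0.10:80 -r 10.0.0.11:80 -g -w 2
ipvsadm -a -t 10.0.0.10:80 -r 10.0.0.12:80 -g -w 1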
2. LVS/NAT Configuration
Three servers: one as the Director, two as real servers.
The Director has a public IP (192.168.31.166) and an intranet IP (192.168.21.166). The two real servers have only intranet IPs (192.168.21.100 and 192.168.21.101), and their intranet gateway must be set to the Director's intranet IP (192.168.21.166).
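One way to set that gateway on each real server (a sketch assuming a CentOS-style system with net-tools; interface and file names may differ on yours):
# On each real server: send outbound traffic through the Director's intranet IP
route add default gw 192.168.21.166
# Persist it across reboots (CentOS ifcfg style):
echo "GATEWAY=192.168.21.166" >> /etc/sysconfig/network-scripts/ifcfg-eth0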
Install Nginx on the two real servers: yum install -y nginx
Install ipvsadm on the Director: yum install -y ipvsadm
On the Director, vim /usr/local/sbin/lvs_nat.sh // add:
#! /bin/bash
# On the Director, turn on IP forwarding:
echo 1 > /proc/sys/net/ipv4/ip_forward
# Turn off ICMP redirects
echo 0 > /proc/sys/net/ipv4/conf/all/send_redirects
echo 0 > /proc/sys/net/ipv4/conf/default/send_redirects
echo 0 > /proc/sys/net/ipv4/conf/eth0/send_redirects
echo 0 > /proc/sys/net/ipv4/conf/eth1/send_redirects
# Director NAT firewall
iptables -t nat -F
iptables -t nat -X
iptables -t nat -A POSTROUTING -s 192.168.21.0/24 -j MASQUERADE
# Director ipvsadm rules
IPVSADM='/sbin/ipvsadm'
$IPVSADM -C
$IPVSADM -A -t 192.168.31.166:80 -s lc -p 300    # -s lc: least connections; -p 300: 300s persistence
$IPVSADM -a -t 192.168.31.166:80 -r 192.168.21.100:80 -m -w 1    # -m: NAT mode, -w: weight
$IPVSADM -a -t 192.168.31.166:80 -r 192.168.21.101:80 -m -w 1
Run this script to complete the LVS/NAT configuration:
/bin/bash /usr/local/sbin/lvs_nat.sh
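To check that the rules took effect (a quick sanity check; the exact output layout varies by version):
ipvsadm -ln                        # numeric listing: should show the lc scheduler and both real servers in Masq mode
iptables -t nat -L POSTROUTING -n  # should show the MASQUERADE rule for 192.168.21.0/24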
To tell the two machines apart when testing through a browser, modify each Nginx default page:
RS1: echo "rs1rs1" > /usr/share/nginx/html/index.html
RS2: echo "rs2rs2" > /usr/share/nginx/html/index.html
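You can also test from a client with curl instead of a browser. Keep in mind the script used -p 300, so requests from one source IP stick to the same real server for 300 seconds; to see both pages, test from two different clients or drop the -p option:
# Repeated fetches from one client will show a single backend because of persistence
for i in $(seq 1 5); do curl -s http://192.168.31.166/; done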
3. LVS/DR Configuration
Three machines:
Director (eth0 192.168.31.166, VIP on eth0:0: 192.168.31.110)
Real Server 1 (eth0 RIP: 192.168.31.100, VIP on lo:0: 192.168.31.110)
Real Server 2 (eth0 RIP: 192.168.31.101, VIP on lo:0: 192.168.31.110)
On the Director, vim /usr/local/sbin/lvs_dr.sh // add:
#! /bin/bash
echo 1 > /proc/sys/net/ipv4/ip_forward
ipv=/sbin/ipvsadm
vip=192.168.31.110
rs1=192.168.31.100
rs2=192.168.31.101
ifconfig eth0:0 $vip broadcast $vip netmask 255.255.255.255 up
route add -host $vip dev eth0:0
$ipv -C
$ipv -A -t $vip:80 -s rr
$ipv -a -t $vip:80 -r $rs1:80 -g -w 1    # -g: direct routing (DR) mode
$ipv -a -t $vip:80 -r $rs2:80 -g -w 1
On the two RS, vim /usr/local/sbin/lvs_dr_rs.sh:
#! /bin/bash
vip=192.168.31.110
ifconfig lo:0 $vip broadcast $vip netmask 255.255.255.255 up
route add -host $vip dev lo:0
echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
For arp_ignore and arp_announce, see: http://www.cnblogs.com/lgfeng/archive/2012/10/16/2726308.html
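The echo lines in the script do not survive a reboot. A sketch of making the same ARP settings persistent through /etc/sysctl.conf (standard sysctl keys, same values as above):
cat >> /etc/sysctl.conf <<'EOF'
# Keep the RS from answering or advertising ARP for the VIP bound on lo
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
EOF
sysctl -p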
Then execute on the Director: bash /usr/local/sbin/lvs_dr.sh
And on the two RS: bash /usr/local/sbin/lvs_dr_rs.sh
Test access from a browser on a Windows client.
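Note that the test must come from a machine other than the Director, since the Director holds the VIP itself. To confirm both real servers are being used, watch the IPVS counters on the Director:
ipvsadm -ln    # rules: rr scheduler, two real servers in Route (-g) mode
ipvsadm -lnc   # current connection table; entries should alternate between the two RIPs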
4. LVS/DR + keepalived Configuration
Note: the Keepalived setup below conflicts with what we configured earlier, so if you already configured DR, first run the following on the Director:
ipvsadm -C
ifconfig eth0:0 down
Although the LVS setup above succeeded and load balancing worked, we found that when a real server stopped its web service, the Director would still forward requests to it, causing some requests to fail. So we need a mechanism to detect the state of the real servers, and that is what Keepalived provides. Besides checking RS state, it can also monitor a standby Director; in other words, Keepalived implements HA cluster functionality, which of course also requires a standby Director.
The backup Director also needs Keepalived installed:
yum install -y keepalived
After the installation, edit the configuration file.
vim /etc/keepalived/keepalived.conf // add the following:
vrrp_instance VI_1 {
    state MASTER    # BACKUP on the standby server
    interface eth0
    virtual_router_id 51
    priority 100    # 90 on the standby server
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.31.110
    }
}
virtual_server 192.168.31.110 80 {
    delay_loop 6            # (check real server status every 6 seconds)
    lb_algo wlc             # (LVS algorithm)
    lb_kind DR              # (direct routing)
    persistence_timeout 60  # (connections from the same IP go to the same real server within 60 seconds)
    protocol TCP            # (check real server status over TCP)
    real_server 192.168.31.100 80 {
        weight 100          # (weight)
        TCP_CHECK {
            connect_timeout 10    # (timeout after 10 seconds with no response)
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.31.101 80 {
        weight 100
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}
The configuration above is for the master Director. On the backup Director, only two lines need to change:
state MASTER -> state BACKUP
priority 100 -> priority 90
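A convenient way to produce the backup's file from the master's (a sketch; the backup Director's IP here is hypothetical, and it assumes the master file contains exactly the two lines above):
# On the master: rewrite the two differing lines and ship the result to the backup
sed -e 's/state MASTER/state BACKUP/' -e 's/priority 100/priority 90/' \
    /etc/keepalived/keepalived.conf > /tmp/keepalived.conf.backup
scp /tmp/keepalived.conf.backup 192.168.31.167:/etc/keepalived/keepalived.conf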
After configuring Keepalived, you need to turn on IP forwarding (on both master and backup):
echo 1 > /proc/sys/net/ipv4/ip_forward
Then execute the /usr/local/sbin/lvs_dr_rs.sh script on the two RS.
Finally, start the Keepalived service on both Directors (master first, then backup):
/etc/init.d/keepalived start
Note that starting the Keepalived service automatically creates the VIP and the ipvsadm rules; there is no need to run the /usr/local/sbin/lvs_dr.sh script mentioned above.
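To verify failover, check where the VIP lives, stop Keepalived on the master, and watch the VIP move (a sketch using the same init-script style as above):
ip addr show eth0              # on the master: eth0 should carry the VIP 192.168.31.110
/etc/init.d/keepalived stop    # simulate a master failure
# a few seconds later, on the backup:
ip addr show eth0              # the VIP should now be here; ipvsadm -ln shows the generated rules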
Nginx ip_hash (keeping requests from one client on the same backend):
upstream test {
    ip_hash;
    server 192.168.31.100;
    server 192.168.31.101;
}
server {
    listen 80;
    server_name bbs.aaa.cn;
    location / {
        proxy_pass http://test/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
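After adding this (inside the http block of nginx.conf), validate and reload, then test; curl's -H flag is one way to supply the Host header that server_name expects:
nginx -t && nginx -s reload     # validate the config, then reload
# ip_hash: all requests from one client IP should keep landing on the same upstream
curl -s -H "Host: bbs.aaa.cn" http://127.0.0.1/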
Extended Learning:
Haproxy+keepalived http://blog.csdn.net/xrt95050/article/details/40926255
Nginx, LVS, haproxy comparison http://www.csdn.net/article/2014-07-24/2820837
Custom scripts in keepalived (vrrp_script): http://www.linuxidc.com/Linux/2012-08/69383.htm and http://my.oschina.net/hncscwc/blog/158746
Nginx Proxy http://www.apelearn.com/bbs/thread-64-1-1.html
Nginx Long Connection http://www.apelearn.com/bbs/thread-6545-1-1.html
Nginx Algorithm Analysis http://blog.sina.com.cn/s/blog_72995dcc01016msi.html
Implementing LVS DR mode with only one public IP: http://storysky.blog.51cto.com/628458/338726
Comparison and analysis of Nginx load balancing and LVS load balancing: http://www.sudone.com/nginx/nginx_vs_lvs.html
2015-06-05 / 2015-06-08, LB load balancing cluster