Introduction: In Internet applications, sites face ever-higher demands on hardware performance, response speed, server stability, and data reliability. A single server cannot cope with a large volume of requests. Besides buying expensive mainframes and dedicated load-balancing appliances, enterprises have another option: building a load-balancing cluster, which combines multiple inexpensive commodity servers to provide the same service at a single address.
Next, we will cover the most common cluster technology in the enterprise: LVS (Linux Virtual Server).
1. Types of clusters
Load-balancing cluster: aims to improve system responsiveness, handle as many access requests as possible, reduce latency, and deliver high concurrency and high overall throughput. It works by distributing client requests across multiple server nodes, relieving the load on the system as a whole. Examples: DNS round robin, reverse proxies.
High-availability cluster: minimizes downtime caused by failures and keeps services running continuously. Examples: failover, active/standby pairs, multi-machine hot standby.
High-performance computing cluster: designed to increase computing speed by scaling out hardware resources and analysis capability. Examples: cloud computing, grid computing.
2. Tiered structure of load balancing
[Figure: http://s3.51cto.com/wyfs02/M00/71/8B/wKiom1XTehTCzFQIAAC6QZSNHGY923.jpg]
Tier 1: load scheduler: the single entry point of the cluster, which uses a VIP (virtual IP), also called the cluster IP. There are usually two scheduler servers, to prevent a single point of failure.
Tier 2: server pool: provides the cluster's actual application services (HTTP, FTP, and so on), handling only the client requests dispatched by the scheduler. When a node fails, the scheduler quarantines it; once the fault is resolved, the node is returned to the server pool.
Tier 3: shared storage: provides stable, consistent file access for all nodes in the server pool, ensuring cluster content stays consistent, i.e. every server serves the same files.
3. Load-balancing scheduling algorithms
As described above, the first-tier load scheduler distributes client requests to the node servers in the second-tier server pool. But how does it decide where each request goes? Below are the commonly used LVS scheduling algorithms.
- Round robin (rr): requests received by the load scheduler are assigned to the node servers in turn, treating every node equally.
- Weighted round robin (wrr): distributes incoming requests according to the processing capacity of each real server. The scheduler can automatically query the load of each node and adjust the weights accordingly.
- Least connections (lc): assigns new requests to the node server with the fewest established connections. This algorithm suits pools whose servers have roughly equal performance.
- Weighted least connections (wlc): when node performance varies greatly, the scheduler adjusts server weights automatically; servers with higher weights take a larger share of the client requests.
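As a hedged illustration of how an algorithm is selected (standard ipvsadm syntax; the VIP 192.168.1.222 and node addresses are taken from this article's test environment):

```shell
VIP=192.168.1.222               # the cluster VIP used in this article
# -s picks the scheduling algorithm when the virtual service is defined:
#   rr = round robin, wrr = weighted round robin,
#   lc = least connections, wlc = weighted least connections
ipvsadm -A -t $VIP:80 -s wlc
# Weights (used by wrr/wlc) are set per real server with -w:
ipvsadm -a -t $VIP:80 -r 192.168.1.130:80 -g -w 2
ipvsadm -a -t $VIP:80 -r 192.168.1.131:80 -g -w 1
```

With these weights, node .130 would receive roughly twice as many connections as node .131 under wlc.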
4. Three modes of load balancing
- Address translation (NAT mode): the load scheduler acts as the gateway for all node servers; it is both the entry point for client access and the exit for the servers' responses.
- IP tunneling (TUN mode): an open network architecture in which every server connects directly to the public network. The scheduler serves only as the client entry point; each node server responds to clients directly over the public network, without passing back through the scheduler.
- Direct routing (DR mode): similar to TUN, but the nodes are not scattered across different locations; they sit in the same physical network as the scheduler.
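As a sketch, the forwarding mode is chosen per real server when it is added (standard ipvsadm flags; addresses match this article's environment):

```shell
VIP=192.168.1.222
ipvsadm -A -t $VIP:80 -s rr
# Forwarding-mode flags for ipvsadm -a:
#   -m  NAT (masquerading)
#   -i  IP tunneling (TUN)
#   -g  direct routing (DR) -- the mode used in this article
ipvsadm -a -t $VIP:80 -r 192.168.1.130:80 -g
ipvsadm -a -t $VIP:80 -r 192.168.1.131:80 -g
```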
Project environment:
Master scheduler: 192.168.1.129
Backup scheduler: 192.168.1.128
Node one: 192.168.1.130
Node two: 192.168.1.131
[Figure: http://s3.51cto.com/wyfs02/M01/71/88/wKioL1XTfCGDxO1GAAC6Uj90haY537.jpg]
Tier 1: load scheduler setup
1. Install the ipvsadm management tool
ipvsadm is the LVS cluster management tool that runs on the load scheduler. Through the ip_vs kernel module it adds and removes server nodes and views cluster status.
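A hedged sketch of installing and sanity-checking the tool (a yum-based install is an assumption; compiling from source also works):

```shell
yum -y install ipvsadm   # assumed package-manager install
modprobe ip_vs           # load the LVS kernel module
cat /proc/net/ip_vs      # confirm the IP Virtual Server table is available
ipvsadm -C               # start from a clean virtual server table
```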
[Screenshot: http://s3.51cto.com/wyfs02/M02/71/88/wKioL1XTfCGyycTSAAA5cM7M5rU130.jpg]
[Screenshot: http://s3.51cto.com/wyfs02/M00/71/88/wKioL1XTfCKxK1yQAACWSw5Es1A950.jpg]
2. Install keepalived
Before installing keepalived, install its dependency packages. The LVS cluster also relies on ipvsadm, which was installed above.
[Screenshot: http://s3.51cto.com/wyfs02/M00/71/8B/wKiom1XTehazQg3EAABmoCdbyJI087.jpg]
[Screenshot: http://s3.51cto.com/wyfs02/M01/71/8B/wKiom1XTehbQUpfOAABhRbd9GnY596.jpg]
If keepalived is compiled from source (make, make install), add it as a system service afterwards.
[Screenshot: http://s3.51cto.com/wyfs02/M02/71/8B/wKiom1XTehawcT40AABDZRyJAUY484.jpg]
3. Master scheduler configuration. Correction: both the master and backup schedulers are on the 192.168.1.0 network segment; the screenshot is wrong.
[Screenshot: http://s3.51cto.com/wyfs02/M01/71/8B/wKiom1XTehejZZJgAAFAjkvo3iA633.jpg]
[Screenshot: http://s3.51cto.com/wyfs02/M00/71/8B/wKiom1XTehfzabU-AAFYeC8bGck947.jpg]
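Since the screenshots are unreadable here, the following is a minimal sketch of what a master keepalived.conf for this environment might look like (block names follow standard keepalived syntax; the router_id, interface name, timers, and password are assumptions):

```conf
global_defs {
    router_id LVS_MASTER        # assumed name; must differ on the backup
}

vrrp_instance VI_1 {
    state MASTER                # BACKUP on the standby scheduler
    interface eth0              # assumed interface name
    virtual_router_id 51
    priority 100                # use a lower value on the backup
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.222           # the cluster VIP
    }
}

virtual_server 192.168.1.222 80 {
    delay_loop 6
    lb_algo rr                  # scheduling algorithm
    lb_kind DR                  # direct-routing mode, as used in this article
    protocol TCP
    real_server 192.168.1.130 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
    real_server 192.168.1.131 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
}
```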
Start the keepalived service.
[Screenshot: http://s3.51cto.com/wyfs02/M01/71/8B/wKiom1XTehfjdGyHAABpFdf8nTw818.jpg]
Run `ip a` and check that the VIP is bound to the network interface, which indicates the configuration succeeded. The VIP does not appear on the backup scheduler.
[Screenshot: http://s3.51cto.com/wyfs02/M02/71/8B/wKiom1XTehjRekblAAD5aTKSwlI087.jpg]
4. Set up the backup scheduler
The backup scheduler's configuration is basically the same as the master's; only the router_id, state, and priority parameters need to be changed. Restart keepalived after the configuration is complete.
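As a hedged illustration, these are the lines that typically differ in the backup scheduler's keepalived.conf (parameter names from standard keepalived syntax; the values are assumptions):

```conf
global_defs {
    router_id LVS_BACKUP    # assumed name; must be unique per scheduler
}
vrrp_instance VI_1 {
    state BACKUP            # MASTER on the primary scheduler
    priority 90             # must be lower than the master's priority
    # all remaining settings identical to the master
}
```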
[Screenshot: http://s3.51cto.com/wyfs02/M00/71/88/wKioL1XTfCXh1gjKAAFSCIEyLzY365.jpg]
Tier 2: server pool setup (node one: 192.168.1.130, node two: 192.168.1.131)
5. Configure each node server
The node server configuration differs between DR and NAT depending on the cluster mode chosen. Taking DR as the example: besides configuring the VIP on the virtual interface lo:0, you must adjust the ARP response parameters of the /proc system and add a local route for the VIP.
vi /etc/rc.d/init.d/realserver   # edit and add the following code
#################################################
#!/bin/sh
# chkconfig: - 80 90
# description: realserver
# Binds/unbinds the LVS virtual server address (VIP) on the real server
HTTPD_VIP=192.168.1.222   # LVS virtual server (VIP); corrected to the 192.168.1.0 segment
. /etc/rc.d/init.d/functions
case "$1" in
start)
    ifconfig lo:0 $HTTPD_VIP netmask 255.255.255.255 broadcast $HTTPD_VIP
    /sbin/route add -host $HTTPD_VIP dev lo:0
    echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
    sysctl -p >/dev/null 2>&1
    echo "realserver started"
    ;;
stop)
    ifconfig lo:0 down
    route del -host $HTTPD_VIP >/dev/null 2>&1
    echo "0" > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "0" > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "0" > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo "0" > /proc/sys/net/ipv4/conf/all/arp_announce
    echo "realserver stopped"
    ;;
*)
    echo "Usage: $0 {start|stop}"
    exit 1
esac
exit 0
#################################################
chmod +x /etc/rc.d/init.d/realserver   # make the script executable
chkconfig realserver on                # enable it at boot
/etc/rc.d/init.d/realserver start      # start; pass "stop" to shut it down
5.2 Adjust the node server kernel parameters so the node ignores ARP requests for the LVS virtual server address (VIP)
vi /etc/sysctl.conf   # edit
net.ipv4.ip_forward = 1   # change 0 to 1 to enable forwarding
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
:wq!   # save and exit
/sbin/sysctl -p   # apply the configuration immediately
6. Verify the results
On the master/backup scheduler, check whether each node has joined the LVS table.
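The check can be sketched with ipvsadm (standard flags; addresses from this article's environment):

```shell
ipvsadm -ln   # list the virtual server table with numeric addresses
# Each real server (192.168.1.130 and 192.168.1.131) should appear
# under the 192.168.1.222:80 virtual service.
watch -n 1 ipvsadm -ln   # optionally watch the table refresh live
```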
[Screenshot: http://s3.51cto.com/wyfs02/M01/71/88/wKioL1XTfCWCEuCJAAE4DXr-s-k301.jpg]
Create a test web page under each node server's web root.
[Screenshot: http://s3.51cto.com/wyfs02/M02/71/88/wKioL1XTfCWBw1tIAABlnGbr9_s428.jpg]
[Screenshot: http://s3.51cto.com/wyfs02/M00/71/88/wKioL1XTfCaBMs8GAAC22nSEKLo922.jpg]
When the 192.168.1.130 node server is shut down, the site content switches over to 192.168.1.131, which shows the LVS setup works. Keepalived is tested later.
[Screenshot: http://s3.51cto.com/wyfs02/M01/71/88/wKioL1XTfCaSpHG_AABTTJMh7SU630.jpg]
Now shut down the 192.168.1.130 node and visit 192.168.1.222 again.
[Screenshot: http://s3.51cto.com/wyfs02/M02/71/88/wKioL1XTfCbTSrTMAABmdNLekoQ368.jpg]
At this point, view the node servers in LVS: the failed machine 192.168.1.130 has been quarantined, leaving only the healthy server 192.168.1.131. When 192.168.1.130 recovers, it is automatically added back into the LVS server pool.
[Screenshot: http://s3.51cto.com/wyfs02/M00/71/88/wKioL1XTfCeR-cmiAADqde58Z1I797.jpg]
Next, test keepalived to see whether the high-availability setup succeeded.
First shut down the master scheduler, 192.168.1.129. If 192.168.1.222 is still accessible, the keepalived failover works. If not, recheck all the steps above.
7. Shared storage: this is straightforward with NFS; you can look up the details yourself. I won't cover it here.
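For completeness, a minimal hedged sketch of an NFS share for the two nodes (the storage server address 192.168.1.100, the export path, and the mount options are all assumptions):

```shell
# On the storage server (192.168.1.100, hypothetical): export a directory
# to the node subnet
echo "/var/www/html 192.168.1.0/24(rw,sync,no_root_squash)" >> /etc/exports
exportfs -rv       # re-export all directories listed in /etc/exports
service nfs start

# On each node server: mount the share as the web root so both nodes
# serve identical content
mount -t nfs 192.168.1.100:/var/www/html /var/www/html
```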
This article is from the "Desert Camel" blog; please contact the author before reproducing it.
"My technology, my say." Load-balancing cluster: keepalived + LVS