I. Overview of LVS
Linux Virtual Server (LVS) is a suite of IP-based load-balancing cluster software. LVS runs on a pair of similarly configured machines:
one acting as the active LVS router and the other as the backup LVS router. The active
LVS router serves two roles:
* Balancing the load across the real servers.
* Checking whether the services provided by the real servers are healthy.
The backup LVS router monitors the active LVS router and takes over in case the active LVS router fails.
The failover mechanism works as follows:
on each LVS router, the pulse daemon listens on the public network interface to check whether the active LVS router is healthy. On the active LVS router, pulse starts the lvs daemon
and answers the heartbeat from the backup LVS router. The lvs daemon calls the ipvsadm tool to configure and maintain the IPVS routing table, and starts a nanny process for each
virtual service on the real servers. Each nanny process checks the status of its virtual service on a real server
and reports failures to the lvs daemon. When a fault is found, the lvs daemon tells ipvsadm to remove that node from the IPVS routing table.
If the backup LVS router receives no response from the active LVS router, it calls send_arp to reassign the virtual IP address to the public interface of the backup
LVS router, sends a command over both the public network interface and the LAN interface to shut down the lvs daemon
on the active LVS router, and starts its own lvs daemon to dispatch client requests.
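At the ipvsadm level, this table maintenance amounts to adding and removing entries. A minimal sketch, with placeholder addresses (VIP 172.25.0.100, real server 172.25.0.3) that do not come from this article:
# ipvsadm -A -t 172.25.0.100:80 -s rr                  (create the virtual service, round-robin)
# ipvsadm -a -t 172.25.0.100:80 -r 172.25.0.3:80 -g    (add a real server)
# ipvsadm -d -t 172.25.0.100:80 -r 172.25.0.3:80       (what removing a failed node looks like)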
The structure above is a two-tier LVS layout; for a three-tier layout, the data can be placed in a shared file system such as GFS for all real servers to read and write simultaneously. For IP load balancing, LVS offers the VS/NAT,
VS/TUN, VS/DR, and VS/FULLNAT modes (a command-level sketch follows the mode list below):
* VS/NAT mode: when a client accesses the service through the virtual IP address, the request packet arrives at the scheduler. Based on its connection-scheduling algorithm,
the scheduler selects a server from the set of real servers, rewrites the destination address of the packet (the virtual IP address) to the address of the selected server, and rewrites the destination port
to the corresponding port of the selected server; finally the modified packet is sent to the selected server, and the scheduler records the connection in its connection hash table.
When the next packet of this connection arrives, the address and port of the originally selected server are obtained from the connection hash table, the same rewrite is performed,
and the packet is sent to the same server. When a response packet from the real server passes back through the scheduler, the scheduler rewrites the source address and source port of the packet to the virtual
IP address and the corresponding port, and then sends the packet to the client. When using VS/NAT, if a large volume of response data has to pass through the scheduler, the scheduler becomes the bottleneck
of the whole cluster.
[Figure: lvs_nat.png]
* VS/TUN mode: connection scheduling and management in VS/TUN are the same as in VS/NAT, but the packet-forwarding method differs. According to the load of each server, the scheduler dynamically
selects a server, encapsulates the request packet inside another IP packet, and forwards the encapsulated IP packet to the selected server. When the server receives the packet, it first decapsulates it
to obtain the original packet whose destination address is the VIP. The server finds that the VIP address is configured on its local IP tunnel device, so it processes the request and then, according to its routing table, returns the response directly
to the client.
[Figure: VS/TUN packet flow]
* VS/DR mode: the scheduler and the server group must each have a network interface physically attached to the same non-segmented LAN, for example through a switch or a high-speed hub. The VIP address is
shared by the scheduler and the server group: the VIP address configured on the scheduler is externally visible and is used to receive request packets for the virtual service, while all servers configure the VIP address on a non-ARP network device, solely to accept
network requests destined for the VIP. In VS/DR, the scheduler dynamically chooses a server based on the load of each server; it neither modifies nor encapsulates the IP packet, but instead rewrites the MAC address of the data frame to the MAC
address of the selected server and then sends the modified frame on the LAN it shares with the server group. Because the destination MAC address of the frame is that of the selected server, the server is certain to receive the frame and can extract the IP packet from it.
When the server finds that the destination address of the IP packet is configured on a local network device, it processes the packet and then, according to its routing table, returns the response directly to the client.
[Figure: VS/DR packet flow]
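As referenced above, here is a command-level sketch of how the three modes differ; all addresses are placeholders (VIP 172.25.0.100, real server 172.25.0.3), not values from this article. On the scheduler, the forwarding mode is selected per real server by the ipvsadm flag; use exactly one of:
# ipvsadm -a -t 172.25.0.100:80 -r 172.25.0.3:80 -m    (VS/NAT: masquerading)
# ipvsadm -a -t 172.25.0.100:80 -r 172.25.0.3:80 -i    (VS/TUN: IP-in-IP tunneling)
# ipvsadm -a -t 172.25.0.100:80 -r 172.25.0.3:80 -g    (VS/DR: direct routing, the default)
With VS/NAT, the real servers must route replies back through the scheduler, typically by using it as their default gateway. On a VS/TUN real server, the VIP goes on the tunnel device:
# modprobe ipip
# ip addr add 172.25.0.100/32 dev tunl0
# ip link set tunl0 up
On a VS/DR real server, the VIP goes on a non-ARP device such as the loopback, and ARP replies for it are suppressed:
# ip addr add 172.25.0.100/32 dev lo
# echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
# echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce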
Configuration case:
LVS is often used together with the keepalived component; this case uses VS/DR mode. The two chosen real server (RS) hostnames are server3.example.com
and server4.example.com.
Environment and components: RedHat 6.5, keepalived-1.2.20.tar.gz, libnfnetlink-devel-1.0.0-1.el6.x86_64.rpm
1. Install the required components: libnfnetlink-devel-1.0.0-1.el6.x86_64.rpm, gcc, and
openssl-devel, as sketched below.
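A plausible install sequence on RHEL 6, assuming the rpm file sits in the current directory and a yum repository is configured:
# rpm -ivh libnfnetlink-devel-1.0.0-1.el6.x86_64.rpm
# yum install -y gcc openssl-devel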
2. Unpack the source:
# tar zxf keepalived-1.2.20.tar.gz
# cd keepalived-1.2.20/
3. Compile and install (the links in step 4 assume the /usr/local/keepalived prefix):
# ./configure --prefix=/usr/local/keepalived && make && make install
At a minimum, ensure that the following conditions are met:
[Figure: keepalived compilation conditions]
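The summary printed at the end of ./configure typically contains lines like the following (exact wording varies by version); the key requirement for LVS is that the IPVS framework is enabled:
Use IPVS Framework       : Yes
IPVS sync daemon support : Yes
Use VRRP Framework       : Yes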
4. Establish the related links:
# cd /usr/local/
# ln -s /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
# ln -s /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
# ln -s /usr/local/keepalived/etc/keepalived /etc/
# ln -s /usr/local/keepalived/sbin/keepalived /sbin/
5. Synchronize the installed directory to the other host:
# scp -r /usr/local/keepalived/ [email protected]:/usr/local/
6. Edit /etc/keepalived/keepalived.conf:
# vi /etc/keepalived/keepalived.conf
[Figure: keepalived.conf]
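The configuration itself survives only as a screenshot, so the following is a minimal sketch of a VS/DR keepalived.conf consistent with this setup; the VIP 172.25.0.100 and the real-server IPs are placeholders, not values from the figure:

vrrp_instance VI_1 {
    state MASTER                  # BACKUP on the other router
    interface eth0
    virtual_router_id 51
    priority 100                  # use a lower value on the backup
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.0.100
    }
}

virtual_server 172.25.0.100 80 {
    delay_loop 6
    lb_algo rr                    # round-robin scheduling
    lb_kind DR                    # direct-routing (VS/DR) mode
    protocol TCP

    real_server 172.25.0.3 80 {   # server3.example.com (placeholder IP)
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
    real_server 172.25.0.4 80 {   # server4.example.com (placeholder IP)
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
}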
7. Because the network structure is symmetrical, the keepalived.conf file under /etc/keepalived/ on the other host is essentially the same; for a master/backup pair, typically only the VRRP state and priority differ. You can copy the configuration file to the remote host and adjust it there:
# scp /etc/keepalived/keepalived.conf [email protected]:/etc/keepalived/
8. Check for errors, then start the keepalived service:
# /etc/init.d/keepalived start
Perform the same operation on the other host.
9. Observe the keepalived status on the two hosts, mainly by monitoring the logs to check that the master and backup sides are behaving normally.
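Beyond the logs, two quick checks (not from the original article) confirm the state: the IPVS table should list both real servers, and the VIP should be bound only on the master's interface:
# ipvsadm -Ln        (list the IPVS table with numeric addresses)
# ip addr show eth0  (the VIP should appear here on the master only)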
10. Test: stop keepalived on one of the hosts and see whether the other one takes over the master role normally, watching the log:
# tail -f /var/log/messages
[Figure: role conversion in the log]
If the role conversion happens normally, the failover function is working.
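A sketch of the failover test; the exact log wording varies by keepalived version, but the backup typically logs a VRRP transition such as "Entering MASTER STATE". On the current master, stop the service:
# /etc/init.d/keepalived stop
On the backup, watch the log for the transition and confirm that the VIP has moved:
# tail -f /var/log/messages
# ip addr show eth0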