Software load balancing is generally implemented in two ways: load balancing built on the operating system itself, and load balancing implemented by a third-party application. LVS is a software load balancer based on the Linux operating system, while HAProxy is an open-source software load balancer implemented as a third-party application.
Compared with LVS, HAProxy is much simpler to use, and its feature set is very rich. HAProxy currently supports two main proxy modes: TCP, i.e. layer 4 (mostly used for mail servers, internal protocol communication servers, and so on), and HTTP, i.e. layer 7. In layer-4 mode, HAProxy only forwards bidirectional traffic between the client and the server. In layer-7 mode, HAProxy analyzes the protocol and can control it according to specific rules by allowing, rejecting, switching, adding, modifying, or deleting elements of the request or the response.
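As a rough illustration of the two modes (the service names, addresses, and ports below are invented for this example and are not from the original article), a minimal HAProxy configuration might look something like this:

    # layer-4 (TCP) proxying, e.g. for a mail service: traffic is forwarded as-is
    listen smtp_cluster
        bind 0.0.0.0:25
        mode tcp
        balance roundrobin
        server mail1 10.0.0.11:25 check
        server mail2 10.0.0.12:25 check

    # layer-7 (HTTP) proxying: HAProxy can inspect and act on the HTTP protocol
    listen web_cluster
        bind 0.0.0.0:80
        mode http
        balance roundrobin
        # mark a server as down if this health-check request fails
        option httpchk GET /index.html
        server web1 10.0.0.21:80 check
        server web2 10.0.0.22:80 check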
I use HAProxy mainly because it has the following advantages, which I summarize here:
First, it is free and open source, and its stability is very good. This can be seen from several of my small projects: a single HAProxy instance runs very well, and its stability is comparable to LVS.
Second, according to the official documentation ("New benchmark of HAProxy at 10 Gbps using Myricom's 10GbE NICs (Myri-10G PCI-Express)"), HAProxy can fill a full 10 Gbps link, which is quite impressive for a software load balancer.
Third, HAProxy can act as a load balancer for MySQL, mail, or other non-web services; we often use it to balance MySQL read traffic.
Fourth, it comes with a powerful page for monitoring server status (see the configuration sketch after this list); in production we combine it with Nagios for email or SMS alerts, which is one of the reasons I like it so much.
Fifth, HAProxy supports virtual hosts.
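Regarding the status page mentioned in point four, enabling HAProxy's built-in statistics report typically looks roughly like the following; the port, URI, and credentials here are placeholders, not values from this article:

    listen stats_page
        bind 0.0.0.0:8888
        mode http
        # turn on the built-in statistics report
        stats enable
        # the page is then served at http://<haproxy-address>:8888/stats
        stats uri /stats
        # basic-auth credentials protecting the page
        stats auth admin:admin
        # auto-refresh interval
        stats refresh 5s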
===================================================================================
When we talk about load balancing with a reverse proxy server, we usually think of Nginx's upstream configuration; HAProxy's load balancing in fact belongs to the same category. So let's look at the configuration process for it: first a brief introduction to HAProxy, then the installation and configuration steps.
Haproxy Introduction
HAProxy is a reverse proxy server that supports dual-machine hot standby and virtual hosts. Its configuration is simple, and it has a very good server health-check feature: when a back-end server it proxies fails, HAProxy automatically removes that server, and automatically adds it back once it recovers. The 1.3 series introduced the frontend and backend sections; a frontend matches rules against any content of the HTTP request headers and then directs the request to the relevant backend.
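A minimal sketch of this frontend/backend mechanism (the host names and back-end addresses are invented for illustration):

    frontend http_in
        bind *:80
        mode http
        # match on the Host request header and route to the relevant backend
        acl is_static hdr(host) -i static.example.com
        use_backend static_servers if is_static
        default_backend app_servers

    backend static_servers
        mode http
        balance roundrobin
        server static1 10.0.0.31:80 check

    backend app_servers
        mode http
        balance roundrobin
        server app1 10.0.0.41:8080 check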
http://blog.liuts.com/post/223/ (building a layer-4 load balancer)
http://rfyimcool.blog.51cto.com/1030776/413187 (building a layer-7 load balancer)
===================================================================================
keepalived Introduction
http://www.keepalived.org
Keepalived is software that acts like a layer 3, 4 & 5 switch, which is what we usually call switching at layers 3, 4, and 5. Keepalived's job is to detect the state of web servers: if a web server freezes or fails, Keepalived detects it and removes the failed server from the system; when the server works properly again, Keepalived automatically adds it back to the server farm. All of this is done automatically, without manual intervention; the only manual work needed is to repair the failed web server.
Similar HA tools include Heartbeat and DRBD, but Heartbeat and DRBD are more complex to configure.
Keepalived working principle
Keepalived provides VRRP and health-check functions. It can also be used only for its VRRP virtual-routing function, providing a floating VIP between two machines, which makes it easy to implement a dual-machine hot-standby, high-availability setup.
Keepalived is software that acts like a layer 3, 4 & 5 switch; its job is to detect the state of a web server. The layer 3, 4, and 5 checks work at the IP layer, the TCP layer, and the application layer of the TCP/IP protocol stack, respectively, as follows:
Layer 3: When Keepalived works in layer-3 mode, it periodically sends an ICMP packet (the same mechanism as the usual ping program) to the servers in the server farm. If it finds that a server's IP address is not responding, Keepalived reports the server as failed and removes it from the server farm. A typical example of this situation is a server that has been shut down abnormally. Layer 3 uses whether the server's IP address is reachable as the criterion for whether the server is working properly. This is the mode used in this article.
Layer 4: If you understand layer 3, layer 4 is easy. Layer 4 mainly decides whether a server is working properly based on the state of a TCP port. For example, if a web server's service port is 80 and Keepalived detects that port 80 is not listening, it removes that server from the server farm.
Layer 5: Layer 5 works at the application layer. It is more complex than layers 3 and 4 and uses more network bandwidth. Keepalived checks, according to the user's settings, whether the server's application is responding correctly; if the response does not match the user's settings, Keepalived removes the server from the server farm. A keepalived.conf sketch of layer-4 and layer-5 checks is shown below.
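The following keepalived.conf fragment is a rough sketch of what layer-4 and layer-5 checks look like; the addresses, paths, and timeouts are placeholders, and the exact directives available may vary with the Keepalived version:

    virtual_server 192.168.0.100 80 {
        # seconds between health-check rounds
        delay_loop 6
        lb_algo rr
        lb_kind DR
        protocol TCP

        real_server 192.168.0.11 80 {
            weight 1
            # layer-4 check: is TCP port 80 accepting connections?
            TCP_CHECK {
                connect_port 80
                connect_timeout 3
            }
        }

        real_server 192.168.0.12 80 {
            weight 1
            # layer-5 check: does the application itself answer correctly?
            HTTP_GET {
                url {
                    path /health.html
                    status_code 200
                }
                connect_timeout 3
                nb_get_retry 3
                delay_before_retry 2
            }
        }
    }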
The VIP is the virtual IP address, which is attached to the host's network interface (that is, a virtual address on the host's NIC); it still occupies one IP address of the network segment.
The role of Keepalived
As your website's traffic grows, the pressure on your servers keeps increasing and a load-balancing solution becomes necessary. Commercial hardware such as F5 is too expensive; how can a startup internet company effectively save costs and avoid unnecessary waste, while achieving the same high performance and high availability as commercial hardware, with a solution that can be scaled out later? The answer is yes, it can be done: we use an architecture built entirely on open-source software, LVS + Keepalived, to provide load-balanced and highly available servers.
lvs+keepalived Introduction
LVS
LVS is short for Linux Virtual Server, a virtualized server cluster system. The project was started in May 1998 by Dr. Zhang Wensong and is one of the earliest free-software projects in China. It currently provides three IP load-balancing techniques (VS/NAT, VS/TUN, and VS/DR) and eight scheduling algorithms (rr, wrr, lc, wlc, lblc, lblcr, dh, sh).
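For illustration only (the VIP and real-server addresses are placeholders), choosing one of the forwarding techniques and one of the scheduling algorithms with the ipvsadm tool looks roughly like this:

    # create a virtual service on the VIP, using round-robin (rr) scheduling
    ipvsadm -A -t 192.168.0.100:80 -s rr

    # attach two real servers in direct-routing (VS/DR) mode;
    # -m would select VS/NAT and -i would select VS/TUN instead of -g
    ipvsadm -a -t 192.168.0.100:80 -r 192.168.0.11:80 -g
    ipvsadm -a -t 192.168.0.100:80 -r 192.168.0.12:80 -g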
Keepalived
Keepalived is used here mainly for health checking of the real servers and for failover between the load-balancer host and the backup host. As described in the Keepalived introduction above, it detects the state of each web server, removes a server from the farm when it freezes or fails, automatically adds it back once it works properly again, and requires manual intervention only to repair the failed server.
===================================================================================
keepalived Introduction
Keepalived is a high-availability solution based on the VRRP protocol that can be used for web services to avoid single points of failure. Such a setup has at least two servers running Keepalived: one master server and one backup (standby) server. To the outside world they appear as a single virtual IP. The master server sends specific messages to the backup server; when the backup server stops receiving these messages, i.e. when the master server goes down, the backup server takes over the virtual IP and continues to provide the service, thus guaranteeing high availability.
HaProxy + keepalived for high-availability load balancing
Keepalived is a complete implementation of VRRP, so before introducing Keepalived, let's first look at how VRRP works.
Introduction to the VRRP protocol
In a real-world network, two hosts that need to communicate usually have no direct physical connection. In that situation, how does a host choose the next-hop route to the destination host? There are two common ways to solve this problem:
Run a dynamic routing protocol (RIP, OSPF, etc.) on the host
Configure static routes on the host
Obviously, running a dynamic routing protocol on every host is impractical because of management and maintenance cost and limited support, so configuring static routes (usually a default gateway) is the popular choice. But the router acting as the default gateway then becomes a single point of failure.
The purpose of VRRP is to solve the single-point-of-failure problem of static routing.
VRRP uses an election protocol to dynamically assign the routing task to one of the VRRP routers in a virtual router on the LAN.
Working mechanism
A VRRP virtual router contains multiple physical VRRP routers, but they do not all work at the same time: one router, called the Master, handles the routing work, while the others act as Backups. The Master is not fixed; VRRP lets each VRRP router take part in an election, and the winner becomes the Master. The Master has certain privileges, such as owning the virtual router's IP address (which our hosts use as their static default gateway); it is responsible for forwarding packets sent to the gateway address and for answering ARP requests.
VRRP implements the virtual router through this election protocol, and all protocol messages are sent as IP multicast packets (multicast address 224.0.0.18). A virtual router is identified by a VRID (range 0-255) and a set of IP addresses, and presents a well-known MAC address to the outside. So within a virtual router, no matter which node is the Master, the outside world always sees the same MAC and IP (called the VIP). Client hosts do not need to change their routing configuration when the Master changes; for them, the master/backup switchover is transparent.
Within a virtual router, only the VRRP router acting as Master continuously sends VRRP advertisement messages, and a Backup will not preempt the Master unless it has a higher priority. When the Master becomes unavailable (the Backups stop receiving advertisements), the Backup with the highest priority takes over as Master. This switchover is very fast (< 1 s), which ensures continuity of service.
For security reasons, VRRP messages support authentication (the authentication mechanisms are described in more detail below).
==========================================
VRRP Introduction
With the rapid development of the Internet, network-based applications keep increasing, which places ever higher demands on network reliability. Replacing all network equipment would certainly improve reliability, but in terms of protecting existing investment, inexpensive redundancy can strike a balance between reliability and cost.
The Virtual Router Redundancy Protocol (VRRP) is a good solution. In this protocol, the default gateway of terminal IP devices on a shared multi-access medium (such as Ethernet) is backed up redundantly, so that when one routing device goes down, a backup routing device takes over the forwarding work in time, providing users with a transparent switchover and improving the quality of network service.
I. Protocol overview
In a network based on the TCP/IP protocol suite, routes must be specified to ensure communication between devices that are not directly physically connected. Two methods of specifying routes are commonly used: dynamic learning through routing protocols (for example, the interior routing protocols RIP and OSPF) and static configuration. Running a dynamic routing protocol on every end device is unrealistic: most client operating system platforms do not support dynamic routing protocols, and even where they are supported, they are limited by management overhead, convergence, security, and other issues. Therefore, terminal IP devices are usually configured with static routes, generally by specifying one or more default gateways. Static routing simplifies network management and reduces the communication overhead of end devices, but it has one drawback: if the router acting as the default gateway fails, all traffic from hosts that use that gateway as their next hop is interrupted, and even if multiple default gateways are configured, hosts cannot switch to a new gateway without being restarted. The Virtual Router Redundancy Protocol (VRRP) is a good way to avoid the defects of statically specified gateways.
There are two important pairs of concepts in the VRRP protocol: VRRP router versus virtual router, and master router versus backup router. A VRRP router is a router running VRRP, a physical entity; a virtual router is a logical concept created by the VRRP protocol. A group of VRRP routers cooperate to form a virtual router, which appears to the outside as a logical router with a unique fixed IP address and MAC address. The routers in the same VRRP group play two mutually exclusive roles: master router and backup router. A VRRP group has only one router in the master role and one or more routers in the backup role. The VRRP protocol uses an election policy to select one router from the group as the master, responsible for answering ARP requests and forwarding IP packets, while the other routers in the group stand by as backups. If the master router fails for some reason, a backup router is promoted to master after a delay of a few seconds. Because this switchover is very fast and does not change the IP address or MAC address, it is transparent to the end systems.
II. Working principle
A VRRP router has a unique identifier, the VRID, with a range of 0-255. The virtual router presents itself with a unique virtual MAC address in the format 00-00-5E-00-01-[VRID] (for example, VRID 51 corresponds to 00-00-5E-00-01-33). The master router answers ARP requests with this MAC address, so no matter how the roles switch, the end devices always see the same IP and MAC address, which reduces the impact of a switchover on the end devices.
There is only one type of VRRP control message: the VRRP advertisement. It is encapsulated in an IP multicast packet with group address 224.0.0.18, and its propagation is limited to the same LAN, which ensures that VRIDs can be reused in different networks. To reduce bandwidth consumption, only the master router periodically sends VRRP advertisements. A backup router starts a new round of VRRP election if it receives no advertisement for three consecutive advertisement intervals, or if it receives an advertisement with priority 0.
In a VRRP router group, the master is elected by priority; the priority range in the VRRP protocol is 0-255. If a VRRP router's IP address is the same as the virtual router's interface IP address, that router is called the IP address owner of the VRRP group; the IP address owner automatically has the highest priority, 255. Priority 0 is normally used when the IP address owner voluntarily gives up the master role. The configurable priority range is 1-254, and priorities can be set according to link speed and cost, router performance and reliability, and other management policies. In the master election, the router with the higher priority wins, so if there is an IP address owner in the VRRP group, it always acts as the master. For candidate routers with equal priority, the election is decided by IP address order. VRRP also provides a preemption policy: if preemption is configured, a higher-priority backup router will take over from the current lower-priority master and become the new master.
To ensure the security of the VRRP protocol, two authentication mechanisms are provided: plain-text authentication and IP-header authentication. Plain-text authentication requires a router joining a VRRP group to provide the same VRID and the same plain-text password; it is suitable for avoiding configuration mistakes on a LAN but cannot prevent the password from being obtained by network sniffing. IP-header authentication provides a higher level of security and can prevent attacks such as message replay and modification.
III. Application example
The most typical VRRP application: RTA and RTB form a VRRP router group. Assuming RTB has more processing capacity than RTA, RTB is configured as the IP address owner, and the default gateway of H1, H2, and H3 is set to RTB's address. RTB becomes the master router, responsible for ICMP redirects, ARP responses, and forwarding of IP packets; once RTB fails, RTA takes over as master, ensuring a transparent switchover for the clients.
In the application above, RTA is online only as a backup and does not take part in forwarding, leaving router RTA and link L1 idle. With a reasonable network design, you can achieve both backup and load sharing: let RTA and RTB each belong to two VRRP groups that back each other up, with RTA as the IP address owner of group 1 and RTB as the IP address owner of group 2. Set the default gateway of H1 to RTA, and the default gateway of H2 and H3 to RTB. In this way both the device load and the network traffic are shared, while network reliability is improved.
The working mechanism of VRRP is very similar to Cisco's HSRP (Hot Standby Router Protocol). The main difference is that in Cisco's HSRP you must configure a separate IP address as the virtual router's address, and this address cannot be the interface address of any member of the group.
Using VRRP does not require changing the existing network structure, protects current investment to the greatest extent, adds only minimal management cost, and greatly improves network reliability, which gives it significant practical value.
===================================================================================
A simple application of Keepalived: managing the floating VIP
From: http://www.1.qixoo.com/killkill/archive/2010/12/31/1922360.html
A floating VIP can solve a lot of problems for us. I previously tried to implement it by using ifup/ifdown to bring the NIC up and down, but that approach has a small problem: every time the VIP moves, it takes tens of seconds to take effect, which feels too long, and it needs some extra scripting logic to work well. Is there a better way? Of course: the protagonist of this article, Keepalived.
Installation is simple:
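The original screenshot of the installation step is not reproduced here; on a RHEL/CentOS-style system the installation might simply be (this command is an assumption, not taken from the screenshot):

    yum install -y keepalived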
Just modify the configuration file /etc/keepalived/keepalived.conf and it is ready to use. The following is my environment: 192.168.10.141 and 192.168.10.142 are two VIPs that can float between the two servers.
Configuration of the host:
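The screenshot of the host configuration is missing; the following is a hedged reconstruction using the two VIPs mentioned above (the interface name, virtual_router_id, priority, and password are assumptions):

    # /etc/keepalived/keepalived.conf on the host
    vrrp_instance VI_1 {
        interface eth0
        virtual_router_id 51
        # higher priority than the standby machine
        priority 150
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        virtual_ipaddress {
            192.168.10.141
            192.168.10.142
        }
    }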
Configuration of the Standby machine:
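The standby machine's screenshot is missing as well; per the note below, its file would be identical apart from a lower priority, for example:

        # the only line that differs from the host's configuration
        priority 100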
At first glance the host's and the standby's configuration files look the same; look closely at the priority value. Use the following commands to add keepalived as a Linux service:
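The commands in the missing screenshot were presumably along these lines (an assumption):

    # register the init script and enable it at boot
    chkconfig --add keepalived
    chkconfig keepalived on
    service keepalived start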
By starting and stopping the keepalived service you can observe the VIP floating between the two machines. As for why the VIP switchover takes effect so quickly, that still needs further study.
===================================================================================
Haproxy+keepalived for high-availability load balancing
My environment:
HAProxy + Keepalived master: 192.168.1.192
HAProxy + Keepalived backup: 192.168.1.193
VIP: 192.168.1.200
Web: 192.168.1.187:80 and 192.168.1.187:8000
One: Installation, on 192.168.1.192:
Installation of keepalived:
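The installation screenshots are missing; building Keepalived from source typically looks roughly like the following (the version number and paths are assumptions, not taken from the screenshots):

    tar zxvf keepalived-1.1.20.tar.gz
    cd keepalived-1.1.20
    ./configure --prefix=/usr/local/keepalived
    make && make install

    # copy the init script, sysconfig file, configuration and binary to the standard locations
    cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
    cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
    mkdir -p /etc/keepalived
    cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
    cp /usr/local/keepalived/sbin/keepalived /usr/sbin/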
HAProxy installation (on both master and standby):
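The HAProxy screenshots are also missing. A typical source build (the version, build target, and prefix are assumptions), plus a configuration reconstructed to be consistent with the test results described later rather than copied from the author's original file, might look like this:

    tar zxvf haproxy-1.4.8.tar.gz
    cd haproxy-1.4.8
    make TARGET=linux26 PREFIX=/usr/local/haproxy
    make install PREFIX=/usr/local/haproxy

    # /usr/local/haproxy/haproxy.cfg (sketch)
    global
        daemon
        maxconn 4096

    defaults
        mode http
        timeout connect 5s
        timeout client 50s
        timeout server 50s

    frontend http_in
        bind *:80
        # route by Host header: test.domain.com goes to port 8000, everything else to port 80
        acl is_domain hdr(host) -i test.domain.com
        use_backend web_8000 if is_domain
        default_backend web_80

    backend web_80
        server web1 192.168.1.187:80 check

    backend web_8000
        server web2 192.168.1.187:8000 check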
Two: Start the service on both machines separately:
/etc/init.d/keepalived start (this command will also automatically start HAProxy)
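Having Keepalived start and restart HAProxy is typically implemented with a small check script referenced from keepalived.conf; the sketch below is an assumption about how it might have been done here (the script path is hypothetical, and the 3-second interval is chosen to match the test in step 2 below):

    #!/bin/bash
    # /etc/keepalived/check_haproxy.sh (hypothetical helper script)
    # start HAProxy if it is not running
    if ! killall -0 haproxy 2>/dev/null; then
        /usr/local/haproxy/sbin/haproxy -f /usr/local/haproxy/haproxy.cfg
    fi

and in keepalived.conf:

    vrrp_script chk_haproxy {
        script "/etc/keepalived/check_haproxy.sh"
        # run the check every 3 seconds
        interval 3
    }

The vrrp_instance block would then reference the script with a track_script { chk_haproxy } section so that Keepalived actually runs it.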
Three: Test:
1. Run ip add on each of the two machines:
Master: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> MTU Qdisc pfifo_fast Qlen 1000
Link/ether 00:0c:29:98:cd:c0 BRD FF:FF:FF:FF:FF:FF
inet 192.168.1.192/24 BRD 192.168.1.255 Scope Global eth0
inet 192.168.1.200/32 Scope Global eth0
Inet6 FE80::20C:29FF:FE98:CDC0/64 Scope link
Valid_lft Forever Preferred_lft Forever
Backup: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> MTU Qdisc pfifo_fast Qlen 1000
Link/ether 00:0c:29:a6:0c:7e BRD FF:FF:FF:FF:FF:FF
inet 192.168.1.193/24 BRD 255.255.255.254 Scope Global eth0
Inet6 FE80::20C:29FF:FEA6:C7E/64 Scope link
Valid_lft Forever Preferred_lft Forever
2. If HAProxy is stopped, Keepalived will automatically restart it within 3 seconds.
3. Stop Keepalived on the master; the backup takes over the service immediately:
Backup: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> MTU Qdisc pfifo_fast Qlen 1000
Link/ether 00:0c:29:a6:0c:7e BRD FF:FF:FF:FF:FF:FF
inet 192.168.1.193/24 BRD 255.255.255.254 Scope Global eth0
inet 192.168.1.200/32 Scope Global eth0
Inet6 FE80::20C:29FF:FEA6:C7E/64 Scope link
Valid_lft Forever Preferred_lft Forever
4. Add the following entries to the hosts file:
192.168.1.200 test.com
192.168.1.200 test.domain.com
Testing from a browser (IE), you can see that:
Requests for test.com are sent to 192.168.1.187:80.
Requests for test.domain.com are sent to 192.168.1.187:8000.