Load Balancing and Hot Backup at the Network Layer


It is best to implement routing load balancing by dividing the network into segments. This does not disturb routing behavior at the network layer, because a host or router forwards each packet according to the single route found in the routing table; unless load balancing is built into the routing table itself, only one route is ever selected for a given packet. The Linux kernel does not implement load balancing in the routing table. I once submitted a patch for it (it has since been lost): the first matching route and its companion load-balancing route were both loaded into the route cache, and on every lookup packets were rotated between the two routes by a round-robin (RR) algorithm. Afterwards I felt this was inappropriate, not because the technique was poor, but because the same behavior can be achieved entirely through "configuration". In the IT field, do not modify source code if the problem can be solved through configuration: configuration expresses policy, whereas modifying the source code changes the mechanism and therefore affects every configuration.
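To illustrate the "one packet, one route" behavior on an ordinary Linux host (the addresses below are hypothetical, not taken from the text): for a given destination the kernel resolves exactly one next hop, with no per-packet rotation.

# Routing table of a host with a single default route
$ ip route
default via 192.168.1.1 dev eth0
192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.10

# Ask the kernel which route a packet to 8.8.8.8 would use; the answer
# is always the same single entry
$ ip route get 8.8.8.8
8.8.8.8 via 192.168.1.1 dev eth0 src 192.168.1.10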

How to configure it? Through network segment division.
Another important reason not to implement load balancing by modifying the protocol stack is that routes direct network-layer datagrams, and the network layer is end-to-end rather than a broadcast network, which makes it essentially different from the link layer.
We know that Ethernet is a broadcast network. Although the address-learning mechanism of a switch constrains the broadcast domain, Ethernet is still essentially a broadcast network, so load balancing across aggregated switch ports (802.3ad or static aggregation) does not affect the final delivery of Ethernet frames to their destination. On a broadcast network, any path can reach the destination as long as address learning is supported, and this can be achieved through link-layer PDU notification (we can regard ARP as a special kind of link-layer notification). A unicast network is different: if an IP packet is to take a different path, every intermediate hop on that path must already hold a route to the destination. Since it is hard to predict which path a packet will take, this would eventually turn the entire IP network into a fully connected, broadcast-like network and quickly exhaust network resources. This problem does not exist on Ethernet for the following reasons:
1. Ethernet is a LAN standard and the number of nodes is limited, so broadcasting is not a problem;
2. The broadcast protocol (CSMA/CD) is very simple to implement and very cheap;
3. Both bus Ethernet and hub-connected Ethernet are broadcast media, which guarantees that frames reach their destination;
4. Although a switch constrains the broadcast domain, it does so through the same learning mechanism, with little or no manual configuration; when the MAC address table is inconsistent or has no entry for a destination, the frame is flooded (broadcast) instead;
5. As long as a PDU notification mechanism for address information, or a "worst case, just broadcast" mechanism, is implemented, an Ethernet frame can always reach its destination.
Therefore, port aggregation is simple, whereas routing load balancing, if you want to keep it simple, can only be implemented through configuration...
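On the host side, link aggregation is likewise pure configuration. A minimal sketch on Linux, assuming two member ports eth0 and eth1 (the names are illustrative) bonded in 802.3ad (LACP) mode against a switch configured for the same aggregation:

# Create an 802.3ad bond, enslave the two ports (they must be down first), bring it up
ip link add bond0 type bond mode 802.3ad
ip link set eth0 down
ip link set eth1 down
ip link set eth0 master bond0
ip link set eth1 master bond0
ip link set bond0 up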



I have a 192.168.0.0/16 network in which more than 300 hosts need to access the Internet. I don't want all packets to reach the outside world through a single router; I want two routers to share the traffic. Obviously I cannot modify the Linux kernel source, because the hosts or routers may not be running Linux at all, and in any case this requirement can be solved through configuration. The procedure is as follows:
1. First, divide the one network into two: 192.168.1.0/24 and 192.168.2.0/24 (since there are only a little more than 300 machines, two /24 networks are sufficient, though not easy to expand later);
2. 192.168.1.0/24 uses router R1 as its gateway, while 192.168.2.0/24 uses router R2;
3. Connect both routers to the Internet.
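A minimal sketch of the host-side configuration, assuming R1 holds 192.168.1.1 and R2 holds 192.168.2.1 as the gateway addresses (these addresses and interface names are illustrative, not taken from the text):

# On a host placed in 192.168.1.0/24: address from that subnet, default route via R1
ip addr add 192.168.1.10/24 dev eth0
ip route add default via 192.168.1.1

# On a host placed in 192.168.2.0/24: address from that subnet, default route via R2
ip addr add 192.168.2.10/24 dev eth0
ip route add default via 192.168.2.1

Each host sends its outbound traffic to exactly one of the two routers, so the traffic split follows directly from how the hosts are divided between the two subnets.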



This makes it easy to achieve load balancing.
However, in many cases load balancing and hot standby are inseparable. If hot backup could be added on top of the load balancing above, that would be ideal. In fact, VRRP can implement exactly this. The VRRP protocol itself is not described in this article; see the specification for details. In essence, VRRP implements a virtual router backed by a group of real routers. Only one real router in the group is the current working (master) router, and it is responsible for forwarding packets; the other routers are in backup state. The routers in a group exchange VRRP management information through multicast and decide, based on advertisement timeouts, whether a new working router should be elected... Obviously, each network is connected to every router in the group. The figure below shows how to add hot backup to the plan above:

Obviously, network 0 is divided into network 1 and network 2; R1 and R2 are the master routers of network 1 and network 2 respectively, and each is the backup router for the other. If R1 fails at some moment, VRRP group 1 elects R2 as the working router, and all packets from network 1 and network 2 are then forwarded through R2; the reverse holds if R2 fails.
In Linux, the keepalived project fully implements the VRRP specification and is easy to use. For the example above, the configuration file looks roughly as follows:
vrrp_instance xxxxxx {
    interface eth0
    virtual_router_id 1
    priority 100
    ...
    notify_backup "<program to execute when this node enters BACKUP state>"
    notify_master "<program to execute when this node becomes MASTER>"
    notify_fault "..."

    authentication {
        auth_type PASS
        auth_pass 12345
    }

    virtual_ipaddress {
        192.168.1.1/32
    }
}


Deploy the above configuration file on both R1 and R2 (both routers run Linux, or at least support keepalived), then start keepalived. On the master node R1, checking the address information with ip addr ls shows an additional address, 192.168.1.1/32, on R1's eth0, while R2's eth0 does not carry it. If we unplug R1, we will find that 192.168.1.1/32 appears on R2's eth0.
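To obtain the mutual-backup, load-sharing layout described earlier (R1 master for network 1 and backup for network 2, R2 the reverse), each router typically carries two VRRP instances. The following is a minimal sketch of R1's side only; the instance names, interfaces, priorities and virtual addresses are assumptions for illustration, and R2 would run the same two instances with the MASTER/BACKUP roles and priorities swapped.

vrrp_instance NET1 {
    state MASTER
    interface eth0
    virtual_router_id 1
    priority 150                  # highest in group 1: R1 normally owns 192.168.1.1
    virtual_ipaddress {
        192.168.1.1/32
    }
}

vrrp_instance NET2 {
    state BACKUP
    interface eth1
    virtual_router_id 2
    priority 100                  # lower than R2's: R1 takes over 192.168.2.1 only if R2 fails
    virtual_ipaddress {
        192.168.2.1/32
    }
}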
Having covered load balancing and hot standby at the network layer, we have so far omitted another very common form of load balancing: the cluster. It differs from the configuration-based load balancing above: configuration-based load balancing mostly aims to relieve router bottlenecks, while cluster load balancing mostly aims to relieve server bottlenecks. On Linux, LVS is the common approach. It achieves load balancing through netfilter: one machine runs as the load balancer (director) and explicitly distributes requests to different servers according to a chosen algorithm. Common modes are DR mode and NAT mode.
DR mode relies on the real servers holding, on their lo interface, virtual IP addresses that do not respond to ARP, while the load balancer holds the same IP address and does respond to ARP. After receiving a packet, the load balancer only rewrites the destination and source MAC addresses to redirect the packet to a real server. In effect, in DR mode the load balancer answers the ARP requests that the real servers would otherwise have answered, intercepts the packets, and forwards them according to a configurable algorithm. When a real server replies to the client, the load balancer does not modify anything at the IP layer. NAT mode, by contrast, does modify IP-layer information, but it is not described in detail here (note the interaction between the network layer and the link layer: when a router finds that a packet would be forwarded back out of the port it arrived on, it sends an ICMP redirect...).
In any case, LVS load balancing and hot standby are essentially different from the VRRP approach above; they solve different problems. But keepalived integrates all of this: it supports VRRP and it also supports LVS.
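As a rough sketch of DR mode (the VIP 192.168.1.100, the real-server addresses, and the port are made-up examples, not values from the article), the director and the real servers could be configured along these lines:

# On the director: create a virtual TCP service on the VIP with round-robin
# scheduling, then add two real servers in direct-routing (-g, "gatewaying") mode.
ipvsadm -A -t 192.168.1.100:80 -s rr
ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.11:80 -g
ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.12:80 -g

# On each real server: hold the VIP on lo and suppress ARP replies for it,
# so that only the director answers ARP requests for the VIP.
ip addr add 192.168.1.100/32 dev lo
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2

The real servers answer clients directly with the VIP as the source address, which is why the director never has to touch the IP layer on the return path.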
Remarks


Today's network protocols no longer follow the layered model of N years ago! The introduction of VLANs means Ethernet is no longer a pure broadcast network. VLANs bring in some layer-3-like mechanisms, and switches that support VLANs have to be extended accordingly. In order not to affect the original IP protocol, the 802.1Q VLAN protocol had to be introduced. VLAN looks like a LAN protocol, but in effect it already behaves like a layer-3 mechanism. 802.1Q inserts a tag field into the Ethernet frame header. When a VLAN-aware switch receives a tagged Ethernet frame, it can use the tag to select the port the frame is forwarded to and decide whether to strip the tag; when it receives an untagged frame, it decides, based on the port configuration, whether to forward the frame and whether to add a tag...
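As an aside, the tagging is easy to see from a host: on Linux, a tagged 802.1Q sub-interface can be created with iproute2 (the interface name, VLAN ID, and address below are arbitrary examples). Frames sent through eth0.10 leave eth0 carrying tag 10, and incoming frames tagged 10 are delivered to it.

# Create an 802.1Q sub-interface of eth0 for VLAN 10, bring it up, and address it
ip link add link eth0 name eth0.10 type vlan id 10
ip link set eth0.10 up
ip addr add 192.168.10.1/24 dev eth0.10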
We say Ethernet is a broadcast network, but that refers to the physical layer. At the link layer it is only partly a broadcast network, because switches can identify MAC addresses. So do not cling too tightly to textbook theory; in real applications very little follows the theory exactly. For example, some people think the bus design on a motherboard has nothing to do with Ethernet, yet PCI devices on a motherboard have already been extended onto Ethernet. Is that hard to achieve? Not really: the PCIe bus is serial, Ethernet is serial, and the number of wires is basically compatible; the rest is implementing some protocol logic in a chip. Take the simplest example: two PCIe NICs in two machines connected by a twisted-pair cable. Shorten the cable, put everything into one chassis and solder it onto one motherboard, and they become chips on the same board; there is then no essential difference between two NICs on different machines and the north bridge and south bridge on the same motherboard.
