In-depth analysis of the seven dual-NIC bonding modes in Linux

Nowadays, enterprises generally connect servers with dual NICs, which both increases network bandwidth and provides redundancy, so the benefits are considerable. Enterprises typically use the NIC bonding support built into the Linux operating system; NIC vendors also offer bonding software for managing NICs under Windows, since Windows itself does not provide a NIC bonding function and a third-party tool is needed there. Now to the topic. Linux has seven NIC bonding modes: 0. round-robin, 1. active-backup, 2. load balancing (XOR), 3. fault tolerance (broadcast), 4. LACP, 5. transmit load balancing, 6. adaptive load balancing.
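Before going through each mode, here is a minimal sketch of how a bond is typically brought up (the interface names eth0/eth1, the address 1.1.1.2/24, and the mode number are illustrative; adjust them for your distribution):

modprobe bonding mode=0 miimon=100        # load the bonding driver: mode 0 (round-robin), check link state every 100 ms
ip link set eth0 down                     # slaves must be down before they can be enslaved
ip link set eth1 down
echo +eth0 > /sys/class/net/bond0/bonding/slaves   # add both NICs to bond0
echo +eth1 > /sys/class/net/bond0/bonding/slaves
ip addr add 1.1.1.2/24 dev bond0
ip link set bond0 up

The same result can be made persistent through the distribution's network scripts (for example, an ifcfg-bond0 file with BONDING_OPTS on RHEL-style systems).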

First: bond0: round-robin
Standard document definition: Round-robin policy: Transmit packets in sequential order from the first available slave through the last. This mode provides load balancing and fault tolerance.
Features: (1) All links are in a load-balancing state: packets are sent down each link in turn, on a per-packet basis. Ping one remote address (1.1.1.1 in the test) and both NICs show traffic, i.e. the load is spread across the two links, confirming that transmission rotates per packet. (2) This mode adds bandwidth and provides fault tolerance: when one link fails, traffic switches to the remaining healthy link.
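A quick way to see the per-packet behavior described above (a verification sketch, not part of the original test; 1.1.1.1 is the peer address used in this article):

ping 1.1.1.1 > /dev/null &
ip -s link show eth0 | grep -A1 'TX:'    # watch the TX packet counter on each slave
ip -s link show eth1 | grep -A1 'TX:'    # both counters grow roughly in step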
Actual binding result:
cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)
Bonding Mode: load balancing (round-robin) ----- RR mode
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 74:ea:3a:6a:54:e3
Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Application topology: an aggregation port must be configured on the switch; on Cisco devices this is called a port channel.

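For reference, the switch side of a static aggregation might look like this on Cisco IOS (port and channel numbers are illustrative; balance-rr needs a static channel, i.e. mode on, not LACP):

interface range GigabitEthernet0/23 - 24
 channel-group 1 mode on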
Second: bond1: active-backup
Standard document definition: Active-backup policy: Only one slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. The bond's MAC address is externally visible on only one port (network adapter) to avoid confusing the switch. This mode provides fault tolerance. The primary option affects the behavior of this mode.
Mode features: one port is in the active state and the other in the backup state. All traffic is carried on the active link; the backup link carries none. When the active port goes down, the backup port takes over as active.
Actual binding result:
root@1:~# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)
Bonding Mode: fault-tolerance (active-backup) ----- backup mode
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 74:ea:3a:6a:54:e3
Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: d8:5d:4c:71:f9:94
Application topology: this mode requires no switch support.
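A minimal sketch of selecting this mode, using the driver's primary option mentioned in the standard definition (interface names illustrative):

modprobe bonding mode=1 miimon=100 primary=eth0   # eth0 carries all traffic; eth1 idles as backup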
Third: bond2: load balancing (XOR)
Standard document definition: XOR policy: Transmit based on [(source MAC address XOR'd with destination MAC address) modulo slave count]. This selects the same slave for each destination MAC address. This mode provides load balancing and fault tolerance.
Feature: this mode pins traffic so that frames destined for a specific peer always leave from the same interface. Because the destination is determined by MAC address, the mode works well when the peers are on the local network. It is not the best choice when all traffic passes through a single router (for example, a "gateway" network configuration with only one gateway): the source and destination MACs are then fixed, the algorithm always computes the same line, and the mode adds little. Like balance-rr, this mode requires the switch ports to be configured as a "port channel". The XOR routing algorithm uses the source and destination MAC as its hash input.
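As a worked illustration of the hash (addresses illustrative): with two slaves, a source MAC ending in e3 and a destination MAC ending in 01 give (0xe3 XOR 0x01) mod 2 = 0xe2 mod 2 = 0, so frames for that peer always leave through slave 0; a destination ending in 02 gives (0xe3 XOR 0x02) mod 2 = 0xe1 mod 2 = 1, i.e. slave 1.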
Actual binding result:
[root@localhost ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.0.3 (March 23, 2006)
Bonding Mode: load balancing (xor) ----- XOR mode
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:d0:f8:40:f1:a0
Slave Interface: eth2
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:d0:f8:00:0c:0c
Application topology: the same model as bond0; this mode likewise requires an aggregation port configured on the switch.
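A minimal sketch of enabling this mode (the xmit_hash_policy parameter selects the hash; layer2 is the default and matches the "Transmit Hash Policy: layer2 (0)" line in the output above):

modprobe bonding mode=2 miimon=100 xmit_hash_policy=layer2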
Fourth: bond3: fault-tolerance (broadcast)
Standard document definition: Broadcast policy: transmits everything on all slave interfaces. This mode provides fault tolerance.
Feature: in this mode each packet is duplicated and sent out both interfaces under the bond separately. When the switch on one side fails, no interruption is felt at all, but the method consumes considerable resources, since every frame is transmitted twice. In exchange the fault-tolerance behavior is excellent. This model suits the financial industry, which requires highly reliable networks and tolerates no outage.
Actual binding result:
root@ubuntu12:~/ram# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)
Bonding Mode: fault-tolerance (broadcast) ----- broadcast mode
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 74:ea:3a:6a:54:e3
Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: d8:5d:4c:71:f9:94
Application topology:

This mode suits the following topology: the two interfaces connect to two separate switches belonging to different VLANs. When the network on one side fails, the server's communication through the other side continues unaffected, and the failover loses zero packets. The ping output in this mode looks like this:
64 bytes from 1.1.1.1: icmp_seq=901 ttl=64 time=0.205 ms
64 bytes from 1.1.1.1: icmp_seq=901 ttl=64 time=0.213 ms (DUP!) ----- DUP marks a duplicate packet
64 bytes from 1.1.1.1: icmp_seq=902 ttl=64 time=0.245 ms
64 bytes from 1.1.1.1: icmp_seq=902 ttl=64 time=0.254 ms (DUP!)
64 bytes from 1.1.1.1: icmp_seq=903 ttl=64 time=0.216 ms
64 bytes from 1.1.1.1: icmp_seq=903 ttl=64 time=0.226 ms (DUP!)
As the ping output shows, in this mode the server duplicates each packet and sends one copy down each line, which produces the duplicate replies; the price is the wasted resources.
Fifth: bond4: LACP
Standard document definition: IEEE 802.3ad Dynamic link aggregation. Creates aggregation groups that share the same speed and duplex settings. Utilizes all slaves in the active aggregator according to the 802.3ad specification. Prerequisites: 1. Ethtool support in the base drivers for retrieving the speed and duplex of each slave. 2. A switch that supports IEEE 802.3ad Dynamic link aggregation. Most switches will require some type of configuration to enable 802.3ad mode.
Feature: 802.3ad mode is the IEEE standard, so any peer that implements 802.3ad should interoperate well. The 802.3ad protocol includes automatic aggregation configuration, so only the switch needs manual setup (note that only some devices support 802.3ad). The 802.3ad standard also requires frames to be delivered in order (to a certain extent), so a single connection normally sees no packet reordering. 802.3ad also has some disadvantages: the standard requires all devices in the aggregate to operate at the same speed and duplex, and, as with all bonding load-balancing modes other than balance-rr, no single connection can use more than one interface's worth of bandwidth.
In addition, the Linux bonding 802.3ad implementation distributes traffic by peer (using the XOR of the MAC addresses), so in a "gateway" configuration all outgoing traffic will use the same device. Incoming traffic may also terminate on a single device, depending on the balancing policy of the peer's 802.3ad implementation. In a "local" configuration, traffic is distributed across the devices in the bond.
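A minimal sketch of the Linux side (the switch side is shown further below); lacp_rate=slow matches the "LACP rate: slow" line in the output that follows:

modprobe bonding mode=4 miimon=100 lacp_rate=slow xmit_hash_policy=layer2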
Actual binding result:
root@:~# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
802.3ad info
LACP rate: slow
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
Aggregator ID: 1
Number of ports: 1
Actor Key: 9
Partner Key: 1
Partner Mac Address: 00:00:00:00:00:00
Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 74:ea:3a:6a:54:e3
Aggregator ID: 1
Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: d8:5d:4c:71:f9:94
Aggregator ID: 2
Application topology: the same as bond0 and bond2, except that in this mode the switch must enable LACP on top of the port-channel aggregation port; the two ends can communicate normally only after negotiation succeeds.
Switch configuration:
interface AggregatePort 1          ! configure the aggregation port
interface GigabitEthernet 0/23
 port-group 1 mode active          ! enable LACP active mode on the member interface
interface GigabitEthernet 0/24
 port-group 1 mode active
Sixth: bond5: transmit load balancing
Standard document definition: Adaptive transmit load balancing: channel bonding that does not require any special switch support. The outgoing traffic is distributed according to the current load (computed relative to the speed) on each slave. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed receiving slave. Prerequisite: ethtool support in the base drivers for retrieving the speed of each slave.
Features: the balance-tlb mode balances outgoing traffic by peer. Since the balancing is done by MAC address, in a "gateway" configuration (as described above) this mode sends all traffic across a single device. In a "local" network configuration, however, it balances the multiple local peers across the slaves in a reasonably intelligent way (not the simple XOR of balance-xor or 802.3ad mode), so that mathematically unlucky MAC addresses (ones that XOR to the same value) do not all bunch up on a single interface.
Unlike 802.3ad, interfaces in this mode can run at different speeds without any special switch configuration. The disadvantages are that all incoming traffic arrives on a single interface, that the slave interfaces' network drivers need some ethtool support, and that ARP monitoring is unavailable.
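A minimal sketch of enabling the mode and checking the ethtool prerequisite (interface names illustrative):

modprobe bonding mode=5 miimon=100
ethtool eth1 | grep -i speed     # the base driver must be able to report the slave's speed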
Actual configuration result:
cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.0.3 (March 23, 2006)
Bonding Mode: transmit load balancing ----- TLB mode
Primary Slave: None
Currently Active Slave: eth1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:d0:f8:40:f1:a0
Slave Interface: eth2
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:d0:f8:00:0c:0c
Application topology: in this mode the bond members use their own MACs, instead of all sharing the bond0 interface MAC as in the modes above.

For example, at startup the device sends gratuitous ARP using the MAC of the active port eth1 as the source. Clients that receive it record in their ARP caches that the server's IP maps to that MAC. In this mode the egress for each destination is computed by the hash algorithm, so as the destination changes the load lands on different ports. In the experiment, pinging 1.1.1.3 leaves via eth2 with source MAC 00:D0:F8:00:0C:0C, while pinging 1.1.1.4 leaves via eth1 with source MAC 00:D0:F8:40:F1:A0, and so on, so traffic from the server is balanced across the two lines. But because the ARP announcement only ever used 00:D0:F8:40:F1:A0, the clients' caches map the server's IP to that MAC, and they encapsulate every frame with destination MAC 00:D0:F8:40:F1:A0. Consequently all incoming traffic arrives on eth1 (00:D0:F8:40:F1:A0). The device also keeps sending SNAP frames: eth1 sends them with source 00d0.f840.f1a0 and eth2 with source 00d0.f800.0c0c. In these SNAP packets the source and destination MAC are both the NIC's own MAC, and the source and destination IP are likewise identical; they are loopback-style probes that check whether each line is healthy.
Note: you can modify the bond0 MAC address so that the gratuitous ARP advertises the other source MAC (MACADDR=00:D0:F8:00:0C:0C).
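On RHEL-style systems this can be done in the bond's ifcfg file, a sketch assuming initscripts-style configuration (the MACADDR value is the one from the note above):

DEVICE=bond0
MACADDR=00:D0:F8:00:0C:0C
BONDING_OPTS="mode=5 miimon=100"
ONBOOT=yes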
Seventh: bond6: adaptive load balancing
Features: this mode includes balance-tlb plus receive load balancing (rlb) for IPv4 traffic, and requires no switch support. The receive load balancing is achieved through ARP negotiation: the bonding driver intercepts the ARP replies sent by the local machine and rewrites the source hardware address to the unique hardware address of one of the slaves in the bond, so that different peers communicate with different hardware addresses. Every port receives the peer's ARP requests; when replying, the bond driver module intercepts the ARP reply, computes the corresponding port with its algorithm, and rewrites both the ARP reply's source MAC and the frame's source MAC to that port's MAC. Packet captures show the first reply going out of slave port 1, the second out of slave port 2, and so on.
(A further point: each port sends not only replies carrying its own MAC but also replies carrying the other ports' MACs.) In this way the traffic coming into the server is balanced.
When the local machine sends an ARP request, the bonding driver copies and saves the peer's IP information from the ARP packet. When the peer's ARP reply arrives, the bonding driver extracts its hardware address and initiates an ARP reply to the peer from one of the slaves in the bond (the algorithm is as above: if a port was counted for a given request, the reply goes out with that port's MAC). One problem with balancing received traffic through ARP negotiation is that the bond's own hardware address is used each time an ARP request is broadcast, so once a peer learns that hardware address, all of its traffic flows to the currently active slave. This is solved by sending updates (ARP replies) to all peers, each containing one slave's unique hardware address, which redistributes the traffic. Received traffic is also redistributed when a new slave is added to the bond or an inactive slave is re-activated; the received load is spread sequentially (round-robin) across the highest-speed slaves in the bond.
When a link is reconnected, or a new slave joins the bond, the received traffic is redistributed among all currently active slaves by sending an ARP reply with the designated MAC address to each client. The updelay parameter (described below) must be set to a value greater than or equal to the switch's forwarding delay, to ensure that the ARP replies sent to the peers are not blocked by the switch.
Prerequisites:
Condition 1: ethtool support for retrieving the speed of each slave;
Condition 2: the underlying driver supports setting the hardware address of a device while it is up, so that there is always one slave (curr_active_slave) using the bond's hardware address, while every slave in the bond keeps a unique hardware address. If curr_active_slave fails, its hardware address is taken over by the newly elected curr_active_slave.
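A minimal sketch of enabling the mode and checking both conditions (interface names illustrative):

modprobe bonding mode=6 miimon=100
ethtool eth0 | grep -i speed     # condition 1: the base driver reports link speed
ethtool -P eth0                  # condition 2 relies on the permanent address shown here being re-assignable by the driver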
Actual configuration result:
root@:/tmp# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)
Bonding Mode: adaptive load balancing
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 74:ea:3a:6a:54:e3
Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: d8:5d:4c:71:f9:94
Application topology:

A is a server with dual-NIC bonding.
When B sends an ARP request to A, A would normally reply with an ARP response whose source MAC is the bond MAC and whose source IP is the bond IP. In this mode, however, the bonding driver intercepts the ARP response and rewrites the source MAC to the MAC of one of the bonded NICs, mac1. When B receives the response it records in its ARP cache that 1.1.1.1 maps to mac1, and from then on all traffic from B goes through mac1.
When C sends an ARP request to A, A would likewise normally answer with the bond MAC and bond IP as source. Here the bonding driver intercepts the ARP response and rewrites the source MAC to the MAC of the other bonded NIC, mac2, so C records in its ARP cache that 1.1.1.1 maps to mac2, and all traffic from C goes through mac2.
In this way the returning traffic is balanced. Outbound balancing works as in mode=5: different destination IPs are hashed (XOR algorithm) to different ports, ARP packets are sent from different ports, and the MAC used is that of the corresponding NIC.
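To observe the per-client MAC rewriting just described, you can capture ARP on the bond while B and C ping A (a verification sketch, not part of the original text):

tcpdump -e -n -i bond0 arp   # -e prints link-level headers: replies to B carry mac1, replies to C carry mac2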