Deep analysis of the seven dual-NIC bonding modes under Linux


Most enterprises now use dual-NIC access for their servers: it adds network bandwidth and at the same time provides redundancy, so the benefits are considerable. Enterprises generally use the NIC bonding modes built into the Linux operating system; NIC vendors also ship NIC-management software for Windows that can do bonding (Windows itself has no NIC bonding feature and needs third-party support). To get to the point: Linux has seven NIC bonding modes: 0. round-robin (balance-rr), 1. active-backup, 2. load balancing XOR (balance-xor), 3. fault-tolerance (broadcast), 4. LACP (802.3ad), 5. transmit load balancing (balance-tlb), 6. adaptive load balancing (balance-alb).
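As a quick orientation, the bonding driver itself enumerates these modes, and a live bond reports its current mode through sysfs. A small sketch (the second command assumes a bond0 interface already exists):

modinfo bonding | grep "mode:"          # the "mode" parameter description lists all seven modes
cat /sys/class/net/bond0/bonding/mode   # e.g. prints "balance-rr 0" on a mode 0 bond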

The first mode: bond0: round-robin (balance-rr)
Standard document definition: Round-robin policy: Transmit packets in sequential order from the first available slave through the last. This mode provides load balancing and fault tolerance.

Features: (1) All links are in a load-balanced state; packets are sent over each link in turn, on a per-packet basis. When pinging the same address, 1.1.1.1, from the server, both NICs of the dual-NIC bond emit traffic, and the load is spread across the two links, which shows that the polling is done per packet. (2) This mode increases bandwidth while also providing fault tolerance: when one link has a problem, traffic is switched to the healthy link.

Actual binding Result:
cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)
Bonding Mode: load balancing (round-robin)    ----- RR mode
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 74:ea:3a:6a:54:e3

Slave Interface: eth1
MII Status: up
Link Failure Count: 0

Application topology: the switch side must be configured with an aggregation port (Cisco calls this a port channel).
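For reference, a minimal sketch of how a mode 0 bond like the one above might be configured on a RHEL/CentOS-style system; the file paths follow that distribution's conventions, and the IP address simply reuses the 1.1.1.1 from the ping test (adjust names and addresses to your environment):

# /etc/modprobe.d/bonding.conf -- load the bonding driver in round-robin mode with 100 ms link monitoring
alias bond0 bonding
options bond0 mode=0 miimon=100

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=1.1.1.1
NETMASK=255.255.255.0
BOOTPROTO=none
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth0 (ifcfg-eth1 is identical apart from DEVICE)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes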

The second mode: bond1: active-backup
Standard document definition: Active-backup policy: Only one slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. The bond's MAC address is externally visible on only one port (network adapter) to avoid confusing the switch. This mode provides fault tolerance. The primary option affects the behavior of this mode.

Mode features: one port is in the active (master) state and the other is in the backup state. All traffic is handled on the active link, and the backup link carries none. When the active port goes down, the backup port takes over the active state.

Actual binding Result:
cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)
Bonding Mode: fault-tolerance (active-backup)    ----- backup mode
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 74:ea:3a:6a:54:e3

Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: d8:5d:4c:71:f9:94

Application topology: this mode requires no support on the switch side; any switch access will work.
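Because mode 1 honours the primary option mentioned in the definition above, here is a hedged sketch of the relevant settings (interface names are the ones used in this article):

# prefer eth0 as the active slave; eth1 stays in backup until eth0 fails
options bond0 mode=1 miimon=100 primary=eth0

# the currently active slave can be checked at runtime through sysfs
cat /sys/class/net/bond0/bonding/active_slave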

The third mode: bond2: load balancing (XOR)
Standard document definition: XOR policy: Transmit based on [(source MAC address XOR'd with destination MAC address) modulo slave count]. This selects the same slave for each destination MAC address. This mode provides load balancing and fault tolerance.

Features: this mode pins traffic so that frames destined for a particular peer always leave from the same interface. Because the destination is determined by MAC address, the mode works well in a "local" network configuration. If all traffic goes through a single router (a "gateway" configuration with only one gateway, where the source and destination MACs are fixed and the hash therefore always selects the same link), this mode adds little value and is not the best choice. As with balance-rr, the switch ports need to be configured as a port channel. This mode chooses the outgoing path by using the source and destination MACs as hash factors in an XOR calculation.

Actual binding Result:
cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.0.3 (March 23, 2006)
Bonding Mode: load balancing (xor)    ----- configured for XOR mode
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:d0:f8:40:f1:a0

Slave Interface: eth2
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:d0:f8:00:0c:0c

Application topology: the same application model as bond0; this mode likewise requires the aggregation port to be configured on the switch.
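The "Transmit Hash Policy: layer2 (0)" line in the output above corresponds to the xmit_hash_policy parameter; layer2 is the default and is the source-MAC XOR destination-MAC hash described in the standard text. A small sketch:

# XOR mode with the default layer2 (MAC-based) transmit hash
options bond0 mode=2 miimon=100 xmit_hash_policy=layer2

# the active hash policy can be read back from a running bond
cat /sys/class/net/bond0/bonding/xmit_hash_policy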

The fourth mode: bond3: fault-tolerance (broadcast)
Standard document definition: Broadcast policy: Transmits everything on all slave interfaces. This mode provides fault tolerance.

Features: in this mode every packet is duplicated and sent out of both interfaces in the bond, so when the switch on one side fails there is no perceptible interruption at all. The approach wastes bandwidth, but it gives a very strong fault-tolerance mechanism. It suits industries such as finance, which require highly reliable networks and cannot tolerate any outage.

Actual binding result:
cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)
Bonding Mode: fault-tolerance (broadcast)    ----- broadcast (fault-tolerance) mode
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 74:ea:3a:6a:54:e3

Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: d8:5d:4c:71:f9:94

Application topology:

This mode suits a topology in which the two bonded interfaces are connected to two different switches belonging to different VLANs, so that a network failure on one side does not affect the server's communication through the other side, and the failover happens with zero packet loss. The ping output below illustrates this mode:
bytes from 1.1.1.1: icmp_seq=901 ttl=64 time=0.205 ms
bytes from 1.1.1.1: icmp_seq=901 ttl=64 time=0.213 ms (DUP!)    ----- DUP marks the duplicate reply
bytes from 1.1.1.1: icmp_seq=902 ttl=64 time=0.245 ms
bytes from 1.1.1.1: icmp_seq=902 ttl=64 time=0.254 ms (DUP!)
bytes from 1.1.1.1: icmp_seq=903 ttl=64 time=0.216 ms
bytes from 1.1.1.1: icmp_seq=903 ttl=64 time=0.226 ms (DUP!)
As this ping output shows, the characteristic of this mode is that the server duplicates the same packet onto both links, which results in two replies to every request (the second reported as DUP!); the mode is therefore arguably wasteful of resources.

The fifth mode: bond4: LACP (802.3ad)

Standard document definition: IEEE 802.3ad Dynamic link aggregation. Creates aggregation groups that share the same speed and duplex settings. Utilizes all slaves in the active aggregator according to the 802.3ad specification. Prerequisites: 1. ethtool support in the base drivers for retrieving the speed and duplex of each slave. 2. A switch that supports IEEE 802.3ad dynamic link aggregation. Most switches will require some type of configuration to enable 802.3ad mode.

Features: 802.3ad mode is an IEEE standard, so any peer that implements 802.3ad should interoperate well. The 802.3ad protocol includes automatic configuration of the aggregation, so only minimal manual configuration of the switch is needed (note that only some devices can use 802.3ad). The 802.3ad standard also requires frames to be delivered in order (to some extent), so a single connection normally does not see packet reordering. 802.3ad also has drawbacks: the standard requires all devices in the aggregation to operate at the same speed and duplex, and, as with all bonding load-balancing modes other than balance-rr, no single connection can use more than one interface's worth of bandwidth.
In addition, the Linux bonding 802.3ad implementation distributes outgoing traffic across the slaves by hashing (an XOR of the MAC addresses), so in a "gateway" configuration all outgoing traffic will use the same device. Incoming traffic may also all arrive on the same device; this depends on the balancing strategy of the peer's 802.3ad implementation. In a "local" configuration, traffic will be distributed across the devices in the bond.

Actual binding Result:
cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: slow
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
    Aggregator ID: 1
    Number of ports: 1
    Actor Key: 9
    Partner Key: 1
    Partner Mac Address: 00:00:00:00:00:00

Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 74:ea:3a:6a:54:e3
Aggregator ID: 1

Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: d8:5d:4c:71:f9:94
Aggregator ID: 2

Application topology: the same as for bond0 and bond2, except that in addition to configuring the port channel, LACP must be enabled under the port-channel aggregation. Only after negotiation succeeds can the two ends communicate normally; otherwise the link cannot be used.

Switch-side configuration:
interface AggregatePort 1       ----- configure the aggregation port
interface GigabitEthernet 0/23
 port-group 1 mode active       ----- enable LACP active mode on the member port
interface GigabitEthernet 0/24
 port-group 1 mode active
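On the Linux side, the bond only needs the 802.3ad mode and, optionally, the LACP rate and hash policy; a minimal sketch matching the slow rate and layer2 hash shown in the output above:

# 802.3ad (LACP) mode; lacp_rate=slow matches the "LACP rate: slow" reported above
options bond0 mode=4 miimon=100 lacp_rate=slow xmit_hash_policy=layer2

# once negotiation with the switch succeeds, the Partner Mac Address in
# /proc/net/bonding/bond0 should change from 00:00:00:00:00:00 to the switch's address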

The sixth mode: bond5: transmit load balancing (balance-tlb)

Standard document definition: Adaptive transmit load balancing: channel bonding that does not require any special switch support. The outgoing traffic is distributed according to the current load (computed relative to the speed) on each slave. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed receiving slave. Prerequisite: ethtool support in the base drivers for retrieving the speed of each slave.

Features: balance-tlb balances outgoing traffic across the slaves according to their current load. Because balancing is based on MAC address, in a "gateway" configuration (as described above) this mode sends all traffic through a single device; in a "local" configuration, however, it balances traffic across multiple local peers in a relatively intelligent way (not the XOR method used in balance-xor or 802.3ad mode), so peers whose MAC addresses happen to hash to the same value do not all converge on the same interface.
Unlike 802.3ad, the interfaces in this mode may run at different speeds and no special switch configuration is required. The downside is that all incoming traffic arrives on the same interface; this mode requires some form of ethtool support in the slave interfaces' network drivers, and ARP monitoring is not available.

Actual configuration results:
cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.0.3 (March 23, 2006)
Bonding Mode: transmit load balancing    ----- TLB mode
Primary Slave: None
Currently Active Slave: eth1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:d0:f8:40:f1:a0

Slave Interface: eth2
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:d0:f8:00:0c:0c

Application topology: in this mode the bond members use their own respective MACs, instead of all sharing the bond0 interface's MAC as in the modes above.

For example, the device initially sends a gratuitous ARP with the MAC of the primary port, eth1, as the source; when a client receives this ARP it records that MAC against the server's IP in its ARP cache. In this mode, when the server pings different destinations, the outgoing port is chosen by the load-balancing algorithm, so as the destination address changes the traffic is loaded onto different ports. In the experiment, pings to 1.1.1.3 were sent out of eth2 with source MAC 00:d0:f8:00:0c:0c, while pings to 1.1.1.4 were sent out of eth1 with source MAC 00:d0:f8:40:f1:a0, and so on, so outgoing traffic from the server is balanced across the two links. However, because the server's ARP announcements carry only 00:d0:f8:40:f1:a0, clients cache that MAC for the server's IP and address their packets to 00:d0:f8:40:f1:a0, so traffic entering the server arrives only on eth1 (00:d0:f8:40:f1:a0). The device also keeps sending SNAP frames: eth1 sends SNAP frames with source 00d0.f840.f1a0 and eth2 sends SNAP frames with source 00d0.f800.0c0c. In these SNAP frames the destination MAC equals the source (local) MAC and the source and destination IPs are identical; they are loopback probes used to check whether the link is healthy.
Note: the source MAC used in the gratuitous ARP can be changed by explicitly setting the bond0 MAC address (e.g. MACADDR=00:d0:f8:00:0c:0c in the bond0 interface configuration).
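A hedged sketch of the corresponding mode 5 setup, including the optional MACADDR override mentioned in the note (the address is the one from this example):

# transmit load balancing (balance-tlb)
options bond0 mode=5 miimon=100

# /etc/sysconfig/network-scripts/ifcfg-bond0 (fragment) -- optionally pin the MAC
# announced in the gratuitous ARP, as described in the note above
DEVICE=bond0
MACADDR=00:d0:f8:00:0c:0c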

The seventh mode: bond6: adaptive load balancing (balance-alb)
Features: this mode includes balance-tlb mode plus receive load balancing (RLB) for IPv4 traffic, and it does not require any switch support. Receive load balancing is implemented through ARP negotiation: the bonding driver intercepts the ARP replies sent by the local machine and rewrites the source hardware address to the unique hardware address of one of the slaves in the bond, so that different peers use different hardware addresses when talking to the server. Every port receives the peer's ARP request, and when the ARP reply is about to be sent the bonding driver intercepts it, computes the corresponding port with its algorithm, and rewrites the reply's source MAC (the sender hardware address) to that port's MAC. In a packet capture, the first reply goes out of port 1, the next out of port 2, and so on.
(One more point: besides the replies that belong to it, each port also transmits replies on behalf of the other ports, carrying the other port's MAC.) As a result, traffic received by the server is also balanced.
When the local machine sends an ARP request, the bonding driver copies and saves the peer's IP information from the ARP packet. When the ARP reply arrives from the peer, the bonding driver extracts its hardware address and issues an ARP reply bound to one of the slaves in the bond (the algorithm is the same as above: for example, if the calculation selects port 1, the ARP reply is sent with port 1's MAC). One problem with using ARP negotiation for load balancing is that the bond's own hardware address is used every time an ARP request is broadcast, so once a peer learns that hardware address, incoming traffic all flows to the currently active slave. This is resolved by sending updates (ARP replies) to all peers from every port, each carrying that port's unique hardware address, which causes traffic to be redistributed. Incoming traffic is also redistributed when a new slave is added to the bond or an inactive slave is reactivated. Received load is distributed sequentially (round robin) among the highest-speed slaves in the bond.
When a link is reconnected, or a new slave joins the bond, receive traffic is redistributed across all currently active slaves by sending an ARP reply with the selected MAC address to each client. The updelay parameter described below must be set to a value greater than or equal to the switch's forwarding delay to ensure that the ARP replies sent to the peers are not blocked by the switch.
Prerequisites:
Condition 1: ethtool supports retrieving the speed of each slave;
Condition 2: the underlying driver supports setting the hardware address of a device while it is up, so that there is always one slave (curr_active_slave) using the bond's hardware address, while ensuring that every slave in the bond has a unique hardware address. If curr_active_slave fails, its hardware address is taken over by the newly elected curr_active_slave.

Actual configuration results:
cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)
Bonding Mode: adaptive load balancing
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 74:ea:3a:6a:54:e3

Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: d8:5d:4c:71:f9:94

Application topology:

A is a server with two bonded NICs.
When B sends an ARP request to A, A would normally answer with an ARP reply whose source MAC is the bond's MAC and whose source IP is the bond's IP. In this mode, however, the bonding driver intercepts the ARP reply and rewrites the source MAC to the MAC of one of the slaves in the bond, MAC1, so when B receives the reply it records MAC1 as the MAC for IP 1.1.1.1 in its ARP cache. B's traffic to the server therefore arrives on MAC1.
When C sends an ARP request to A, A would again normally answer with the bond's MAC and IP. The bonding driver instead rewrites the source MAC to the MAC of another slave, MAC2, so when C receives the reply it records MAC2 as the MAC for IP 1.1.1.1 in its ARP cache. C's traffic therefore arrives on MAC2.
In this way the return (incoming) traffic is also load balanced. The outgoing direction is balanced in the same way as mode 5: different destination addresses are mapped by the algorithm to different egress ports, and the ARP sent from each egress carries the MAC of the corresponding NIC.
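One way to observe this behaviour is to capture ARP on each slave and check the resulting ARP caches on the clients; a sketch assuming the interface names and the 1.1.1.1 address used in this article:

# on server A: watch outgoing ARP replies on each slave -- their sender MACs should differ
tcpdump -e -n -i eth0 arp
tcpdump -e -n -i eth1 arp

# on clients B and C: after the replies arrive, each should cache a different MAC for 1.1.1.1
arp -n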

