Linux Dual-NIC Bonding

Source: Internet
Author: User

1 What Is Bonding

NIC bonding is a common technique in production environments: multiple physical NICs are bound together into one logical NIC, providing redundancy, bandwidth aggregation, and load balancing for the host's network links. Kernel 2.4.12 and later ship with the bonding module; earlier versions can add it with a patch. You can check whether your kernel supports bonding with the following command:

[[email protected] network-scripts]# cat /boot/config-2.6.32-573.el6.x86_64 | grep -i bonding
CONFIG_BONDING=m
[[email protected] network-scripts]#

2 Bonding Modes

Two bonding modes are used most often:

mode=0 (balance-rr)

Round-robin load sharing: packets are sent out over the slaves in turn, e.g. the first packet goes out eth0, the second out eth1, and so on until transmission completes.

Advantage: roughly doubles the available bandwidth with two NICs.

Disadvantage: the connected switch must be configured for static port aggregation; otherwise the bond may not work correctly.
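On a running system you can confirm which mode a bond is actually using from the bonding driver's proc entry (a sketch, assuming the bond is named bond0):

```shell
# For mode=0 this prints "Bonding Mode: load balancing (round-robin)"
grep 'Bonding Mode' /proc/net/bonding/bond0
```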

mode=1 (active-backup)

Active/standby mode: only one NIC is active at any given time.

Advantage: high redundancy.

Disadvantage: low link utilization; only one of the two NICs carries traffic at a time.
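In active-backup mode the current active slave can be inspected, and even switched by hand, through sysfs (a sketch; this works only for mode=1 and assumes a bond0 with slaves eth0/eth1):

```shell
# Show which slave currently carries the traffic
cat /sys/class/net/bond0/bonding/active_slave
# Manually promote eth1 to active (mode=1 only, requires root)
echo eth1 > /sys/class/net/bond0/bonding/active_slave
```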

The other bond modes:

mode=2 (balance-xor) (XOR policy)

XOR hash load sharing; the switch-side aggregation must be forced (non-negotiated). (Requires xmit_hash_policy; the switch needs a port channel configured.)

Feature: transmits packets according to the specified transmit hash policy. The default policy is (source MAC address XOR destination MAC address) % slave count; other policies can be selected with the xmit_hash_policy option. This mode provides load balancing and fault tolerance.
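As a worked illustration of the default policy, take two hypothetical last MAC-address bytes (the values below are made up, not from any host in this article); the XOR-modulo computation yields the index of the slave that carries the frame:

```shell
# Hypothetical last bytes of the source and destination MAC addresses
src=0x51
dst=0x1e
nslaves=2
# (source MAC XOR destination MAC) % slave count -> outgoing slave index
echo $(( (src ^ dst) % nslaves ))
```

All frames of one source/destination MAC pair hash to the same slave, which is why a single flow never exceeds one interface's bandwidth in this mode.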

mode=3 (broadcast) (broadcast policy)

Every packet is sent out of every network interface. This provides no load balancing, only redundancy, and is wasteful of bandwidth. It suits environments such as the financial industry that require highly reliable networks and cannot tolerate any outage. The switch-side aggregation must be forced (non-negotiated).

Feature: each packet is transmitted on every slave interface; this mode provides fault tolerance.

mode=4 (802.3ad) (IEEE 802.3ad dynamic link aggregation)

Uses the 802.3ad (LACP) protocol; the switch aggregation must run in LACP mode (requires xmit_hash_policy). The standard requires all devices in an aggregation group to operate at the same speed and duplex, and, as with every bonding load-balancing mode other than balance-rr, no single connection can use more than one interface's bandwidth.

Feature: creates an aggregation group whose members share the same speed and duplex settings. Multiple slaves operate within the same active aggregator according to the 802.3ad specification. Slave selection for outgoing traffic is based on the transmit hash policy, which can be changed from the default XOR policy with the xmit_hash_policy option. Note that not all transmit policies are 802.3ad-compliant, particularly with respect to the packet-reordering issue discussed in section 43.2.4 of the 802.3ad standard; different implementations tolerate this to different degrees.


Condition 1: ethtool supports reading each slave's speed and duplex settings.

Condition 2: the switch supports IEEE 802.3ad dynamic link aggregation.

Condition 3: most switches require specific configuration to enable 802.3ad mode.
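A minimal mode=4 module configuration might look like the following, under the assumption that the switch ports are already grouped into an LACP port channel (the file name is hypothetical; lacp_rate and xmit_hash_policy are standard bonding options):

```shell
# /etc/modprobe.d/bonding.conf (hypothetical file name)
alias bond0 bonding
# lacp_rate=fast requests LACPDUs from the partner every second;
# xmit_hash_policy=layer2+3 hashes on MAC and IP so flows spread across slaves
options bond0 mode=4 miimon=100 lacp_rate=fast xmit_hash_policy=layer2+3
```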

mode=5 (balance-tlb) (adaptive transmit load balancing)

Outgoing traffic is assigned to a slave according to each slave's current load; incoming traffic is received on the currently designated slave. This mode requires ethtool support in each slave interface's network driver, and ARP monitoring cannot be used.

Features: no special switch support for channel bonding is required. Outgoing traffic is distributed across the slaves according to their current load (computed relative to each slave's speed). If the slave that is receiving traffic fails, another slave takes over the failed slave's MAC address.


Prerequisite: ethtool support for reading each slave's speed.
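This ethtool prerequisite can be checked per slave before choosing balance-tlb (a sketch; the interface names are assumed):

```shell
# Each slave must report a concrete Speed/Duplex for TLB to weigh it
for nic in eth0 eth1; do
    echo "== $nic =="
    ethtool "$nic" | grep -E 'Speed|Duplex'
done
```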

mode=6 (balance-alb) (adaptive load balancing)

Adds receive load balancing (RLB) on top of mode 5's transmit load balancing (TLB). No switch support is required; receive load balancing is implemented through ARP negotiation.

Features: this mode includes everything in balance-tlb, plus receive load balancing (RLB) for IPv4 traffic, and requires no switch support. Receive load balancing is implemented through ARP negotiation: the bonding driver intercepts ARP replies sent by the local host and rewrites the source hardware address with the unique hardware address of one of the slaves in the bond, so that different peers communicate with different hardware addresses. Receive traffic from the server side is balanced as well: when the local host sends an ARP request, the bonding driver copies and saves the peer's IP information from the ARP packet; when the peer's ARP reply arrives, the bonding driver extracts its hardware address and sends an ARP reply from one of the slaves in the bond.

One problem with ARP-based load balancing is that every ARP request broadcast carries the bond's hardware address, so once peers learn that address, all incoming traffic flows to the current slave. This is resolved by sending updates (ARP replies) to all peers containing their assigned unique hardware address, which redistributes the traffic. Incoming traffic is also redistributed when a new slave is added to the bond or an inactive slave is reactivated. Received load is distributed sequentially (round-robin) starting from the fastest slave; when a link reconnects or a new slave joins the bond, receive traffic is redistributed across all currently active slaves by sending an ARP reply with the assigned MAC address to each client. The updelay parameter described below must be set to a value greater than or equal to the switch's forwarding delay, so that the ARP replies destined for the peers are not blocked by the switch.

Bond mode summary:

Modes 5 and 6 require no switch-side configuration; the NICs aggregate automatically. Mode 4 requires the switch to support 802.3ad (LACP). Modes 0, 2, and 3 in principle require static aggregation on the switch side.

3 Configuring Bond

Test environment:

[[email protected] ~]# cat /etc/redhat-release
CentOS release 6.7 (Final)
[[email protected] ~]# uname -r
2.6.32-573.el6.x86_64
[[email protected] ~]#

1. Configure the physical NICs

[[email protected] network-scripts]# cat ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes        # optional; without it you must run `ifenslave bond0 eth0 eth1` at boot
[[email protected] network-scripts]#
[[email protected] network-scripts]# cat ifcfg-eth1
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
[[email protected] network-scripts]#

2. Configure the logical NIC Bond0

[[email protected] network-scripts]# cat ifcfg-bond0      # this file must be created manually
DEVICE=bond0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=
[[email protected] network-scripts]#

Since no configuration file for bond0 exists yet, we can copy ifcfg-eth1 as a starting point: cp ifcfg-{eth1,bond0}

3. Load the module so the system supports bonding

[[email protected] ~]# cat /etc/modprobe.conf      # create it manually if it does not exist (it can also go under /etc/modprobe.d/)
alias bond0 bonding
options bond0 miimon=100 mode=0
[[email protected] ~]#

This configures bond0 with a link-check interval (miimon) of 100 ms and mode 0.
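Once the bond is up, these module options can be read back at runtime through sysfs (a sketch; it assumes bond0 already exists):

```shell
# Read back the active mode and MII polling interval for bond0
cat /sys/class/net/bond0/bonding/mode     # e.g. "balance-rr 0"
cat /sys/class/net/bond0/bonding/miimon   # e.g. "100"
```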


The active-backup experiment with Linux NIC bonding works fine on physical machines (as long as the kernel supports it), but it is problematic inside VMware Workstation virtual machines.

After the configuration above, bond0 starts and passes traffic normally, but the backup behavior does not work: when eth0 is brought down with ifdown eth0, the network goes offline.

The kernel documentation describes two ways the bond obtains its MAC address. By default, bond0 takes the MAC address of the first active NIC and assigns that same address to the remaining slave NICs. Alternatively, with the fail_over_mac parameter, bond0 follows the MAC address of whichever NIC is currently active.

Since VMware Workstation does not support the first method, we can use the fail_over_mac=1 parameter, so we add fail_over_mac=1 here:

[[email protected] etc]# cat /etc/modprobe.d/modprobe.conf
alias bond0 bonding
options bond0 miimon=100 mode=0 fail_over_mac=1
[[email protected] etc]#

4. Load Bond Module

[[email protected] etc]# modprobe bonding
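After loading the module, the network service has to be restarted so the ifcfg files are re-read and eth0/eth1 are enslaved (a sketch for this CentOS 6 setup):

```shell
# Load the bonding driver, then re-read the ifcfg files
modprobe bonding
service network restart
# bond0 should now be up with both slaves attached
ip addr show bond0
```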

5. View Binding Results

[[email protected] etc]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:50:56:28:7f:51
Slave queue ID: 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:50:56:29:9b:da
Slave queue ID: 0
[[email protected] etc]#

4 Test Bond

Since we are using mode=0 (round-robin load balancing), we ping baidu.com and then disconnect one NIC; the ping should continue without interruption.

[[email protected] etc]# ping baidu.com
PING ( bytes of data.
64 bytes from ttl=128 time=10.6 ms
64 bytes from ttl=128 time=9.05 ms
64 bytes from ttl=128 time=11.7 ms
64 bytes from ttl=128 time=7.93 ms
64 bytes from ttl=128 time=9.50 ms
64 bytes from ttl=128 time=7.17 ms
64 bytes from ttl=128 time=21.2 ms
64 bytes from ttl=128 time=7.46 ms
64 bytes from ttl=128 time=7.82 ms
64 bytes from ttl=128 time=8.15 ms
64 bytes from ttl=128 time=6.89 ms
64 bytes from icmp_seq=12 ttl=128 time=8.33 ms
64 bytes from ttl=128 time=8.65 ms
64 bytes from ttl=128 time=7.16 ms
64 bytes from ttl=128 time=9.31 ms
64 bytes from ttl=128 time=10.5 ms
64 bytes from ttl=128 time=7.61 ms
64 bytes from ttl=128 time=10.2 ms
^C
--- ping statistics ---
18 packets transmitted, 18 received, 0% packet loss, time 17443ms
rtt min/avg/max/mdev = 6.899/9.417/21.254/3.170 ms
# In another terminal, eth0 was manually brought down; the ping was not interrupted.
[[email protected] etc]# !ca
cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: down
Speed: Unknown
Duplex: Unknown
Link Failure Count: 1
Permanent HW addr: 00:50:56:28:7f:51
Slave queue ID: 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:50:56:29:9b:da
Slave queue ID: 0
[[email protected] etc]#

Checking bond0's status shows that eth0 is down, but the bond itself is still working.
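To watch a failover as it happens, one can poll the bonding status from a second terminal while downing an interface elsewhere (a sketch; the bond and interface names are assumed):

```shell
# Refresh the per-slave status every second; run `ifdown eth0` in another terminal
watch -n 1 "grep -A1 'Slave Interface' /proc/net/bonding/bond0"
```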
