Implementing Multiple NIC Bonding in Linux: An Introduction to the 7 Common Bond Modes


NIC bonding combines multiple physical network cards into a single logical interface to provide redundancy, additional bandwidth, and load balancing for the local NICs. It is a commonly used technique in application deployments; essentially all of our company's project servers use bonding, so I have summarized and organized the topic here for future reference.

Bond modes:

    1. mode=0 (balance-rr): round-robin load sharing; the switch-side aggregation must be configured statically (no negotiation).
    2. mode=1 (active-backup): active/standby mode; only one NIC is active while the other stands by. If the switch bundles the two ports, this mode will not work, because the switch would send half of the packets to the standby port, where they are discarded.
    3. mode=2 (balance-xor): XOR-hash load sharing; the switch-side aggregation must be configured statically (requires xmit_hash_policy).
    4. mode=3 (broadcast): every packet is transmitted on every interface; there is no load balancing, only redundancy. The switch-side aggregation must be configured statically.
    5. mode=4 (802.3ad): uses the 802.3ad (LACP) protocol; the switch-side aggregation must be in LACP mode (requires xmit_hash_policy).
    6. mode=5 (balance-tlb): outgoing traffic is distributed according to the current load on each slave; incoming traffic is received on the currently designated slave.
    7. mode=6 (balance-alb): adds receive load balancing (RLB) on top of mode 5's transmit load balancing (TLB).

Modes 5 and 6 need no switch-side configuration; the NICs aggregate automatically. Mode 4 requires a switch that supports 802.3ad. Modes 0, 2 and 3 theoretically require static aggregation on the switch;
in practice, however, mode 0 can fool an unconfigured switch through its shared MAC address, although received traffic will not be well balanced.
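The full list of modes and tunable parameters is documented by the driver itself; a quick way to review them (standard modinfo usage):

# List the parameters the bonding driver accepts, including all supported modes
modinfo bonding | grep parm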

Three modes are the most commonly used:

mode=0: load-balancing mode with automatic redundancy, but it requires switch support and configuration.

mode=1: automatic redundancy mode; when one link goes down, another link takes over automatically.

mode=6: load-balancing mode with automatic redundancy that requires no switch support or configuration.

It is important to note that to get load balancing with mode 0, setting options bond0 miimon=100 mode=0 is not enough by itself; the switch ports the NICs connect to must be specially configured (the two ports must be aggregated), because both bonded NICs use the same MAC address. Analyzing the principle (with bond running in mode 0):

Under mode 0, all NICs in the bond are changed to the same MAC address. If these NICs connect to the same switch, the switch's ARP table maps that MAC address to multiple ports, so to which port should the switch forward packets destined for that MAC address? Normally a MAC address is globally unique, and one MAC address appearing on multiple ports inevitably confuses the switch. Therefore, if a mode 0 bond connects to a single switch, those switch ports must be aggregated (Cisco calls this EtherChannel, Foundry calls it a port group); once aggregated, the ports in the aggregation are likewise bundled behind one MAC address. An alternative solution is to connect the two NICs to different switches.

No switch configuration is needed in mode 6, because the two bonded NICs use different MAC addresses.
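A quick way to observe this difference from the Linux side is to compare the slaves' MAC addresses via sysfs (interface names follow the eth0/eth1 convention used below):

# Under mode 0 both slaves report the bond's single MAC address;
# under mode 6 each slave keeps its own.
cat /sys/class/net/eth0/address
cat /sys/class/net/eth1/address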

Linux Network Port Binding

Network port bonding makes it easy to achieve port redundancy and load balancing, and thus high availability and reliability. Assumptions:

The two physical network ports are eth0 and eth1.

The bonded virtual port is bond0.

The server IP is 192.168.0.100.

The first step is to configure the settings files:

/etc/sysconfig/network-scripts/ifcfg-bond0

DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.0.100
NETMASK=255.255.255.0
NETWORK=192.168.0.0
BROADCAST=192.168.0.255
# BROADCAST is the broadcast address

/etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0
BOOTPROTO=none
MASTER=bond0
SLAVE=yes

/etc/sysconfig/network-scripts/ifcfg-eth1

DEVICE=eth1
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
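Once the bonding module from the next step is configured, the bond can also be brought up without a full network restart; a minimal sketch, assuming RHEL/CentOS-style initscripts (where bringing up the master also enslaves the interfaces marked SLAVE=yes):

# Bring up bond0; eth0/eth1 are enslaved via their MASTER/SLAVE settings
ifup bond0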

The second step is to modify the modprobe settings and load the bonding module:

1. Create a dedicated settings file, /etc/modprobe.d/bonding.conf, that loads bonding:

[root@localhost ~]# vi /etc/modprobe.d/bonding.conf
# append the following
alias bond0 bonding
options bonding mode=0 miimon=200
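For a quick one-off test, the same parameters can also be passed directly on the modprobe command line instead of through the settings file:

modprobe bonding mode=0 miimon=200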

2. Load the module (after a reboot the system loads it automatically; no manual step is needed):

[root@localhost ~]# modprobe bonding

3. Confirm that the module loaded successfully:

[root@localhost ~]# lsmod | grep bonding
bonding 100065 0

The third step is to restart the network and confirm the result:

[root@localhost ~]# /etc/init.d/network restart
[root@localhost ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.5.0 (November 4, 2008)
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth0
......

[root@localhost ~]# ifconfig | grep HWaddr
bond0  Link encap:Ethernet  HWaddr 00:16:36:1b:bb:74
eth0   Link encap:Ethernet  HWaddr 00:16:36:1b:bb:74
eth1   Link encap:Ethernet  HWaddr 00:16:36:1b:bb:74

The confirmation output above shows three important pieces of information:

1. The current bonding mode is active-backup.

2. The currently active port is eth0.

3. bond0 and eth1 report the same physical (MAC) address as the active eth0, which avoids confusing the upstream switch.

Unplug one of the network cables, then access the server to verify that the network is still reachable.
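A convenient way to watch the failover during this cable-pull test is to keep the bond status file on screen while pinging the server from another host (a simple sketch):

# Refresh the bond status every second; after pulling eth0's cable,
# "Currently Active Slave" should switch to eth1 and eth0's
# "MII Status" should change to "down"
watch -n1 cat /proc/net/bonding/bond0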

The fourth step is to bind automatically at system startup and add the default gateway:

[root@localhost ~]# vi /etc/rc.d/rc.local
# append the following
ifenslave bond0 eth0 eth1
route add default gw 192.168.0.1
# If the network already works, the route is unnecessary; change the
# gateway address (192.168.0.1) to match your environment.
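To confirm that the default gateway was added, inspect the routing table:

# The line with destination 0.0.0.0 should show gateway 192.168.0.1
route -n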

------------------------------------------------------------------------

Note: the above binds only two ports into a single bond0. If you want to set up multiple bond interfaces, e.g. physical ports eth0 and eth1 forming bond0, and eth2 and eth3 forming bond1,

then the network settings files are written the same way as in step 1 above, but /etc/modprobe.d/bonding.conf cannot simply be stacked like this:

alias bond0 bonding
options bonding mode=1 miimon=200
alias bond1 bonding
options bonding mode=1 miimon=200

There are two correct setup methods:

First: with this method, as you can see, all bond interfaces must share the same mode:

alias bond0 bonding
alias bond1 bonding
options bonding max_bonds=2 miimon=200 mode=1

Second: with this method, different bond interfaces can use different modes:

alias bond0 bonding
options bond0 miimon=100 mode=1
install bond1 /sbin/modprobe bonding -o bond1 miimon=200 mode=0

Take a closer look at these two setup methods; extending them to 3, 4, or even more bond interfaces should now be straightforward.
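Whichever method you choose, each bond interface gets its own status file, so the result is easy to verify (a sketch for the two-bond example above):

# Both bond0 and bond1 should be listed
ls /proc/net/bonding/
# Each status file reports its own bonding mode
grep "Bonding Mode" /proc/net/bonding/bond1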

PostScript: a brief explanation of the options parameters used when loading the bonding module:

miimon: the link-monitoring interval in milliseconds; we set it to 200 ms.

max_bonds: the number of bond interfaces to create.

mode: the bonding mode; in practice, modes 0 and 1 are used most often.
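These parameters can also be inspected at runtime through sysfs, which is handy for confirming that the options file was actually picked up:

# Prints the mode name and number, e.g. "active-backup 1"
cat /sys/class/net/bond0/bonding/mode
# Prints the monitoring interval in milliseconds, e.g. "200"
cat /sys/class/net/bond0/bonding/miimon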

Part two: testing in a virtual machine to verify the bonding effect

1. Bind two NICs

A production environment generally needs uninterrupted network service. NIC bonding not only increases the usable bandwidth, but also keeps the network working when one of the NICs fails. Simply put, if we bond two NICs, they transmit data together in normal operation, making network transfers faster; if one card suddenly fails, the other takes over automatically within about 0.1 seconds, so data transmission is not interrupted.

Step 1: add one more NIC device to the virtual machine, and make sure both NICs are in the same network mode; only devices in the same mode can be bonded, otherwise the two NICs cannot exchange data with each other.

Set both NIC devices to the same network mode

Step 2: use the Vim text editor to configure the bonding parameters for the NIC devices. NIC bonding is quite similar in theory to a RAID array: the devices participating in the bond need an "initial setup" first, so these formerly independent NICs no longer carry their own IP address and similar settings; they only need to support the bond device. Then the bond device itself, named bond0, is given the IP address and other details, so that when users access the service, the two NIC devices are in fact serving it together.

# Configure the two NICs
# First NIC
[root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-eno16777736
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
USERCTL=no
DEVICE=eno16777736
MASTER=bond0
SLAVE=yes

# Second NIC
[root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-eno33554968
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
USERCTL=no
DEVICE=eno33554968
MASTER=bond0
SLAVE=yes

# Configure the bond NIC (there is no default file; create it yourself)
[root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-bond0
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
USERCTL=no
DEVICE=bond0
IPADDR=192.168.10.10
PREFIX=24
DNS=192.168.10.1
NM_CONTROLLED=no

Step 3: make the kernel support the NIC bonding driver. There are three common bonding modes: mode 0, mode 1 and mode 6. Consider a file server providing NFS or Samba services: if it can offer at most gigabit transmission but has many simultaneous downloaders, the network pressure will be enormous. Or consider a network storage server providing iSCSI: in production, the reliability of its NICs is extremely important. In such cases, both the transmission rate and network resilience must be guaranteed, so the better choice is mode 6, because the balance-alb mode lets the two NICs work simultaneously and fails over automatically when one of them breaks, providing reliable network transmission without requiring switch support.

Use the Vim text editor to create the kernel driver file for the bonded NIC, enabling the bond0 device to support the bonding technique, defining the bond as mode 6 (balanced load), and setting the
automatic failover interval to 100 milliseconds:

[root@localhost ~]# vim /etc/modprobe.d/bond.conf
alias bond0 bonding
options bond0 miimon=100 mode=6

Step 4: restart the network service and the binding takes effect. Under normal circumstances, only the bond0 device carries the IP address and related information:

[root@localhost ~]# systemctl restart network
[root@localhost ~]# ifconfig
bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST>  mtu 1500
        inet 192.168.10.10  netmask 255.255.255.0  broadcast 192.168.10.255
        inet6 fe80::20c:29ff:fe9c:637d  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:9c:63:7d  txqueuelen 0  (Ethernet)
        RX packets 700  bytes 82899 (80.9 KiB)
        RX errors 0  dropped 6  overruns 0  frame 0
        TX packets 588  bytes 40260 (39.3 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
eno16777736: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether 00:0c:29:9c:63:73  txqueuelen 1000  (Ethernet)
        RX packets 347  bytes 40112 (39.1 KiB)
        RX errors 0  dropped 6  overruns 0  frame 0
        TX packets 263  bytes 20682 (20.1 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
eno33554968: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether 00:0c:29:9c:63:7d  txqueuelen 1000  (Ethernet)
        RX packets 353  bytes 42787 (41.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 325  bytes 19578 (19.1 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

We can run ping 192.168.10.10 on the local host to check network connectivity, then suddenly remove one NIC device from the virtual machine's hardware configuration; the NIC switchover is clearly visible (at most 1 packet is lost).

[root@localhost ~]# ping 192.168.10.10
PING 192.168.10.10 (192.168.10.10) 56(84) bytes of data.
64 bytes from 192.168.10.10: icmp_seq=1 ttl=64 time=0.109 ms
64 bytes from 192.168.10.10: icmp_seq=2 ttl=64 time=0.102 ms
64 bytes from 192.168.10.10: icmp_seq=3 ttl=64 time=0.066 ms
ping: sendmsg: Network is unreachable
64 bytes from 192.168.10.10: icmp_seq=5 ttl=64 time=0.065 ms
64 bytes from 192.168.10.10: icmp_seq=6 ttl=64 time=0.048 ms
64 bytes from 192.168.10.10: icmp_seq=7 ttl=64 time=0.042 ms
64 bytes from 192.168.10.10: icmp_seq=8 ttl=64 time=0.079 ms
^C
--- 192.168.10.10 ping statistics ---
8 packets transmitted, 7 received, 12% packet loss, time 7006ms
rtt min/avg/max/mdev = 0.042/0.073/0.109/0.023 ms

Note: the ifconfig command may be absent on a CentOS system; it can be installed via Yum:

yum install net-tools -y
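Alternatively, the iproute2 tools, which are installed by default, report the same information:

# Modern replacements for ifconfig
ip addr show bond0
ip -s link show bond0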

