Linux NIC Bonding configuration

Chapters
    1. Bonding technology
    2. CentOS 7 bonding configuration
    3. CentOS 6 bonding configuration

First, bonding technology

Bonding is a Linux NIC bonding technology: it lets you abstract (bind) N physical network cards in a server into a single logical network card, which improves network throughput, provides network redundancy, enables load balancing, and brings other benefits.

Bonding is implemented at the kernel level of the Linux system as a kernel module (driver). Using it requires that the system has this module; you can check the module's information with the modinfo command, and it is generally available.

modinfo bonding
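
For example, you can filter the modinfo output to confirm the module is present and see which parameters (mode, miimon, and so on) it accepts; the exact version and parameter list depend on your kernel:

modinfo bonding | grep -E '^(filename|version|parm)'    # show where the module lives, its version, and its tunable parameters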

The seven bonding modes of operation:

Bonding offers seven modes of operation; the mode must be specified when bonding is used, and each mode has its own pros and cons.

    1. balance-rr (mode=0): the default; provides high availability (fault tolerance) and load balancing, requires switch configuration, and sends traffic over each NIC in round-robin order (traffic distribution is fairly balanced).
    2. active-backup (mode=1): provides only high availability (fault tolerance) and does not require switch configuration. Only one NIC works at a time, and only one MAC address is presented externally. Its disadvantage is low port utilization.
    3. balance-xor (mode=2): not commonly used.
    4. broadcast (mode=3): not commonly used.
    5. 802.3ad (mode=4): IEEE 802.3ad dynamic link aggregation; requires switch configuration (LACP support).
    6. balance-tlb (mode=5): not commonly used.
    7. balance-alb (mode=6): provides high availability (fault tolerance) and load balancing and does not require switch configuration (traffic distribution across the interfaces is not perfectly balanced).

There is plenty of detailed information online; understand the characteristics of each mode and choose according to your own needs. Modes 0, 1, 4, and 6 are the most commonly used.
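
As a quick illustration (assuming the bond interface is named bond0, as in the examples that follow), the mode a running bond is using can be read back from sysfs:

cat /sys/class/net/bond0/bonding/mode    # prints the mode name and number, e.g. "balance-alb 6"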

Second, CentOS 7 bonding configuration

Environment:

System: CentOS 7
NICs: em1, em2
bond0: 172.16.0.183
Load mode: mode6 (adaptive load balancing)

The server has two physical NICs, em1 and em2, which are bound into one logical NIC, bond0, using bonding mode 6.

Note: The IP address is configured on bond0; the physical NICs do not need IP addresses.

1. Stop and disable the NetworkManager service

systemctl stop NetworkManager.service       # stop the NetworkManager service
systemctl disable NetworkManager.service    # disable NetworkManager from starting at boot

PS: Be sure to stop and disable it; if NetworkManager is left running it will interfere with bonding.
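
As an optional sanity check (not part of the original steps), you can confirm the service is now inactive and will not start at boot:

systemctl is-active NetworkManager.service     # expect "inactive" (or an error if the unit is absent)
systemctl is-enabled NetworkManager.service    # expect "disabled"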

2. Load Bonding Module

modprobe --first-time bonding

No output means the module loaded successfully. If you see modprobe: ERROR: could not insert 'bonding': Module already in kernel, the module was already loaded and nothing further is needed.

You can also use lsmod | grep bonding to check whether the module is loaded:

lsmod | grep bonding
bonding               136705  0

3. Create a configuration file for the bond0 interface

vim /etc/sysconfig/network-scripts/ifcfg-bond0

Modify it as follows, depending on your situation:

DEVICE=bond0
TYPE=Bond
IPADDR=172.16.0.183
NETMASK=255.255.255.0
GATEWAY=172.16.0.1
DNS1=114.114.114.114
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
BONDING_MASTER=yes
BONDING_OPTS="mode=6 miimon=100"

The line BONDING_OPTS="mode=6 miimon=100" above means that the bond operates in mode 6 (adaptive load balancing), and miimon sets how often the link is monitored, in milliseconds; here it is set to 100 ms. Depending on your needs, you can also set mode to a different load mode.
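
For reference, a couple of other commonly used BONDING_OPTS values are sketched below; these are illustrative only, and for mode 4 the lacp_rate and xmit_hash_policy settings must match what your switch is configured for:

BONDING_OPTS="mode=1 miimon=100"                                          # active-backup, no switch configuration needed
BONDING_OPTS="mode=4 miimon=100 lacp_rate=1 xmit_hash_policy=layer3+4"    # 802.3ad, requires LACP configured on the switch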

4. Modify the configuration file of the em1 interface

vim /etc/sysconfig/network-scripts/ifcfg-em1

Modify it to read as follows:

DEVICE=em1
USERCTL=no
ONBOOT=yes
MASTER=bond0        # must match the DEVICE value in the ifcfg-bond0 configuration file above
SLAVE=yes
BOOTPROTO=none

5. Modify the configuration file of the em2 interface

vim /etc/sysconfig/network-scripts/ifcfg-em2

Modify it to read as follows:

DEVICE=em2
USERCTL=no
ONBOOT=yes
MASTER=bond0        # must match the DEVICE value in the ifcfg-bond0 configuration file above
SLAVE=yes
BOOTPROTO=none

6. Testing

Restart the network service:

systemctl restart network

View the bond0 interface's status information (if this reports an error, it most likely means the bond0 interface is not up):

# cat /proc/net/bonding/bond0
Bonding Mode: adaptive load balancing          # bonding mode: ALB (mode 6), i.e. high availability plus load balancing
Primary Slave: None
Currently Active Slave: em1
MII Status: up                                 # link status: up (MII = Media Independent Interface)
MII Polling Interval (ms): 100                 # link monitoring interval, here 100 ms
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: em1                           # first slave interface: em1
MII Status: up
Speed: 1000 Mbps                               # port speed is 1000 Mbps
Duplex: full                                   # full duplex
Link Failure Count: 0                          # number of link failures: 0
Permanent HW addr: 84:2b:2b:6a:76:d4           # permanent MAC address
Slave queue ID: 0

Slave Interface: em2                           # second slave interface: em2
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 84:2b:2b:6a:76:d5           # permanent MAC address
Slave queue ID: 0
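
The same details can also be read individually from sysfs, which is handy in scripts (assuming the bond is up and named bond0):

cat /sys/class/net/bond0/bonding/active_slave    # currently active slave (em1 here)
cat /sys/class/net/bond0/bonding/slaves          # all enslaved interfaces (em1 em2)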

Check the interface information with the ifconfig command:

# ifconfig
bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST>  mtu 1500
        inet 172.16.0.183  netmask 255.255.255.0  broadcast 172.16.0.255
        inet6 fe80::862b:2bff:fe6a:76d4  prefixlen 64  scopeid 0x20<link>
        ether 84:2b:2b:6a:76:d4  txqueuelen 0  (Ethernet)
        RX packets 11183  bytes 1050708 (1.0 MiB)
        RX errors 0  dropped 5152  overruns 0  frame 0
        TX packets 5329  bytes 452979 (442.3 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

em1: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether 84:2b:2b:6a:76:d4  txqueuelen 1000  (Ethernet)
        RX packets 3505  bytes 335210 (327.3 KiB)
        RX errors 0  dropped 1  overruns 0  frame 0
        TX packets 2852  bytes 259910 (253.8 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

em2: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether 84:2b:2b:6a:76:d5  txqueuelen 1000  (Ethernet)
        RX packets 5356  bytes 495583 (483.9 KiB)
        RX errors 0  dropped 4390  overruns 0  frame 0
        TX packets 1546  bytes 110385 (107.7 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
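
As a simple connectivity check, you can ping out through the bond toward the gateway from the configuration above (172.16.0.1 in this example):

ping -c 3 -I bond0 172.16.0.1    # send 3 probes out of the bond0 interface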

To test high availability, we unplug one of the network cables; a software-only way to simulate the same failure is sketched after the list below. The conclusions are:

    • In mode=6, about 1 packet is lost when the cable is pulled, and about 5-6 packets are lost when the network is restored (cable plugged back in). High availability works, but recovery causes a few extra drops.
    • In mode=1, about 1 packet is lost when the cable is pulled, and essentially nothing is lost when the network is restored (cable plugged back in), so both failover and recovery behave well.
    • Mode 6 performs very well except for the packet loss during failure recovery; if you can live with that, it is a good choice. Mode 1 fails over and recovers very quickly with essentially no packet loss or delay, but its port utilization is low, because in this active-backup mode only one NIC is working at a time.
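
If physically unplugging a cable is inconvenient, a rough software-only simulation is to administratively down one slave while a continuous ping runs in another terminal. This is only an approximation, since the driver may not react exactly as it does to a real cable pull:

# Terminal 1: keep a continuous ping running toward the gateway (Ctrl-C to stop)
ping 172.16.0.1

# Terminal 2: take one slave down, wait a moment, then bring it back up
ip link set em1 down
sleep 10
ip link set em1 up

# Watch the active slave and the link failure counters change while doing this
cat /proc/net/bonding/bond0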

Third, CentOS 6 bonding configuration

Configuring bonding on CentOS 6 is basically the same as on CentOS 7 above, but the configuration differs slightly.

System: CentOS 6
NICs: em1, em2
bond0: 172.16.0.183
Load mode: mode1 (active-backup)    # here the load mode is 1, i.e. the active-backup mode

1. Stop and disable the NetworkManager service

service NetworkManager stop
chkconfig NetworkManager off

PS: If NetworkManager is installed, stop and disable it; if the command reports that the service is not installed, you can ignore it.
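
An optional check (not in the original steps) that the service really is stopped and will stay off across reboots:

service NetworkManager status       # should report stopped, or complain that the service is unrecognized
chkconfig --list NetworkManager     # every runlevel should show "off" if the service is installed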

2. Load Bonding Module

modprobe --first-time bonding

3. Create a configuration file for the bond0 interface

vim /etc/sysconfig/network-scripts/ifcfg-bond0

Modify the following (depending on your needs):

DEVICE=bond0
TYPE=Bond
BOOTPROTO=none
ONBOOT=yes
IPADDR=172.16.0.183
NETMASK=255.255.255.0
GATEWAY=172.16.0.1
DNS1=114.114.114.114
USERCTL=no
BONDING_OPTS="mode=1 miimon=100"

4. Map the bond0 interface to the bonding kernel module

vi /etc/modprobe.d/bonding.conf

Modify it to read as follows:

alias bond0 bonding
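
Some CentOS 6 setups also put the module options in this same file instead of using BONDING_OPTS in ifcfg-bond0; this alternative is shown only for illustration (use one approach or the other, not both):

options bonding mode=1 miimon=100    # module options; an alternative to BONDING_OPTS="mode=1 miimon=100"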

5. Edit the em1 and em2 interface files

vim /etc/sysconfig/network-scripts/ifcfg-em1

Modify it to read as follows:

DEVICE=em1
MASTER=bond0
SLAVE=yes
USERCTL=no
ONBOOT=yes
BOOTPROTO=none

vim /etc/sysconfig/network-scripts/ifcfg-em2

Modify it to read as follows:

DEVICE=em2
MASTER=bond0
SLAVE=yes
USERCTL=no
ONBOOT=yes
BOOTPROTO=none

6. Load the module, restart the network, and test

modprobe bonding
service network restart

View the status of the bond0 interface:

cat /proc/net/bonding/bond0
Bonding Mode: fault-tolerance (active-backup)    # the bond0 interface is currently in active-backup mode
Primary Slave: None
Currently Active Slave: em2
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: em1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 2
Permanent HW addr: 84:2b:2b:6a:76:d4
Slave queue ID: 0

Slave Interface: em2
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 84:2b:2b:6a:76:d5
Slave queue ID: 0

Looking at the interfaces with the ifconfig command, you will see that in mode=1 all of the MAC addresses are identical, which shows that a single MAC address is presented to the outside:

ifconfig
bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST>  mtu 1500
        inet6 fe80::862b:2bff:fe6a:76d4  prefixlen 64  scopeid 0x20<link>
        ether 84:2b:2b:6a:76:d4  txqueuelen 0  (Ethernet)
        RX packets 147436  bytes 14519215 (13.8 MiB)
        RX errors 0  dropped 70285  overruns 0  frame 0
        TX packets 10344  bytes 970333 (947.5 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

em1: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether 84:2b:2b:6a:76:d4  txqueuelen 1000  (Ethernet)
        RX packets 63702  bytes 6302768 (6.0 MiB)
        RX errors 0  dropped 64285  overruns 0  frame 0
        TX packets 344  bytes 35116 (34.2 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

em2: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether 84:2b:2b:6a:76:d4  txqueuelen 1000  (Ethernet)
        RX packets 65658  bytes 6508173 (6.2 MiB)
        RX errors 0  dropped 6001  overruns 0  frame 0
        TX packets 1708  bytes 187627 (183.2 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
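
A quicker way to compare just the MAC addresses (the iproute tools are available by default on CentOS 6) is:

ip link show bond0 | grep link/ether    # the bond's MAC address
ip link show em1 | grep link/ether      # identical to the bond's in mode=1
ip link show em2 | grep link/ether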

Perform a high-availability test: unplug one of the network cables and observe packet loss and latency, then plug the cable back in (simulating failure recovery) and observe packet loss and latency again.
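
One convenient way to watch the failover live during this test (an optional aid, not part of the original procedure):

watch -n 1 cat /proc/net/bonding/bond0    # the active slave and link failure counters update as the link goes down and comes back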

Some references:

http://www.tuicool.com/articles/b6ZVNr

http://www.cnblogs.com/dkblog/p/3613407.html (the seven bonding modes)

Transferred from: https://www.cnblogs.com/huangweimin/articles/6527058.html
