By bundling multiple Linux network ports into one, you can relieve network bottlenecks. Consider a backup server that must back up several terabytes of data in a single night: a single gigabit network port becomes a serious bottleneck. Other applications, such as FTP servers and high-load download sites, face similar problems.
Therefore, using Linux teaming or bonding to bind multiple network cards into one logical network port with a single IP address can significantly increase the server's network throughput (I/O).
Linux's multi-NIC binding feature is implemented by the kernel's "bonding" module; refer to the Linux Ethernet Bonding Driver documentation for the details of this module.
The stock kernels shipped with current Linux distributions already include this module, so in most cases there is no need to recompile the kernel.
The Linux bonding driver provides the ability to bind/aggregate multiple network cards into one virtual logical network port. Note that a bonded port supports a variety of operating modes;
in general, they fall into hot standby (active-backup) and load balancing. Configuration is relatively easy on Red Hat/Fedora and other Red Hat-like Linux distributions.
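Before starting, it is worth confirming that the module is actually present on your system; a quick check (modinfo is part of the standard module tools):
# Confirm the bonding module ships with the running kernel and list its parameters
modinfo bonding | grep -E '^(filename|parm)'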
1. Create a bond0 configuration file
vi /etc/sysconfig/network-scripts/ifcfg-bond0
[root@localhost network-scripts]# less ifcfg-bond0
DEVICE=bond0
#HWADDR=00:10:18:d8:62:58
TYPE=Ethernet
#UUID=85f545ec-5add-48e0-bcc6-3699a2202972
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
IPADDR=192.168.2.10
NETMASK=255.255.255.0
GATEWAY=192.168.2.1
MTU=9000
BONDING_OPTS="mode=4 miimon=100"
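The same file can also be created non-interactively; a minimal sketch using a heredoc with the values above (adjust IPADDR, GATEWAY, and MTU for your network; MTU=9000 jumbo frames also require switch support):
cat > /etc/sysconfig/network-scripts/ifcfg-bond0 <<'EOF'
DEVICE=bond0
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
IPADDR=192.168.2.10
NETMASK=255.255.255.0
GATEWAY=192.168.2.1
MTU=9000
BONDING_OPTS="mode=4 miimon=100"
EOF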
2. Modify the configuration files of the slave NICs to be bound (eth2 through eth5 in this example)
[root@localhost network-scripts]# cat ifcfg-eth2
DEVICE=eth2
HWADDR=00:10:18:d8:62:58
TYPE=Ethernet
UUID=85f545ec-5add-48e0-bcc6-3699a2202972
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
MTU=9000
[root@localhost network-scripts]# cat ifcfg-eth3
DEVICE=eth3
#HWADDR=00:10:18:d8:62:58
TYPE=Ethernet
UUID=85f545ec-5add-48e0-bcc6-3699a2202972
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
MTU=9000
[root@localhost network-scripts]# cat ifcfg-eth4
DEVICE=eth4
#HWADDR=00:10:18:d8:62:58
TYPE=Ethernet
UUID=85f545ec-5add-48e0-bcc6-3699a2202972
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
MTU=9000
[root@localhost network-scripts]# cat ifcfg-eth5
DEVICE=eth5
#HWADDR=00:10:18:d8:62:58
TYPE=Ethernet
UUID=85f545ec-5add-48e0-bcc6-3699a2202972
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
MTU=9000
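When configuring several slaves, a sketch that generates all four files in one pass may save typing (the HWADDR/UUID lines are omitted here; add them per NIC if your setup relies on them):
for nic in eth2 eth3 eth4 eth5; do
cat > /etc/sysconfig/network-scripts/ifcfg-$nic <<EOF
DEVICE=$nic
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
MTU=9000
EOF
done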
3. Load the bonding module driver
Edit /etc/modprobe.conf (or /etc/modules.conf on older systems) and add the following so the system loads the bonding driver at boot.
If /etc/modprobe.conf does not exist, create it with vi.
alias bond0 bonding
options bond0 miimon=100 mode=1
Notes:
1) miimon=100 enables link monitoring: the link state is checked every 100 ms. Bonding only monitors the link between the host and the switch; if a failure occurs beyond the switch while the host-to-switch link itself is fine, bonding still considers the link healthy and keeps using it.
2) mode=1 is active-backup, which provides redundancy (hot standby). The driver supports seven modes in total (0 through 6); for example, mode 0 (balance-rr) provides round-robin load balancing, and mode 4 is 802.3ad/LACP dynamic link aggregation, the mode set via BONDING_OPTS in step 1. On RHEL 6 and later, Red Hat recommends setting these parameters through BONDING_OPTS in ifcfg-bond0 rather than in modprobe.conf.
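Once bond0 is up, the effective mode and polling interval can be read back through sysfs; a quick check (these paths are provided by the bonding driver itself):
cat /sys/class/net/bond0/bonding/mode # prints e.g. "802.3ad 4" or "active-backup 1"
cat /sys/class/net/bond0/bonding/miimon # prints e.g. "100"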
4. Add the following to the /etc/rc.d/rc.local file so the commands run automatically at boot
ifenslave bond0 eth2 eth3 eth4 eth5
route add -net 192.168.1.0 netmask 255.255.255.0 dev bond0 # add this route only if needed
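To confirm the address and route came up after boot, a couple of quick checks (ip is standard on any current distribution):
ip addr show bond0 # bond0 should be UP with the configured IP and MTU
ip route show dev bond0 # lists routes bound to bond0, including the optional one above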
5. Detect and verify the configuration
First load the bonding module: modprobe bonding
Restart the network service and confirm that bond0 starts correctly: service network restart
Verify that the device loaded correctly: less /proc/net/bonding/bond0
[root@localhost network-scripts]# less /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: slow
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
        Aggregator ID: 15
        Number of ports: 1
        Actor Key: 17
        Partner Key: 1
        Partner Mac Address: 00:00:00:00:00:00

Slave Interface: eth2
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 5
Permanent HW addr: 00:10:18:d8:62:58
Aggregator ID: 12
Slave queue ID: 0

Slave Interface: eth3
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr: 00:10:18:d8:62:5a
Aggregator ID: 13
Slave queue ID: 0

Slave Interface: eth4
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 2
Permanent HW addr: 00:10:18:d8:62:5c
Aggregator ID: 14
Slave queue ID: 0

Slave Interface: eth5
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:10:18:d8:62:5e
Aggregator ID: 15
Slave queue ID: 0
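Note that in the output above each slave reports a different Aggregator ID, the active aggregator holds only one port (Number of ports: 1), and the Partner Mac Address is all zeros; that pattern usually means the switch has not negotiated LACP yet, so for mode 4 check the switch-side port-channel configuration. A few more checks worth running (standard tools; substitute your own interface names):
cat /sys/class/net/bond0/bonding/slaves # should list eth2 eth3 eth4 eth5
ip addr show bond0 # confirms the IP address and MTU
ethtool eth2 | grep -E 'Speed|Duplex|Link' # per-slave link details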
At this point, the bonding setup is basically complete.
This article is from the "Zoo" blog; when reprinting, please keep the source: http://zuopiezi.blog.51cto.com/4831427/1548329