Linux Multi-NIC Binding (bonding)


Linux Multi-NIC Binding Overview

This article was tested on OS release 6.4; four NICs are bonded into a single bond interface.


Linux NIC binding virtualizes two (or more) physical network cards into a single one. The aggregated device appears to the system as an ordinary Ethernet interface; in plain terms, the member NICs share the same IP address and work in parallel as one aggregated logical link. The technique has long existed elsewhere: Sun calls it Trunking and Cisco calls it EtherChannel. It appeared in the Linux 2.4.x kernel, where it is known as bonding.
Bonding was originally developed to improve data transmission between nodes in Beowulf clusters. To understand how bonding works, start with the NIC's promiscuous (PROMISC) mode. Normally a network card accepts only Ethernet frames whose destination hardware address (MAC address) matches its own, filtering out all other frames to reduce the load on the driver. A NIC also supports promiscuous mode, in which it accepts every frame on the wire; tools such as tcpdump run in this mode. Bonding also runs in promiscuous mode: the driver rewrites the MAC addresses of the member NICs so they are identical, frames destined for that MAC are accepted on either card, and the received frames are then handed to the bond driver for processing.
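As an aside, you can check and toggle promiscuous mode yourself with iproute2 (a quick illustration; eth0 here is only an example interface name). The first command lists the interface flags, where PROMISC appears when the mode is on:

# ip link show eth0
# ip link set eth0 promisc on
# ip link set eth0 promisc off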
Note that you cannot simply assign the same IP address to two NICs directly. Kernels 2.4.12 and later ship the bonding module; earlier versions need a patch. You can check whether your kernel supports bonding with the following command:

$ cat /boot/config-2.6.32-431.el6.x86_64 | grep -i CONFIG_BONDING
CONFIG_BONDING=m
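If the config file for your running kernel is not present, querying the module itself is an alternative check (a hedged suggestion; paths and packaging vary by distribution):

# modinfo bonding | head -5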

The purpose of NIC binding

Network Load Balancing

Bonding for network load balancing is commonly used on file servers: for example, three NICs are used as a single one, to solve the problem of one IP address carrying too much traffic and putting too much pressure on the server's network. For file servers such as NFS or Samba, an administrator cannot relieve network load by giving the intranet file server many IP addresses; for ease of management and use, an intranet file server mostly runs on a single IP address. On a 100 Mbps LAN the network pressure is considerable, especially when a Samba or NFS server is used by many users at the same time. A network cable and a network card have finite throughput, so to break through the traffic limit of a single IP address with limited resources, the best way to achieve network load balancing is bonding.
Network redundancy

For a server, the stability of its network equipment also matters, the NIC especially. In a production system, most of the reliability and safety of a server comes from redundant hardware, such as dual power supplies. Bonding provides that same redundancy for network cards: several NICs are bound to one IP address, and when one card suffers physical damage, another NIC continues to provide normal service.

Dual-NIC binding setup
1. Edit the virtual network interface configuration file and assign the IP address
Assume that eth0 is the NIC already serving external traffic and fully configured, and eth1 is the NIC that should provide service alongside it. (On the test machine below, the interfaces are actually named em1 through em4.)

# cd /etc/sysconfig/network-scripts/

# vi ifcfg-bond0

The contents are much like the original ifcfg-eth0, so the easiest approach is to copy ifcfg-eth0 and edit the copy:

# cp ifcfg-eth0 ifcfg-bond0

Modify ifcfg-bond0 roughly as follows:

DEVICE=bond0
IPADDR=10.199.xx.xx
NETMASK=255.255.255.xxx
NETWORK=10.199.xx.xx
GATEWAY=10.199.xx.xx
ONBOOT=yes
BOOTPROTO=none
#DNS1=xx.xx.xx.xx
USERCTL=no
BONDING_OPTS="mode=1 miimon=100"
TYPE=Ethernet
IPV6INIT=no
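Incidentally, the bonding driver also accepts mode names in place of numbers, so the following BONDING_OPTS line should be equivalent (same options, named form):

BONDING_OPTS="mode=active-backup miimon=100"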


2. Configure the physical NICs
Modify ifcfg-em1 as follows:

DEVICE=em1
#HWADDR=4c:76:25:42:f8:41
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0    # if this line is omitted, you must perform step 4
SLAVE=yes       # if this line is omitted, you must perform step 4
USERCTL=no
TYPE=Ethernet
IPV6INIT=no




Similarly, modify ifcfg-em2 as follows:

DEVICE=em2
#HWADDR=4c:76:25:42:f8:44
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
USERCTL=no
TYPE=Ethernet
IPV6INIT=no

Configure em3 and em4 the same way; see the sketch below.
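Since the slave files differ only in the DEVICE line, a small shell loop can generate the rest (a sketch, assuming you are still in /etc/sysconfig/network-scripts/ and your interfaces really are named em3 and em4):

for nic in em3 em4; do
  cat > ifcfg-$nic <<EOF
DEVICE=$nic
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
USERCTL=no
TYPE=Ethernet
IPV6INIT=no
EOF
done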


3. Load the module so the system supports bonding
By default the kernel already supports bonding, so you only need to modify the /etc/modprobe.d/bond.conf configuration file by adding two lines:

# cat /etc/modprobe.d/bond.conf
alias bond0 bonding
#options bonding miimon=100 mode=1
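To load the module right away without rebooting, and confirm it is loaded (standard commands, nothing distribution-specific assumed):

# modprobe bonding
# lsmod | grep bonding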

Description:
mode specifies the operating mode of bond0. Red Hat supports seven working modes, 0 through 6, of which 0 and 1 are the most commonly used.

mode=0 (balance-rr): round-robin load balancing; traffic is distributed across the slaves and all NICs carry traffic.
mode=1 (active-backup): fault tolerance; the bond works in a master/slave fashion, so only one NIC is active at a time and the other stands by as a backup.
mode=2 (balance-xor): XOR hashing policy; provides both load balancing and fault tolerance.
mode=3 (broadcast): broadcast policy; provides fault tolerance.
mode=4 (802.3ad): IEEE 802.3ad dynamic link aggregation. The transmit hash can be changed from the default XOR policy via the xmit_hash_policy option.
mode=5 (balance-tlb): adaptive transmit load balancing. Requires ethtool support in each slave's driver so the bond can read the slave's speed.
mode=6 (balance-alb): adaptive load balancing. Includes balance-tlb plus receive load balancing (RLB) for IPv4 traffic, and requires no switch support.
Note that bonding's link monitoring only checks whether the link from the host to the switch is up. If the switch's uplink goes down while the switch itself keeps its port to the host alive, bonding will consider the link healthy and continue to use it.
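Once the bond is up, the running mode and miimon interval can be inspected through sysfs (sample output shown for mode=1 with miimon=100):

# cat /sys/class/net/bond0/bonding/mode
active-backup 1
# cat /sys/class/net/bond0/bonding/miimon
100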

For more information on NIC bonding, see another article by the author:
http://czmmiao.iteye.com/admin/blogs/1044031
4. Add a boot script
Add the following to /etc/rc.d/rc.local:

#ifenslave bond0 em1 em2 em3 em4

If the slave NICs' ifcfg files already contain the MASTER and SLAVE entries, as above, this step is unnecessary.
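For reference, the classic ifenslave tool can also manipulate a running bond; the lines below detach a slave and switch the active slave in mode=1 (options per the traditional ifenslave utility; verify with ifenslave --help on your system):

# ifenslave -d bond0 em3
# ifenslave -c bond0 em2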
5. Restart the network
Run reboot, or service network restart, to see the result.
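After the restart, a quick sanity check: bond0 should own the IP address, and each slave's flags should include SLAVE (standard iproute2 commands):

# ip addr show bond0
# ip link show em1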

6. Check which NICs are bound to the bond

# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: em2
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: em1
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 3
Permanent HW addr: 4c:76:25:42:f8:75
Slave queue ID: 0

Slave Interface: em2
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 3
Permanent HW addr: 4c:76:25:42:f8:78
Slave queue ID: 0

Slave Interface: em3
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 3
Permanent HW addr: 4c:76:25:42:fb:f1
Slave queue ID: 0

Slave Interface: em4
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 3
Permanent HW addr: 4c:76:25:42:fb:f4
Slave queue ID: 0
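During the cable-pull tests in the next section, you can watch the active slave change in real time (assuming the watch utility is installed):

# watch -n 1 'grep "Currently Active Slave" /proc/net/bonding/bond0'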

7. Testing
Ping an address that the bond should be able to reach; if there is no response, check the network settings in ifcfg-bond0.
Then unplug one network cable. If the ping continues unbroken, you pulled the backup link rather than the active one; plug it back in and wait a couple of minutes.
Now unplug the other cable. This time you should see the ping time out or hang for roughly 10-30 seconds, after which it resumes.
The test is successful.
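A minimal scripted version of this test pings the default gateway continuously while you pull the cables (a sketch; it assumes the default gateway answers pings, so substitute another reachable address if it does not):

GW=$(ip route | awk '/^default/ {print $3; exit}')
ping "$GW"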


This article is from the "Big Wind" blog; please be sure to keep this source: http://lansgg.blog.51cto.com/5675165/1680219

