Original link: http://www.wushiqin.com/?post=68
I. What is NIC bonding and its basic principle
NIC bonding, also called "NIC bundling", combines multiple physical network cards into one virtual network card to provide load balancing or redundancy and to increase bandwidth. When one NIC fails, the service is not affected. The aggregated device appears as a single Ethernet interface, which means it has a single IP address while the parallel links are aggregated into one logical link. Network vendors such as Cisco call this technology trunking or EtherChannel; in the Linux 2.4.x kernel and later it is known as bonding.
II. Classification of the technology
1. Load Balancing
Bonding for network load balancing is commonly used on file servers: for example, three NICs are bound and used as one to solve the problem of a single IP address carrying too much traffic and putting too much pressure on the server's network. For file servers such as NFS or Samba, an administrator cannot solve the network load problem by giving the intranet file server many IP addresses; for convenience of management and of the applications, an intranet file server usually keeps a single IP address. On a 100 Mbit/s LAN the network pressure is considerable, especially when the Samba or NFS server is used by many clients at the same time. To keep the same IP address while breaking through the traffic limit (after all, the cable and the NIC have a finite data throughput), bonding is the best way to achieve network load balancing with limited resources.
2. Network redundancy
For a server, the stability of the network hardware, and of the NIC in particular, also matters. In a production system, redundant hardware provides much of the server's reliability and security, for example redundant power supplies. Bonding likewise provides redundancy for network cards: several NICs are bound to one IP address, and when one NIC is physically damaged another NIC is enabled automatically and continues to provide the service. By default only one NIC is working and the other NICs act as backups.
III. Implementing load balancing and network redundancy with bonding
1. Network environment
[root@server ~]# cat /etc/issue
CentOS release 6.3 (Final)
Kernel \r on an \m
[root@server ~]# getconf LONG_BIT
64
[root@server ~]# ip a | grep -v lo
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether f0:4d:a2:3e:xx:xx brd ff:ff:ff:ff:ff:ff
    inet 172.28.2.31/24 brd 172.28.2.255 scope global em1
3: em2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether f0:4d:a2:3e:xx:xx brd ff:ff:ff:ff:ff:ff
4: em3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether f0:4d:a2:3e:xx:xx brd ff:ff:ff:ff:ff:ff
5: em4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether f0:4d:a2:3e:xx:xx brd ff:ff:ff:ff:ff:ff
PS: Four NICs are used for the bonding: em1, em2, em3, em4.
2. Check the bonding environment
Use the following commands to check whether the kernel provides the bonding module:
[root@server ~]# modinfo bonding | grep bonding.ko
filename:       /lib/modules/2.6.32-279.5.2.el6.centos.plus.x86_64/kernel/drivers/net/bonding/bonding.ko
[root@server ~]# modprobe -l bond*
kernel/drivers/net/bonding/bonding.ko
3. Load the Bonding module
[root@server ~]# lsmod | grep 'bonding'
[root@server ~]# modprobe bonding
WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/.
PS: A warning is issued when the module is loaded. It means that in the current kernel version the configuration file /etc/modprobe.conf is deprecated and all configuration files now belong in /etc/modprobe.d/; in other words, module options that used to go into /etc/modprobe.conf should now be written to separate files under /etc/modprobe.d/. The bonding module therefore gets its own configuration file there.
[root@server ~]# lsmod | grep 'bonding'
bonding               127806  0
ipv6                  322637  167 bonding
8021q                  25941  1 bonding
Load the module into the kernel automatically at boot:
echo 'modprobe bonding &>/dev/null' >> /etc/rc.local
4. Create the virtual NIC configuration file
cd /etc/sysconfig/network-scripts/
cat >> ifcfg-bond0 << EOF
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=static
IPADDR=172.28.2.31
BROADCAST=172.28.2.255
NETMASK=255.255.255.0
GATEWAY=172.28.2.1
USERCTL=no
TYPE=Ethernet
EOF
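As an aside that is not part of the original procedure: on RHEL/CentOS 6 the bonding parameters can alternatively be placed directly in the ifcfg file via BONDING_OPTS instead of in /etc/modprobe.d/ (step 6 below). A minimal sketch, assuming the same addressing as above:
# /etc/sysconfig/network-scripts/ifcfg-bond0 (alternative form, illustration only)
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=static
IPADDR=172.28.2.31
NETMASK=255.255.255.0
GATEWAY=172.28.2.1
USERCTL=no
BONDING_OPTS="mode=4 miimon=100 lacp_rate=1"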
5. Modify the physical NIC configuration files
mv ifcfg-em1 ifcfg-em1.bak
mv ifcfg-em2 ifcfg-em2.bak
mv ifcfg-em3 ifcfg-em3.bak
mv ifcfg-em4 ifcfg-em4.bak
EM1 Configuration file Contents:
cat >> ifcfg-em1 << EOF
DEVICE="em1"
ONBOOT="yes"
BOOTPROTO="none"
USERCTL="no"
MASTER="bond0"
SLAVE="yes"
EOF
EM2 Configuration file Contents:
cat >> ifcfg-em2 << EOF
DEVICE="em2"
ONBOOT="yes"
BOOTPROTO="none"
USERCTL="no"
MASTER="bond0"
SLAVE="yes"
EOF
EM3 Configuration file Contents:
cat >> ifcfg-em3 << EOF
DEVICE="em3"
ONBOOT="yes"
BOOTPROTO="none"
USERCTL="no"
MASTER="bond0"
SLAVE="yes"
EOF
EM4 Configuration file Contents:
cat >> ifcfg-em4 << EOF
DEVICE="em4"
ONBOOT="yes"
BOOTPROTO="none"
USERCTL="no"
MASTER="bond0"
SLAVE="yes"
EOF
6. Edit the module configuration file so that the system loads bonding with the desired options
cat >> /etc/modprobe.d/bonding.conf << EOF
# /etc/modprobe.conf
alias bond0 bonding
options bond0 miimon=100 mode=4 lacp_rate=1
EOF
PS:
a. mode=4 is the mode used here. There are seven modes in total (0 through 6), which are described in detail in section IV below. In this mode all the bonded NICs share one MAC address, as the results below will confirm.
b. miimon is used for link monitoring. For example, miimon=100 means the system checks the link state every 100 ms; if one link goes down, traffic is switched to another link.
c. bonding can only monitor the link from the host to the switch it is directly connected to. If only the switch's uplink goes down, while the switch itself has not failed, bonding still considers the link healthy and keeps using it.
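A note not found in the original: once the bond is up, the kernel exposes its status under /proc/net/bonding/, which is a quick way to confirm the mode, the miimon interval, and each slave's link state. A minimal sketch (the commented output lines are illustrative, not captured from this server):
# Show the current bonding mode, polling interval and per-slave link state
cat /proc/net/bonding/bond0
# Expected to report something like:
#   Bonding Mode: IEEE 802.3ad Dynamic link aggregation
#   MII Status: up
#   MII Polling Interval (ms): 100
#   Slave Interface: em1   (one such block per slave)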
7. Restart the server or restart the network for testing
[root@server ~]# service network restart
[root@server ~]# ip a | grep -v lo
2: em1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
    link/ether f0:4d:a2:3e:xx:xx brd ff:ff:ff:ff:ff:ff
3: em2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
    link/ether f0:4d:a2:3e:xx:xx brd ff:ff:ff:ff:ff:ff
4: em3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
    link/ether f0:4d:a2:3e:xx:xx brd ff:ff:ff:ff:ff:ff
5: em4: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
    link/ether f0:4d:a2:3e:xx:xx brd ff:ff:ff:ff:ff:ff
6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether f0:4d:a2:3e:xx:xx brd ff:ff:ff:ff:ff:ff
    inet 172.28.2.38/24 brd 172.28.2.255 scope global bond0
    inet6 fe80::f24d:a2ff:fe3f:23c0/64 scope link
       valid_lft forever preferred_lft forever
[root@server ~]#
PS: Since this is a production server, the throughput test is omitted here. Note that with mode=4 the corresponding switch ports must be configured for link aggregation!
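Purely as an illustration of the test the original skips (the peer host 172.28.2.100 and the options are assumptions, not from the original), bandwidth could be measured with a tool such as iperf:
# On a separate test client (hypothetical host), start an iperf server:
iperf -s
# On the bonded server, run several parallel streams toward the client.
# Multiple streams are needed because 802.3ad balances per flow, not per packet:
iperf -c 172.28.2.100 -P 4 -t 30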
IV. Brief introduction to the seven bonding modes
Mode=0: balance-rr (balanced round-robin policy). Provides both network load balancing and network redundancy.
Features of this mode:
a. All NICs are working, and packets are transmitted in sequence: the first packet goes out through eth0, the next through eth1, and so on in a cycle until the last packet has been sent.
b. Packets belonging to the same connection leave through different interfaces and travel over different links, so they may arrive at the client out of order. Out-of-order packets have to be retransmitted, which lowers network throughput.
c. The switch the NICs are connected to needs a special configuration (the ports must be aggregated), because the bonded NICs all use the same MAC address. A Linux-side configuration sketch for this mode follows the list.
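For comparison with the mode=4 setup in section III (this block is an illustration, not taken from the original), a mode=0 configuration would only change the options line; the switch ports would still need to be statically aggregated:
# /etc/modprobe.d/bonding.conf - hypothetical mode=0 (balance-rr) variant
alias bond0 bonding
options bond0 miimon=100 mode=0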
Mode=1: active-backup (active/backup policy). Provides network redundancy only.
Features of this mode:
a. Only one NIC is working at a time; the others are in backup state. When the active NIC fails, a backup NIC immediately takes over as the active one.
b. The advantage of this mode is that it provides a highly available network connection, but resource utilization is low: only one interface is in use at any time.
Mode=2: balance-xor (XOR balancing policy). Provides load balancing and fault tolerance.
Features of this mode: packets are transmitted according to the specified transmit hash policy. This mode is rarely used.
Mode=3: broadcast policy. Provides fault tolerance only.
Features of this mode: every packet is transmitted on every slave interface.
Mode=4: IEEE 802.3ad dynamic link aggregation. Uses the LACP protocol and can increase the available bandwidth accordingly.
Features of this mode:
An aggregation group is created whose members share the same speed and duplex settings, and multiple slaves work within the same active aggregate according to the 802.3ad specification.
The slave chosen for outgoing traffic is selected by the transmit hash policy, which can be changed from the default XOR policy to another policy through the xmit_hash_policy option (an example options line is sketched after the list of conditions below).
The necessary conditions for this mode are:
Condition 1: ethtool supports reading the speed and duplex settings of each slave.
Condition 2: the switch supports IEEE 802.3ad dynamic link aggregation.
Condition 3: most switches require specific configuration to enable 802.3ad mode.
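As an illustration of the xmit_hash_policy option mentioned above (not part of the original configuration), the options line from section III could be extended like this:
# Hypothetical variant of /etc/modprobe.d/bonding.conf for mode=4,
# hashing on layer 3+4 (IP address + port) instead of the default layer 2 XOR policy
alias bond0 bonding
options bond0 miimon=100 mode=4 lacp_rate=1 xmit_hash_policy=layer3+4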
Mode=5: balance-tlb (adaptive transmit load balancing).
Features of this mode:
No special switch support for channel bonding is required. Outgoing traffic is distributed across the slaves according to their current load (computed relative to their speed). If the slave that is receiving traffic fails, another slave takes over the MAC address of the failed slave.
Requirement for this mode: ethtool must support reading the speed of each slave (a quick check is sketched below).
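This requirement can be verified from the shell; a small sketch using one of the interface names from this article (the commented output is illustrative):
# Verify that ethtool can read speed and duplex for a slave NIC
ethtool em1 | grep -E 'Speed|Duplex'
# Typical output:
#   Speed: 1000Mb/s
#   Duplex: Full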
Mode=6: balance-alb (adaptive load balancing). Provides both network load balancing and network redundancy.
Features of this mode:
a. All the bonded NICs are working, but each NIC keeps its own MAC address.
b. No switch configuration is needed in this mode, because the bonded NICs use different MAC addresses.
This mode is the most complex of the seven and is not described in further detail here.
V. Summary
The modes most commonly used in production are 0, 1, 6, and 4.
For simple load balancing, use mode=0 or mode=6 (note the differences between these two modes described above).
For simple network redundancy, use mode=1 (this mode can be simulated with VM virtual machines; see the sketch below).
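As a closing illustration not taken from the original, the mode=1 setup mentioned above only requires changing the bonding options; a minimal sketch for simulating it in a VM with two virtual NICs (eth0 and eth1 are assumed names):
# /etc/modprobe.d/bonding.conf - hypothetical active-backup (mode=1) variant
alias bond0 bonding
options bond0 miimon=100 mode=1
# eth0 and eth1 would each get an ifcfg file with MASTER=bond0 and SLAVE=yes,
# exactly like the em1-em4 files shown in section III.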
This article originates from http://qiuzhijun.blog.51cto.com/2238445/1030710
(reprint) Bonding Technical Guide