Linux Bonding Configuration Steps in Detail


I. Introduction

Nowadays almost every industry runs its own servers. Because of a server's special role, its reliability, availability, and I/O throughput matter a great deal, and keeping servers highly available and secure is a key indicator of a healthy enterprise IT environment. The most important point is high availability of the server's network connection. To meet these requirements, servers are now mostly fitted with multiple network cards, and Linux is a very popular server operating system. Raw bandwidth is no longer the bottleneck for service quality; instead, the processing capacity of network devices and servers has gradually become the new bottleneck. To improve the availability and reliability of a server's network connection, Sun's Trunking technology, 3Com's DynamicAccess technology, Cisco's EtherChannel technology, and others all pursue link aggregation, that is, binding a server's multiple NIC interfaces together. By virtualizing several physical links into one logical link, link aggregation provides a cheap and effective way to extend the bandwidth of network devices and servers and to improve the flexibility and availability of the network.

This article introduces the bonding technology in Linux, available since the 2.4.x kernel series. Bonding binds the interfaces of several network cards into one virtual network card; from the user's point of view the aggregated device looks like a single Ethernet interface. Put simply, several NICs share the same IP address and work in parallel, aggregated into one logical link.

II. The bonding modes

The Linux bonding driver supports seven working modes; for details refer to the file Documentation/networking/bonding.txt in the kernel source package. A short description of each mode follows.
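As a quick check on your own kernel (a sketch for inspection only; it changes nothing), the bonding module's parameters, including mode, can be listed with modinfo:

# modinfo bonding | grep parm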

Mode 1: mode=0, i.e. balance-rr (balanced round-robin policy)

Feature: packets are transmitted in sequential order (the 1st packet goes out on eth0, the next on eth1, and so on, cycling through the slaves until the last packet has been sent). This mode provides load balancing and fault tolerance. However, if the packets of one connection or session leave through different interfaces and travel over different links, they may arrive at the client out of order, and out-of-order packets have to be retransmitted, so network throughput drops.

Mode 2: mode=1, i.e. active-backup (active-backup, or primary-backup, policy)

Feature: only one device is active; when it goes down, a backup is immediately promoted to primary. Only one MAC address is visible externally, so from the outside the bond appears to have a single MAC address, which avoids confusing the switch. This mode provides fault tolerance only. Its advantage is high availability of the network connection, but resource utilization is low: only one interface is working at a time, so with N network interfaces the utilization is 1/N.

Mode 3: mode=2, i.e. balance-xor (XOR balancing policy)

Feature: packets are transmitted according to the specified transmit hash policy. The default policy is (source MAC address XOR destination MAC address) modulo the number of slaves. Other transmit policies can be selected through the xmit_hash_policy option. This mode provides load balancing and fault tolerance.
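For example (a sketch using the BONDING_OPTS style of configuration shown in section III; the values are illustrative), a layer3+4 transmit hash can be selected like this:

BONDING_OPTS="mode=2 miimon=100 xmit_hash_policy=layer3+4"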

Mode 4: mode=3, i.e. broadcast (broadcast policy)

Feature: every packet is transmitted on every slave interface. This mode provides fault tolerance.

Mode 5: mode=4, i.e. 802.3ad (IEEE 802.3ad dynamic link aggregation)

Feature: creates aggregation groups that share the same speed and duplex settings. All slaves in the same active aggregator operate according to the 802.3ad specification.

The slave selection for outgoing traffic is based on the transmit hash policy, which can be changed from the default XOR policy to another policy through the xmit_hash_policy option. Note that not all transmit policies are 802.3ad compliant, particularly with respect to the packet-reordering problem mentioned in section 43.2.4 of the 802.3ad standard; different implementations tolerate this differently. A configuration sketch follows the prerequisites below.

Prerequisites:

Condition 1: ethtool support in the base drivers for retrieving the speed and duplex of each slave

Condition 2: a switch that supports IEEE 802.3ad dynamic link aggregation

Condition 3: most switches require specific configuration to enable 802.3ad mode
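As a sketch (illustrative values only, again using the BONDING_OPTS style from section III; the connected switch ports must also be configured for LACP), an 802.3ad bond could be declared as:

BONDING_OPTS="mode=4 miimon=100 lacp_rate=fast xmit_hash_policy=layer3+4"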

Mode 6: mode=5, i.e. balance-tlb (adaptive transmit load balancing)

Feature: channel bonding that does not require any special switch support. Outgoing traffic is distributed across the slaves according to the current load (computed relative to each slave's speed). If a slave that is receiving traffic fails, another slave takes over the MAC address of the failed slave.

Prerequisite for this mode: ethtool support in the base drivers for retrieving the speed of each slave.

Mode 7: mode=6, i.e. balance-alb (adaptive load balancing)

Feature: this mode includes balance-tlb plus receive load balancing (RLB) for IPv4 traffic, and it does not require any special switch support. Receive load balancing is implemented through ARP negotiation: the bonding driver intercepts the ARP replies sent by the local host and rewrites the source hardware address to the unique hardware address of one of the slaves in the bond, so that different peers communicate with different hardware addresses.

The most commonly used modes are mode=0, mode=1, mode=4, and mode=6.
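Once a bond device exists, its current mode can be read back through sysfs, for example (assuming a device named bond0; the output is the mode name followed by its number):

# cat /sys/class/net/bond0/bonding/mode
active-backup 1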

III. Bond configuration

A. Configuration method 1 under CentOS/Red Hat

1. dist.conf configuration

# vim /etc/modprobe.d/dist.conf
# Edit the file and append the following lines at the end:
alias bond0 bonding
options bond0 miimon=100 mode=1
alias bond1 bonding
options bond1 miimon=100 mode=1
alias net-pf-10 off     # disables IPv6 support; optional
Note: miimon is used for link monitoring. With miimon=100, the system checks the link state every 100 ms; if one link goes down, traffic switches to the other link. The value of mode selects the working mode. Some older Red Hat/CentOS releases use the /etc/modprobe.conf configuration file instead.
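After editing the file, the module can be loaded by hand (it is also loaded automatically when the bond interface is brought up) and its presence checked, for example:

# modprobe bonding
# lsmod | grep bonding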

2. Network card configuration

# cat ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
MASTER=bond1
SLAVE=yes
Only the configuration of the eth0 network card is listed here. Assume the server has four network cards: eth0 and eth1 are configured as slaves of bond1, and eth2 and eth3 as slaves of bond0. Following the configuration above, eth1 only needs a copy of the eth0 file with the device name changed; for eth2 and eth3, copy the file twice more and change both the device name and the master bond name, as in the sketch below.
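For instance, the ifcfg-eth2 file (a sketch, assuming eth2 is a slave of bond0 as in the layout above) would look like this:

# cat ifcfg-eth2
DEVICE=eth2
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
MASTER=bond0
SLAVE=yes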

3. Bond network card configuration

Take the bond1 NIC as an example:

# cat ifcfg-bond1
DEVICE=bond1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
USERCTL=no
IPADDR=10.211.89.202
NETMASK=255.255.255.224
#GATEWAY=10.211.89.193

Note: under CentOS/Red Hat there are two services that manage network cards, network and NetworkManager; if both are running at the same time, the bond configuration will produce errors. NetworkManager can be turned off with the following commands:

# /etc/init.d/NetworkManager stop
# chkconfig NetworkManager off
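Once the configuration files are in place, the changes are usually applied by restarting the network service and then checking the bond state, for example:

# service network restart
# cat /proc/net/bonding/bond1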
B. Configuration method 2 under CentOS/Red Hat

The bonding options can also be placed directly in the ifcfg file of the bond interface instead of in the dist.conf file, as follows:

# vim /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.0.180
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
USERCTL=no
BONDING_OPTS="mode=1 miimon=100"
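The slave interfaces are configured exactly as in method 1, only without any bonding options of their own; for example (a sketch for eth0 as a slave of bond0):

# vim /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
MASTER=bond0
SLAVE=yes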
C. Configuration under SLES (SUSE)

SUSE is similar to Red Hat/CentOS: the bonding options can be written either to the modprobe configuration file or to the bond NIC configuration file. Below is the configuration in the bond NIC file (note that the bonding parameter keys differ slightly from those above):

# vim /etc/sysconfig/network/ifcfg-bond0
BOOTPROTO='static'
IPADDR='10.211.0.21'
NETMASK='255.255.255.0'
STARTMODE='onboot'
BONDING_MASTER='yes'
BONDING_MODULE_OPTS='mode=1 miimon=200 use_carrier=1'
BONDING_SLAVE0='eth1'
BONDING_SLAVE1='eth2'
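On SLES the configuration is then typically applied by restarting the network service (a sketch, assuming the classic init scripts) and checking the bond state, for example:

# rcnetwork restart
# cat /proc/net/bonding/bond0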

IV. Deleting a bonding device

Suppose the bonding device was originally configured as bond0 and later renamed to bond1, so that two bonding devices now exist and bond0 needs to be deleted. First look at the network devices:

# ls /sys/class/net
bond0 bond1 bonding_masters eth0 eth1 lo
Attempting to delete bond0 directly is refused with a permission error.

The bond device can instead be removed through the bonding_masters file:

# cat /sys/class/net/bonding_masters
bond0 bond1
Editing the bonding_masters file in an editor is also refused with a permission error; use echo instead:
# echo -bond0 > /sys/class/net/bonding_masters
A leading "-" removes the named device, and "+" adds one.
# cat /sys/class/net/bonding_masters
bond1
# ls /proc/net/bonding
bond1
As you can see, the bond0 interface has been deleted successfully.

Add eth0 to the bond (bond0):
# echo +eth0 > /sys/class/net/bond0/bonding/slaves
Remove eth0 from the bond (bond0):
# echo -eth0 > /sys/class/net/bond0/bonding/slaves
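The current slave list can be read back from the same sysfs directory, for example (the output simply lists whatever slaves are currently enslaved):

# cat /sys/class/net/bond0/bonding/slaves
eth0 eth1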
To add a new bond (bond1 here) with two e1000 interfaces, working in active-backup mode with ARP monitoring, the following commands can be used:

# modprobe e1000
# echo +bond1 > /sys/class/net/bonding_masters
# echo active-backup > /sys/class/net/bond1/bonding/mode
or
# echo 1 > /sys/class/net/bond1/bonding/mode
# ifconfig bond1 192.168.2.1 netmask 255.255.255.0 up
To add the ARP target address and set the ARP monitoring interval:
# echo +192.168.2.100 > /sys/class/net/bond1/bonding/arp_ip_target
# echo 1000 > /sys/class/net/bond1/bonding/arp_interval     (interval in ms; the original omitted the value, 1000 is illustrative)
# echo +eth2 > /sys/class/net/bond1/bonding/slaves
# echo +eth3 > /sys/class/net/bond1/bonding/slaves
To view the bond interface information (sample output):

# cat /proc/net/bonding/bond1
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: a0:b3:cc:e5:97:68
Slave queue ID: 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: a0:b3:cc:e5:97:6c
Slave queue ID: 0

V. The ifenslave tool

ifenslave is the traditional user-space tool under Linux for attaching slave interfaces to a bonding device. Configuring a bond with ifenslave goes as follows:

# modprobe bonding mode=1 miimon=100
# ifconfig bond0 192.168.1.10 netmask 255.255.255.0
# ifenslave bond0 eth0 eth1
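Note that a bond created this way with ifenslave does not survive a reboot. One common workaround (an assumption, not part of the original text) is to append the same commands to /etc/rc.local so they run at boot:

modprobe bonding mode=1 miimon=100
ifconfig bond0 192.168.1.10 netmask 255.255.255.0
ifenslave bond0 eth0 eth1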

The ifenslave tool can also be used to detach a physical interface from the bond interface, for example:

# ifenslave -d bond0 eth0

The ifenslave tool can also be used to make a particular physical interface the active slave, for example:

# ifenslave -c bond0 eth0
You can also run ifenslave bond0 on its own to display details of the interface.
