The Seven NIC Bonding Modes and Switch Configuration

Source: Internet
Author: User
Tags: configuration, settings, switches

Overview:

The Linux bonding driver currently supports seven modes (mode=0 through mode=6): balance-rr, active-backup, balance-xor, broadcast, 802.3ad, balance-tlb, and balance-alb.

Three of them are commonly used:

Mode=0: load-balancing mode with automatic failover, but it requires switch support and configuration.

Mode=1: active-backup mode; if one link goes down, another link automatically takes over.

Mode=6: load-balancing mode with automatic failover; it does not require switch support or configuration.

Description

Note that to achieve load balancing with mode 0, it is not enough to set "options bond0 miimon=100 mode=0"; the switch ports connected to the NICs must also be specially configured (the two ports must be aggregated), because the NICs that make up the bond share the same MAC address. Analyzing the principle (with the bond running in mode 0):

Under mode 0, all NICs bound into the bond are set to the same MAC address. If these NICs are connected to the same switch, the switch's MAC forwarding table ends up with multiple ports associated with one MAC address, and the switch cannot decide which port to forward frames for that address to. Normally a MAC address is globally unique, so one MAC address mapping to multiple ports inevitably confuses the switch. Therefore, when a mode 0 bond connects to a single switch, those switch ports must be aggregated (Cisco calls this EtherChannel; Foundry calls it a port group), because after aggregation the switch treats the grouped ports as a single MAC address. An alternative solution is to connect the two NICs to different switches.
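For reference, static port aggregation on a Cisco switch might look like the sketch below. The interface names and channel-group number are assumptions for illustration only; consult your switch's documentation for the exact procedure:

```text
! Aggregate the two switch ports facing the bonded NICs (Cisco EtherChannel)
interface range GigabitEthernet0/1 - 2
 channel-group 1 mode on    ! static aggregation, matching bonding mode=0
!
interface Port-channel1
 switchport mode access
```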

In mode 6 there is no need to configure the switch, because the NICs that make up the bond use different MAC addresses.

Description of the seven bond modes:

The first mode: mode=0, i.e. balance-rr (round-robin policy)

Characteristics: packets are transmitted in sequential order (the 1st packet goes out eth0, the next goes out eth1, and so on, cycling until the last packet is transmitted). This mode provides load balancing and fault tolerance. However, if packets belonging to one connection or session are sent out different interfaces and travel different links, they may arrive at the client out of order, and out-of-order packets must be retransmitted, so network throughput can drop.
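As a rough illustration (this is not the driver itself, just a sketch of the schedule it produces; eth0/eth1 are placeholder names), the round-robin selection can be sketched in shell:

```shell
# Sketch of balance-rr slave selection: packet N goes out slave (N-1) mod 2.
# eth0/eth1 are placeholder interface names; this only prints the schedule.
slaves="eth0 eth1"
for pkt in 1 2 3 4; do
  i=$(( (pkt - 1) % 2 ))   # index of the slave for this packet
  set -- $slaves           # put the slave names into $1, $2
  shift $i
  echo "packet $pkt -> $1"
done
```

With two slaves the output simply alternates, which is exactly why a single TCP session can see reordering when the two links have different latencies.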

The second mode: mode=1, i.e. active-backup (active-backup policy)

Characteristics: only one device is active; when it fails, a backup device immediately takes over as the primary. The bond's MAC address is externally unique, so from the outside the bond appears as a single interface, which avoids confusing the switch. This mode provides only fault tolerance. Its advantage is high availability of the network connection, but resource utilization is low: only one interface is working at a time, so with N interfaces the utilization is 1/N.

The third mode: mode=2, i.e. balance-xor (XOR policy)

Characteristics: packets are transmitted according to the specified transmit hash policy. The default policy is (source MAC address XOR destination MAC address) % number of slaves. Other transmit policies can be specified via the xmit_hash_policy option. This mode provides load balancing and fault tolerance.
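As a worked example of the default hash (a sketch; the MAC octets below are made up), XORing the last octets of the two addresses and reducing modulo the slave count picks the slave. For a power-of-two slave count this matches XORing the full addresses, since only the low bits survive the modulo:

```shell
# Default balance-xor hash: (source MAC XOR destination MAC) % slave count.
# 0x5e and 0xc3 are made-up last octets of hypothetical MAC addresses.
src=0x5e   # last octet of the source MAC
dst=0xc3   # last octet of the destination MAC
slaves=2
echo "hash -> slave $(( (src ^ dst) % slaves ))"
```

Because the hash depends only on the MAC pair, all traffic between one pair of hosts always uses the same slave, so a single flow is never reordered.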

The fourth mode: mode=3, i.e. broadcast (broadcast policy)

Characteristics: every packet is transmitted on every slave interface. This mode provides fault tolerance.

The fifth mode: mode=4, i.e. 802.3ad (IEEE 802.3ad dynamic link aggregation)

Characteristics: creates an aggregation group in which all members share the same speed and duplex settings. Multiple slaves work within the same active aggregator according to the 802.3ad specification. Slave selection for outgoing traffic is based on the transmit hash policy, which can be changed from the default XOR policy via the xmit_hash_policy option. Note that not all transmit policies are 802.3ad compliant, particularly with respect to the packet-reordering issue discussed in section 43.2.4 of the 802.3ad standard; different implementations may tolerate this differently.

Prerequisites:

Condition 1: ethtool support in the drivers for retrieving the speed and duplex of each slave

Condition 2: the switch supports IEEE 802.3ad dynamic link aggregation

Condition 3: most switches require specific configuration to enable 802.3ad mode
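On the Linux side, mode 4 is selected the same way as the other modes, via the module options. A sketch of such a configuration (the lacp_rate and xmit_hash_policy values below are illustrative, not requirements):

```text
# /etc/modprobe.d/bonding.conf -- illustrative values
alias bond0 bonding
options bonding mode=4 miimon=100 lacp_rate=fast xmit_hash_policy=layer2+3
```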

The sixth mode: mode=5, i.e. balance-tlb (adaptive transmit load balancing)

Characteristics: channel bonding that does not require any special switch support. Outgoing traffic is distributed across the slaves according to their current load (computed relative to their speed). If the slave that is receiving traffic fails, another slave takes over the MAC address of the failed slave.

Prerequisite for this mode: ethtool support for retrieving the speed of each slave

The seventh mode: mode=6, i.e. balance-alb (adaptive load balancing)

Characteristics: this mode includes balance-tlb plus receive load balancing (RLB) for IPv4 traffic, and requires no switch support. Receive load balancing is achieved through ARP negotiation: the bonding driver intercepts ARP replies sent by the local host and rewrites the source hardware address with the unique hardware address of one slave in the bond, so that different peers communicate with different hardware addresses.

Receive traffic from the server side is also balanced. When the local host sends an ARP request, the bonding driver copies and saves the peer's IP information from the ARP packet. When the ARP reply arrives from the peer, the bonding driver extracts its hardware address and sends an ARP reply to the peer from one slave in the bond. One problem with using ARP negotiation for load balancing is that each ARP request broadcast carries the bond's hardware address, so once a peer has learned that address, all of its traffic flows to the current active slave. This is handled by sending updates (ARP replies) to all peers, each containing a slave's unique hardware address, so that traffic is redistributed. Receive traffic is also redistributed when a new slave is added to the bond or when an inactive slave is reactivated. The receive load is distributed sequentially (round-robin) over the highest-speed slaves in the bond. When a link is reconnected, or a new slave joins the bond, receive traffic is redistributed among all currently active slaves by sending an ARP reply with the designated MAC address to each client. The updelay parameter (described below) must be set to a value greater than or equal to the switch's forwarding delay, to guarantee that ARP replies sent to peers are not blocked by the switch.

Prerequisites:

Condition 1: ethtool support for retrieving the speed of each slave;

Condition 2: the underlying driver supports setting the hardware address of a device while it is up, so that there is always one slave (curr_active_slave) using the bond's hardware address, while every slave in the bond keeps a unique hardware address. If curr_active_slave fails, its hardware address is taken over by the newly elected curr_active_slave.

In practice, the difference between mode=6 and mode=0: with mode=0 you will find that traffic on both ports is very stable, with roughly the same bandwidth on each; with mode=6, traffic fills eth0 first and then overflows to eth1, ... ethX, so the first port carries most of the traffic and the second port only a small amount.

Linux network port binding:

Network port binding (bonding) makes it easy to achieve port redundancy and load balancing, and thereby high availability and reliability. Conventions for this example:

The 2 physical network ports are: eth0, eth1

The bonded virtual port is: bond0

Server IP is: 10.10.10.1

Step 1: configure the settings files:

[root@woo ~]# vi /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
IPADDR=10.10.10.1
NETMASK=255.255.255.0
NETWORK=10.10.10.0

[root@woo ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=none
MASTER=bond0
SLAVE=yes

[root@woo ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=none
MASTER=bond0
SLAVE=yes

Step 2: modify the modprobe settings and load the bonding module:

1. Create a dedicated configuration file for loading bonding: /etc/modprobe.d/bonding.conf

[root@woo ~]# vi /etc/modprobe.d/bonding.conf
alias bond0 bonding
options bonding mode=0 miimon=200

2. Load the module (after a reboot the system loads it automatically; no manual reload is needed):

[root@woo ~]# modprobe bonding

3. Confirm that the module loaded successfully:

[root@woo ~]# lsmod | grep bonding
bonding 100065 0

Step 3: restart the network, then confirm the status:

[root@db01 ~]# service network restart
Shutting down interface bond0:     [ OK ]
Shutting down loopback interface:  [ OK ]
Bringing up loopback interface:    [ OK ]
Bringing up interface bond0:       [ OK ]

[root@db01 ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.4.0-1 (October 7, 2008)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 40:f2:e9:db:c9:c2

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 40:f2:e9:db:c9:c3

[root@db01 ~]# ifconfig | grep HWaddr
bond0     Link encap:Ethernet  HWaddr 40:F2:E9:DB:C9:C2
eth0      Link encap:Ethernet  HWaddr 40:F2:E9:DB:C9:C2
eth1      Link encap:Ethernet  HWaddr 40:F2:E9:DB:C9:C2

From the output above, we can see 3 important pieces of information:

1. The current bonding mode is active-backup

2. The currently active port is eth0

3. bond0 and eth1 show the same physical address as the active eth0, which avoids confusion on the upstream switch.

Unplug either network cable, then check whether the server is still reachable over the network.

Step 4: make the binding start automatically at boot and add the default gateway:

[root@woo ~]# vi /etc/rc.d/rc.local
# Append:
ifenslave bond0 eth0 eth1
route add default gw 10.10.10.1


# If the host can already reach the network, there is no need to add the route; adjust the 10.10.10.1 address to match your environment.

------------------------------------------------------------------------

Note: the above covers only the case of 2 ports bound into one bond0. Suppose we want to set up multiple bond interfaces, for example eth0 and eth1 forming bond0, and eth2 and eth3 forming bond1.

Multiple network port bindings:

The interface settings files are configured the same way as in step 1 above, but the /etc/modprobe.d/bonding.conf settings cannot simply be stacked like this:

alias bond0 bonding
options bonding mode=1 miimon=200
alias bond1 bonding
options bonding mode=1 miimon=200

There are 2 correct ways to set this up:

The first way: as you can see, with this method all bond interfaces can only be set to the same mode:

alias bond0 bonding
alias bond1 bonding
options bonding max_bonds=2 miimon=200 mode=1

The second way: with this method, different bond interfaces can be set to different modes:

alias bond0 bonding
options bond0 miimon=100 mode=1
install bond1 /sbin/modprobe bonding -o bond1 miimon=200 mode=0

Compare these 2 configurations carefully; if you want to set up 3, 4, or even more bond interfaces, you should now know how.

Postscript:

A brief explanation of the parameters passed in options when loading the bonding module:

miimon: the link-monitoring interval, in milliseconds; we set it to 200 milliseconds.

max_bonds: the number of bond interfaces to configure.

mode: the bonding mode, one of the seven described above; in practice, modes 0 and 1 are used the most.
