Dual-NIC bonding technology in CentOS for load balancing and failover protection

Maintaining high server availability is an important factor in the enterprise IT environment, and the availability of the server's network connections is central to it. NIC bonding helps ensure high availability and offers other advantages that improve network performance. This article introduces dual-NIC bonding in Linux to achieve load balancing and failover protection (applicable to CentOS 5 and CentOS 6).
We will introduce how to bond two NICs in Linux into one virtual network interface. The aggregated device looks like a single Ethernet interface: the two NICs share the same IP address, and the parallel physical links are aggregated into one logical link. This technique has long existed on Sun and Cisco equipment, known as Trunking and EtherChannel respectively; in the Linux 2.4.x kernel it is called bonding. Bonding was first applied on Beowulf clusters to improve data transfer between cluster nodes.

How does bonding work? The story starts with the NIC's promiscuous (promisc) mode. Under normal circumstances, a network adapter only accepts Ethernet frames whose destination hardware address (MAC address) matches its own, and filters out all other frames to reduce the load on the driver. However, a NIC also supports promiscuous mode, in which it receives every frame on the wire; tcpdump, for example, runs the NIC in this mode. Bonding also relies on this mode: the driver changes the MAC addresses of both NICs to the same value so that both can receive frames destined for that MAC, and the received frames are then handed to the bond driver for processing.
The theory aside, the configuration is actually very simple. There are four steps in total.

The operating system used in this experiment is Red Hat Enterprise Linux 3.

Prerequisites: the two NICs should use the same chipset model, and each NIC should have its own independent BIOS chip.

Topology of dual-NIC bonding (figure omitted)

1. Edit the virtual network interface configuration file and specify the NIC IP address.

Create /etc/sysconfig/network-scripts/ifcfg-bond0 by copying the existing eth0 configuration:

[root@rhas-13 root]# cd /etc/sysconfig/network-scripts
[root@rhas-13 network-scripts]# cp ifcfg-eth0 ifcfg-bond0

2. Edit ifcfg-bond0:

[root@rhas-13 network-scripts]# vi ifcfg-bond0
DEVICE=bond0
BOOTPROTO=static
IPADDR=172.31.0.13
NETMASK=255.255.252.0
BROADCAST=172.31.3.254
ONBOOT=yes
TYPE=Ethernet

Do not specify an IP address, subnet mask, or NIC ID for the individual physical adapters; assign that information to the virtual adapter (bond0) instead.
[root@rhas-13 network-scripts]# vi ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes

[root@rhas-13 network-scripts]# vi ifcfg-eth1
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes

3. Edit /etc/modprobe.conf.
Edit the /etc/modprobe.conf file and add the following two lines so that the system loads the bonding module at startup and exposes the virtual interface device bond0:

alias bond0 bonding
options bond0 mode=0 miimon=250 use_carrier=1 updelay=500 downdelay=500

Note: miimon controls link monitoring. For example, with miimon=100 the system checks the link state every 100 ms; if one link fails, traffic is moved to the other. The mode value selects the working mode; there are seven modes in total (0 through 6), of which 0 and 1 are the most commonly used.
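One detail worth knowing about the options above: per the kernel bonding documentation, updelay and downdelay should be multiples of miimon, and values that are not are rounded down to the nearest multiple. A minimal sketch of that rounding, assuming miimon=250 as configured above:

```shell
#!/bin/sh
# The bonding driver rounds updelay/downdelay down to a multiple of miimon.
miimon=250
updelay=600                                  # not a multiple of miimon
effective=$(( updelay / miimon * miimon ))   # integer division, then scale back
echo "requested updelay=${updelay}ms, effective updelay=${effective}ms"
```

With the values used in this article (updelay=500, downdelay=500, miimon=250) no rounding occurs, since 500 is already a multiple of 250.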
mode=0 (load balancing, round-robin) provides load balancing; both NICs carry traffic.

mode=1 (fault tolerance, active-backup) provides redundancy and works in active/standby fashion: by default only one NIC carries traffic while the other stands by as a backup.

Note that MII monitoring can only check the link between the host and the switch. If a link beyond the switch goes down while the switch itself is healthy, bonding still considers the link good and continues to use it.
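Because MII monitoring cannot see failures beyond the switch, the bonding driver also offers ARP-based monitoring (the arp_interval and arp_ip_target module parameters), which probes end-to-end reachability of a chosen IP address instead of watching the local carrier signal. A sketch of the corresponding /etc/modprobe.conf lines; the gateway address 172.31.0.1 is an assumption for this network, and ARP monitoring is generally used instead of, not together with, miimon:

```shell
# /etc/modprobe.conf -- ARP monitoring sketch (probe the gateway every second;
# 172.31.0.1 is a hypothetical reachable gateway for this subnet)
alias bond0 bonding
options bond0 mode=1 arp_interval=1000 arp_ip_target=172.31.0.1
```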

After the configuration is completed, restart the service.

service network restart

The following information is displayed after the restart, indicating that the configuration succeeded.
................
Bringing up interface bond0 OK
Bringing up interface eth0 OK
Bringing up interface eth1 OK
................

Next, let us look at the behavior under mode=1 and mode=0.

With mode=1 the bond works in active/standby fashion; eth1, the backup NIC, is shown with the NOARP flag.

[root@rhas-13 network-scripts]# ifconfig    # verify the NIC configuration
bond0     Link encap:Ethernet  HWaddr 00:0E:7F:25:D9:8B
          inet addr:172.31.0.13  Bcast:172.31.3.255  Mask:255.255.252.0
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:18495 errors:0 dropped:0 overruns:0 frame:0
          TX packets:480 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1587253 (1.5 Mb)  TX bytes:89642 (87.5 Kb)

eth0      Link encap:Ethernet  HWaddr 00:0E:7F:25:D9:8B
          inet addr:172.31.0.13  Bcast:172.31.3.255  Mask:255.255.252.0
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:9572 errors:0 dropped:0 overruns:0 frame:0
          TX packets:480 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:833514 (813.9 Kb)  TX bytes:89642 (87.5 Kb)
          Interrupt:11

eth1      Link encap:Ethernet  HWaddr 00:0E:7F:25:D9:8B
          inet addr:172.31.0.13  Bcast:172.31.3.255  Mask:255.255.252.0
          UP BROADCAST RUNNING NOARP SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:8923 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:753739 (736.0 Kb)  TX bytes:0 (0.0 B)
          Interrupt:15
That is to say, in active/standby mode, if one network path fails (for example, the active switch loses power), there is no network interruption: the bond fails over to the backup NIC, the machine continues to provide external services, and failover protection is achieved.
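In active-backup mode, /proc/net/bonding/bond0 reports which slave is currently carrying traffic on a "Currently Active Slave" line, which is a quick way to confirm a failover happened. A minimal sketch that extracts it, run here against embedded sample output rather than the live /proc file:

```shell
#!/bin/sh
# Extract the active slave from bonding status output. The sample text
# below stands in for `cat /proc/net/bonding/bond0` on a live system.
status='Bonding Mode: fault-tolerance (active-backup)
Currently Active Slave: eth0
MII Status: up'

active=$(printf '%s\n' "$status" | sed -n 's/^Currently Active Slave: //p')
echo "active slave: $active"
```

After pulling the cable on the active port, re-reading the file should show the other interface on that line.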

With mode=0 (load balancing) the bond can provide up to twice the bandwidth. Let us look at the NIC configuration information:

[root@rhas-13 root]# ifconfig
bond0     Link encap:Ethernet  HWaddr 00:0E:7F:25:D9:8B
          inet addr:172.31.0.13  Bcast:172.31.3.255  Mask:255.255.252.0
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:2817 errors:0 dropped:0 overruns:0 frame:0
          TX packets:95 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:226957 (221.6 Kb)  TX bytes:15266 (14.9 Kb)

eth0      Link encap:Ethernet  HWaddr 00:0E:7F:25:D9:8B
          inet addr:172.31.0.13  Bcast:172.31.3.255  Mask:255.255.252.0
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:1406 errors:0 dropped:0 overruns:0 frame:0
          TX packets:48 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:113967 (111.2 Kb)  TX bytes:7268 (7.0 Kb)
          Interrupt:11

eth1      Link encap:Ethernet  HWaddr 00:0E:7F:25:D9:8B
          inet addr:172.31.0.13  Bcast:172.31.3.255  Mask:255.255.252.0
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:1411 errors:0 dropped:0 overruns:0 frame:0
          TX packets:47 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:112990 (110.3 Kb)  TX bytes:7998 (7.8 Kb)
          Interrupt:15

In this mode, the failure of one NIC only reduces the server's available bandwidth; it does not interrupt network service.


You can query bond0's working status to learn the bonding state in detail:

[root@rhas-13 bonding]# cat /proc/net/bonding/bond0
bonding.c: v2.4.1 (September 15, 2003)

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0
Multicast Mode: all slaves

Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:0e:7f:25:d9:8a

Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:0e:7f:25:d9:8b
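When monitoring many bonded hosts, it is handy to condense this status file to one line per slave. A small awk sketch that lists each slave interface and its MII status; the heredoc stands in for `cat /proc/net/bonding/bond0` on a live system:

```shell
#!/bin/sh
# Summarize bonding status: print "iface=status" for each slave interface.
slaves=$(awk -F': ' '
    /^Slave Interface:/            { iface = $2 }
    /^MII Status:/ && iface != ""  { print iface "=" $2; iface = "" }
' <<'EOF'
Bonding Mode: load balancing (round-robin)
MII Status: up

Slave Interface: eth1
MII Status: up
Link Failure Count: 0

Slave Interface: eth0
MII Status: up
Link Failure Count: 0
EOF
)
echo "$slaves"
```

The first "MII Status" line (the bond's own) is skipped because no slave interface has been seen yet at that point.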

In Linux, Nic binding technology increases server reliability and available network bandwidth, providing users with uninterrupted key services.

The steps above bind two ports into a single bond0. To configure multiple bond interfaces, for example with physical ports eth0 and eth1 forming bond0 and eth2 and eth3 forming bond1, the per-port ifcfg files are written the same way as in step 1 above, but /etc/modprobe.d/bonding.conf cannot simply be set as follows:

alias bond0 bonding
options bonding mode=1 miimon=200
alias bond1 bonding
options bonding mode=1 miimon=200

There are two correct ways to set this up.

First (note that with this method, all bond interfaces must use the same mode):

alias bond0 bonding
alias bond1 bonding
options bonding max_bonds=2 miimon=200 mode=1

Second (with this method, different bond interfaces can use different modes):

alias bond0 bonding
options bond0 miimon=100 mode=1
install bond1 /sbin/modprobe bonding -o bond1 miimon=200 mode=0

Compare the two methods above closely; to configure 3, 4, or even more bond interfaces, simply extend them in the same way.

Postscript: a brief explanation of some of the options parameters used when loading the bonding module.

miimon: the link-monitoring interval, in milliseconds. We set it to 200 ms above.

max_bonds: the number of bond interfaces to create.

mode: the bonding mode, one of the following. In practice, modes 0 and 1 are used most often; to understand the characteristics of the other modes in depth, consult the documentation and experiment yourself.

0 or balance-rr: round-robin policy; provides load balancing and fault tolerance by transmitting packets in turn across the ports in the bond.

1 or active-backup: active/standby policy; provides high fault tolerance with simple logic. One port is active; if it fails, the other is activated automatically.

2 or balance-xor: XOR policy; provides load balancing and fault tolerance.

3 or broadcast: broadcast policy; provides fault tolerance by transmitting every packet on all ports in the bond.

4 or 802.3ad: IEEE 802.3ad dynamic link aggregation.

5 or balance-tlb: adaptive transmit load balancing.

6 or balance-alb: adaptive load balancing.
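The number-to-name mapping above can be sketched as a small helper, useful for example when translating a mode number found in a config file into a readable label:

```shell
#!/bin/sh
# Map a bonding mode number to its name, per the list above.
bond_mode_name() {
    case "$1" in
        0) echo "balance-rr" ;;
        1) echo "active-backup" ;;
        2) echo "balance-xor" ;;
        3) echo "broadcast" ;;
        4) echo "802.3ad" ;;
        5) echo "balance-tlb" ;;
        6) echo "balance-alb" ;;
        *) echo "unknown"; return 1 ;;
    esac
}

bond_mode_name 0   # balance-rr
bond_mode_name 1   # active-backup
```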
