Bind dual NICs in Linux to achieve load balancing

Maintaining high server availability is an important concern in enterprise IT environments, and the availability of a server's network connection matters most of all. NIC binding (bonding) helps ensure high availability and can also improve network performance.
This article shows how to bind two NICs in Linux into one virtual network interface. The aggregated device looks like a single Ethernet interface: the two NICs share the same IP address, and the parallel physical links are aggregated into one logical link. Sun and Cisco have long offered equivalent technologies, known as Trunking and EtherChannel respectively; in the Linux 2.4.x kernel the technique is called bonding. Bonding was first used on Beowulf clusters to improve data transfer between cluster nodes.

How does bonding work? The answer starts with the NIC's promiscuous (promisc) mode. Normally a network adapter accepts only Ethernet frames whose destination hardware address (MAC address) matches its own, and filters out all other frames to reduce the load on the driver. But a NIC also supports promiscuous mode, in which it receives every frame on the wire; tcpdump, for example, runs the NIC in this mode. Bonding does the same: the driver changes the MAC addresses of the two NICs so that they are identical, both cards can then receive frames addressed to that one MAC, and the received frames are handed to the bond driver for processing.
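Promiscuous mode is just one bit in the interface's flags word: IFF_PROMISC, value 0x100 in Linux's &lt;linux/if.h&gt;. As a minimal sketch of how a tool decides whether an interface is in this mode, the check below tests that bit in a flags value; the sample flag values are illustrative constants, not read from a live interface:

```shell
#!/bin/bash
# IFF_PROMISC is bit 0x100 of the interface flags word (linux/if.h).
# is_promisc tests that bit in a given flags value; the two sample
# values below are made up for illustration.
is_promisc() {
  flags=$1
  if [ $(( flags & 0x100 )) -ne 0 ]; then
    echo promisc
  else
    echo normal
  fi
}

is_promisc 0x1103   # flags word with IFF_PROMISC set  -> promisc
is_promisc 0x1003   # flags word without it            -> normal
```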
After all that theory, the configuration is actually very simple. There are four steps in total.
The experiments were run on Red Hat Enterprise Linux 3.0.
Prerequisites: the two NICs should use the same chipset model, and each NIC should have its own independent BIOS chip.
  
1. Create the configuration file for the virtual network interface and assign the IP address to it:

[root@rhas-13 root]# cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-bond0

2. Edit ifcfg-bond0 and change the first line to DEVICE=bond0:

# vi /etc/sysconfig/network-scripts/ifcfg-bond0
# cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BOOTPROTO=static
IPADDR=172.31.0.13
NETMASK=255.255.252.0
BROADCAST=172.31.3.254
ONBOOT=yes
TYPE=Ethernet
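As a sanity check on the addresses above: with NETMASK 255.255.252.0 (a /22), the broadcast address for 172.31.0.13 works out to 172.31.3.255, which is also what ifconfig reports later in this article. A small sketch of the arithmetic (plain bash; `ip_to_int`/`int_to_ip` are helper names made up for this example):

```shell
#!/bin/bash
# Compute the broadcast address implied by IPADDR and NETMASK:
# broadcast = (ip AND mask) OR (NOT mask).
ip_to_int() {           # dotted quad -> 32-bit integer
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}
int_to_ip() {           # 32-bit integer -> dotted quad
  echo "$(( ($1 >> 24) & 255 )).$(( ($1 >> 16) & 255 )).$(( ($1 >> 8) & 255 )).$(( $1 & 255 ))"
}

ip=$(ip_to_int 172.31.0.13)
mask=$(ip_to_int 255.255.252.0)
bcast=$(( (ip & mask) | (~mask & 0xFFFFFFFF) ))
int_to_ip "$bcast"      # -> 172.31.3.255
```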
Do not assign an IP address, subnet mask, or device name to either physical NIC; assign that information to the virtual adapter (bond0) instead.
[root@rhas-13 network-scripts]# cat ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=dhcp
[root@rhas-13 network-scripts]# cat ifcfg-eth1
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=dhcp
  
3. Edit /etc/modules.conf so that the system loads the bonding module at boot and exposes the virtual interface as bond0.

# vi /etc/modules.conf

Add the following two lines:

alias bond0 bonding
options bond0 miimon=100 mode=1
Note: miimon enables link monitoring. With miimon=100, the system checks the link state every 100 ms; if one link fails, traffic is moved to the other link. The mode value selects the working mode; the commonly used values are 0 and 1.
mode=0: load balancing (round-robin). Both NICs carry traffic.
mode=1: fault tolerance (active-backup). The bond works in active/standby mode: by default only one NIC carries traffic while the other stands by as a backup.
Note that bonding can only monitor the link between the host and the switch. If a link on the far side of the switch goes down while the switch itself is healthy, bonding still considers its own link good and keeps using it.
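The active-backup behaviour of mode=1 can be sketched as a toy decision rule: transmit on the active slave until the link monitor reports it down, then promote the backup. `link_status` below is a hypothetical stand-in for the miimon link check, hard-coded so that eth0 has just lost its link:

```shell
#!/bin/bash
# Toy sketch of mode=1 (active-backup) failover logic. link_status is a
# made-up stand-in for the MII link monitor, not a real driver call.
link_status() {
  case $1 in
    eth0) echo down ;;   # pretend eth0 just lost its link
    *)    echo up ;;
  esac
}

active=eth0
backup=eth1
if [ "$(link_status "$active")" = down ] && [ "$(link_status "$backup")" = up ]; then
  tmp=$active
  active=$backup       # fail over: backup becomes the active slave
  backup=$tmp
fi
echo "active slave: $active"   # -> active slave: eth1
```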
4. Edit /etc/rc.d/rc.local and add two lines:

# vi /etc/rc.d/rc.local

ifenslave bond0 eth0 eth1
route add -net 172.31.3.254 netmask 255.255.255.0 bond0
  
After the configuration is complete, restart the machine.
The following information is displayed after restart, indicating that the configuration is successful.
................
Bringing up interface bond0 OK
Bringing up interface eth0 OK
Bringing up interface eth1 OK
................
  
Next, let's look at how mode 0 and mode 1 behave in practice.
  
With mode=1 the bond works in active/standby mode, and eth1, the backup NIC, is flagged NOARP. Verify the NIC configuration with ifconfig:
[root@rhas-13 network-scripts]# ifconfig
bond0     Link encap:Ethernet  HWaddr 00:0E:7F:25:D9:8B
          inet addr:172.31.0.13  Bcast:172.31.3.255  Mask:255.255.252.0
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:18495 errors:0 dropped:0 overruns:0 frame:0
          TX packets:480 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1587253 (1.5 Mb)  TX bytes:89642 (87.5 Kb)

eth0      Link encap:Ethernet  HWaddr 00:0E:7F:25:D9:8B
          inet addr:172.31.0.13  Bcast:172.31.3.255  Mask:255.255.252.0
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:9572 errors:0 dropped:0 overruns:0 frame:0
          TX packets:480 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:833514 (813.9 Kb)  TX bytes:89642 (87.5 Kb)
          Interrupt:11

eth1      Link encap:Ethernet  HWaddr 00:0E:7F:25:D9:8B
          inet addr:172.31.0.13  Bcast:172.31.3.255  Mask:255.255.252.0
          UP BROADCAST RUNNING NOARP SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:8923 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:753739 (736.0 Kb)  TX bytes:0 (0.0 B)
          Interrupt:15
In other words, in active/standby mode, when one network interface fails (for example, when the switch the active NIC connects to loses power), the network is not interrupted: the system switches to the other NIC enslaved in /etc/rc.d/rc.local, and the machine continues to provide service, achieving failover protection.
  
With mode=0 (load balancing), the bond can provide twice the bandwidth. Let's look at the NIC configuration information:
[root@rhas-13 root]# ifconfig
bond0     Link encap:Ethernet  HWaddr 00:0E:7F:25:D9:8B
          inet addr:172.31.0.13  Bcast:172.31.3.255  Mask:255.255.252.0
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:2817 errors:0 dropped:0 overruns:0 frame:0
          TX packets:95 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:226957 (221.6 Kb)  TX bytes:15266 (14.9 Kb)

eth0      Link encap:Ethernet  HWaddr 00:0E:7F:25:D9:8B
          inet addr:172.31.0.13  Bcast:172.31.3.255  Mask:255.255.252.0
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:1406 errors:0 dropped:0 overruns:0 frame:0
          TX packets:48 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:113967 (111.2 Kb)  TX bytes:7268 (7.0 Kb)
          Interrupt:11

eth1      Link encap:Ethernet  HWaddr 00:0E:7F:25:D9:8B
          inet addr:172.31.0.13  Bcast:172.31.3.255  Mask:255.255.252.0
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:1411 errors:0 dropped:0 overruns:0 frame:0
          TX packets:47 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:112990 (110.3 Kb)  TX bytes:7998 (7.8 Kb)
          Interrupt:15
  
In this mode, the failure of one NIC only reduces the server's outbound bandwidth; it does not interrupt network service.
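The round-robin scheduling that produces this even split can be sketched as: frame n is transmitted on slave (n − 1) mod N. A toy model of that rule (`pick_slave` is a made-up name for illustration, not part of the bonding driver):

```shell
#!/bin/bash
# Toy model of mode=0 round-robin: frame n goes to slave number
# (n - 1) mod N, so traffic splits evenly across the enslaved NICs.
slaves=(eth0 eth1)
pick_slave() {
  echo "${slaves[$(( ($1 - 1) % ${#slaves[@]} ))]}"
}

for frame in 1 2 3 4; do
  echo "frame $frame -> $(pick_slave "$frame")"
done
# frames alternate: eth0, eth1, eth0, eth1
```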
  
You can query bond0's status in /proc to see in detail how bonding is working:
[root@rhas-13 bonding]# cat /proc/net/bonding/bond0
bonding.c: v2.4.1 (September 15, 2003)
  
Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0
Multicast Mode: all slaves
  
Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:0e:7f:25:d9:8a

Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:0e:7f:25:d9:8b
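For scripting, the per-slave status in this file is easy to extract. A sketch that summarizes each slave's MII status, run here against a here-document copy of the output above rather than the real /proc file:

```shell
#!/bin/bash
# Summarize slave link status from /proc/net/bonding/bond0-style text.
# The here-document stands in for the real file.
summary=$(awk -F': ' '
  /^Slave Interface/           { slave = $2 }
  /^MII Status/ && slave != "" { print slave, $2; slave = "" }
' <<'EOF'
Bonding Mode: load balancing (round-robin)
MII Status: up

Slave Interface: eth1
MII Status: up
Link Failure Count: 0

Slave Interface: eth0
MII Status: up
Link Failure Count: 0
EOF
)
echo "$summary"
# eth1 up
# eth0 up
```

On a real system the here-document would be replaced by `< /proc/net/bonding/bond0`.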
  
In Linux, NIC bonding not only increases server reliability but also increases the available network bandwidth, providing users with uninterrupted key services. The method above has been tested successfully on multiple Red Hat versions with good results. Give it a try!