Linux dual-NIC bonding test


First, the setup: servers A and B both run CentOS 4.6, and we need to build an HA cluster on them. To avoid split-brain, we want to make the heartbeat link more reliable. The current wiring is as follows: eth2 and eth3 of server A are connected directly to eth2 and eth3 of server B, respectively (the pairing order does not matter). All NICs are Gigabit NICs, and the topology is shown below:

Now the hardware. Server A is an HP DL380 G5 that has been in service for more than two years: 4 cores, 8 GB of memory, and five 2.5-inch hard disks in RAID 5. Server B is a DELL 2950 purchased just a few months ago: 8 cores, 16 GB of memory, and three 3.5-inch 300 GB SAS drives in RAID 5.
The business switch is a DELL Gigabit switch, used purely as an access switch with no special configuration.
The blue line in the figure is cat5e unshielded twisted pair bought several years ago.
The red lines in the figure are newly purchased cat6 unshielded twisted pair.
The test method is very simple: compare how long it takes to scp a 3.4 GB ISO from server A to server B.
The data goes over the business link, and bonding is not used.
############# No bonding ##############
[root@rac-node01 tmp]# time scp rhel-5.1-server-x86_64-dvd.iso 10.168.0.202:/tmp
root@10.168.0.202's password:
rhel-5.1-server-x86_64-dvd.iso          100% 3353MB  44.1MB/s

real    1m20.105s
user    0m34.752s
sys     0m11.002s
############# Fast
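A quick sanity check on these numbers (my arithmetic, not from the source): 3353 MB in roughly 80 s works out to about 42 MB/s, consistent with the 44.1 MB/s scp reports, yet a Gigabit link tops out around 110-120 MB/s of payload. scp's single-threaded SSH encryption, rather than the wire, is therefore the likely bottleneck here, which is worth keeping in mind when judging whether bonding "adds bandwidth" in the runs below.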
The data goes over the heartbeat link using bonding, with the mode set to 6 (balance-alb), i.e., adaptive load balancing that does not require the switch to participate.
Strangely, some packets are lost in this mode, possibly because of this unusual back-to-back topology.
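The article does not show the bonding configuration itself, but on a CentOS 4-era system a bond named bond1 (the name that appears in the netstat output below) would typically be set up along these lines; the IP address and miimon value are illustrative assumptions, not values from the article:

# /etc/modprobe.conf
alias bond1 bonding
options bond1 mode=6 miimon=100

# /etc/sysconfig/network-scripts/ifcfg-bond1
DEVICE=bond1
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.0.201    # assumed; the peer in the tests is 192.168.0.202
NETMASK=255.255.255.0

# /etc/sysconfig/network-scripts/ifcfg-eth2 (ifcfg-eth3 is analogous)
DEVICE=eth2
BOOTPROTO=none
ONBOOT=yes
MASTER=bond1
SLAVE=yes

After writing these files, a service network restart brings the bond up.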
############# mode=6 ##############
[root@rac-node01 tmp]# time scp rhel-5.1-server-x86_64-dvd.iso 192.168.0.202:/tmp
root@192.168.0.202's password:
rhel-5.1-server-x86_64-dvd.iso          100% 3353MB  21.4MB/s

real    2m47.812s
user    0m34.965s
sys     0m19.421s
[root@rac-node01 tmp]# netstat -i    # receive side
Kernel Interface table
Iface     MTU Met    RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
bond1    1500   0  5123831   2045      0      0  5138747      0      0      0 BMmRU
eth0     1500   0     2847      0      0      0      703      0      0      0 BMRU
eth2     1500   0  2562665     11      0      0  2569378      0      0      0 BMsRU
eth3     1500   0  2561166   2034      0      0  2569369      0      0      0 BMsRU
lo      16436   0     2261      0      0      0     2261      0      0      0 LRU
############# Packet loss
The data goes over the heartbeat link using bonding, with the mode set to 0 (balance-rr), i.e., round-robin load balancing, which normally requires the switch to participate.
In this mode there is no packet loss as there was with mode=6, and traffic is split almost evenly between eth2 and eth3. The RX-ERR counts in the output below are left over from the previous test.
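Changing modes between tests means reloading the bonding driver, since the mode is read when the module loads. A minimal sketch of the steps (the exact commands are not shown in the source):

# edit /etc/modprobe.conf so the options line reads, e.g.:
#   options bond1 mode=0 miimon=100
[root@rac-node01 ~]# ifdown bond1
[root@rac-node01 ~]# rmmod bonding
[root@rac-node01 ~]# service network restart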
############# mode=0 ##############
[root@rac-node01 tmp]# time scp rhel-5.1-server-x86_64-dvd.iso 192.168.0.202:/tmp
root@192.168.0.202's password:
rhel-5.1-server-x86_64-dvd.iso          100% 3353MB  38.1MB/s

real    1m33.508s
user    0m34.539s
sys     0m19.363s
[root@mailserver tmp]# netstat -i
Kernel Interface table
Iface     MTU Met    RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
bond1    1500   0 11133871   2045      0      0 11180462      0      0      0 BMmRU
eth0     1500   0  1334477      0      0      0  2575981      0      0      0 BMRU
eth2     1500   0  5567685     11      0      0  5590236      0      0      0 BMsRU
eth3     1500   0  5566186   2034      0      0  5590226      0      0      0 BMsRU
lo      16436   0     2270      0      0      0     2270      0      0      0 LRU
############# No packet loss
The data goes over the heartbeat link using bonding, with the mode set to 1 (active-backup), i.e., failover.
This mode has a problem: when eth2 is the active slave on server A while eth3 is the active slave on server B, the two servers cannot reach each other over the heartbeat link. When that happens, unplugging one of the heartbeat cables and plugging it back in restores communication.
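The bonding driver reports which slave is currently active, so the mismatch can be detected, and fixed, without touching any cables; this is standard bonding tooling of that era rather than something shown in the source:

[root@rac-node01 ~]# grep "Currently Active Slave" /proc/net/bonding/bond1
Currently Active Slave: eth3
# force the active slave to match the other end instead of re-plugging a cable:
[root@rac-node01 ~]# ifenslave -c bond1 eth2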
############# mode=1 ##############
[root@rac-node01 ~]# time scp /tmp/rhel-5.1-server-x86_64-dvd.iso 192.168.0.202:/tmp/
root@192.168.0.202's password:
rhel-5.1-server-x86_64-dvd.iso          100% 3353MB  41.4MB/s

real    1m24.162s
user    0m35.007s
sys     0m13.455s

[root@mailserver ~]# netstat -i
Kernel Interface table
Iface     MTU Met    RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
bond1    1500   0  3436804      0      0      0  1774259      0      0      0 BMmRU
eth0     1500   0     3962      0      0      0      773      0      0      0 BMRU
eth2     1500   0  3436804      0      0      0  1774254      0      0      0 BMsRU
eth3     1500   0        0      0      0      0        5      0      0      0 BMsRU
lo      16436   0     3071      0      0      0     3071      0      0      0 LRU
############# No packet loss, but only one NIC carries traffic
Conclusion:
The results above show that in raw speed a single unbonded NIC is indeed fastest, but it provides no fault tolerance. Next comes the bonded failover mode (mode=1), which can suffer from the active-slave mismatch described above. The load-balancing mode=6 can drop packets in this topology, which is dangerous.
The load-balancing mode=0 does not appear to increase bandwidth, but it is the best choice for maximizing availability.
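Summarizing the four runs (figures taken from the outputs above):

Mode                     Throughput   real time   Packet loss     Fault tolerance
no bonding               44.1 MB/s    1m20s       none            none
mode=6 (balance-alb)     21.4 MB/s    2m48s       yes (RX-ERR)    yes
mode=0 (balance-rr)      38.1 MB/s    1m34s       none            yes
mode=1 (active-backup)   41.4 MB/s    1m24s       none            yes, with the active-slave caveat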


Author: "Diving into the ocean of Technology"
