Abstract: KVM virtual machines can present several virtual network device types, such as the fully emulated e1000 and rtl8139 and the paravirtualized virtio. How efficiently does a virtual machine communicate when different network devices are loaded? This document records the test process and results.
Introduction
KVM virtual machines can present several virtual network device types, such as e1000, rtl8139, and virtio, as well as ne2k_pci and pcnet for compatibility with older network adapters. This article tests the communication efficiency of a virtual machine serving external clients when different network devices are loaded.
Test Method
Network communication is a complex process affected by many external factors, so this test uses a deliberately simple environment to minimize their impact. A simple method measures the communication efficiency of each virtual network device, and only service bandwidth (throughput) is recorded; other data such as latency and error rates are not considered.
The test constructs a closed 100 Mbps network with two physical hosts: one acts as host P1 and runs a virtual machine V; the other acts as client P2 and uses scp over SSH to copy a large file from V to its local disk. The speed reported by scp is recorded as the throughput figure. For each run, the virtual machine is shut down, its virtual NIC type is changed through virt-manager, and it is restarted; after verifying that the virtual NIC loaded correctly, scp is run again. This process is repeated until the major KVM-supported virtual NIC types have all been tested.
Test procedure:
1. Modify the virtual network card type.
2. Start virtual machine V.
3. Log on to the V console and run the following commands:
# lspci | grep Ethernet
# ethtool -i eth0
# dmesg | grep eth0
to verify that the virtual NIC loaded correctly.
4. Log on to the P2 console and run scp to copy a large file from V to P2.
5. Record throughput data reported by SCP.
6. Close V and repeat the above process.
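The loop above can be sketched as a shell script run on host P1. The guest name "V", the sleep intervals, and the sed-based model switch are assumptions for illustration; the original test changed the NIC type through virt-manager instead.

```shell
#!/bin/sh
# Sketch of the per-NIC test loop (assumed to run on host P1).

set_nic_model() {   # rewrite the NIC <model type='...'/> in the guest's libvirt XML
    vm=$1; model=$2
    virsh dumpxml "$vm" > "/tmp/$vm.xml"
    sed -i "s/<model type='[^']*'\/>/<model type='$model'\/>/" "/tmp/$vm.xml"
    virsh define "/tmp/$vm.xml"
}

# Pull the throughput column out of an scp summary line, e.g.
# "test.bin   100%  500MB  11.2MB/s   00:44"  ->  "11.2MB/s"
# (file name and size here are illustrative, not from the original test)
parse_scp_speed() {
    awk '{ print $(NF-1) }'
}

if command -v virsh >/dev/null 2>&1; then
    for model in virtio e1000 rtl8139 ne2k_pci pcnet; do
        virsh shutdown V; sleep 30
        set_nic_model V "$model"
        virsh start V;   sleep 60   # give the guest time to boot
        echo "$model ready: run scp on client P2 and record the reported speed"
    done
fi
```

The scp transfer itself is still run interactively on client P2, since scp only prints its progress summary when attached to a terminal; `parse_scp_speed` is a convenience for extracting the speed field from a captured summary line.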
Test Environment
Network: a closed 100 Mbps network (a gigabit network would have been preferable but was not available). The network segment is 10.0.112.0/24, and host P1 uses bridged mode to configure the virtual machine's IP address.
Table 1 Host P1 (host) configuration
CPU    | Pentium(R) Dual-Core CPU E5800 @ 3.20 GHz
Memory | 2 GB
NIC    | Marvell 88E8057 PCI-E Gigabit Ethernet Controller (1000 Mbit/s)
IP     | 10.0.112.39
OS     | CentOS 6.2 x86
Table 2 Host P2 (client) configuration
CPU    | Pentium(R) Dual-Core CPU E5800 @ 3.20 GHz
Memory | 2 GB
NIC    | Marvell 88E8057 PCI-E Gigabit Ethernet Controller (1000 Mbit/s)
IP     | 10.0.112.38
OS     | CentOS 6.2 x86
Table 3 Host V (virtual machine) configuration
CPU    | QEMU Virtual CPU version (cpu64-rhel6)
Memory | 512 MB
NIC    | Varies per test
IP     | 10.0.112.160
OS     | CentOS 6.0 i386
Test Results
Table 4 Transfer speed of different virtual NICs
Virtual NIC type | Transfer speed  | Network status
virtio           | 10.9-11.2 MB/s | Stable
e1000            | 10.8-11.2 MB/s | Stable
rtl8139          | 10.8-11.2 MB/s | Stable
ne2k_pci         | 6.5-6.7 MB/s   | Stable
pcnet            | 9.1 MB/s       | Unstable; VM NIC crashed in ~85% of runs
Virtio is a unified paravirtual I/O interface for Linux virtual machine platforms. Ordinarily a host must emulate a variety of devices, such as disks, NICs, graphics cards, clocks, and USB controllers, so that the guest runs as it would on real hardware, but this emulation significantly reduces guest performance. If the guest does not depend on specific hardware devices, they can be replaced with a unified paravirtual device, which greatly improves virtual machine performance. On Linux, this unified, standardized interface is virtio. Note that the guest must run kernel 2.6.24 or later to take advantage of virtio's performance benefits. The KVM project has also released virtio drivers for Windows, which greatly improve the network performance of Windows guests.
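For reference, under libvirt the device model compared in this test is selected by the `<model>` element of the guest's interface definition. A typical virtio stanza looks like the following (the bridge name is illustrative; the MAC address is the one from the appendix logs):

```xml
<interface type='bridge'>
  <mac address='52:54:00:4f:1b:07'/>
  <source bridge='br0'/>
  <model type='virtio'/>  <!-- or e1000, rtl8139, ne2k_pci, pcnet -->
</interface>
```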
virtio, e1000, and rtl8139 all reached the practical maximum throughput of the 100 Mbps network.
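As a sanity check on that claim: 100 Mbps is 100e6 bits per second, and assuming scp reports speeds in binary megabytes (MiB), the raw line-rate ceiling works out to about 11.9 MB/s, which the measured 10.8-11.2 MB/s figures approach once TCP and SSH overhead are accounted for:

```shell
# Line-rate ceiling of a 100 Mbps link in binary MB/s:
# 100e6 bits/s  /  8 bits per byte  /  2^20 bytes per MiB
awk 'BEGIN { printf "%.1f MB/s\n", 100e6 / 8 / (1024 * 1024) }'   # -> 11.9 MB/s
```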
Appendix: Virtual NIC device verification records
===== Virtual Machine NIC: virtio
# lspci | grep Ethernet
00:03.0 Ethernet controller: Red Hat, Inc Virtio network device
# ethtool -i eth0
Cannot get driver information: Operation not supported
# dmesg | grep eth0
eth0: no IPv6 routers present
===== Virtual Machine NIC: e1000
# lspci | grep Ethernet
00:03.0 Ethernet controller: Intel Corporation 82540EM Gigabit Ethernet Controller (rev 03)
# ethtool -i eth0
driver: e1000
version: 7.3.21-k6-NAPI
firmware-version: N/A
bus-info: 0000:00:03.0
# dmesg | grep eth0
e1000: eth0: e1000_probe: Intel(R) PRO/1000 Network Connection
e1000: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
eth0: no IPv6 routers present
===== Virtual Machine NIC: rtl8139
# lspci | grep Ethernet
00:03.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 20)
# ethtool -i eth0
driver: 8139cp
version: 1.3
firmware-version:
bus-info: 0000:00:03.0
# dmesg | grep eth0
eth0: RTL-8139C+ at 0xe1134000, 52:54:00:4f:1b:07, IRQ 11
eth0: link up, 100Mbps, full-duplex, lpa 0x05E1
eth0: no IPv6 routers present
===== Virtual Machine NIC: ne2k_pci
# lspci | grep Ethernet
00:03.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8029(AS)
# ethtool -i eth0
driver: ne2k-pci
version: 1.03
firmware-version:
bus-info: 0000:00:03.0
# dmesg | grep eth0
eth0: RealTek RTL-8029 found at 0xc100, IRQ 11, 52:54:00:4f:1b:07.
eth0: no IPv6 routers present
===== Virtual Machine NIC: pcnet
# lspci | grep Ethernet
00:03.0 Ethernet controller: Advanced Micro Devices [AMD] 79c970 [PCnet32 LANCE] (rev 10)
# ethtool -i eth0
driver: pcnet32
version: 1.35
firmware-version:
bus-info: 0000:00:03.0
# dmesg | grep eth0
eth0: registered as PCnet/PCI II 79C970A
eth0: link up
eth0: no IPv6 routers present