A previous article (http://blog.csdn.net/yeasy/article/details/39178187) discussed how network traffic is switched to and from virtual machines.
This article describes the interaction between a VM and the server's physical NIC.
A single server typically runs dozens of virtual machines, so the demand for both computing power and network bandwidth is enormous. The former drives today's multi-core development, while the latter cannot be met simply by adding more NICs.
Imagine that every VM requires 10 Gbit/s of switching capacity: the server would need dozens of physical NICs, which is simply impractical even before asking whether the motherboard has enough slots.
Moreover, if the interfaces allocated to the VMs are all virtual ports of a software switch, maintaining those ports and forwarding traffic between them consumes a great deal of the server's compute resources.
The industry has therefore introduced VMDq and SR-IOV to improve virtual machine network performance.
VMDq
With VMDq, the VMM allocates a dedicated receive queue in the server's physical NIC for each VM. The NIC hardware sorts incoming traffic directly into the queue of the destination VM, so the software switch no longer has to classify and route packets itself.
However, the VMM and the vSwitch still have to copy packets between the VMDq queue and the VM's memory.
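To make the division of labor concrete, here is a minimal conceptual sketch in Python (an illustrative model, not any real driver API): the NIC sorts incoming frames into per-VM queues by destination MAC, and the hypervisor's remaining job is only the copy from queue to guest memory. All class and field names are invented for illustration.

```python
from collections import defaultdict

class VmdqNic:
    """Toy model of a VMDq-capable NIC: one hardware queue per VM."""
    def __init__(self):
        self.queues = defaultdict(list)  # keyed by the VM's MAC address

    def register_vm(self, mac):
        self.queues[mac]  # allocate a dedicated queue for this VM

    def receive(self, frame):
        # Hardware classification: the NIC routes the frame into the
        # destination VM's queue; the software switch does no sorting.
        # (A real NIC would also have a default queue for unknown MACs.)
        self.queues[frame["dst_mac"]].append(frame)

class Hypervisor:
    """The software cost that remains with VMDq: the memory copy."""
    def deliver(self, nic, mac, vm_buffer):
        # Copy frames from the NIC queue into the VM's memory --
        # this is the step that SR-IOV later eliminates.
        while nic.queues[mac]:
            vm_buffer.append(nic.queues[mac].pop(0))

nic = VmdqNic()
nic.register_vm("aa:aa")
nic.register_vm("bb:bb")
nic.receive({"dst_mac": "aa:aa", "payload": b"ping"})
nic.receive({"dst_mac": "bb:bb", "payload": b"pong"})

vm_a = []
Hypervisor().deliver(nic, "aa:aa", vm_a)
print(vm_a)  # only VM A's frames, pre-sorted by the NIC
```

Note that `deliver` never inspects frame headers: classification already happened in "hardware", which is exactly the work VMDq offloads from the vSwitch.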
SR-IOV
SR-IOV goes further: the NIC exposes multiple virtual functions (VFs), each of which is presented to a virtual machine as an independent NIC. The VM therefore communicates with the NIC directly and does not need to go through a software switch at all.
Data moves between the VF and the VM at high speed via DMA.
SR-IOV offers the best performance, but it requires support throughout the stack, including the NIC, the motherboard, the VMM, and so on.
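The contrast with VMDq can be sketched the same way (again an illustrative Python model, not a driver API; class names are invented): the physical function carves out VFs, a VM's buffer is mapped to its VF, and a received frame lands in guest memory by "DMA" with no hypervisor copy on the data path.

```python
class VirtualFunction:
    """Toy model of an SR-IOV VF handed to one VM as an independent NIC."""
    def __init__(self, index):
        self.index = index
        self.vm_buffer = None  # guest memory mapped for DMA

    def attach(self, vm_buffer):
        # Passthrough: the VM's buffer is mapped directly to this VF
        # (on real hardware the IOMMU enforces isolation between VFs).
        self.vm_buffer = vm_buffer

    def dma_receive(self, frame):
        # The frame lands straight in guest memory; neither the
        # hypervisor nor a software switch touches the data path.
        self.vm_buffer.append(frame)

class PhysicalFunction:
    """Toy model of the PF splitting the NIC into independent VFs."""
    def __init__(self, num_vfs):
        self.vfs = [VirtualFunction(i) for i in range(num_vfs)]

pf = PhysicalFunction(num_vfs=2)
vm_mem = []
pf.vfs[0].attach(vm_mem)
pf.vfs[0].dma_receive(b"hello-vm0")
print(vm_mem)  # [b'hello-vm0']
```

For comparison with the VMDq sketch above: there is no `Hypervisor.deliver` step here at all, which is why SR-IOV needs hardware support (NIC VFs, IOMMU on the motherboard, and a VMM that can pass the VF through to the guest).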