Macvtap and Vhost-net technology principles
VM <----> virtual NIC <----> virtualization layer <----> kernel bridge <----> physical NIC
Vhost-net lets virtual machine network traffic bypass the user-space virtualization layer; it requires a virtio paravirtualized NIC.
Macvtap bypasses the kernel bridge.
Traditional bridging solution
<interface type='bridge'>
  <mac address='00:16:42:34:45:6f'/>
  <source bridge='br0'/>
  <target dev='tap0'/>
  <model type='virtio'/>
  <alias name='net0'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
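For reference, a minimal host-side setup for such a bridge might look like this (a sketch assuming eth0 as the uplink; brctl comes from the bridge-utils package):
> brctl addbr br0
> brctl addif br0 eth0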
Vhost-net is an optimization of virtio. Virtio was originally designed for communication between a front-end driver in the guest and a back end in the VMM, reducing the number of root/non-root mode transitions required by hardware virtualization.
Vhost-net optimizes the back end.
Without vhost-net, after the CPU exits to root mode the data path must climb into user space (QEMU) and then drop back into the kernel to write to the tap device. With vhost-net, once execution enters the kernel the data stays there, eliminating the extra kernel/user-space switches and further reducing the cost of privilege transitions.
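As a minimal sketch, the vhost-net back end can be requested explicitly on a virtio interface in the domain XML (only the relevant elements are shown):
<interface type='bridge'>
  <source bridge='br0'/>
  <model type='virtio'/>
  <driver name='vhost'/>
</interface>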
Macvtap solution
> ip link add link eth1 name macvtap0 type macvtap
> ip link set macvtap0 address 1a:34:34:34:45:42 up
> ip link show macvtap0
<interface type='direct'>
  <mac address='1a:34:34:34:45:42'/>
  <source dev='eth1' mode='bridge'/>
  <model type='e1000'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
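Besides bridge mode, macvtap also supports vepa, private, and passthrough modes for the source device.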
NIC interrupts and multi-queue
The Linux kernel handles an interrupt by invoking its registered interrupt handler. Because interrupts fire frequently, heavy but less time-critical work is split out of the handler and deferred; in Linux this deferred work is called the bottom half.
There are three mechanisms for processing the bottom half:
Soft interrupt
Tasklet
Work queue
A heavily loaded NIC generates a large volume of softirqs, which easily becomes a bottleneck, and for a long time they were all handled by CPU0.
With a multi-queue NIC, packet processing can be spread across multiple CPUs.
RSS (Receive Side Scaling) is the hardware feature that implements multiple queues: different flows are steered to different CPUs, while packets of the same flow always stay on the same CPU, which avoids the TCP reordering that processing one flow in parallel on several CPUs would cause.
> ls /sys/class/net/eth0/queues/
This directory lists the NIC's multiple receive (rx-N) and transmit (tx-N) queues.
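To bind queue interrupts to CPUs manually, each queue's IRQ can be pinned via /proc (the IRQ number 53 below is only an example; look up the real numbers in /proc/interrupts):
> grep eth0 /proc/interrupts
> echo 4 > /proc/irq/53/smp_affinity   # hex CPU bitmask: 4 = CPU2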
Irqbalance optimizes how interrupts are distributed across CPUs.
It has two working modes:
Performance
Power-save
NIC interrupt-binding script: https://gist.github.com/syuu1228/4352382
Multi-queue virtio NIC
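A sketch of enabling it, assuming the vhost back end (the queue count of 4 is an example): set the queues attribute on the interface's <driver> element, then activate the queues inside the guest with ethtool.
<interface type='bridge'>
  <source bridge='br0'/>
  <model type='virtio'/>
  <driver name='vhost' queues='4'/>
</interface>
In the guest:
> ethtool -L eth0 combined 4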
NIC PCI passthrough technology
> virsh nodedev-list --tree
> virsh nodedev-dumpxml pci_0000_04_00_0
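A sketch of the corresponding domain XML for passing the device through (the PCI address matches pci_0000_04_00_0 above):
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
  </source>
</hostdev>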
SR-IOV technology
Single Root I/O Virtualization (SR-IOV) is a standard for sharing PCIe devices among virtual machines. It is mostly used with network devices, though in theory it can support other PCIe devices as well.
SR-IOV is a hardware solution that bypasses the system's software virtualization layer: each virtual function gets its own memory space, interrupts, and DMA streams.
PF (Physical Function): a full-featured PCIe function.
VF (Virtual Function): a lightweight PCIe function; in theory each PF can expose up to 64,000 VFs.
NIC SR-IOV
Configure the PF on the host, then give virtual machines exclusive use of the VFs (sub-NICs).
> modprobe igb
> modprobe igb max_vfs=7
A Gigabit NIC supports up to 8 VFs (0-7); a 10 Gigabit NIC supports up to 64 VFs.
Enable SR-IOV in the BIOS.
> modprobe -r igb
> echo "options igb max_vfs=7" >> /etc/modprobe.d/igb.conf
> lspci | grep 82576
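Each VF then shows up as a PCI device of its own, e.g. a line like the following (bus and slot numbers vary by machine):
0b:10.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)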
To give a virtual machine exclusive use of a VF (sub-NIC):
> virsh nodedev-list | grep 0b
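A sketch of assigning one VF with <interface type='hostdev'> (the PCI address and MAC below are examples):
<interface type='hostdev' managed='yes'>
  <mac address='52:54:00:6d:90:02'/>
  <source>
    <address type='pci' domain='0x0000' bus='0x0b' slot='0x10' function='0x0'/>
  </source>
</interface>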
Open vSwitch
> ovs-vsctl add-br br0    # create the bridge
> ovs-vsctl add-port br0 eth1
Edit the virtual machine's XML file:
<interface type='bridge'>
  <mac address='xx:xx:xx:xx:xx:xx'/>
  <source bridge='br0'/>
  <virtualport type='openvswitch'/>
  <vlan>
    <tag id='2'/>
  </vlan>
  <target dev='tap1'/>
  <model type='virtio'/>
</interface>
> ovs-vsctl show
Change a virtual machine's VLAN tag by setting the tag on its corresponding tap port:
> ovs-vsctl set port tap1 tag=3
Bonding two physical NICs under OVS with LACP:
> ovs-vsctl add-br br0
> ovs-vsctl add-bond br0 bond0 eth2 eth3 lacp=active
> ovs-vsctl set port bond0 bond_mode=balance-slb
> ovs-appctl bond/show bond0
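LACP negotiation state can additionally be checked with ovs-appctl lacp/show bond0.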
Connecting two Open vSwitch bridges
1. Using a veth device pair
> ovs-vsctl add-br br0
> ovs-vsctl add-br br1
> ip link add name veth0 type veth peer name veth1
> ovs-vsctl add-port br0 veth0
> ovs-vsctl add-port br1 veth1
2. Using Open vSwitch patch ports
> ovs-vsctl add-br br0
> ovs-vsctl add-br br1
> ovs-vsctl add-port br0 patch-to-br1
> ovs-vsctl set interface patch-to-br1 type=patch
> ovs-vsctl set interface patch-to-br1 options:peer=patch-to-br0
> ovs-vsctl add-port br1 patch-to-br0
> ovs-vsctl set interface patch-to-br0 type=patch
> ovs-vsctl set interface patch-to-br0 options:peer=patch-to-br1
(Each patch port is named after the bridge it leads to, and the two ports name each other as peers.)
Host multi-NIC bonding
> cat /etc/modprobe.d/nicbond.conf
alias bond0 bonding
options bond0 mode=1 miimon=100
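Here mode=1 selects active-backup bonding, and miimon=100 sets the MII link-check interval to 100 ms.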
Configure the aggregate (bond) interface
> cd /etc/sysconfig/network-scripts
> cat ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
BRIDGE=br1
Configure the slave NICs
> cat ifcfg-enp4s0f0
HWADDR=
TYPE=Ethernet
BOOTPROTO=none
NAME=enp4s0f0
UUID=
ONBOOT=yes
MASTER=bond0
SLAVE=yes
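The second physical NIC gets an analogous ifcfg file (for example ifcfg-enp4s0f1; the interface name here is only illustrative) with MASTER=bond0 and SLAVE=yes.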
Configure the virtual network bridge
> cat ifcfg-br1
DEVICE=br1
TYPE=Bridge
IPADDR=192.168.20.200
NETMASK=255.255.255.0
ONBOOT=yes
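On RHEL/CentOS systems using the legacy network service, restarting it (for example with systemctl restart network) applies these files.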
Content summarized from the book 深度实践KVM (KVM in Depth Practice).
This article is from the "Lionel" blog; please keep this source: http://reliable.blog.51cto.com/10292844/1782437