Summary of KVM Virtualization CPU Technology
1. NUMA Technology Introduction
NUMA is one of the architectures designed to let multiple CPUs work together. Before looking at NUMA itself, let's briefly review the history of multi-CPU architectures. There are three main ones: SMP, MPP, and NUMA, all designed to solve the problem of multiple CPUs working together.
In the early days, each server had a single CPU. As technology developed, multiple CPUs needed to work together, and the first multi-CPU technology was SMP.
SMP
In SMP, multiple CPUs access memory through a shared bus, so SMP systems are sometimes called Uniform Memory Access (UMA) architectures. Uniformity here means that at any moment the processors can hold or share only a single, unique value for each piece of data in memory.
The disadvantage of SMP is limited scalability: once the memory interface is saturated, adding processors does not yield higher performance, so SMP supports only a limited number of CPUs.
MPP
MPP is a distributed-memory mode that allows a system to include more processors. A distributed-memory system has multiple nodes, each with its own memory; each node can be configured as SMP or non-SMP, and the nodes join together to form the whole system. MPP can be viewed roughly as a scale-out cluster of SMP nodes, and it generally relies on software to coordinate the nodes.
NUMA
In NUMA, each processor has its own local memory, and each processor can also access the memory attached to the other processors.
NUMA-Q
NUMA-Q was IBM's first commercial solution applying NUMA technology to i386; it allowed more x86 CPUs to work together.
KVM Virtual Machine NUMA tuning
In a NUMA architecture each processor can access both its own and other processors' memory, but access to local memory is much faster than access to remote memory. The goal of NUMA tuning is therefore to keep processors accessing their local memory, improving processing speed.
The host's current NUMA hardware layout can be viewed with numactl --hardware.
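Representative output on a hypothetical two-node host (values are illustrative; your topology will differ):

# numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3
node 0 size: 8191 MB
node 0 free: 2048 MB
node 1 cpus: 4 5 6 7
node 1 size: 8192 MB
node 1 free: 3072 MB
node distances:
node   0   1
  0:  10  21
  1:  21  10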
NUMA management in libvirt
Use the numastat command to view per-node memory statistics.
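Representative numastat output on the same hypothetical two-node host (numbers are illustrative only):

# numastat
                           node0           node1
numa_hit                 8465324         7896021
numa_miss                      0               0
numa_foreign                   0               0
interleave_hit             24893           24308
local_node               8460528         7890813
other_node                  4796            5208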
Use the virsh numatune command to view or modify a virtual machine's NUMA configuration.
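For example, querying a domain's current tuning (the domain name rhel7 is assumed here, matching the examples below):

# virsh numatune rhel7
numa_mode      : strict
numa_nodeset   : 0,2-3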
Automatic NUMA balancing in Linux
Linux enables automatic NUMA balancing by default. To turn it off, use the following command:
# echo 0 > /proc/sys/kernel/numa_balancing
To turn it back on, use:
# echo 1 > /proc/sys/kernel/numa_balancing
A virtual machine's NUMA placement can be strict with an explicitly specified nodeset, or auto, which uses the system's numad service:
<numatune>
  <memory mode='strict' placement='auto'/>
</numatune>
<numatune>
  <memory mode='strict' nodeset='0,2-3'/>
</numatune>
The equivalent command is:
# virsh numatune rhel7 --nodeset '0,2-3'
vCPU settings
<vcpu placement='auto'>8</vcpu>
<vcpu placement='static' cpuset='0-10,5'>8</vcpu>
<vcpu> and <numatune> need to be kept consistent: <numatune> configures the physical CPU's NUMA nodes, while <vcpu> configures the physical CPU cores, including cores produced by hyper-threading. When <numatune> uses static placement, a <nodeset> must also be specified.
You can also give a virtual machine 32 virtual CPUs but let it use only 8 at the start, and then add CPUs to the virtual machine later based on system load:
<vcpu placement='auto' current='8'>32</vcpu>
You can also pin each virtual CPU to specific physical CPUs:
<cputune>
  <vcpupin vcpu="0" cpuset="1-4,2"/>
  <vcpupin vcpu="1" cpuset="0,1"/>
  <vcpupin vcpu="2" cpuset="2,3"/>
  <vcpupin vcpu="3" cpuset="0,4"/>
</cputune>
You can also use the emulatorpin method.
The <emulatorpin> tag specifies concrete physical CPUs, so that the CPU and memory used by the virtual machine stay within one physical CPU:
<cputune>
  <emulatorpin cpuset="1-3"/>
</cputune>
The equivalent command is:
# virsh emulatorpin rhel7 1-3
Here cores 1-3 are inside a single physical CPU.
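Running virsh emulatorpin with only the domain name queries the current pinning; representative output (the exact format may vary slightly by libvirt version):

# virsh emulatorpin rhel7
emulator: CPU Affinity
----------------------------------
       *: 1-3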
By default, the system uses the automatic NUMA balancing policy.
NUMA topologies for virtual machines
You can also define the NUMA topology that a virtual machine itself presents:
<cpu>
  ...
  <numa>
    <cell cpus='0-3' memory='512000'/>
    <cell cpus='4-7' memory='512000'/>
  </numa>
  ...
</cpu>
cell: a NUMA cell, i.e., a NUMA node
cpus: the range of CPUs belonging to the node
memory: the memory size of the node, in kibibytes
NUMA-aware KSM
KSM can merge identical memory pages, even pages belonging to different NUMA nodes.
Set the /sys/kernel/mm/ksm/merge_across_nodes parameter to 0 to turn off page merging across NUMA nodes, as shown below.
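A minimal example (note: the kernel typically only allows changing this parameter while KSM has no merged pages, so KSM may need to be stopped and its pages unmerged first):

# echo 0 > /sys/kernel/mm/ksm/merge_across_nodes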
Or you can turn off page sharing for an individual virtual machine:
<memoryBacking>
  <nosharepages/>
</memoryBacking>
2. Host-passthrough Technology and Application Scenarios
KVM's definition of CPU models
libvirt refines CPU definitions into several standard types, which can be found in /usr/share/libvirt/cpu_map.xml:
<cpus>
  <arch name='x86'>
    <!-- vendor definitions -->
    <vendor name='Intel' string='GenuineIntel'/>
    <vendor name='AMD' string='AuthenticAMD'/>
    <!-- standard features, EDX -->
    <feature name='fpu'> <!-- CPUID_FP87 -->
      <cpuid function='0x00000001' edx='0x00000001'/>
    </feature>
    <feature name='vme'> <!-- CPUID_VME -->
      <cpuid function='0x00000001' edx='0x00000002'/>
    </feature>
    ...
    <!-- models -->
    <model name='486'>
      <feature name='fpu'/>
      <feature name='vme'/>
      <feature name='pse'/>
    </model>
    ...
    <model name='Haswell'>
      <model name='SandyBridge'/>
      <feature name='fma'/>
      <feature name='pcid'/>
      <feature name='movbe'/>
      <feature name='fsgsbase'/>
      <feature name='bmi1'/>
      <feature name='hle'/>
      <feature name='avx2'/>
      <feature name='smep'/>
      <feature name='bmi2'/>
      <feature name='erms'/>
      <feature name='invpcid'/>
      <feature name='rtm'/>
    </model>
The main CPU models are the following:
'486', 'pentium', 'pentium2', 'pentium3', 'pentiumpro', 'coreduo', 'n270', 'core2duo', 'qemu32', 'cpu64-rhel5', 'cpu64-rhel6', 'kvm64', 'qemu64', 'Conroe', 'Penryn', 'Nehalem', 'Westmere', 'SandyBridge', 'Haswell', 'athlon', 'Opteron_G1', 'Opteron_G2', 'Opteron_G3', 'Opteron_G4', 'Opteron_G5', 'POWER7', 'POWER7_v2.1'
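On newer libvirt versions the list of supported model names can also be queried directly with virsh (the cpu-models subcommand is assumed to be available; it appeared around libvirt 1.1.3):

# virsh cpu-models x86_64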
These standard models exist primarily to ensure CPU compatibility between hosts when a virtual machine is migrated.
The CPU can be configured in several modes:
custom: define the CPU model yourself:
<cpu mode='custom' match='exact'>
  <model fallback='allow'>kvm64</model>
  ...
  <feature policy='require' name='monitor'/>
</cpu>
host-model: based on the physical CPU's characteristics, select the closest standard CPU model. If no CPU mode is specified, this mode is used by default. The XML configuration is:
<cpu mode='host-model'/>
host-passthrough: expose the physical CPU directly to the virtual machine; the physical CPU model is fully visible inside the virtual machine. The XML configuration is:
<cpu mode='host-passthrough'/>
With host-model, the vCPUs seen in the guest look like this:
processor   : 3
vendor_id   : GenuineIntel
cpu family  : 6
model       : 44
model name  : Westmere E56xx/L56xx/X56xx (Nehalem-C)
...
With host-passthrough, the vCPUs look like this:
processor   : 3
vendor_id   : GenuineIntel
cpu family  : 6
model       : 44
model name  : Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
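A quick way to compare the two modes from inside the guest is to read the model string directly:

# grep 'model name' /proc/cpuinfo

With host-passthrough this prints the physical CPU's exact model string; with host-model it prints the closest standard model instead.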
Application Scenarios
The host modes are suitable for the following scenarios:
1. The CPU load is very high;
2. Certain features of the physical CPU need to be passed through to the virtual machine;
3. The virtual machine needs to see exactly the same CPU brand and model as the physical CPU, which matters in some public clouds.
Note: virtual machines in host mode cannot be migrated to hosts with different CPU models.
3. CPU Hot Add
CPU hot add is a new feature in CentOS 7; it requires both the host and the virtual machine to run CentOS 7.
How to use
When defining the virtual machine, reserve extra CPUs, as sketched below.
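The original screenshot showed the domain XML; a configuration along these lines (an assumed sketch, sized to match the 4-current/10-maximum setup described below) would be:

<vcpu placement='static' current='4'>10</vcpu>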
At this point, 4 CPUs are visible in the virtual machine.
Now change the number of CPUs online to 5:
# virsh setvcpus centos7 5 --live
Then activate the 5th CPU inside the virtual machine, as shown below.
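The screenshot showed this being done through sysfs; the usual mechanism (assuming standard CPU hotplug support in the guest kernel; numbering starts at 0, so the 5th CPU is cpu4) is:

# echo 1 > /sys/devices/system/cpu/cpu4/online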
You can see that the virtual machine now has 5 CPUs.
In the same way, we can increase the CPU count to 6.
Because we initially reserved 10, we can hot add CPUs up to a maximum of 10.
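The configured and live counts can be checked from the host; representative output (domain name centos7 as above, numbers illustrative):

# virsh vcpucount centos7
maximum      config        10
maximum      live          10
current      config         4
current      live           6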
Application scenario: when a virtual machine runs an application so important that it cannot be stopped, yet its CPU performance is severely insufficient, CPU hot add is a good solution.
4. Nested Virtualization (KVM on KVM)
Nested virtualization, simply put, means running a virtual machine inside a virtual machine.
KVM nesting works on a different principle from VMware's. VMware's first layer uses hardware virtualization technology while the second layer is fully software-emulated, so VMware can nest only two layers. KVM passes all of the physical CPU's features through to the virtual machine, so in theory it can nest arbitrarily many layers.
Configuration method
Because nested virtualization is not officially supported on CentOS, it is recommended to use the latest Fedora for testing.
Step 1: enable the nested feature of the KVM kernel module:
# modprobe kvm-intel nested=1
or edit /etc/modprobe.d/kvm_mod.conf and add the following line:
options kvm-intel nested=y
Check whether the nested feature is enabled:
# cat /sys/module/kvm_intel/parameters/nested
Y
Step 2: in the first-layer virtual machine's configuration file, pass all the physical CPU's features through to the virtual machine using host-passthrough:
<cpu mode='host-passthrough'/>
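Inside the first-layer virtual machine you can then verify that the hardware virtualization extensions are visible (vmx for Intel, svm for AMD):

# grep -E 'vmx|svm' /proc/cpuinfo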
Step 3: set up the first-layer virtual machine just like a host: configure it the same way and install the corresponding components; then you can install the second-layer virtual machine.
This article is from the xiaoli110 blog; please keep this source when reposting: http://xiaoli110.blog.51cto.com/1724/1607863