If you reprint this article, please credit the original source and author.
Chen Rui Chen @ kiwik
17:53:39
Preface:
I have been on a business trip in Mexico recently and ran into all sorts of trouble; my Weibo followers probably know all about it. But there were some gains too. One is that, at 30, I finally picked up a few tips for learning English here in Mexico. The other is that I have been trying to implement a Ceilometer blueprint, and because it involves libvirt, qemu, and KVM, I made some progress in understanding virtualization, summarized it, and wrote this blog post.
This article explains and clarifies some concepts in KVM, qemu, and libvirt. If you spot any errors, please point them out.
KVM
KVM is the Kernel-based Virtual Machine. It has been included in the Linux kernel since 2.6.20, so you can use it directly. KVM relies on the host CPU's hardware virtualization features (Intel VT-x or AMD-V), similar to Xen HVM. It places no requirements on the guest OS kernel, so you can directly create virtual machines running Linux or Windows.
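A quick way to check whether a host CPU advertises these features is to look for the `vmx` (Intel VT-x) or `svm` (AMD-V) flag in /proc/cpuinfo. Here is a minimal sketch of that check; the function itself just parses the text, so the file path usage at the bottom is the only host-specific assumption:

```python
def has_hw_virt(cpuinfo_text):
    """Return which hardware virtualization extension the CPU flags advertise:
    'vmx' for Intel VT-x, 'svm' for AMD-V, or None if neither is present."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags:
                return "vmx"
            if "svm" in flags:
                return "svm"
    return None

# On a real host you would feed it the actual file:
# with open("/proc/cpuinfo") as f:
#     print(has_hw_virt(f.read()))
```

If this returns None, only full qemu emulation (no KVM acceleration) is possible on that host.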
The following figure details the operating principle of KVM:
KVM lets a userspace process create virtual machines through the KVM kernel module, using the CPU's virtualization features. A virtual machine's vCPUs map to threads in the process, and the virtual machine's RAM maps to the process's memory address space; I/O and peripheral devices are virtualized by the process itself, that is, by qemu. So each virtual machine created in OpenStack corresponds to a qemu process on a compute node.
Now let's take a look at the KVM memory model, which is related to the blueprint I am working on (memory balloon stats).
As mentioned earlier, the guest OS's RAM address space is mapped into the memory address space of the qemu-kvm process, so the process can easily control the guest's RAM: when the guest needs RAM, qemu-kvm carves out a segment of its own process memory for the guest. Once maxMemory and currentMemory are set for the guest OS, the guest's RAM upper limit is fixed at maxMemory. If the guest is not actually using that much RAM, you can lower currentMemory and return the excess memory to the host; the memory size the guest sees is currentMemory. This is the memory balloon feature.
Qemu
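In a libvirt domain XML these two limits correspond to the `<memory>` (upper limit) and `<currentMemory>` (what the guest currently sees) elements, with a memballoon device doing the actual inflating and deflating. The values below are illustrative, not taken from a real guest:

```xml
<domain type='kvm'>
  <!-- upper limit the guest can ever be ballooned up to -->
  <memory unit='KiB'>2097152</memory>
  <!-- memory currently given to the guest -->
  <currentMemory unit='KiB'>1048576</currentMemory>
  <devices>
    <memballoon model='virtio'/>
  </devices>
</domain>
```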
When talking about KVM, I kept saying that a process creates virtual machines through the KVM module. That process is qemu. The KVM team maintained a qemu fork (qemu-kvm) so that qemu could use KVM for acceleration on the x86 architecture and provide better performance. The goal of the qemu-kvm fork was to merge all of its features into upstream qemu; once that was done, the fork would be discarded and the qemu mainline used directly.
In fact, Xen also used to maintain its own qemu fork, called qemu-xen. Since all of its features were merged into qemu 1.0, Xen now uses qemu directly.
Before installing OpenStack, you need to install a KVM package. In my opinion this package is actually qemu: KVM itself is already in the Linux kernel and does not need to be installed separately, and the package name just confusingly mentions KVM. You can see this in the Ubuntu package descriptions.
$ dpkg -l | grep qemu
ii  kvm       1:84+dfsg-0ubuntu16+1.0+noroms+0ubuntu14.13  dummy transitional package from kvm to qemu-kvm
ii  qemu      1.0+noroms-0ubuntu14.13                      dummy transitional package from qemu to qemu-kvm
ii  qemu-kvm  1.0+noroms-0ubuntu14.13                      Full virtualization on i386 and amd64 hardware
Nova has a virt_type configuration item, which can be set to kvm or qemu. The kvm value here actually refers to qemu-kvm: it requires the host CPU to support Intel VT-x or AMD-V and the KVM kernel module to be loaded, and thanks to hardware acceleration the guest OS performs better than under plain qemu. The qemu value means full qemu emulation without hardware acceleration; it is mainly used on older CPUs and operating systems, or when creating a virtual machine inside a virtual machine. Of course, full qemu emulation performs worse than qemu-kvm.
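On the compute node this is set in nova.conf. A minimal fragment, assuming a Nova release where virt_type lives in the [libvirt] section (the values here are illustrative):

```ini
# /etc/nova/nova.conf on the compute node
[libvirt]
# "kvm" requires VT-x/AMD-V and the kvm kernel module;
# "qemu" falls back to full emulation without hardware acceleration
virt_type = kvm
```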
Qemu's functions fall roughly into two categories:
- Emulation (emulator)
- Virtualization (virtualizer)
Emulation: running programs built for one CPU architecture on another, for example emulating an ARM environment to execute an ARM program, or emulating the x86 instruction set in a PowerPC environment.
Virtualization: running guest OS instructions on the host OS, providing virtual CPU, RAM, I/O, and peripheral devices to the guest OS.
In OpenStack we use qemu's virtualization function.
The qemu process uses the KVM interface /dev/kvm directly to send commands for creating and running virtual machines to KVM. The skeleton code is as follows:
    open("/dev/kvm")
    ioctl(KVM_CREATE_VM)
    ioctl(KVM_CREATE_VCPU)
    for (;;) {
        ioctl(KVM_RUN)
        switch (exit_reason) {
        case KVM_EXIT_IO:  /* ... */
        case KVM_EXIT_HLT: /* ... */
        }
    }
Once the VM is running, whenever the guest OS triggers a hardware interrupt or other special operation, KVM exits back to qemu, which resumes execution, emulates the I/O operation according to the KVM exit type, and responds to the guest OS.
Because qemu is an ordinary userspace process and the guest OS runs inside it, the host cannot bypass qemu to observe the guest OS directly. However, qemu provides a series of interfaces that export the guest OS's running status, memory status, and I/O status.
The following is a qemu process:
109 1673 1 1 May04 ? 00:26:24 /usr/bin/qemu-system-x86_64 -name instance-00000002 -S -machine pc-i440fx-trusty,accel=tcg,usb=off -m 2048 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid f3fdf038-ffad-4d66-a1a9-4cd2b83021c8 -smbios type=1,manufacturer=OpenStack Foundation,product=OpenStack Nova,version=2014.2,serial=564d2353-c165-6238-8f82-bfdb977e31fe,uuid=f3fdf038-ffad-4d66-a1a9-4cd2b83021c8 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/instance-00000002.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/opt/stack/data/nova/instances/f3fdf038-ffad-4d66-a1a9-4cd2b83021c8/disk,if=none,id=drive-virtio-disk0,format=qcow2,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/opt/stack/data/nova/instances/f3fdf038-ffad-4d66-a1a9-4cd2b83021c8/disk.config,if=none,id=drive-ide0-1-1,readonly=on,format=raw,cache=none -device ide-cd,bus=ide.1,unit=1,drive=drive-ide0-1-1,id=ide0-1-1 -netdev tap,fd=26,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:db:86:d4,bus=pci.0,addr=0x3 -chardev file,id=charserial0,path=/opt/stack/data/nova/instances/f3fdf038-ffad-4d66-a1a9-4cd2b83021c8/console.log -device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -vnc 127.0.0.1:1 -k en-us -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
Haha, maybe because I used to be a Java programmer, qemu looks more and more like the Java sandbox model to me: the Java sandbox runs Java programs, and qemu runs virtual machines.
Libvirt
Finally, let's take a look at libvirt.
Libvirt is relatively simple. It is a unified virtualization management interface. Currently, the following virtualization implementations are supported:
- The KVM/QEMU Linux hypervisor
- The Xen hypervisor on Linux and Solaris hosts
- The LXC Linux container system
- The OpenVZ Linux container system
- The User Mode Linux paravirtualized kernel
- The VirtualBox hypervisor
- The VMware ESX and GSX hypervisors
- The VMware Workstation and Player hypervisors
- The Microsoft Hyper-V hypervisor
- The IBM PowerVM hypervisor
- The Parallels hypervisor
- The bhyve hypervisor
The first one, KVM/qemu, is the one we use. libvirt's virsh can pass commands through to the qemu monitor. The blueprint I want to implement uses libvirt to send a qemu monitor command that enables qemu's memory usage statistics, and then uses libvirt to obtain the VM's memory usage.
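As a sketch of what those monitor commands look like: the QMP (QEMU Machine Protocol) payload is JSON passed to `virsh qemu-monitor-command <domain> '<json>'`. The QOM path and property names below follow QEMU's virtio-balloon conventions ("balloon0" matches the -device id in the process listing above), but treat them as an assumption to verify against your QEMU version:

```python
import json

def qmp_cmd(execute, **arguments):
    """Build a QMP command as the JSON string that
    `virsh qemu-monitor-command <domain> '<json>'` expects."""
    cmd = {"execute": execute}
    if arguments:
        cmd["arguments"] = arguments
    return json.dumps(cmd)

# Ask the balloon device to refresh guest memory statistics every 2 seconds:
enable = qmp_cmd("qom-set",
                 path="/machine/peripheral/balloon0",
                 property="guest-stats-polling-interval",
                 value=2)

# Then poll the collected statistics:
query = qmp_cmd("qom-get",
                path="/machine/peripheral/balloon0",
                property="guest-stats")
```

Each resulting string would be handed to virsh as the single quoted argument after the domain name.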
$ virsh version
Compiled against library: libvirt 1.2.2
Using library: libvirt 1.2.2
Using API: QEMU 1.2.2
Running hypervisor: QEMU 2.0.0
$ virsh help qemu-monitor-command
  NAME
    qemu-monitor-command - QEMU Monitor Command

  SYNOPSIS
    qemu-monitor-command <domain> [--hmp] [--pretty] {[--cmd] <string>}...

  DESCRIPTION
    QEMU Monitor Command

  OPTIONS
    [--domain] <string>  domain name, id or uuid
    --hmp                command is in human monitor protocol
    --pretty             pretty-print any qemu monitor protocol output
    [--cmd] <string>     command