Virtualization Technology Series: Overview of Core Virtualization Technologies


Starting from this chapter, we enter the core technology part of the virtualization series. This article covers only the basic concepts; the details are developed in the chapters that follow.

The so-called core technology of virtualization refers to how multiple virtual machines are made to run on the same physical machine. A computer's physical resources can, simply put, be divided into three categories: CPU, memory, and I/O devices.

Before explaining how a hypervisor implements CPU resource virtualization, let's first recall how an OS supports multiple users.

Speaking of multiple users in an OS, the concepts of process, task, and thread naturally come to mind. These concepts differ from one OS to another, and we make no distinction between them here; the point is simply that the OS assigns time slices to its users.

On a single-core physical machine, running the programs of multiple users is simple: slice along the time dimension. According to some scheduling algorithm, policy, and priority, time slices are assigned to the different user programs.

On a multi-core physical machine, running multi-user programs must consider one more dimension: space. The scheduler also has to decide which users may run on which cores, whether scheduling is symmetric, how load balancing works, what the priorities and policies are, and so on.

Of course, an OS kernel is never as casual as the above when implementing this; the scheduler is often among the most carefully tuned pieces of code in an OS. For details, refer to the kernel source of the OS in question.

The above description leads to the following conclusion: CPU resources are dispatched to different users along two dimensions, time and space.
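The time dimension can be sketched with a toy round-robin scheduler. This is a minimal illustration, not any real scheduler's algorithm; the names (`round_robin`, `vm_a`, and so on) are invented for the example.

```python
from collections import deque

def round_robin(tasks, slice_ms, total_ms):
    """Toy round-robin scheduler: rotate the runnable tasks, granting
    each one a fixed time slice until the total budget is spent."""
    queue = deque(tasks)           # runnable queue, FIFO order
    timeline = []                  # (start_ms, task) execution trace
    t = 0
    while t < total_ms and queue:
        task = queue.popleft()     # pick the head of the run queue
        timeline.append((t, task))
        t += slice_ms              # task runs for one slice
        queue.append(task)         # then yields and re-queues at the tail
    return timeline

# Three "users" share one core: each gets every third slice.
trace = round_robin(["vm_a", "vm_b", "vm_c"], slice_ms=10, total_ms=60)
```

The space dimension would correspond to running one such queue per core, with a policy deciding which tasks may enter which queue.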

Now back to the virtual machine platform. The objects a virtual machine platform manages are multiple virtual machines, while an OS manages multiple tasks or processes; in essence, the two are the same. If there is a difference, it lies in weight: regardless of the scheduled object, the context of a virtual machine switch is larger than that of a task or process switch. But it is certain that, when allocating CPU resources among multiple virtual machines, the platform does not go beyond the two dimensions of time sharing and space partitioning. So whether you look at Xen or KVM, reading the scheduler part of its source code reveals a striking similarity to the Linux scheduler (in fact, KVM appears to reuse the Linux scheduler directly).

Conceptually, the virtualization of CPU resources is that simple. Nowadays multi-core processors are quite common and processors with more than 8 cores abound, so in many cases virtual machines are not co-scheduled on shared cores at all; instead, each is pinned to its own physical cores, avoiding any sharing of CPU resources between virtual machines. The benefits are obvious: resource isolation means fault isolation, so the crash of one virtual machine at run time does not affect the normal operation of the others, and fewer context switches also means better performance.

Next, memory resource virtualization. Let's go back to the OS first and consider the problem there, taking Linux's memory management as an example.

We know that Linux memory management is divided into several levels: physical memory management and per-process virtual address space management. The kernel is responsible for allocating physical memory and for mapping each process's virtual addresses. Every process sees the same virtual address space of 0–4GB (taking 32-bit as an example), and the kernel manages the page tables. Although the implementation of Linux memory management is complex, it mainly does two things: physical memory allocation and process page-table management.
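The page-table idea can be sketched with a toy single-level table. Real hardware uses multi-level tables walked by the MMU; the dictionary and names here are illustrative only.

```python
PAGE_SIZE = 4096

def translate(page_table, va):
    """Toy single-level page table: map a virtual address to a
    physical address by looking up the virtual page number (VPN)."""
    vpn, offset = divmod(va, PAGE_SIZE)
    if vpn not in page_table:
        raise MemoryError(f"page fault at VA {va:#x}")
    # Physical address = mapped frame number * page size + page offset.
    return page_table[vpn] * PAGE_SIZE + offset

# A process page table maintained by the kernel: VPN -> physical frame.
pt = {0: 7, 1: 3}
```

An access to an unmapped page (e.g. `translate(pt, 2 * PAGE_SIZE)`) raises the toy equivalent of a page fault, which the kernel would normally handle by allocating and mapping a frame.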

Under a hypervisor, the virtualization management of memory resources is conceptually similar to that of an OS.

First, the hypervisor is responsible for allocating physical memory among virtual machines. Except for memory shared with other virtual machines for communication, a virtual machine's memory is generally exclusive to it. For example: the system has 4GB of physical memory and 2 virtual machines; each virtual machine is allocated 2GB of physical memory, and neither can access the other's. Shared memory between virtual machines, typically used for device sharing and inter-VM communication, is not expanded on here.
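The exclusive-partition scheme from the example can be sketched as a trivial carve-up of physical memory; real hypervisors use far more flexible allocators, and the function and VM names here are invented for illustration.

```python
def partition_memory(total_mb, requests):
    """Assign each VM an exclusive, contiguous slice of physical memory.
    Returns {vm: (base_mb, size_mb)}; raises if requests exceed capacity."""
    layout, base = {}, 0
    for vm, size in requests.items():
        if base + size > total_mb:
            raise MemoryError("physical memory exhausted")
        layout[vm] = (base, size)  # exclusive range; no overlap between VMs
        base += size
    return layout

# The 4GB / 2-VM example from the text: each VM gets its own 2GB.
layout = partition_memory(4096, {"vm1": 2048, "vm2": 2048})
```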

Second, the hypervisor also needs to maintain address mappings, but the virtual machine case is a bit more complicated than the OS case because of a triple address relationship: the virtual machine's virtual address (VA), the virtual machine's physical address (PA), and the machine's true physical address (MA). The system needs to maintain two page tables to complete the VA->PA->MA conversion, where MA is the address the machine actually uses. In general, the guest OS itself maintains a page table for the VA-to-PA conversion, while the hypervisor maintains another for the PA-to-MA conversion. The concrete implementation differs greatly between virtual machine platforms and is strongly tied to hardware support. For example, x86's EPT (Extended Page Tables) accelerates and simplifies the address translation work, and ARM's stage-2 translation is a similar hardware mechanism (with the SMMU playing an analogous role for device DMA). This is described in detail in the later section on memory virtualization implementation.
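The two-stage VA->PA->MA translation can be sketched by composing two of the toy page-table lookups. This is a conceptual model only; hardware such as EPT walks both tables in one combined traversal.

```python
PAGE = 4096

def walk(table, addr, fault):
    """Toy single-level lookup: split addr into page number and offset."""
    vpn, off = divmod(addr, PAGE)
    if vpn not in table:
        raise MemoryError(f"{fault} at {addr:#x}")
    return table[vpn] * PAGE + off

def va_to_ma(guest_pt, stage2_pt, va):
    """Two-stage translation: the guest OS page table maps VA -> PA,
    then the hypervisor's second table maps PA -> MA."""
    pa = walk(guest_pt, va, "guest page fault")     # guest-maintained
    return walk(stage2_pt, pa, "stage-2 fault")     # hypervisor-maintained

guest_pt  = {0: 5}   # guest: virtual page 0 -> guest-physical frame 5
stage2_pt = {5: 9}   # hypervisor: guest-physical frame 5 -> machine frame 9
```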

I/O virtualization. I/O covers a variety of character devices, block devices, and network devices.

The two resources mentioned earlier, CPU and memory, naturally lend themselves to division, but on a typical server, I/O peripherals often come in only one set.

Traditional I/O device virtualization solves this with software emulation:

Solution 1: device emulation. For example, the QEMU emulator uses software to simulate the behavior of the hardware. When the virtual machine accesses a peripheral, the system triggers an exception; in the exception handler, the hypervisor hands control to the emulator's device-access simulation code, which completes the device access, and the result is returned to the virtual machine when the exception exits. With this kind of scheme the performance degradation is relatively large, and implementing the emulator itself is quite complex; after all, emulator code is hard to reuse across different peripherals.
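The trap-and-emulate path above can be sketched as follows. The device, its register layout, and the base address are all invented for illustration; real emulators like QEMU model devices in far more detail.

```python
class UartEmulator:
    """Toy device model: emulates a one-register serial port in software."""
    def __init__(self):
        self.output = []

    def mmio_write(self, offset, value):
        if offset == 0:                  # data register: "transmit" a byte
            self.output.append(chr(value))

UART_BASE = 0x9000000                    # illustrative MMIO base address

def handle_mmio_trap(devices, addr, value):
    """Hypervisor exception path: a guest store to device memory traps;
    dispatch it to the matching emulator, then let the guest resume."""
    for base, dev in devices.items():
        if base <= addr < base + 0x100:  # toy 256-byte register window
            dev.mmio_write(addr - base, value)
            return True                  # access emulated; guest resumes
    return False                         # no device claims this address

uart = UartEmulator()
for ch in "ok":                          # guest "writes" two bytes
    handle_mmio_trap({UART_BASE: uart}, UART_BASE, ord(ch))
```

Each guest access costs a full exception round trip through the hypervisor, which is exactly why this scheme degrades performance.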

Solution 2: device access proxy mode. A real device-access driver is deployed in the hypervisor or in a dedicated virtual machine to act as the back end, while a front-end driver is deployed in the other virtual machines. The front-end driver issues device-access requests; the request data is picked up by the back end, which performs the actual device access and returns the results to the front-end driver. This type of solution is also known as a paravirtualized driver. Xen's front-end/back-end drivers and KVM's virtio both belong to this category.
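The front-end/back-end split can be sketched with a shared request ring. This is a loose conceptual analogue, not the actual virtio or Xen ring protocol; the class and method names are invented for the example.

```python
from collections import deque

class SharedRing:
    """Toy front/back-end split: the guest's front-end driver posts
    requests to a shared ring; the host-side back end drains them,
    performs the real device access, and posts completions."""
    def __init__(self):
        self.requests, self.completions = deque(), deque()

    def frontend_submit(self, req):      # runs in the guest
        self.requests.append(req)

    def backend_poll(self, device):      # runs in the hypervisor/driver VM
        while self.requests:
            op, arg = self.requests.popleft()
            # The back end holds the real driver and touches the hardware.
            self.completions.append(device(op, arg))

    def frontend_complete(self):         # guest reaps the results
        return list(self.completions)

ring = SharedRing()
ring.frontend_submit(("read", 0))
ring.backend_poll(lambda op, arg: f"{op}@{arg}:done")
```

The win over full emulation is batching: the guest traps once per batch of requests rather than once per register access.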

Solution 3: hardware-assisted virtualization. This kind of scheme extends the device itself to support virtualization. The most popular examples are SR-IOV devices. For instance, Intel's 82599 NIC can split one physical network card into multiple VF (Virtual Function) devices; each virtual machine can own one or more VFs, and each VF has its own PCI configuration space and presents itself as an independent network card. Such devices, like memory and CPU, can therefore be divided and owned exclusively. SR-IOV itself is an extension of the PCIe protocol.

It is worth mentioning that I/O virtualization is the last weak spot of virtualization technology. Especially for high-speed devices such as network cards, software virtualization struggles to deliver performance, and hardware support is often required; the major device manufacturers are all investing in this area. In concrete applications, the above schemes are often used in combination, with different devices flexibly adopting different solutions. In general, only high-speed devices adopt the hardware virtualization scheme.

This chapter only describes the virtualization solutions conceptually and does not go into concrete implementations. After digesting this article, the next step is a more in-depth look at CPU, memory, and peripheral virtualization.

