Server Virtualization Technology In-depth Analysis

Server virtualization technology can be traced back to IBM mainframe virtualization with z/VM, which implements server virtualization on z series mainframes (the non-virtualized operating system is z/OS) and can run hundreds of virtual machines on a single machine. Later came KVM on Power, known as PowerKVM, and AIX virtualization with PowerVM, which supports both vSCSI and NPIV technologies (its virtual I/O server is called VIOS). Today's content covers CPU virtualization, memory virtualization, Intel hardware-assisted technology, I/O virtualization, and GPU virtualization. This is an in-depth but introductory technical article; experienced readers may prefer to skip it.

Many readers may think that server virtualization technology has been overshadowed by container technology and is now outdated. In fact, in many scenarios containers cannot replace virtualization. So for a beginner who wants to step into the field of cloud computing, a deep understanding of server virtualization is still necessary. Let's first take a look at the development history of virtualization and the external factors and driving forces behind it.

Partitioning allows the virtualization layer to divide server resources among multiple virtual machines, so you can run multiple applications on one server; each guest operating system sees only the virtual hardware that the virtualization layer presents to it.

Virtual machine isolation allows virtual machines to be isolated from each other. The crash or failure of one virtual machine (for example, operating system failure, application crash, driver failure, etc.) will not affect other virtual machines on the same server.

Encapsulation means that an entire virtual machine (hardware configuration, BIOS configuration, memory state, disk state, CPU state) is stored in a small set of files independent of the physical hardware. This way, you can copy, save, and move a virtual machine anytime, anywhere, simply by copying a few files.

CPU virtualization development
Server virtualization can be divided into full virtualization, paravirtualization, and hardware-assisted virtualization according to the degree of virtualization.

First, the conditions and technical difficulties of CPU virtualization. The CPU has several operating levels (privilege rings), each corresponding to different permissions. If a virtual machine executed sensitive or privileged instructions directly, errors would be likely to occur and affect the stability of the whole machine, so the VM is not allowed to execute them directly; a virtualization platform is needed to solve this problem.

Full virtualization: The VMM occupies the position in the software stack where the operating system traditionally sits, and the operating system sits where applications traditionally sit. The VMM uses binary translation to handle each Guest OS's sensitive instructions and presents interfaces to simulated physical resources (processors, memory, storage, graphics cards, network cards, etc.), emulating a complete hardware environment.

Paravirtualization: Part of the Guest OS code is modified so that all operations involving privileged instructions are converted into hypercalls sent to the VMM; the VMM processes them and returns the results.

Hardware-assisted virtualization: New instructions and operating modes are introduced so that the VMM and the Guest OS run in different modes (root mode and non-root mode), with the Guest OS running in Ring 0 of non-root mode. Most core instructions of the Guest OS can then be executed directly on the hardware without going through the VMM.
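As a concrete illustration of hardware-assisted virtualization, the sketch below checks whether the processor advertises Intel VT-x or AMD-V. It assumes an x86 host and a GCC/Clang toolchain; the program itself is not part of any hypervisor.

    #include <stdio.h>
    #include <cpuid.h>

    int main(void) {
        unsigned int eax, ebx, ecx, edx;

        /* CPUID leaf 1, ECX bit 5 indicates Intel VT-x (VMX). */
        if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 5)))
            puts("Intel VT-x (VMX) advertised by the CPU");

        /* CPUID leaf 0x80000001, ECX bit 2 indicates AMD-V (SVM). */
        if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 2)))
            puts("AMD-V (SVM) advertised by the CPU");

        return 0;
    }
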
Classification of virtualization software architecture

Server virtualization is one of the key technologies behind cloud computing. Virtualization in the broad sense covers server, storage, network, and data center virtualization: any technique that abstracts resources of one form into another form is virtualization. Here we discuss the classification of server virtualization architectures.

Hosted (resident) virtualization: The virtualization management software runs as an ordinary application on a host operating system (Windows, Linux, etc.) and creates virtual machines through it that share the underlying server resources.

Bare metal virtualization: The hypervisor is a virtual machine monitor that runs directly on the physical hardware. It implements two basic functions: first, it identifies, captures, and responds to privileged or protected CPU instructions issued by virtual machines; second, it handles virtual machine queuing and scheduling and returns the results from the physical hardware to the corresponding virtual machine.

Operating system virtualization: There is no independent hypervisor layer. Instead, the host operating system itself is responsible for allocating hardware resources among multiple virtual servers and keeping those servers independent of each other. The obvious consequence is that with operating-system-level virtualization, all virtual servers must run the same operating system (although each instance has its own applications and user accounts). Typical examples are Virtuozzo, OpenVZ, and Docker.

Hybrid virtualization: The hybrid model uses a host operating system, as in hosted virtualization, but instead of placing the hypervisor on top of the host operating system, it inserts a kernel-level driver into the host operating system kernel. This driver acts as a virtual hardware manager (VHM) that coordinates hardware access between the virtual machines and the host operating system. The hybrid model can therefore rely on the memory manager and CPU scheduler of the existing kernel; as with bare metal and operating system virtualization, avoiding a redundant memory manager and CPU scheduler greatly improves the performance of this model.
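KVM on Linux is a well-known example of this kernel-driver approach: the kvm module exposes the /dev/kvm device node, and user-space management software drives it through ioctl calls. The minimal sketch below, assuming a Linux host with KVM enabled, only queries the API version and creates an empty VM; it is not a complete virtual machine monitor.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/kvm.h>

    int main(void) {
        /* The kernel-level driver exposes this device node to user space. */
        int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
        if (kvm < 0) { perror("open /dev/kvm"); return 1; }

        /* The stable KVM API reports version 12. */
        int version = ioctl(kvm, KVM_GET_API_VERSION, 0);
        printf("KVM API version: %d\n", version);

        /* Ask the driver to create a new, empty virtual machine. */
        int vm = ioctl(kvm, KVM_CREATE_VM, 0);
        if (vm < 0)
            perror("KVM_CREATE_VM");
        else {
            printf("created VM file descriptor %d\n", vm);
            close(vm);
        }

        close(kvm);
        return 0;
    }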

Comparison of various architectures

Bare metal virtualization and hybrid virtualization will be the development trend of future virtualization architectures; combined with hardware-assisted virtualization, they can approach the performance of physical machines. Mainstream server virtualization platforms such as KVM, Hyper-V, and VMware support hardware-assisted virtualization.

Memory virtualization

In a virtualized environment, the hypervisor must emulate memory so that virtualized memory still matches the guest OS's assumptions and understanding of memory. Each guest expects to own contiguous physical memory, yet the host's physical memory must be shared by multiple guest OSs at the same time; the hypervisor therefore has to solve the problems of dividing physical memory among multiple systems and presenting each guest with apparently contiguous memory.

To solve these problems, a new layer of address space, the guest physical address space, is introduced: the guest OS sees a virtualized "physical" address, and the hypervisor is responsible for translating it into a real host physical address before the access reaches the physical processor. That is, for each virtual machine the hypervisor maintains the mapping between guest physical addresses and host physical addresses, intercepts the virtual machine's accesses to guest physical addresses, and converts them into host physical addresses.

Full memory virtualization: The hypervisor maintains a shadow page table for each guest. The shadow page table maps guest virtual addresses (VA) directly to machine addresses (MA).

Memory paravirtualization: When the Guest OS creates a new page table, it registers the page table with the VMM. While the guest runs, the VMM manages and maintains this table so that programs in the guest can directly access the correct addresses.

Hardware-assisted memory virtualization: In addition to the ordinary page table, an EPT (Extended Page Table) is added. Through this page table, the guest physical address can be translated directly into the host physical address.
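The toy sketch below illustrates the two-stage lookup that EPT performs in hardware: a guest virtual address is first translated by the guest's own page table (GVA to GPA), then by the hypervisor-managed extended page table (GPA to HPA). The single-level, 16-entry "page tables" are purely illustrative and not a real paging structure.

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12   /* 4 KiB pages */
    #define NPAGES     16   /* toy address space of 16 pages */

    static uint64_t guest_page_table[NPAGES]; /* stage 1: GVA page -> GPA page (guest-managed) */
    static uint64_t ept[NPAGES];              /* stage 2: GPA page -> HPA page (hypervisor-managed) */

    static uint64_t translate(uint64_t gva) {
        uint64_t offset   = gva & ((1u << PAGE_SHIFT) - 1);
        uint64_t gpa_page = guest_page_table[gva >> PAGE_SHIFT]; /* guest page table walk */
        uint64_t hpa_page = ept[gpa_page];                       /* EPT walk */
        return (hpa_page << PAGE_SHIFT) | offset;
    }

    int main(void) {
        guest_page_table[3] = 7;  /* the guest maps its virtual page 3 to guest-physical page 7 */
        ept[7] = 14;              /* the hypervisor maps guest-physical page 7 to host page 14 */

        uint64_t gva = (3u << PAGE_SHIFT) | 0x10;
        printf("GVA 0x%llx -> HPA 0x%llx\n",
               (unsigned long long)gva, (unsigned long long)translate(gva));
        return 0;
    }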


I/O virtualization technology

After virtualization, a server's Ethernet ports are shared among multiple virtual machines, and the bandwidth available for network, storage, and server-to-server traffic may no longer be enough. When an I/O bottleneck is hit, the CPU sits idle waiting for data and computing efficiency drops sharply. Virtualization must therefore be extended to the I/O subsystem, dynamically sharing bandwidth among workloads, storage, and servers to maximize the use of the network interfaces.

The goal of I/O virtualization is not only to allow virtual machines to access the I/O resources they need, but also to isolate them well from one another, and, more importantly, to reduce the overhead introduced by virtualization.

Full virtualization: I/O virtualization is achieved by emulating I/O devices (disks, network cards, etc.) in software. The Guest OS sees a uniform set of I/O devices; the VMM intercepts the guest's access requests to these devices and emulates the real hardware in software. This approach is completely transparent to the guest, which does not need to consider the underlying hardware, such as the actual disk type or physical interface.
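The following toy sketch shows the idea of device emulation: once a guest access to a virtual device register has been intercepted, the VMM emulates the register's behavior in software. The register layout, names, and handler functions here are invented for illustration and do not correspond to any real device.

    #include <stdint.h>
    #include <stdio.h>

    #define REG_STATUS 0x00   /* read-only: device "ready" flag */
    #define REG_DATA   0x04   /* read/write: data port */

    static uint32_t data_latch; /* state of the emulated device */

    /* Called by the VMM when a trapped guest read of the device is handled. */
    static uint32_t emulated_mmio_read(uint64_t offset) {
        switch (offset) {
        case REG_STATUS: return 1;            /* always report "ready" */
        case REG_DATA:   return data_latch;
        default:         return 0;
        }
    }

    /* Called by the VMM when a trapped guest write is handled. */
    static void emulated_mmio_write(uint64_t offset, uint32_t value) {
        if (offset == REG_DATA)
            data_latch = value;
    }

    int main(void) {
        emulated_mmio_write(REG_DATA, 0xABCD);   /* the "guest" writes the data port */
        printf("status=%u data=0x%X\n",
               emulated_mmio_read(REG_STATUS), emulated_mmio_read(REG_DATA));
        return 0;
    }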

Paravirtualization: Through a front-end/back-end architecture, the guest's I/O requests are passed to a privileged domain (known as Domain0) through a ring queue. Because this method involves more details, it will be analyzed in depth later.
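As a rough illustration of such a ring queue, the sketch below has a front end (the guest) push I/O requests into a fixed-size shared ring and a back end (standing in for Domain0) pop them. The ring size, request format, and function names are made up for this example and are not any hypervisor's actual ABI.

    #include <stdint.h>
    #include <stdio.h>

    #define RING_SIZE 8   /* power of two so indices wrap with a mask */

    struct io_request { uint64_t sector; uint32_t nbytes; uint8_t write; };

    struct io_ring {
        struct io_request req[RING_SIZE];
        uint32_t prod;   /* advanced by the front end (guest) */
        uint32_t cons;   /* advanced by the back end (Domain0) */
    };

    /* Front end: queue a request if the ring is not full. */
    static int ring_push(struct io_ring *r, struct io_request rq) {
        if (r->prod - r->cons == RING_SIZE) return -1;   /* ring full */
        r->req[r->prod & (RING_SIZE - 1)] = rq;
        r->prod++;                                       /* then notify the back end */
        return 0;
    }

    /* Back end: dequeue a request if one is pending. */
    static int ring_pop(struct io_ring *r, struct io_request *out) {
        if (r->cons == r->prod) return -1;               /* ring empty */
        *out = r->req[r->cons & (RING_SIZE - 1)];
        r->cons++;
        return 0;
    }

    int main(void) {
        struct io_ring ring = {0};
        ring_push(&ring, (struct io_request){ .sector = 128, .nbytes = 4096, .write = 1 });

        struct io_request rq;
        while (ring_pop(&ring, &rq) == 0)
            printf("back end handles %s of %u bytes at sector %llu\n",
                   rq.write ? "write" : "read", rq.nbytes, (unsigned long long)rq.sector);
        return 0;
    }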

Hardware-assisted virtualization: The most representative technologies are Intel's VT-d/VT-c, AMD's IOMMU, and PCI-SIG's IOV. This approach also requires support from the network card itself; network cards currently fall into ordinary NICs, VMDq NICs, and SR-IOV NICs.

Ordinary network cards rely on the Domain0 bridge and its software queues.

VMDq lets the VMM allocate an independent queue in the server's physical network card for each virtual machine. Traffic for a virtual machine is delivered directly into its designated queue, so the software switch no longer needs to perform sorting and routing operations.

SR-IOV uses independent virtual functions (VFs) to present each virtual machine with what looks like its own physical network card, so that the virtual machine can communicate with the hardware NIC directly without going through a software switch, reducing the address translation overhead in the hypervisor layer.
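On Linux, the number of virtual functions a NIC currently exposes can be read from sysfs. The sketch below assumes an interface named eth0; real interface names and SR-IOV capability vary with the hardware.

    #include <stdio.h>

    int main(void) {
        /* "eth0" is only an example interface name; adjust for the real NIC. */
        const char *path = "/sys/class/net/eth0/device/sriov_numvfs";
        FILE *f = fopen(path, "r");
        if (!f) { perror(path); return 1; }   /* no SR-IOV support, or wrong interface name */

        int numvfs = 0;
        if (fscanf(f, "%d", &numvfs) == 1)
            printf("eth0 currently exposes %d virtual functions\n", numvfs);
        fclose(f);
        return 0;
    }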

GPU and GPU virtualization technology

GPU passthrough hands a GPU device directly to a single virtual machine. GPU sharing passes the GPU device through to a GPU server virtual machine, which can then share its GPU with GPU clients. GPU virtualization means that one GPU device is virtualized into n vGPUs, so that n virtual machines can use the GPU device directly at the same time; a GPU device that supports virtualization can be configured either as a passthrough device or as a virtualized device.

GPU virtualization uses the VGX GPU hardware virtualization function to virtualize one physical GPU device into multiple virtual GPU devices for use by virtual machines. Each virtual machine can directly access part of the physical GPU's hardware resources through the vGPU bound to it. Every vGPU has a 3D graphics engine and a video codec engine that access the physical GPU in a time-shared manner, and each has independent video memory.

With GPU virtualization, one physical GPU device can be used by multiple virtual machines at the same time, whereas with GPU passthrough a GPU device can be used by only one virtual machine. GPU virtualization lets virtual machines sharing the same physical GPU run without affecting each other, and the system automatically distributes the physical GPU's processing power among them. GPU sharing, by contrast, mounts GPU devices on a GPU server and establishes a high-speed communication mechanism on the host between the GPU server and GPU clients, so that GPU clients can share the GPU server's devices; whether a GPU client can use GPU functions depends entirely on the GPU server.