Comparative Analysis of Four Mainstream Virtualization Technology Architectures for Cloud Computing

Cloud computing is inseparable from the underlying virtualization technology that supports it. Wikipedia lists more than 60 virtualization technologies, more than 50 of them based on the x86 (CISC) architecture and some based on RISC architectures. Among them, four virtualization technologies are currently the most mature and most widely used: VMware ESX, Microsoft Hyper-V, and the open-source XEN and KVM. Which virtualization technology to choose for a cloud computing platform is a question that every cloud computing construction project will face. This article compares and analyzes the four mainstream virtualization technologies at the architecture level.

1. Introduction

A cloud computing platform needs a resource pool from which to deliver capacity: computing capacity, storage capacity, and network capacity. To dispatch these capabilities to wherever they are needed, the platform also needs to schedule and manage them. These capabilities are provided by virtualized resource pools.

2. Analysis of virtualization architecture

From the perspective of implementation, there are two main forms of virtualization architecture: the hosted architecture and the bare-metal architecture. In the hosted architecture, each virtual machine is scheduled and managed as a process of the host operating system. In the bare-metal architecture there is no host operating system; the Hypervisor runs directly on the physical hardware, and even the components that resemble a host operating system, such as the parent partition or Domain 0, exist only as virtual machines on top of the Hypervisor. The hosted architecture is usually used for virtualization on personal PCs, for example Windows Virtual PC, VMware Workstation, VirtualBox, and QEMU, while the bare-metal architecture is usually used for server virtualization, such as the four virtualization technologies discussed in this article.
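
As a small illustration of this distinction, the sketch below (Python, assuming a standard Linux /proc and /sys layout; illustrative only, not an exhaustive detection method) checks from inside an operating system whether it is running as the guest of some hypervisor rather than directly on physical hardware.

```python
"""Illustrative check: does the OS we are running on sit on top of a
hypervisor? Assumes a standard Linux /proc and /sys layout."""
import os

def read_cpu_flags(path="/proc/cpuinfo"):
    # Collect the CPU feature flags the kernel advertises.
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

def detect_virtualization():
    flags = read_cpu_flags()
    # Hypervisors set the "hypervisor" CPUID bit for their guests.
    if "hypervisor" not in flags:
        return "no hypervisor detected (likely physical hardware)"
    # XEN guests additionally expose /sys/hypervisor/type ("xen").
    hv_type = "/sys/hypervisor/type"
    if os.path.exists(hv_type):
        with open(hv_type) as f:
            return "guest of " + f.read().strip()
    return "guest of an unidentified hypervisor"

if __name__ == "__main__":
    print(detect_virtualization())
```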

2.1 VMware ESX

ESX is VMware's enterprise-level virtualization product. ESX 1.0 was released in 2001, and ESX 4.1 Update 1 was released in February 2011.

When an ESX server starts, it first boots a Linux kernel and loads the virtualization components through this operating system, the most important being ESX's Hypervisor component, called the VMkernel. The VMkernel then completely takes over control of the hardware from the Linux kernel, and this Linux kernel becomes the VMkernel's first virtual machine, used to host ESX's Service Console and to implement some local management functions.

The VMkernel is responsible for scheduling all hardware resources for the hosted virtual machines, but the way each type of hardware is handled differs somewhat.

Virtual machines access CPU and memory resources directly through the VMkernel, which minimizes overhead. Direct CPU access relies on CPU hardware-assisted virtualization (Intel VT-x and AMD-V, the first generation of hardware virtualization support); direct memory access relies on MMU (Memory Management Unit, a feature of the CPU) hardware-assisted virtualization (Intel EPT and AMD RVI/NPT, the second generation).
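
These hardware features are advertised by the Linux kernel as CPU flags. The short sketch below (Python; flag names and their placement in /proc/cpuinfo vary with kernel version, so treat it as illustrative) checks whether a host offers first-generation (VT-x/AMD-V) and second-generation (EPT/NPT) hardware-assisted virtualization.

```python
"""Illustrative sketch: check /proc/cpuinfo for hardware-assisted
virtualization features (Intel VT-x / AMD-V for the CPU, EPT / NPT for the
MMU). Flag placement can vary by kernel version."""

def cpuinfo_flags(path="/proc/cpuinfo"):
    flags = set()
    with open(path) as f:
        for line in f:
            # Older kernels list everything under "flags"; newer ones add a
            # separate "vmx flags" line for VT-x sub-features such as ept.
            if line.startswith(("flags", "vmx flags")):
                flags.update(line.split(":", 1)[1].split())
    return flags

def report():
    flags = cpuinfo_flags()
    cpu_virt = "vmx" in flags or "svm" in flags   # Intel VT-x / AMD-V
    mmu_virt = "ept" in flags or "npt" in flags   # Intel EPT / AMD NPT (RVI)
    print("CPU hardware-assisted virtualization:", "yes" if cpu_virt else "no")
    print("MMU hardware-assisted virtualization:", "yes" if mmu_virt else "no")

if __name__ == "__main__":
    report()
```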

There are several ways for a virtual machine to access I/O devices. Taking the network card as an example, there are two options. The first is I/O MMU hardware-assisted virtualization (Intel VT-d and AMD-Vi): VMDirectPath I/O lets the virtual machine access the hardware device directly, which reduces CPU overhead. The second is the paravirtualized device VMXNETx: the physical driver of the network card lives in the VMkernel, a virtual driver is loaded in the virtual machine, and this pairing accesses the network card more efficiently than an emulated network card (Intel E1000). Paravirtualized devices are installed by VMware Tools inside the virtual machine, which can be seen in the lower-right corner of a Windows virtual machine. Of these two approaches, the former is clearly more advanced, but the latter is more commonly used, because VMDirectPath I/O is incompatible with some core functions of VMware virtualization, such as live migration, snapshots, fault tolerance, and memory overcommitment.
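
From inside a Linux guest, the difference between these approaches shows up as which driver each network interface is bound to. The sketch below (Python; standard sysfs layout assumed, and the interface and driver names will vary per guest) reports, for example, vmxnet3 for the paravirtualized device, e1000 for the emulated one, or a physical NIC driver when VMDirectPath I/O is in use.

```python
"""Illustrative sketch: report which kernel driver each network interface in
a Linux guest is bound to, to distinguish a paravirtualized vmxnet3 NIC from
an emulated e1000 or a passed-through physical adapter."""
import os

def nic_drivers(net_dir="/sys/class/net"):
    result = {}
    for iface in sorted(os.listdir(net_dir)):
        driver_link = os.path.join(net_dir, iface, "device", "driver")
        if os.path.islink(driver_link):        # loopback etc. have no device
            result[iface] = os.path.basename(os.readlink(driver_link))
    return result

if __name__ == "__main__":
    for iface, driver in nic_drivers().items():
        print(f"{iface}: {driver}")   # e.g. "eth0: vmxnet3" or "eth0: e1000"
```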

ESX's physical drivers are built into the Hypervisor, and all device drivers are pre-installed by VMware. ESX therefore has a strict hardware compatibility list, and it will refuse to install on hardware that is not on the list.

2.2 Microsoft Hyper-V

Hyper-V is Microsoft's next-generation server virtualization technology. The first version was released in July 2008, and the latest version is R2 SP1, released in April 2011. Hyper-V comes in two forms: one is the standalone version, such as Hyper-V Server 2008, a free edition operated through a command-line interface; the other is embedded in Windows Server 2008, where Hyper-V is a role that can optionally be enabled.

On a Windows Server 2008 system where the Hyper-V role is not enabled, the operating system drives the hardware directly. Once the Hyper-V role is enabled, the system requires a restart. Although the system looks no different after the restart, its architecture is completely different from before: during that restart, Hyper-V's Hypervisor takes control of the hardware, and the previous Windows Server 2008 instance becomes Hyper-V's first virtual machine, called the parent partition, which is responsible for managing the other virtual machines (called child partitions) as well as the I/O devices. Hyper-V requires the CPU to support hardware-assisted virtualization; MMU hardware-assisted virtualization is an optional enhancement.

In fact, the Hypervisor implements only CPU scheduling and memory allocation, while the parent partition controls the I/O devices and accesses network cards, storage, and so on directly through physical drivers. For a child partition to access an I/O device, the VSC (Virtualization Service Client) in the child partition's operating system sends a request over the VMBus (virtual machine bus) to the VSP (Virtualization Service Provider) in the parent partition's operating system, and the VSP then redirects the request to the physical driver in the parent partition. Each class of I/O device, such as storage, networking, video, and input devices, has its own VSC/VSP pairing. The whole I/O access path is transparent to the child partition's operating system: inside the child partition, the VSC and VMBus act as the virtual drivers for the I/O devices. They are installed by the Integration Services package provided by Hyper-V when the child partition's operating system first starts, and they are also considered paravirtualized devices, making the virtual machine independent of the physical I/O devices. If the child partition's operating system does not have, or does not support, the Hyper-V Integration Services (Microsoft calls such systems Unenlightened OSes, for example uncertified Linux versions and older Windows versions), the child partition can only run with emulated devices. In other words, what Microsoft calls an Enlightened operating system is simply one that supports the paravirtualized drivers.
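
On a Linux child partition with the integration (VSC) drivers loaded, the VMBus devices and the VSC driver bound to each can be seen through sysfs. The sketch below (Python; it assumes the standard /sys/bus/vmbus layout exposed by the Linux Hyper-V drivers) simply lists them, for example hv_netvsc for the network VSC and hv_storvsc for the storage VSC.

```python
"""Illustrative sketch: on a Linux child partition, list the VMBus devices
the kernel sees and the VSC driver bound to each. Uses only the standard
sysfs layout; run inside a Hyper-V guest."""
import os

VMBUS_DIR = "/sys/bus/vmbus/devices"

def vmbus_devices():
    if not os.path.isdir(VMBUS_DIR):
        return {}                  # not a Hyper-V guest, or no VMBus support
    devices = {}
    for dev in sorted(os.listdir(VMBUS_DIR)):
        driver_link = os.path.join(VMBUS_DIR, dev, "driver")
        devices[dev] = (os.path.basename(os.readlink(driver_link))
                        if os.path.islink(driver_link) else "(unbound)")
    return devices

if __name__ == "__main__":
    for dev, driver in vmbus_devices().items():
        print(f"{dev}: {driver}")
```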

Hyper-V's Hypervisor is a very thin software layer that contains no physical drivers. The device drivers for the physical server reside in the Windows Server 2008 parent partition, and they are installed and loaded in the same way as on a traditional Windows system. Therefore, any hardware supported by Windows is also compatible with Hyper-V.

2.3 XEN

XEN originated as an open-source research project at the University of Cambridge and was later developed commercially by XenSource. XEN 1.0 was released in September 2003. In 2007, XenSource was acquired by Citrix, and the open-source XEN project moved to www.xen.org, where it continues to be developed by member individuals and companies (such as Citrix and Oracle). The organization released the latest version, XEN 4.1, in March 2011.

Compared with ESX and Hyper-V, XEN supports a wider range of CPU architectures. The former two support only the CISC-based x86/x86_64 architecture, whereas XEN also supports RISC CPU architectures such as IA64 and ARM.

The XEN Hypervisor is the first program loaded after the server's BIOS starts; it then launches a virtual machine with special privileges, called Domain 0 (Dom 0 for short). Dom 0's operating system can be Linux or Unix, and it implements the control and management functions on top of the Hypervisor. Among the hosted virtual machines, Dom 0 is the only one that can directly access the physical hardware, such as storage and network cards; through the physical drivers it loads, it provides the other virtual machines (Domain U, DomU for short) with access to storage and a bridged network card.
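
From Dom 0, the hosted domains can be listed with the standard xl toolstack. The sketch below (Python; it assumes the xl toolstack is installed and that the script runs with sufficient privileges) wraps `xl list` and parses its columns.

```python
"""Illustrative sketch: list the domains hosted by the XEN hypervisor from
Dom 0 by calling the standard `xl list` tool and parsing its columns."""
import subprocess

def list_domains():
    out = subprocess.run(["xl", "list"], capture_output=True, text=True,
                         check=True).stdout
    lines = out.strip().splitlines()
    header = lines[0].split()        # Name, ID, Mem, VCPUs, State, Time(s)
    # One row per domain; Domain-0 itself is always in the list.
    return [dict(zip(header, row.split(None, len(header) - 1)))
            for row in lines[1:]]

if __name__ == "__main__":
    for dom in list_domains():
        print(dom)
```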

XEN supports two types of virtual machines: paravirtualized (PV) and fully virtualized (which XEN calls HVM, Hardware Virtual Machine). Paravirtualization requires an operating system with a specially adapted kernel, such as a Linux kernel built on the paravirt_ops framework (a set of Linux kernel compile options); Windows cannot run under XEN paravirtualization because of its closed-source nature. A special feature of paravirtualization is that it does not require CPU hardware-assisted virtualization, which makes it well suited to virtualizing older servers built before 2007. Full virtualization supports unmodified operating systems, in particular Windows, but it requires CPU hardware-assisted virtualization. It uses a modified QEMU to emulate all of the hardware, including the BIOS, IDE controller, VGA display adapter, USB controller, and network card. To improve I/O performance, fully virtualized guests typically use paravirtualized devices rather than emulated devices for disks and network cards; these drivers are called PV on HVM. For PV on HVM to achieve its best performance, the CPU should also support MMU hardware-assisted virtualization.
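
Inside a Linux guest, the sysfs hypervisor interface can indicate whether the domain is PV or HVM. The sketch below (Python; /sys/hypervisor/type and the newer /sys/hypervisor/guest_type entry are kernel-version dependent, so treat it as illustrative) reads those files where they exist.

```python
"""Illustrative sketch: report whether a Linux guest runs as a XEN PV or HVM
domain via the sysfs hypervisor interface. /sys/hypervisor/type reports
"xen" for XEN guests; newer kernels also expose guest_type ("PV", "HVM" or
"PVH"). Both entries are kernel-version dependent."""
import os

def read_sysfs(path):
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return f.read().strip()

def xen_guest_type():
    if read_sysfs("/sys/hypervisor/type") != "xen":
        return "not running under XEN"
    return (read_sysfs("/sys/hypervisor/guest_type")
            or "XEN guest (guest_type not exported by this kernel)")

if __name__ == "__main__":
    print(xen_guest_type())
```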

XEN's Hypervisor layer is very thin, less than 150,000 lines of code, and contains no physical device drivers, which is very similar to Hyper-V. The drivers for physical devices reside in Dom 0, where existing Linux device drivers can be reused. XEN therefore also has broad hardware compatibility: whatever hardware Linux supports, XEN supports.

2.4 KVM

KVM stands for Kernel-based Virtual Machine. It began as an open-source project developed by Qumranet and was first merged into the Linux 2.6.20 kernel in January 2007. In 2008, Qumranet was acquired by Red Hat, but KVM itself remains an open-source project supported by vendors such as Red Hat and IBM. As a module in the Linux kernel, KVM is released together with the kernel; the latest version as of January 2011 is kvm-kmod 2.6.37.

Like XEN, KVM supports a wide range of CPU architectures. In addition to x86/x86_64, it also supports mainframes (S/390), minicomputers (PowerPC, IA64), and ARM.

KVM makes full use of the CPU's hardware-assisted virtualization and reuses many functions of the Linux kernel, which keeps KVM itself very small. KVM's founder, Avi Kivity, has said that the KVM module is only about 10,000 lines of code, but we cannot take this as the size of the KVM Hypervisor, because strictly speaking KVM itself is not a hypervisor: it is just a loadable kernel module whose function is to turn the Linux kernel into a bare-metal hypervisor. Compared with the other bare-metal architectures this is quite unusual, and it somewhat resembles a hosted architecture; some in the industry even call it a semi-bare-metal architecture.
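
Whether a given Linux host has been turned into a KVM hypervisor can be checked by looking for the loaded kvm modules and by querying /dev/kvm. The sketch below (Python; it needs read/write access to /dev/kvm, and the ioctl number 0xAE00 is the value of KVM_GET_API_VERSION from the kernel's kvm.h headers) does both.

```python
"""Illustrative sketch: check whether this Linux host has been turned into a
KVM hypervisor, i.e. whether the kvm module (plus kvm_intel or kvm_amd) is
loaded and /dev/kvm answers the KVM_GET_API_VERSION ioctl."""
import fcntl
import os

KVM_GET_API_VERSION = 0xAE00   # _IO(0xAE, 0x00) from the kernel's kvm.h

def loaded_kvm_modules(path="/proc/modules"):
    with open(path) as f:
        return [line.split()[0] for line in f if line.startswith("kvm")]

def kvm_api_version(dev="/dev/kvm"):
    if not os.path.exists(dev):
        return None
    fd = os.open(dev, os.O_RDWR)   # requires root or membership in the kvm group
    try:
        return fcntl.ioctl(fd, KVM_GET_API_VERSION)
    finally:
        os.close(fd)

if __name__ == "__main__":
    print("kvm modules loaded:", loaded_kvm_modules() or "none")
    print("KVM API version:", kvm_api_version())   # 12 on modern kernels
```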

Loading the KVM module transforms the Linux kernel into a hypervisor. KVM adds a guest mode alongside the Linux kernel's existing user mode and kernel mode: Linux itself runs in kernel mode, host processes run in user mode, and virtual machines run in guest mode, so the transformed kernel can uniformly manage and schedule both host processes and virtual machines. This is also the origin of the name Kernel-based Virtual Machine.

KVM uses a modified QEMU to provide BIOS, graphics, network, and disk-controller emulation, but emulating I/O devices (mainly network cards and disk controllers) inevitably leads to low performance. KVM therefore also introduces paravirtualized device drivers: a virtual driver in the virtual machine's operating system is paired with the physical driver in the host's Linux kernel, giving performance close to that of native devices. As can be seen, the physical devices supported by KVM are simply those supported by Linux.
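
Inside a KVM guest, the presence of these paravirtualized (virtio) devices, as opposed to purely emulated ones, can be observed through sysfs. The sketch below (Python; standard /sys/bus/virtio layout assumed) lists each virtio device and the driver bound to it, for example virtio_net for the network card and virtio_blk for the disk. These virtio drivers play the same role for KVM that VMXNETx plays for ESX and the VSC drivers play for Hyper-V.

```python
"""Illustrative sketch: inside a KVM guest, list the virtio (paravirtualized)
devices and the driver bound to each; an empty result suggests the guest is
using QEMU's emulated devices instead."""
import os

VIRTIO_DIR = "/sys/bus/virtio/devices"

def virtio_devices():
    if not os.path.isdir(VIRTIO_DIR):
        return {}
    devices = {}
    for dev in sorted(os.listdir(VIRTIO_DIR)):
        link = os.path.join(VIRTIO_DIR, dev, "driver")
        devices[dev] = (os.path.basename(os.readlink(link))
                        if os.path.islink(link) else "(unbound)")
    return devices

if __name__ == "__main__":
    for dev, driver in virtio_devices().items():
        print(f"{dev}: {driver}")   # e.g. "virtio0: virtio_net"
```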