Differences between Hyper-V architecture and VMware ESXi


Microsoft's Hyper-V and VMware's ESXi differ architecturally, but many virtualization administrators are unaware of these differences, and many are also confused about how Hyper-V relates to the host operating system.

A common misconception about Microsoft Hyper-V is that, because installing Hyper-V requires a Windows operating system, the hypervisor must run on top of the host operating system rather than directly on bare metal. In fact, once the Hyper-V role is enabled through Server Manager, the hypervisor code is configured to load beneath the Windows kernel: at the next boot the hypervisor starts first, and the original Windows installation becomes a management partition running on top of it. Components running in kernel space have direct access to the hardware, and the same applies to Hyper-V. VMware's ESXi takes a different packaging approach: the ESXi hypervisor ships as a standalone ISO image and installs its own purpose-built kernel, the VMkernel, directly on the hardware. Despite a common belief to the contrary, the VMkernel is not a Linux kernel.
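As a concrete illustration, here is a minimal sketch, assuming a Windows host with administrative rights and Python installed, of enabling the Hyper-V role from the command line with DISM, which performs the same feature enablement as adding the role in Server Manager (a reboot is still required before the hypervisor loads beneath Windows):

```python
import subprocess

# Enable the Hyper-V role on a running Windows installation.
# This is the command-line equivalent of adding the role in
# Server Manager; after the required reboot, the hypervisor
# loads first and Windows becomes the management partition.
subprocess.run(
    ["DISM", "/Online", "/Enable-Feature", "/All",
     "/FeatureName:Microsoft-Hyper-V"],
    check=True,
)
```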

Both Hyper-V and ESXi are Type 1 hypervisors. A Type 1 hypervisor runs directly on the hardware, and Type 1 designs are further divided into two categories: microkernelized and monolithic. The differences between the two designs are subtle; they come down to where the device drivers live and where the control function is implemented.

In the monolithic design, device drivers are included as part of the hypervisor. VMware ESXi uses the monolithic design to implement all virtualization capabilities, including virtualized device drivers, and VMware has used this design since its first virtualization product. Because device drivers are part of the hypervisor, virtual machines running on an ESXi host communicate with the physical hardware directly through hypervisor code, without relying on an intermediary operating system.
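To make the contrast concrete, the following toy model (all class names are hypothetical illustrations, not VMware APIs) sketches the monolithic I/O path: the driver is part of the hypervisor itself, so a guest request reaches the hardware without ever leaving hypervisor code.

```python
class NicDriver:
    """Physical NIC driver compiled into the hypervisor, as in ESXi."""
    def send(self, frame: bytes) -> None:
        print(f"hardware: transmitting {len(frame)} bytes")

class MonolithicHypervisor:
    def __init__(self) -> None:
        # Drivers are part of the hypervisor binary; there is no host OS.
        self.nic = NicDriver()

    def guest_io(self, frame: bytes) -> None:
        # One hop: guest -> hypervisor (driver) -> hardware.
        self.nic.send(frame)

MonolithicHypervisor().guest_io(b"\x00" * 64)
```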

The Microsoft Hyper-V architecture uses the microkernelized design: the hypervisor code itself contains no device drivers.

Instead, device drivers are installed within the host operating system, and virtual machine requests to hardware devices are processed by that operating system; in other words, access to the hardware is controlled by the host OS. Two types of device drivers run within the host operating system: synthetic and emulated. Synthetic device drivers are faster than emulated ones, but a virtual machine can use a synthetic device driver only if Hyper-V Integration Services is installed in that virtual machine. Integration Services implements the VMBus/VSC design within the virtual machine, which provides this faster path to the hardware.

For example, to access a physical NIC, the network VSC (Virtualization Service Client) driver running inside the virtual machine communicates with the network VSP (Virtualization Service Provider) driver running within the host operating system. Communication between the network VSC and the network VSP takes place over the VMBus, and the VSP driver then communicates with the physical network card through the device driver installed in the host operating system. The VMBus itself runs in kernel space within the host operating system, which speeds communication between the virtual machine and the hardware. A virtual machine that does not implement the VMBus/VSC design must instead rely on device emulation.
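The following toy model (class names are hypothetical; only the VSC, VSP, and VMBus terms come from the Hyper-V architecture described above) sketches the microkernelized synthetic I/O path: the guest's VSC hands a request over the VMBus to the VSP in the host operating system, which drives the physical NIC.

```python
class NetworkVSP:
    """Virtualization Service Provider in the host (parent) partition."""
    def handle(self, frame: bytes) -> None:
        # The host operating system's driver talks to the physical NIC.
        print(f"host driver: transmitting {len(frame)} bytes")

class VMBus:
    """Kernel-space channel between guest and host partitions."""
    def __init__(self, vsp: NetworkVSP) -> None:
        self.vsp = vsp
    def send(self, frame: bytes) -> None:
        self.vsp.handle(frame)

class NetworkVSC:
    """Virtualization Service Client installed by Integration Services."""
    def __init__(self, bus: VMBus) -> None:
        self.bus = bus
    def send(self, frame: bytes) -> None:
        # Synthetic path: VSC -> VMBus -> VSP -> hardware.
        self.bus.send(frame)

vsc = NetworkVSC(VMBus(NetworkVSP()))
vsc.send(b"\x00" * 64)
```

Note how the request crosses from the guest into the host operating system before reaching the hardware; in the monolithic sketch above there is no such crossing. That is precisely the driver-location difference between the two designs.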

Regardless of which design a virtualization vendor chooses, a control function must exist to control the hypervisor; it is this control function that makes creating and managing the virtual environment possible. The Microsoft Hyper-V architecture implements the control function within the Windows host operating system; in other words, the host operating system controls the hypervisor that runs directly on the hardware. In VMware ESXi, the control function is implemented in the VMkernel and is managed through interfaces such as ESXi's BusyBox-based shell.

It is hard to say which design is better; each has its own advantages and disadvantages. Because device drivers are compiled into the ESXi kernel, ESXi can only be installed on hardware for which supported drivers exist. The Microsoft Hyper-V architecture does not have this limitation: the hypervisor code can run on any hardware the host Windows operating system supports, which reduces the overhead of maintaining a separate device driver library. Another advantage of the microkernelized design is that you do not need to install hardware-specific device drivers in each virtual machine, since the host operating system's drivers serve all guests. ESXi, for its part, also ships virtualization components that access the hardware directly, but you cannot add additional roles or services to the hypervisor. Although installing other roles and features on a hypervisor host is not recommended, a host running Hyper-V can be configured with additional roles, such as DNS and failover clustering.
