How virtualization offers advantages in cloud computing


The hypervisor has become a commodity; where do we go from here?

Virtualized physical computers are the backbone of both the public cloud and the private cloud, enabling enterprises to optimize hardware utilization, enhance security, support multi-tenancy, and more.

Early virtualization approaches relied primarily on CPU emulation, such as emulating x86 on PowerPC Macs so users could run DOS and Windows. Emulation is not limited to the CPU: the rest of the hardware environment must be simulated as well, including graphics adapters, hard disks, network adapters, memory, and interfaces.

In the late 1990s, VMware made a major breakthrough in virtualization technology: the technique it introduced allowed most guest code to execute directly on the CPU without being translated or emulated.

Before VMware, two or more operating systems running on the same hardware would interfere with each other, since both would compete for resources and attempt to execute privileged instructions. VMware intercepts these instructions, dynamically rewrites the offending code, and caches the new translations so they can be reused and executed quickly.
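
Purely as an illustration of the idea (not VMware's actual implementation), the trap-rewrite-cache pattern can be sketched in a few lines of Python: blocks of guest code containing privileged instructions are rewritten once into safe hypervisor traps, and the result is cached so later executions skip the expensive rewrite.

# Illustrative sketch only: models guest code as lists of instruction names
# and shows the "translate once, cache, reuse" pattern described above.
PRIVILEGED = {"cli", "sti", "hlt"}        # example x86 privileged instructions

translation_cache = {}                     # guest block -> rewritten block

def translate(block):
    """Rewrite privileged instructions into (simulated) hypervisor traps."""
    return tuple(f"trap_{insn}" if insn in PRIVILEGED else insn for insn in block)

def execute(block):
    """Run a guest block, translating it only the first time it is seen."""
    key = tuple(block)
    if key not in translation_cache:
        translation_cache[key] = translate(key)
    return translation_cache[key]

print(execute(["mov", "cli", "add"]))      # translated on first use
print(execute(["mov", "cli", "add"]))      # served from the cache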

In short, this approach ran far faster than earlier emulators and helped define the x86 virtualization we know today, including the old mainframe concept of the "hypervisor": the platform that creates and runs virtual machines.

Key changes

For years, VMware and its patents dominated the virtualization space. On the server side, VMware ESX runs on bare metal and became the leading Type 1 (or native) hypervisor. On the client side, VMware Workstation runs inside an existing desktop operating system, making it a Type 2 (or hosted) hypervisor.

Virtualization is not just a technology for developers or cross-platform software; it is a powerful technology that improves efficiency and manageability by moving servers into virtualized containers.

Over the years, interesting open-source projects sprang up, including Xen and QEMU (the "Quick Emulator"). They were not as fast or flexible as VMware, but they opened a path for development and laid the groundwork for what followed.

Around 2005, AMD and Intel introduced new processor extensions to the x86 architecture that provided hardware support for handling privileged instructions. AMD calls its extension AMD-V and Intel calls its VT-x; together they changed the landscape and eventually brought server virtualization to many more vendors. Soon afterward, Xen used these extensions to create hardware virtual machines (HVM), combining QEMU's device emulation with the hardware assistance of VT-x and AMD-V to support unmodified proprietary operating systems such as Microsoft Windows.
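
On a Linux host you can see whether these extensions are present by looking for the "vmx" (Intel VT-x) or "svm" (AMD-V) flag in /proc/cpuinfo; the short Python sketch below performs exactly that check.

# Minimal sketch: detect hardware virtualization support on a Linux host
# by scanning CPU flags in /proc/cpuinfo ("vmx" = Intel VT-x, "svm" = AMD-V).
def hardware_virtualization():
    with open("/proc/cpuinfo") as cpuinfo:
        for line in cpuinfo:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                if "vmx" in flags:
                    return "Intel VT-x"
                if "svm" in flags:
                    return "AMD-V"
    return None

print(hardware_virtualization() or "No hardware virtualization extensions detected")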

A company called Qumranet also began adding virtualization infrastructure to the Linux kernel, known as the Kernel-based Virtual Machine (KVM), and used QEMU to host virtual machines on top of it. Microsoft eventually joined the field and launched Hyper-V in 2008.
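
To make the KVM/QEMU relationship concrete, here is a minimal sketch that launches a KVM-accelerated guest by invoking qemu-system-x86_64 from Python; the disk image name, memory size, and CPU count are placeholder assumptions, and the host needs QEMU installed and /dev/kvm available.

# Sketch: boot a KVM-accelerated guest via QEMU. "guest-disk.qcow2" is a
# placeholder image that must already contain an installed guest OS.
import subprocess

cmd = [
    "qemu-system-x86_64",
    "-enable-kvm",       # use the KVM kernel module rather than pure emulation
    "-m", "2048",        # 2 GB of guest memory
    "-smp", "2",         # 2 virtual CPUs
    "-drive", "file=guest-disk.qcow2,format=qcow2",
    "-nographic",        # serial console instead of a graphical window
]
subprocess.run(cmd, check=True)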

The birth of a new industry

Once virtualization became "free", or at least no longer required an expensive license to use, new use cases began to emerge. Most notably, Amazon began using the Xen platform to rent its spare computing capacity to third-party customers. Through its APIs, Amazon opened the prologue to the elastic cloud computing revolution, in which an application can itself allocate the resources its workload needs.
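
As an illustration of what "the application allocates its own resources" looks like in practice, here is a minimal sketch that requests a virtual machine through the EC2 API using boto3, the AWS SDK for Python; the region, AMI ID, and instance type are placeholders, not recommendations.

# Minimal sketch: an application requesting compute capacity through the EC2 API.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")    # placeholder region

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",                  # placeholder machine image
    InstanceType="t3.micro",                          # placeholder instance type
    MinCount=1,
    MaxCount=1,
)
print("Launched", response["Instances"][0]["InstanceId"])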

Today, open-source hypervisors are becoming more sophisticated and pervasive in cloud computing. Beyond VMware, organizations are experimenting with architectures built on the KVM or Xen hypervisors. These efforts are not merely about controlling costs; they are about leveraging the elasticity of cloud computing and the standards that these open-source alternatives are establishing.

The future: high-performance elastic infrastructure

With the hypervisor now a commodity, innovation is focused on private- and public-cloud hardware architectures and the software ecosystem around them: storage architecture, software-defined networking, intelligent and autonomous orchestration, and application APIs.

Traditional servers are slowly giving way to flexible, custom cloud applications, the future of computing, although the two will coexist for some time.

Looking ahead, IT departments' responses to the commoditization of virtualization can be grouped into the following categories:

Status quo: Change is hard, and some companies will stick with the solutions they have used for years. That means living with a storage and management architecture that has been around for 20-25 years. It also means continuing to pay hypervisor license fees, staying locked into a virtualization platform dedicated to traditional applications, and remaining unable to support elastic cloud applications within the enterprise.

Public cloud: This removes the burden of managing your own infrastructure. However, the public cloud may not be the best place to run legacy server applications, because they often require specialized resources and enhanced security. Moreover, while public cloud resources are cost-effective at first, the cost of scaling up can make internal capital investment look more attractive.

Cloud framework: This includes toolkit options such as OpenStack, an excellent open-source framework for cloud computing that Rackspace and other companies have helped grow. However, few IT organizations can actually build and manage an OpenStack deployment on their own (a minimal API sketch follows this list).

Hyper-converged infrastructure: Nimboxx and other companies offer turnkey solutions that deliver the same elastic cloud advantages as the frameworks above, along with workflows to support legacy applications, in a single modular appliance. These data-center building blocks let companies start small and scale out gradually. They can also serve as a bridge between traditional applications and elastic cloud applications.
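
For the cloud framework option above, the sketch below shows what driving OpenStack programmatically can look like using the openstacksdk Python library; the cloud name, image, flavor, and network names are placeholders that would come from a real deployment's clouds.yaml configuration.

# Minimal sketch: boot a server through OpenStack's compute API with openstacksdk.
import openstack

conn = openstack.connect(cloud="mycloud")              # placeholder cloud entry

image = conn.compute.find_image("ubuntu-22.04")        # placeholder image name
flavor = conn.compute.find_flavor("m1.small")          # placeholder flavor name
network = conn.network.find_network("private")         # placeholder network name

server = conn.compute.create_server(
    name="demo-instance",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(f"Server {server.name} is {server.status}")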

When evaluating hyper-converged infrastructure solutions, enterprises must understand the important distinction between stack owners and stack dependents. A stack dependent is a vendor whose solution runs in a virtual machine on top of another vendor's hypervisor. A stack owner is a vendor whose solution runs on bare metal and who builds the entire stack itself.

This distinction plays out in the following ways:

License fees: Stack owners use the same open-source hypervisors (KVM or Xen) as the major cloud service providers, so enterprises do not have to pay expensive software licensing fees. Stack dependents typically support multiple hypervisors, but offer only limited support for the open-source ones.

Performance: Stack owners run on bare metal, giving them direct control over storage and compute hardware. Stack dependents run in virtual machines, so every I/O operation follows a less efficient path. A stack dependent may claim 16,000 IOPS from three nodes, while a stack owner can deliver 180,000 IOPS from a single node.

Simplicity: A stack owner manages the entire infrastructure end to end from a single pane of glass, providing a familiar cloud experience inside a private, on-premises solution. Stack dependents mitigate some storage management complexity, but systems and virtual machines must still be managed through multiple applications across multiple interfaces.

Security: A stack owner has direct control over every aspect of the hardware and can support technologies such as encryption of data at rest. Stack dependents lack this control because they run in virtual machines; by design, other components such as the hypervisor must start before their stack does, which limits their ability to protect the sensitive parts of the data set.

Software-defined everything: A stack owner controls the whole stack, which means anything can be software-defined, including real-time, self-learning systems that grow or shrink resources as needed and redistribute workloads. A stack dependent merely owns a storage pool.

The real breakthrough is making these complex technologies available to enterprises and small businesses alike. The next generation of VMware-like companies will deliver the benefits of a truly elastic private cloud in simple, easy-to-deploy, scalable, high-performance products, while still supporting traditional workloads.
