The amazing world of virtual networks in Linux

Tags: kvm, hypervisor

Guide: With the rapid development of platform virtualization, it is becoming common to virtualize other parts of a company's ecosystem as well. One of the most recent is the network. In the early stages of platform virtualization, only the hosts themselves were virtualized; today more of the network is being virtualized too, for example the switch that carries traffic between VMs on the same server or distributed across servers. This article focuses on NIC and switch virtualization and explores the ideas behind virtual networks.

Virtualization is thriving again. Although it emerged decades ago, its real potential is only now being realized on commodity hardware. Virtualization improves the efficiency of server workloads, but other parts of the server ecosystem are now candidates for the same treatment. Many people think of virtualization as the consolidation of CPU, memory, and storage, but that view is too simple. The network is a key aspect of virtualization and a central element of any virtualized environment.

Virtual Network

We begin by exploring the problem at a high level and then dig into the various methods Linux® provides for building and supporting virtual networks.

In a traditional environment (see Figure 1), a set of physical servers hosts the required applications. To communicate with one another, each server contains one or more network interface cards (NICs) connected to an external network infrastructure. The NIC, together with the networking software stack, enables communication between endpoints over that infrastructure. As shown in Figure 1, this function is provided by a switch, which permits efficient packet exchange between the connected endpoints.

Figure 1. Traditional Network Infrastructure

The key change behind server consolidation is the abstraction of the physical hardware, allowing multiple operating systems and applications to share it (see Figure 2). This abstraction is provided by the hypervisor (or virtual machine [VM] monitor). Each VM (an operating system plus its applications) sees the underlying hardware as unshared and complete, even though parts of it may not physically exist or may be shared by multiple VMs. A virtual NIC (vNIC) is one example: the hypervisor creates one or more vNICs for each VM, and these appear to the VM as physical NICs even though they are only interfaces to the real NIC. The hypervisor can also dynamically create virtual networks, built from virtual switches, that connect a configurable set of VM endpoints. Finally, the hypervisor permits communication with the physical network infrastructure: by bridging the server's physical NICs into its logical infrastructure, the hypervisor supports efficient communication both among its own VMs and with external networks.

Figure 2. Virtual network infrastructure
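To make the idea concrete, here is a minimal sketch (assuming root privileges and the standard iproute2 tools) of how a host-side TAP device, which a hypervisor such as QEMU/KVM uses as the back end of a guest's vNIC, is attached to an in-kernel bridge playing the role of the virtual switch. The names br0 and tap0 are placeholders.

```python
import subprocess

def sh(cmd):
    """Run an iproute2 command, raising an error if it fails."""
    subprocess.run(cmd.split(), check=True)

# An in-kernel bridge plays the role of the virtual switch.
sh("ip link add br0 type bridge")
sh("ip link set br0 up")

# A TAP device serves as the host-side back end of a guest's vNIC;
# the hypervisor connects the VM's virtual NIC to this device.
sh("ip tuntap add dev tap0 mode tap")
sh("ip link set tap0 up")

# Plug the TAP device into the bridge, just as a physical NIC would be
# plugged into a physical switch port.
sh("ip link set tap0 master br0")
```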

Virtual network infrastructure also enables other interesting innovations, such as virtual appliances. In addition to the virtual network elements themselves, we look at these as part of this exploration.

The virtual switch

One of the key developments in virtual network infrastructure is the virtual switch (vSwitch). The vSwitch connects virtual NICs to the server's physical NICs and, more importantly, interconnects the virtual NICs of the VMs on that server for local communication. What makes this interesting is that the throughput of a vSwitch is limited not by network speed but by memory bandwidth, so it allows very efficient communication between local VMs while minimizing the load on the external network infrastructure. The saving comes from the fact that the physical network is used only for inter-server traffic; VM-to-VM traffic within a server stays on that server.
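A rough way to see this on any Linux box, without a hypervisor at all, is to let two network namespaces stand in for two VMs and join them to a bridge through veth pairs; the ping between them never leaves host memory. This is a minimal sketch assuming root privileges, and all names and addresses are placeholders.

```python
import subprocess

def sh(cmd):
    subprocess.run(cmd.split(), check=True)

# The in-kernel bridge acts as the local vSwitch.
sh("ip link add vswitch0 type bridge")
sh("ip link set vswitch0 up")

# Two network namespaces stand in for two guest VMs on the same host.
for name, addr in (("vm1", "10.0.0.1/24"), ("vm2", "10.0.0.2/24")):
    sh(f"ip netns add {name}")
    sh(f"ip link add {name}-eth0 type veth peer name {name}-br")
    sh(f"ip link set {name}-eth0 netns {name}")
    sh(f"ip link set {name}-br master vswitch0")
    sh(f"ip link set {name}-br up")
    sh(f"ip netns exec {name} ip addr add {addr} dev {name}-eth0")
    sh(f"ip netns exec {name} ip link set {name}-eth0 up")

# This traffic is switched entirely in host memory; no physical NIC is involved.
sh("ip netns exec vm1 ping -c 1 10.0.0.2")
```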

However, because Linux already contains a layer-2 switch in the kernel (the bridge module), some may ask why a separate virtual switch is needed. The answer has several parts, but one of the most important is a new class of switch: the distributed virtual switch. A distributed virtual switch makes the underlying server architecture transparent and supports bridging across servers: the virtual switch in one server can transparently connect to virtual switches in other servers (see Figure 3). This makes it much easier to migrate a VM between servers, because the VM (and its virtual interfaces) can attach to the distributed vSwitch on the destination server and transparently rejoin its switched network.

Figure 3. The distributed virtual switch

One of the most important projects in this space is Open vSwitch, which is discussed later in this article.

One problem with keeping local traffic on the server is that it is no longer externally visible (for example, to network analyzers). Implementations address this with protocols such as OpenFlow, NetFlow, and sFlow, which export traffic information and provide remote access for controlling and monitoring flows.
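As an illustration, Open vSwitch (introduced below) can export sFlow samples from a bridge to an external collector. The sketch below assumes a bridge named br0 already exists and uses a placeholder collector address and agent interface; the attribute names follow the Open vSwitch sFlow documentation, so treat it as a starting point rather than a recipe.

```python
import subprocess

# Placeholder collector address; adjust the agent interface and bridge name.
collector = "192.0.2.10:6343"

# Create an sFlow configuration record and attach it to bridge br0 so that
# sampled local traffic becomes visible to an external analyzer.
subprocess.run([
    "ovs-vsctl",
    "--", "--id=@s", "create", "sflow",
    "agent=eth0", f'targets="{collector}"',
    "header=128", "sampling=64", "polling=10",
    "--", "set", "bridge", "br0", "sflow=@s",
], check=True)
```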

Open vSwitch

Early implementations of the distributed virtual switch were closed and tied to proprietary hypervisor configurations. In today's cloud environments, however, it is ideal to support heterogeneous environments in which multiple hypervisors coexist.

Open vSwitch is a multilayer virtual switch released as open source under the Apache 2.0 license. As of May 2010, Open vSwitch version 1.0.1 was available, supporting a useful range of features. Open vSwitch works with the leading open-source hypervisor solutions, including the Kernel-based Virtual Machine (KVM), VirtualBox, Xen, and XenServer, and it can serve as a drop-in replacement for the existing Linux bridge module.

Open vSwitch consists of a kernel module that implements flow-based switching, along with a number of daemons and utilities for managing the switch (with particular attention to OpenFlow). Open vSwitch can also run entirely in user space, but at a cost in performance.
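In practice the switch is driven through the ovs-vsctl utility. The sketch below (assuming root privileges and an installed Open vSwitch; names such as br0, eth0, tap0, and the peer address are placeholders) creates a bridge, attaches the physical NIC and a guest's TAP device, and adds a GRE tunnel toward a second host so that the two vSwitches behave like the distributed switch described earlier.

```python
import subprocess

def ovs(args):
    subprocess.run(["ovs-vsctl"] + args, check=True)

# Create an Open vSwitch bridge (a drop-in stand-in for the Linux bridge).
ovs(["add-br", "br0"])

# Attach the physical NIC so the bridge can reach the external network.
ovs(["add-port", "br0", "eth0"])

# Attach a guest's TAP device so the VM joins the switch.
ovs(["add-port", "br0", "tap0"])

# Add a GRE tunnel port toward a peer server; the two bridges now form
# one switched domain spanning both machines.
ovs(["add-port", "br0", "gre0",
     "--", "set", "interface", "gre0",
     "type=gre", "options:remote_ip=192.0.2.20"])
```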

In addition to providing a production-quality switch for VM environments, Open vSwitch has an impressive feature roadmap that lets it compete with similar proprietary solutions.

Network Device Virtualization

NIC virtualization has existed in various forms for some time, predating the vSwitch. This section describes software implementations as well as hardware acceleration that can be used to speed up network virtualization.

QEMU

Although QEMU is a platform emulator, it also provides software emulation of a variety of hardware devices, including NICs. In addition, QEMU provides an internal Dynamic Host Configuration Protocol (DHCP) server for IP address allocation. QEMU works with KVM to provide both platform and device emulation, forming the basis for KVM-based virtualization.
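A small sketch shows the pieces together: launching a KVM-accelerated guest with QEMU's user-mode network back end, whose built-in DHCP server hands the guest an address (10.0.2.15 by default). The disk image name and memory size are placeholders.

```python
import subprocess

subprocess.run([
    "qemu-system-x86_64",
    "-enable-kvm",                    # KVM provides the platform virtualization
    "-m", "1024",
    "-drive", "file=guest.img,format=qcow2",
    "-netdev", "user,id=net0",        # user-mode NAT network with built-in DHCP/DNS
    "-device", "e1000,netdev=net0",   # QEMU emulates an Intel e1000 NIC in software
], check=True)
```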

Virtio

Virtio is a paravirtualization framework for Linux input/output (I/O). It simplifies and accelerates I/O communication between VMs and the hypervisor by defining a standardized transport between guests and the hypervisor for virtual block devices, Peripheral Component Interconnect (PCI) devices, and network devices.
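With QEMU/KVM, switching a guest from a fully emulated NIC to virtio is largely a matter of the device model chosen on the command line. This sketch reuses the placeholder image from the previous example and attaches the virtio NIC to a host TAP device named tap0, which is assumed to exist already.

```python
import subprocess

subprocess.run([
    "qemu-system-x86_64",
    "-enable-kvm",
    "-m", "1024",
    "-drive", "file=guest.img,format=qcow2",
    # TAP back end on the host instead of user-mode NAT.
    "-netdev", "tap,id=net0,ifname=tap0,script=no,downscript=no",
    # Paravirtualized virtio NIC: the guest driver talks to the hypervisor
    # through virtio rings rather than trapping on emulated device registers.
    "-device", "virtio-net-pci,netdev=net0",
], check=True)
```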

TAP and TUN

Virtualization has existed in the network stack for some time in the form of TAP and TUN, which give guest network stacks access to the host's network stack. TAP is a virtual network kernel driver that implements an Ethernet device and operates on Ethernet frames; the TAP driver provides an Ethernet "tap" through which guest Ethernet frames can be exchanged. TUN (for network "tunnel") emulates a network-layer device and operates one layer up, on IP packets. Working with IP packets offers some optimization, because the underlying Ethernet device can take care of the layer-2 framing of the TUN IP packets.
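The TAP interface is exposed to user space through the /dev/net/tun clone device and a TUNSETIFF ioctl. The following minimal sketch (run as root; the interface name tap0 is arbitrary, and the constants are taken from <linux/if_tun.h>) creates a TAP device and reads one raw Ethernet frame from it.

```python
import fcntl
import os
import struct

# Constants from <linux/if_tun.h>.
TUNSETIFF = 0x400454CA
IFF_TAP   = 0x0002    # layer-2 device handling Ethernet frames (IFF_TUN = layer-3/IP)
IFF_NO_PI = 0x1000    # do not prepend the extra packet-information header

# Open the clone device and ask for a TAP interface named tap0.
fd = os.open("/dev/net/tun", os.O_RDWR)
ifr = struct.pack("16sH", b"tap0", IFF_TAP | IFF_NO_PI)
fcntl.ioctl(fd, TUNSETIFF, ifr)

# Once tap0 is brought up and carries traffic, each read returns one raw
# Ethernet frame; writes inject frames back into the host network stack.
frame = os.read(fd, 2048)
print(f"received {len(frame)} bytes on tap0")
```

A TUN device works the same way with IFF_TUN, except that reads and writes carry IP packets instead of Ethernet frames.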

I/O Virtualization

I/O virtualization (IOV) grew out of a PCI Special Interest Group (PCI-SIG) standardization effort to support accelerated virtualization at the hardware level. In particular, Single-Root IOV (SR-IOV) provides a way for a single PCI Express (PCIe) card to present itself as multiple PCIe devices to multiple consumers, allowing several independent drivers to attach to the card without any knowledge of one another. SR-IOV accomplishes this by exposing virtual functions to those consumers: lightweight functions that appear in PCIe space but map onto shared resources of the physical card.

The benefit of SR-IOV for network virtualization is performance: instead of the hypervisor implementing the sharing of the physical NIC in software, the card itself multiplexes its resources, giving a guest VM's I/O interface direct access to the hardware.
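On kernels and drivers that expose the SR-IOV sysfs attributes, carving a physical NIC into virtual functions is a matter of writing to sysfs. This is a sketch only; eth0 is a placeholder for an SR-IOV-capable adapter, and root privileges are required.

```python
import os

dev = "/sys/class/net/eth0/device"   # placeholder SR-IOV-capable NIC

# Ask the physical function how many virtual functions it can provide.
with open(os.path.join(dev, "sriov_totalvfs")) as f:
    total = int(f.read())
print(f"adapter supports up to {total} virtual functions")

# Instantiate four virtual functions; each shows up as its own PCIe
# device that can be assigned directly to a guest VM.
with open(os.path.join(dev, "sriov_numvfs"), "w") as f:
    f.write("4")
```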

Linux today includes support for SR-IOV, which benefits the KVM hypervisor. Xen also supports SR-IOV, allowing it to present vNICs to guest VMs efficiently. SR-IOV support is also on the Open vSwitch roadmap.

Virtual LANs

Although only loosely related, virtual LANs (VLANs) virtualize the physical network itself. VLANs make it possible to create virtual networks over a distributed physical network, so that hosts on separate networks appear to be part of the same broadcast domain. VLANs do this by tagging frames with VLAN information that identifies membership in a particular LAN (per the Institute of Electrical and Electronics Engineers [IEEE] 802.1Q standard). Hosts and VLAN-aware switches cooperate to virtualize the physical network. Note, however, that although VLANs provide the illusion of independent networks, they share the same physical network and its available bandwidth, so they are still subject to contention and blocking.
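On a Linux host, joining a VLAN is simply a matter of creating an 802.1Q-tagged sub-interface on a physical NIC. A minimal sketch follows, assuming root privileges; the parent interface, VLAN ID, and address are placeholders.

```python
import subprocess

def sh(cmd):
    subprocess.run(cmd.split(), check=True)

# Create a sub-interface on eth0 that tags frames with VLAN ID 100.
sh("ip link add link eth0 name eth0.100 type vlan id 100")

# Address the VLAN interface and bring it up; 802.1Q-aware switches
# deliver these frames only to other members of VLAN 100.
sh("ip addr add 192.0.2.5/24 dev eth0.100")
sh("ip link set eth0.100 up")
```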

Hardware acceleration

A variety of I/O-oriented virtualization accelerations are emerging for NICs and other devices. Intel® Virtualization Technology for Directed I/O (VT-d) provides I/O resource isolation for improved reliability and security, including remapping of direct memory access (using multi-level page tables) and remapping of device-related interrupts, and it supports both unmodified and virtualization-aware guests. Intel Virtual Machine Device Queues (VMDq) accelerates network traffic in virtualized environments by sorting and queuing packets intelligently in the NIC hardware, improving the hypervisor's CPU utilization and overall system performance. Linux supports both technologies.
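Whether VT-d (or AMD's equivalent) is actually in effect can be checked from sysfs: when the IOMMU is enabled in firmware and in the kernel (for example with intel_iommu=on on the kernel command line), devices are partitioned into IOMMU groups. A small, read-only sketch:

```python
import os

groups_dir = "/sys/kernel/iommu_groups"

if os.path.isdir(groups_dir) and os.listdir(groups_dir):
    groups = sorted(os.listdir(groups_dir), key=int)
    print(f"IOMMU active: {len(groups)} groups")
    for g in groups[:5]:
        devices = os.listdir(f"{groups_dir}/{g}/devices")
        print(f"  group {g}: {', '.join(devices)}")
else:
    print("no IOMMU groups found; VT-d/AMD-Vi may be disabled")
```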

Virtual network appliances

So far, this article has discussed the virtualization of NICs and switches, some of it implemented in software and some of it accelerated in hardware. Now let's extend the discussion to common network services.

One of the more interesting innovations in the virtualization space is the ecosystem that has grown out of server consolidation. Instead of deploying an application on its own dedicated hardware, a slice of a powerful server is carved out as a VM that provides the service. Such VMs are called virtual appliances because they focus on a single application and are packaged for deployment in a virtualized environment.

Virtual appliances typically attach to the hypervisor (or are pre-wired into its virtual network) to extend it with a specific service. What makes this interesting is that the processing capacity (such as cores) and I/O bandwidth of the consolidated server can be configured dynamically for the appliance. This makes the appliance more cost effective (because no dedicated server is set aside for it), and its resources can be adjusted dynamically according to the needs of the other applications running on the server. Virtual appliances are also easier to manage, because the application is bundled with its operating system inside the VM and the VM is delivered pre-configured as a whole, requiring no special setup. These benefits help explain why virtual appliances continue to gain ground today.

Virtual appliances have been developed for a wide range of enterprise software, including WAN optimization, routers, virtual private networks, firewalls, intrusion prevention and detection systems, and email classification and management. Beyond network services, virtual appliances are also used for storage, security, application frameworks, and content management.

Conclusion

Once upon a time, everything we managed was physical. Today, physical devices and services are steadily disappearing into an ever-expanding world of virtualization. The physical network is being carved up by virtualization, enabling isolated communication and virtual networks that span physical and geographic boundaries. Applications are vanishing into virtual appliances, which share slices of powerful multi-core servers. Although all of this adds complexity for administrators, it also brings greater flexibility and better manageability. And, of course, Linux is at the forefront.

From: http://os.51cto.com/art/201203/324185_all.htm

Address: http://www.linuxprobe.com/linux-vm-network.html

