Hyper-V networking improvements in Windows Server 2012


This article highlights improvements to virtual machine networking in Hyper-V that may not be immediately obvious: single root I/O virtualization (SR-IOV), receive-side scaling (RSS) and dynamic virtual machine queues, and quality of service (QoS).

Single root I/O virtualization (SR-IOV)

SR-IOV may be one of the most noteworthy of all the networking improvements in Windows Server 2012, but its specific applications and limitations should be understood when planning the deployment of a new Hyper-V cluster.

In the early days of hypervisor-based virtualization, Intel and AMD realized that better performance could be achieved by offloading some functionality from software onto the processor itself. Those mechanisms are now known as Intel VT and AMD-V, and they are a requirement for most modern hypervisors. SR-IOV likewise shifts networking functionality from software to hardware to improve performance and flexibility.

If you have a server whose BIOS supports SR-IOV and a network card that supports SR-IOV, the server can present virtual functions (VFs) to virtual machines; in effect, these are lightweight virtual replicas of the physical network adapter. If you want to use SR-IOV extensively, it's important to understand that today's SR-IOV-capable network cards are limited in the number of virtual functions they provide: some offer only 4 virtual functions per adapter, some support 32, and some support 64.

SR-IOV isn't required simply for bandwidth: a 10 Gigabit Ethernet connection can be filled through the ordinary Hyper-V virtual machine bus, but doing so consumes roughly one processor core. So SR-IOV is the safer option if you need to keep processor usage low. If latency is critical, SR-IOV gives you close to bare-metal network performance, which is another scenario where it shines.

You pay a price for the benefits of SR-IOV, especially in terms of flexibility. If you use the Hyper-V extensible switch and configure port access control lists (ACLs), and perhaps have one or more switch extensions configured, these are bypassed because the switch never sees SR-IOV traffic. You also cannot team multiple network cards on the host and then use SR-IOV on the team; however, you can put two (or more) physical SR-IOV network cards in the host, present one to each virtual machine network adapter, and team the resulting virtual network cards inside the guest for performance and failover.
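
For example, here is a minimal sketch of that in-guest teaming approach; the VM and adapter names are placeholders, and it assumes the guest runs Windows Server 2012:

    # On the Hyper-V host: allow this VM's network adapters to be teamed inside the guest
    Set-VMNetworkAdapter -VMName "VM01" -AllowTeaming On

    # Inside the guest OS: team the two virtual NICs, each backed by a different SR-IOV physical NIC
    New-NetLbfoTeam -Name "GuestTeam" -TeamMembers "Ethernet", "Ethernet 2" -TeamingMode SwitchIndependent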

SR-IOV can indeed be used in conjunction with live migration, which VMware's vSphere 5.1 cannot do. Behind each virtual function, Hyper-V keeps a "light" team with an ordinary virtual machine bus network adapter; if you move the virtual machine to a host without an SR-IOV network card, traffic simply fails over to the software adapter.

For a more in-depth look at SR-IOV in Hyper-V, see the blog series written by John Howard of the Hyper-V development team, starting with part 1 (http://blogs.technet.com/b/jhoward/archive/2012/03/12/everything-you-wanted-to-know-about-sr-iov-in-hyper-v-part-1.aspx); part 8 (http://blogs.technet.com/b/jhoward/archive/2012/03/21/everything-you-wanted-to-know-about-sr-iov-in-hyper-v-part-8.aspx) covers troubleshooting.

Note: You must enable SR-IOV at the moment you create the virtual switch for the SR-IOV network card; an existing switch cannot be converted to SR-IOV afterwards.

Figure 1: Enabling SR-IOV is straightforward if your system meets the requirements.
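
As a rough PowerShell sketch of the point in the note above (the switch, adapter, and VM names are placeholders):

    # Check whether the host and NIC support SR-IOV and how many virtual functions are available
    (Get-VMHost).IovSupport
    Get-NetAdapterSriov

    # SR-IOV must be enabled when the external switch is created; it cannot be turned on later
    New-VMSwitch -Name "IOV-Switch" -NetAdapterName "Ethernet 2" -EnableIov $true

    # Request a virtual function for the virtual machine's network adapter
    Set-VMNetworkAdapter -VMName "VM01" -IovWeight 100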

Dynamic scaling

On a physical server, receive-side scaling (RSS) handles inbound network traffic so that throughput is not limited by a single processor core; it does this by spreading the processing across multiple cores. For Hyper-V hosts with multiple virtual machines and heavy inbound traffic, dynamic virtual machine queues (DVMQ) do for virtual machines what RSS does for physical servers: the destination MAC address is hashed, traffic for a given virtual machine is fed into a specific queue, and the associated processor-core interrupts are distributed, with all of this offloaded onto the NIC.

VMQ appeared in Hyper-V in Windows Server 2008 R2, but you had to manage interrupt coalescing yourself, which could require a fair amount of manual tuning. DVMQ is enabled by default in Hyper-V in Windows Server 2012 and handles tasks such as tuning and balancing the load across cores for you. If the feature has been disabled for some reason, you can re-enable it either in the GUI or with the PowerShell cmdlet Enable-NetAdapterVmq.
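
A quick sketch of checking and re-enabling VMQ from PowerShell (the adapter name is a placeholder):

    # Show which physical adapters have VMQ enabled and how queues are assigned
    Get-NetAdapterVmq

    # Re-enable VMQ on an adapter if it has been turned off
    Enable-NetAdapterVmq -Name "Ethernet 2"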

Monitoring and capturing

One of the problems with server virtualization and virtual networking is that many traditional troubleshooting methods no longer work, or must be altered to fit virtualized environments. The new extensible switch in Hyper-V lets you define ports as monitoring ports (port mirroring), just as you would on a physical switch, so that tools such as Wireshark and Network Monitor can capture traffic passing through the switch. Closely related to this feature is unified tracing, a new parameter (capturetype) for the netsh trace command. It lets you choose whether to capture traffic passing through a virtual switch (capturetype=vmswitch), through the physical network (capturetype=physical), or both. For more information about netsh tracing in Windows Server 2012, see http://technet.microsoft.com/en-us/library/dd878517.aspx.
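
As an illustration, port mirroring and the capturetype parameter might be used like this; the VM names and trace file path are placeholders:

    # Mirror traffic from one VM's switch port to the port of a VM running a capture tool
    Set-VMNetworkAdapter -VMName "VM01" -PortMirroring Source
    Set-VMNetworkAdapter -VMName "Monitor01" -PortMirroring Destination

    # Unified tracing: capture traffic as it passes through the virtual switch
    netsh trace start capture=yes capturetype=vmswitch tracefile=C:\temp\vmswitch.etl
    netsh trace stop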

Port ACLs also let you meter network traffic (inbound or outbound) between a virtual machine and a specified range of IP addresses. Metering from PowerShell (Add-VMNetworkAdapterAcl -VMName <name> -RemoteIPAddress x.y.z.v/w -Direction Outbound -Action Meter) is interesting, but I would still like to see this capability integrated into a comprehensive logging and monitoring solution such as SCVMM 2012.
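
For example, a small sketch of metering a VM's outbound traffic to a given address range (the VM name and subnet are placeholders):

    # Add a metering ACL for outbound traffic from VM01 to the 10.0.0.0/8 range
    Add-VMNetworkAdapterAcl -VMName "VM01" -RemoteIPAddress 10.0.0.0/8 -Direction Outbound -Action Meter

    # List the ACLs configured on this VM's network adapter
    Get-VMNetworkAdapterAcl -VMName "VM01"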

Quality of Service

Quality of Service (QoS) has been improved in Windows Server 2012 and Hyper-V. It provides bandwidth management, classification and tagging, flow control, and policy-based QoS. Previous versions of Windows Server had the concept of a maximum bandwidth; the 2012 release adds a minimum bandwidth. This means that when there is no congestion, a workload can use up to the maximum bandwidth allocated to it, and when congestion occurs it is still guaranteed at least its minimum bandwidth. You can use maximum bandwidth, minimum bandwidth, or both at the same time, depending on the environment, for a single virtual machine or a group of virtual machines.
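
A minimal sketch of setting both limits on a single virtual machine, assuming a switch created in weight mode (the names and values are illustrative):

    # The minimum-bandwidth mode is fixed when the switch is created
    New-VMSwitch -Name "Production" -NetAdapterName "Ethernet 2" -MinimumBandwidthMode Weight

    # Guarantee this VM a 20 percent share under congestion and cap it at roughly 1 Gbps
    Set-VMNetworkAdapter -VMName "VM01" -MinimumBandwidthWeight 20 -MaximumBandwidth 1GB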

Note: If you use SMB Direct, the new Windows Server 2012 feature that implements remote direct memory access (RDMA) on compatible network adapters so that network traffic has very low latency and low processor overhead, QoS is bypassed. In this case you can deploy network cards that support Data Center Bridging (DCB) to control traffic in a way similar to QoS. DCB allows you to define eight different traffic classes and allocate a minimum bandwidth to each of them during congestion.
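
A sketch of what such a DCB configuration might look like, assuming the NIC and switch support DCB; the priority value and percentage are illustrative:

    # Install the DCB feature, then tag SMB Direct traffic (port 445) with 802.1p priority 3
    Install-WindowsFeature Data-Center-Bridging
    New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3

    # Reserve at least 50 percent of the bandwidth for priority 3 during congestion
    New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
    Enable-NetQosFlowControl -Priority 3
    Enable-NetAdapterQos -Name "Ethernet 2"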

In previous versions you had to classify traffic yourself; Windows Server 2012 has built-in PowerShell filters for classifying common traffic such as iSCSI, NFS, SMB, and live migration. In addition to the existing tagging in the IP header, which is based on Differentiated Services Code Points (DSCP), Windows Server 2012 also adds 802.1p tagging on Ethernet frames at layer 2.
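
For instance, the built-in filters and the two tagging options can be combined like this (the policy names and values are illustrative):

    # Classify live migration traffic with the built-in filter and tag it with DSCP value 40
    New-NetQosPolicy "LiveMigration" -LiveMigration -DSCPAction 40

    # Classify iSCSI traffic and tag it with an 802.1p priority at layer 2
    New-NetQosPolicy "iSCSI" -iSCSI -PriorityValue8021Action 4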

Policy-based QoS uses Group Policy to define and deploy QoS policies for your physical networks and hosts, which simplifies deployment and management if you already use Group Policy for other settings. See this article (http://technet.microsoft.com/en-us/library/jj159288.aspx) to learn more about QoS Group Policy.

In the case of Hyper-V, Microsoft has combined this QoS work with the extensible virtual switch so that minimum and maximum bandwidth can be controlled per switch port using PowerShell or WMI, ensuring predictable network performance and allowing hosts to meet service level agreements (SLAs). These features are also useful in a private cloud infrastructure and can be managed with SCVMM 2012 SP1. With this fine-grained QoS control, a host can have one or more 10 Gigabit Ethernet network cards that are carved up for storage traffic, live migration, and virtual machine traffic, instead of the many dedicated Gigabit network cards for different traffic types found in most servers today.

Figure 2: In a small environment it is easy to configure minimum and maximum bandwidth for each virtual machine.
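
As a rough sketch of carving up a single 10 GbE adapter in the way just described (the switch name, adapter name, and weights are placeholders):

    # One converged switch on a 10 GbE adapter, using weight-based minimum bandwidth
    New-VMSwitch -Name "Converged" -NetAdapterName "10GbE-1" -MinimumBandwidthMode Weight -AllowManagementOS $false

    # Host virtual NICs for live migration and storage traffic, each with a guaranteed share
    Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "Converged"
    Add-VMNetworkAdapter -ManagementOS -Name "Storage" -SwitchName "Converged"
    Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 30
    Set-VMNetworkAdapter -ManagementOS -Name "Storage" -MinimumBandwidthWeight 40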

Finally, NIC teaming, which is built into Windows Server itself, will be covered in the next article. There we will bring together all the features described so far and look at how they affect the way you design your clusters and data centers going forward.

Source: http://virtualizationreview.com/articles/2013/03/06/hyper-v-dive-3-network.aspx

"Edit Recommendation"

Windows Server-Hyper-V PK VMware Performance Economical The Windows Server 2012 calendar in primary School of the 51CTO technology NIU man train Windows Server 2012 train (Phase III) Hyper-V Depth Evaluation fifth: New improvements in storage "executive editor: Xiao Yun TEL: (010) 68476606"

Related Article

Contact Us

The content source of this page is from Internet, which doesn't represent Alibaba Cloud's opinion; products and services mentioned on that page don't have any relationship with Alibaba Cloud. If the content of the page makes you feel confusing, please write us an email, we will handle the problem within 5 days after receiving your email.

If you find any instances of plagiarism from the community, please send an email to: info-contact@alibabacloud.com and provide relevant evidence. A staff member will contact you within 5 working days.

A Free Trial That Lets You Build Big!

Start building with 50+ products and up to 12 months usage for Elastic Compute Service

  • Sales Support

    1 on 1 presale consultation

  • After-Sales Support

    24/7 Technical Support 6 Free Tickets per Quarter Faster Response

  • Alibaba Cloud offers highly flexible support services tailored to meet your exact needs.