Definition and performance of NIC teaming


Hyper-V networking is heavily revised in the most recent release, and many of the new features affect network design. In previous articles, we described improvements in network virtualization and in virtual machine networking. This article introduces another network improvement, NIC teaming (network card aggregation), and briefly surveys some of the new Hyper-V options for data center design.

NIC teaming is also called load balancing and failover (LBFO), and it is now built into Windows Server 2012. Teaming is already in wide use, on roughly 75% of servers in small and medium-sized businesses, to provide aggregated bandwidth and fault tolerance. It has always been difficult for Microsoft to support, however, because each vendor implements teaming in its own way (and only for its own network adapters). When hunting for the root of a network problem, a common support request was to disable the NIC team in case it was a factor in the problem. The built-in teaming can aggregate up to 32 NICs from different vendors, even NICs of different speeds, and even wired with wireless interfaces, although the latter two combinations are not actually recommended.

A NIC team can be configured in switch-independent mode, which is appropriate if you have unmanaged switches or no access to change the switch configuration. If your sole purpose is redundancy, this mode is very useful: team two NICs and set one of them to standby mode, and when the cleaner accidentally rips out the network cable of the working card, the standby card takes over. A benefit of this mode is that each NIC can be connected to a different physical switch, providing redundancy at the switch level.
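As a minimal sketch, here is what that standby configuration looks like with the built-in NetLbfo cmdlets; the team and adapter names ("Team1", "NIC1", "NIC2") are placeholders for whatever your server reports.

    # Create a switch-independent team from two physical adapters.
    New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1", "NIC2" `
        -TeamingMode SwitchIndependent

    # Put the second adapter into standby; it carries traffic only
    # if the active adapter fails.
    Set-NetLbfoTeamMember -Name "NIC2" -AdministrativeMode Standby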

Switch-independent mode can be set to use either address-hash or Hyper-V port load balancing. These two options suit situations where you need load balancing and want all of the NICs in the team active. With address hashing, outbound traffic is balanced across all interfaces, but inbound traffic arrives through a single NIC. That is appropriate for web servers and media servers, where outbound traffic is large and inbound traffic is small. The other mode, Hyper-V port, is appropriate when you have more than one virtual machine on a host. Because this mode distributes inbound and outbound traffic across the team by assigning each virtual machine to a single NIC, no virtual machine can send or receive faster than the speed of a single NIC in the team.
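In PowerShell, the load-balancing mode is the -LoadBalancingAlgorithm parameter. As a hedged sketch (TransportPorts is the usual address-hash variant; the module can also hash on IP or MAC addresses only, and the names here are placeholders):

    # Address hashing: outbound traffic spread across members,
    # inbound traffic arrives on a single member.
    New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1", "NIC2" `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts

    # Or switch an existing team to Hyper-V port balancing instead.
    Set-NetLbfoTeam -Name "Team1" -LoadBalancingAlgorithm HyperVPort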

If you face other network situations, switch-dependent mode may be the more appropriate option. It has two versions: static, and the Link Aggregation Control Protocol (LACP, standardized as IEEE 802.1AX; during development it was known as IEEE 802.3ad). The static version cannot detect a cable that is connected incorrectly and only suits very static environments where the network configuration rarely changes. LACP negotiates the team with the switch automatically and should also recognize when the team is expanded or changed.
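A sketch of the two switch-dependent variants, assuming the connected switch ports are already configured as a matching static trunk or LACP port channel; the names are placeholders, and you would create only one of the two.

    # Static teaming: no negotiation with the switch.
    New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1", "NIC2" -TeamingMode Static

    # LACP teaming: membership is negotiated and verified with the switch.
    New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1", "NIC2" -TeamingMode Lacp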

Figure 1: With only a few mouse clicks, it is easy to build a NIC team from one or more network cards.

Configuring NIC teaming is simple: in the Local Server section of Server Manager there is a NIC Teaming link, and clicking it opens the NIC Teaming manager. Click Tasks, then New Team, select the adapters that should be members of the team, and then click Additional properties to choose the teaming mode and the load-balancing mode; if you like, you can also designate a standby adapter. Once a NIC is a team member, its network adapter properties list only the Microsoft Network Adapter Multiplexor Protocol as an enabled protocol, and the new team appears as an interface with configurable protocols.
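If you prefer to verify the result from PowerShell rather than the GUI, the NetLbfo module has matching Get- cmdlets; a minimal sketch:

    Get-NetLbfoTeam          # team name, teaming mode, load balancing, status
    Get-NetLbfoTeamMember    # per-adapter state (Active/Standby, failures)
    Get-NetLbfoTeamNic       # the team interface(s) exposed to the OS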

If you need virtual LANs (VLANs) to work with the team, you can create multiple team interfaces (up to 32) on the team, each responding to a specific VLAN identifier, as long as you set the switch port to trunk mode. In fact, you can build a team from a single NIC and then use team interfaces to isolate traffic by VLAN. If you have multiple virtual machines on a host, however, and want them to respond to different VLAN identifiers, do not use team interfaces; instead, establish VLAN access through the properties of the Hyper-V switch and each virtual machine's virtual network adapter.
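A sketch of VLAN-specific team interfaces, assuming the physical switch port is already in trunk mode; the team name and VLAN IDs are placeholders.

    # Each call adds a team interface that handles only the given VLAN.
    Add-NetLbfoTeamNic -Team "Team1" -VlanID 10
    Add-NetLbfoTeamNic -Team "Team1" -VlanID 20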

If you want to use a NIC team inside a virtual machine, perhaps because you are using SR-IOV adapters (see the first article in this series), make sure that every Hyper-V virtual machine port connected to the team is configured to allow MAC address spoofing, or that the AllowTeaming parameter is enabled, using either Set-VMNetworkAdapter in PowerShell or the GUI.
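For example, a minimal sketch of that guest-teaming setting, with "VM1" as a placeholder virtual machine name; both settings apply per virtual network adapter.

    # Allow the guest to team its virtual NICs: permit MAC address
    # spoofing and enable the AllowTeaming setting on the VM port.
    Set-VMNetworkAdapter -VMName "VM1" -MacAddressSpoofing On -AllowTeaming On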

As with almost every feature in Windows Server 2012, you can configure NIC teaming with PowerShell; a list of the cmdlets is available at http://technet.microsoft.com/en-us/library/jj130849.aspx.
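To see what is available on your own server, you can list the module's cmdlets directly:

    Get-Command -Module NetLbfo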

Figure 2: Once you have created NIC teams, it is easy to manage them through this simple user interface.

New options for Hyper-V data center design

In large data centers, Windows Server 2012 provides support for Data Center TCP (DCTCP), which works with switches that enable Explicit Congestion Notification (ECN, RFC 3168). This lets TCP react to congestion signaled by the switches before packets are lost, rather than detecting congestion only through packet loss as standard TCP does. As a result, the buffer space consumed in the switches drops significantly and throughput increases, especially in networks with heavy data traffic.
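As a hedged sketch, DCTCP is selected through the TCP setting profiles; modifying the DatacenterCustom profile is one way to opt connections in (which connections actually use that profile is governed by transport filters, so treat this as illustrative rather than a complete recipe).

    # Use the DCTCP congestion provider for the custom datacenter profile.
    Set-NetTCPSetting -SettingName DatacenterCustom -CongestionProvider DCTCP

    # Verify which provider each profile now uses.
    Get-NetTCPSetting | Select-Object SettingName, CongestionProvider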

If you are considering IP address management (IPAM), a new feature in Windows Server 2012 that manages IP addresses by consolidating your DHCP and DNS servers (letting you retire that Excel spreadsheet), and you use SCVMM 2012 to manage your virtualization infrastructure, SP1 provides a script (IPAMIntegration.ps1) that exports all the IP addresses SCVMM assigns to IPAM on a schedule.

When you design a cluster, the network improvements discussed here, together with the new and improved storage and scalability features covered elsewhere in this series, open up new options. Many new clusters are moving storage traffic and other network traffic from separate networks onto a single fabric, and from many Gigabit Ethernet connections to a smaller number of 10 Gigabit Ethernet (or faster) connections. This converged-fabric idea takes many forms, and different vendors take different approaches, but in general Hyper-V in Windows Server 2012 is ready for it, with network virtualization and software-defined networking (SDN), a powerful extensible virtual switch managed centrally through SCVMM 2012 SP1, and support for network virtualization gateways, SR-IOV, DVMQ, QoS, and built-in NIC teaming.

"Edit Recommendation"

Windows Server-Hyper-V PK VMware Performance Economical The Windows Server 2012 calendar in primary School of the 51CTO technology NIU man train Windows Server 2012 train (Phase III) Hyper-V Depth Evaluation fifth: storage aspects of the new improved Hyper-V depth evaluation first: Network Improvement "responsible editor: Xiao Yun TEL: (010) 68476606"

Related Article

Contact Us

The content source of this page is from Internet, which doesn't represent Alibaba Cloud's opinion; products and services mentioned on that page don't have any relationship with Alibaba Cloud. If the content of the page makes you feel confusing, please write us an email, we will handle the problem within 5 days after receiving your email.

If you find any instances of plagiarism from the community, please send an email to: info-contact@alibabacloud.com and provide relevant evidence. A staff member will contact you within 5 working days.

A Free Trial That Lets You Build Big!

Start building with 50+ products and up to 12 months usage for Elastic Compute Service

  • Sales Support

    1 on 1 presale consultation

  • After-Sales Support

    24/7 Technical Support 6 Free Tickets per Quarter Faster Response

  • Alibaba Cloud offers highly flexible support services tailored to meet your exact needs.