Building a new network: the three elements of network, storage, and virtualization

Although every part of a modern data center is critical, the network is the absolute foundation of all communication. That's why it's necessary to design and build a sound network from the start. After all, without a solid network, even the best servers and storage can do little.

To this end, we provide several design points and best practices to help you consolidate this communication base.

Considering the network core

The word "network" can be applied to multiple domains from LAN to San, WAN. All of these areas require a network core. Our discussion begins here.

The size of the organization determines the size and capability of the core switches. In most infrastructures, the makeup of the data center core differs greatly from that of the LAN core. Consider a hypothetical network in a building serving hundreds or thousands of users, with the data center centrally located: we usually find large switches at the center and aggregation switches at the edges.

Ideally, the core consists of two switching platforms, located in the same room as the server and storage infrastructure, that carry data from the edge over gigabit fiber. For a wiring closet with 100 switch ports, two gigabit fiber uplinks are enough to meet most business needs. If that is not enough, it is usually better to aggregate multiple gigabit links than to upgrade to a single 10-gigabit link. This will change as the price of 10-gigabit products falls, but for now it is much cheaper to aggregate several gigabit ports than to add 10-gigabit capability to both the core and the edge.
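
To make the trade-off concrete, here is a rough Python sketch of the oversubscription and cost arithmetic behind that advice. The port counts and per-port prices are illustrative assumptions, not vendor figures.

```python
def oversubscription(edge_ports: int, port_gbps: float,
                     uplinks: int, uplink_gbps: float) -> float:
    """Ratio of total edge capacity to total uplink capacity (lower is better)."""
    return (edge_ports * port_gbps) / (uplinks * uplink_gbps)

# 100 gigabit edge ports fed by two 1G fiber uplinks, as described above.
print(oversubscription(100, 1, 2, 1))   # 50:1 - adequate for typical office traffic

# If that is not enough, compare aggregating more 1G links with moving to 10G.
# Per-port prices are assumptions for illustration only (cost per link end).
COST_1G_PORT, COST_10G_PORT = 50, 600
extra_1g_links = 4                      # aggregate four more gigabit links
print("extra 1G aggregation:", extra_1g_links * 2 * COST_1G_PORT)  # 400
print("single 10G uplink:   ", 1 * 2 * COST_10G_PORT)              # 1200
```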

Deploying small modular switches at the network edge is also beneficial if you are rolling out VoIP, because PoE (Power over Ethernet) modules can sit in the same switch as non-PoE ports. Alternatively, you can deliver a trunked port to each user, allowing a single port to carry both VoIP and desktop traffic.

In the familiar star topology, the core has at least two links to each edge aggregation switch, and the server infrastructure connects either directly to the core over copper or through server aggregation switches in each rack. Which approach you take depends largely on the distance limits of copper cabling.

In any case, connecting servers to an aggregation switch in each rack and running only a few cables back to the core is simpler than plugging every cable into a handful of large switches. Server aggregation switches can also uplink to both redundant core switches, so communication with the servers is not lost when one core switch fails. If budget and planning permit, server aggregation switches are a good choice.
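
A rough tally makes the cabling argument clear; in the sketch below the rack and server counts are made-up assumptions.

```python
racks = 10
servers_per_rack = 20
links_per_server = 2     # redundant links per server
uplinks_per_rack = 4     # top-of-rack uplinks, split across the two core switches

# Every server cabled straight back to the core switches:
direct_to_core_runs = racks * servers_per_rack * links_per_server  # 400 long runs

# Top-of-rack aggregation: server cables stay inside the rack, and only the
# rack uplinks run back to the redundant core.
tor_uplink_runs = racks * uplinks_per_rack                          # 40 long runs

print(direct_to_core_runs, "cable runs to the core versus", tor_uplink_runs)
```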

Regardless of the physical design, the core switches need redundancy in every possible way: redundant power supplies, redundant interconnects, redundant routing protocols, and so on. Ideally each core switch should also have a redundant control module; if the budget doesn't allow it, you can get by without one.

The core switches handle almost every packet in the infrastructure, so they need to be stable. Take full advantage of HSRP (Hot Standby Router Protocol) or VRRP (Virtual Router Redundancy Protocol), which let two separate switches share a virtual IP address and MAC address that serves as the default gateway for a VLAN. If one core switch fails, those VLANs remain reachable.
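
The sketch below is a toy model of that first-hop redundancy idea: two core switches advertise the same virtual gateway, and the highest-priority switch that is still alive answers for it. It illustrates the concept only, not the HSRP or VRRP wire protocol, and the addresses and priorities are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class CoreSwitch:
    name: str
    priority: int        # higher priority wins the active/master role
    alive: bool = True

VIRTUAL_GATEWAY = "10.10.1.1"        # shared virtual IP used as the VLAN's default gateway
VIRTUAL_MAC = "00:00:5e:00:01:0a"    # VRRP-style virtual MAC (virtual router ID 10)

def active_gateway(switches):
    """Return the live switch with the highest priority, which owns the virtual IP."""
    live = [s for s in switches if s.alive]
    return max(live, key=lambda s: s.priority) if live else None

core_a = CoreSwitch("core-a", priority=110)
core_b = CoreSwitch("core-b", priority=100)

print(active_gateway([core_a, core_b]).name)  # core-a owns 10.10.1.1
core_a.alive = False                          # core-a fails...
print(active_gateway([core_a, core_b]).name)  # ...core-b takes over, VLAN stays reachable
```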

Finally, proper use of the Spanning Tree Protocol (STP) is essential for sound network operation. A full discussion of these technologies is beyond the scope of this article, but correctly configuring both STP and first-hop redundancy has a significant impact on the smooth operation and resilience of any Layer 3 switched network.

Considering storage

Once you've built the core of your network, you can turn to the storage network. While many technologies can connect a server to a storage array, the viable choices usually boil down to a familiar question: Fibre Channel or iSCSI?

Fibre Channel is generally faster than iSCSI and has lower latency, but for most applications that extra performance is unnecessary. Fibre Channel requires dedicated Fibre Channel switches, and each server needs an expensive FC HBA (Fibre Channel host bus adapter), preferably two for redundancy; iSCSI performs well over standard gigabit copper ports. Unless you are running a performance-critical application, such as a large database with many users, choosing iSCSI will not hurt performance and can save a lot of money.
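
For reference, here is a minimal sketch of attaching a Linux server to an iSCSI target using open-iscsi's iscsiadm, wrapped in Python. The portal address and target IQN are hypothetical and would need to match your array; the commands require appropriate privileges.

```python
import subprocess

PORTAL = "192.168.50.10"                          # assumed address on the storage VLAN
TARGET = "iqn.2001-04.com.example:storage.lun1"   # hypothetical target IQN

# Discover the targets offered by the array's portal.
subprocess.run(
    ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL],
    check=True,
)

# Log in to the discovered target; its LUNs then appear as ordinary block devices.
subprocess.run(
    ["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"],
    check=True,
)
```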

A Fibre Channel network is essentially separate from the rest of the network, connecting to the primary network only through a management link that carries no business traffic. An iSCSI network can be built on Ethernet switches that also carry normal network traffic, although iSCSI traffic should at least be confined to its own VLAN, and for performance reasons it may be worth building it on a dedicated set of Ethernet switches.

Select the switches used for the storage network carefully. Some manufacturers' switches perform well under normal load but disappoint under iSCSI traffic because of their internal architecture. In general, if a switch's manufacturer claims it is "specifically enhanced for iSCSI," it can probably handle an iSCSI load well.

In any case, your storage network should mirror the primary network and be as redundant as possible: redundant switches and redundant links to the servers (whether FC HBAs, standard Ethernet ports, or iSCSI accelerators). Servers do not take kindly to having their storage suddenly disappear, so redundancy here is at least as important as in the rest of the network.

Focus on Virtualization

Speaking of storage networks: if your organization wants to run enterprise-class virtualization, it will need some form of one. Virtualization hosts (in this article, the physical hosts that run the virtualization management software) need fast, centralized storage so that virtual servers can migrate reliably within a virtualization cluster. That storage can be reached over Fibre Channel, iSCSI, or, in most cases, even NFS, but the key is that every host server can access a reliable, centralized storage network.

Networking a virtualization host, however, is not the same as networking an ordinary server. Where a server might have two links each for the front end and the back end, a virtualization host may have six or more Ethernet interfaces. One reason is performance: virtualization hosts generate more traffic than regular servers for the simple reason that many virtual machines run on the same host. Another reason is redundancy: with so many virtual machines on one physical machine, you don't want a single failed NIC to suddenly kick a large number of virtual servers off the network.

To address this, virtualization hosts should have at least two dedicated front-end links, two back-end links, and, ideally, a separate management link. If the infrastructure will host services in a "semi-secure" zone such as a DMZ, there is good reason to add physical links to those networks, unless you are comfortable carrying their "semi-trusted" packets across the core network as a VLAN. Physical separation is still the safest approach and is less susceptible to human error; if you can add interfaces to a virtualization host to physically isolate that traffic, do so.

Each pair of interfaces should be bonded using some form of link aggregation, such as LACP (Link Aggregation Control Protocol, part of 802.3ad) or static link aggregation; your switch may support only one form. Bonding these links provides load balancing and failover protection at the link level, which should be treated as a hard requirement, and it is difficult to find a switch that does not support it.
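
As an example of what such a bond might look like on a Linux virtualization host, the sketch below generates an 802.3ad (LACP) bond definition for the two front-end interfaces using netplan. The interface names are assumptions, and the matching switch ports must also be configured for LACP.

```python
import yaml  # PyYAML

bond_config = {
    "network": {
        "version": 2,
        "ethernets": {
            "eno1": {},                     # the two physical front-end NICs
            "eno2": {},
        },
        "bonds": {
            "bond0": {
                "interfaces": ["eno1", "eno2"],
                "parameters": {
                    "mode": "802.3ad",            # LACP link aggregation
                    "lacp-rate": "fast",
                    "mii-monitor-interval": 100,  # detect a failed link quickly
                },
            },
        },
    },
}

with open("/etc/netplan/60-bond0.yaml", "w") as f:
    yaml.safe_dump(bond_config, f, sort_keys=False)
# Apply with: netplan apply
```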

In addition to bonding these links, the front-end links should be trunked with 802.1q. Carrying multiple VLANs over a single logical interface makes virtualized infrastructure easier to deploy and manage: you can place a virtual server on any VLAN, or combination of VLANs, available to the host without reworking the physical interface configuration, and you never have to add physical interfaces to the host just to reach a different VLAN.
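
A minimal sketch of that idea on a Linux host: 802.1q sub-interfaces are created on top of the bonded front-end link so several VLANs ride one logical interface. The bond name and VLAN IDs are assumptions, and the commands require appropriate privileges.

```python
import subprocess

BOND = "bond0"
VLAN_IDS = [10, 20, 30]    # hypothetical VLANs to trunk to the host

for vid in VLAN_IDS:
    # Create bond0.<vid>, tagged with the matching 802.1q VLAN ID.
    subprocess.run(
        ["ip", "link", "add", "link", BOND, "name", f"{BOND}.{vid}",
         "type", "vlan", "id", str(vid)],
        check=True,
    )
    subprocess.run(["ip", "link", "set", f"{BOND}.{vid}", "up"], check=True)

# Virtual machines can then be attached to any of these VLANs (or several of
# them) without adding physical interfaces to the host.
```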

There is usually no need to bond or trunk a virtualization host's storage links unless the host communicates with a large number of back-end storage arrays. In most cases only one storage array is involved, and bonding these interfaces does not necessarily improve performance for a single server.

However, if your workload involves heavy server-to-server communication, such as between front-end web servers and back-end database servers, it is worth dedicating a bonded link to that traffic. Trunking may not be necessary there, but bonding the links provides load balancing and redundancy between hosts.

Although a dedicated management interface is not an absolute requirement, it makes managing virtualization hosts easier, especially when changing network parameters. Modifying a link that also carries management traffic can easily cut off communication with the host.

If you add it all up, a busy virtualization host can easily need seven or more interfaces. This obviously increases the number of switch ports that virtualization requires, so it should be planned carefully. As 10G networking becomes more common and 10G interfaces get cheaper, per-host cabling requirements will shrink, and a host might need only a pair of trunked, bonded 10G interfaces plus a management interface, budget permitting.
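
The tally behind that figure, using the interface roles discussed above; the breakdown is an assumed plan shown only to make the count explicit.

```python
interface_plan = {
    "front-end (bonded, 802.1q trunk)": 2,
    "back-end storage (iSCSI/NFS)": 2,
    "server-to-server (bonded)": 2,
    "management": 1,
}
print(sum(interface_plan.values()), "interfaces")  # 7, before any dedicated DMZ links
```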
