Network, Storage, and Virtualization

Network core considerations
The term "network" covers everything from the LAN to the SAN and the WAN, but all of these depend on a sound network core, so that is where our discussion begins.
The size of the organization determines the size and capability of the core switches, and in most infrastructures the data center is structured quite differently from the LAN core. Consider a hypothetical network serving hundreds or thousands of users in a building, with the data center at its center: typically there are a few large switches at the core and aggregation switches at the edge.
Ideally, the core consists of two switching platforms that carry traffic from the edge over gigabit fiber back to the core (in the same room as the server and storage infrastructure). For an edge closet with 100 switch ports, two gigabit fiber uplinks are sufficient for most business needs. If that is not enough, the best approach is to aggregate multiple gigabit links rather than upgrade to 10 Gigabit Ethernet. This will change as the price of 10 Gigabit products continues to fall, but for now, aggregating several gigabit ports is much cheaper than adding 10 Gigabit ports at both the core and the edge.
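To make the trade-off concrete, here is a minimal Python sketch comparing the port cost of aggregating gigabit uplinks against a single 10 Gigabit uplink. The per-port prices are invented placeholders, not vendor figures; substitute your own quotes.

import math

def uplink_options(required_gbps, price_1g_port=100, price_10g_port=900):
    """Compare aggregated 1G uplinks with a single 10G uplink.

    Each link consumes one port at the core and one at the edge.
    Prices are placeholder assumptions for illustration only.
    """
    links_1g = math.ceil(required_gbps / 1.0)   # one gigabit per link
    return {
        "aggregated 1G": (links_1g, links_1g * 2 * price_1g_port),
        "single 10G":    (1, 2 * price_10g_port),
    }

for option, (links, cost) in uplink_options(4).items():
    print(f"{option}: {links} link(s), ~${cost} in port cost")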
If you plan to deploy VoIP, small modular switches at the network edge are also very useful, because they allow PoE (Power over Ethernet) modules to be installed in the same switch as non-PoE ports. Another option is to run a trunk port to each user, so that a single port carries both VoIP and desktop traffic.
In the familiar star topology, the core connects to each edge aggregation switch through at least two links. On the server side, the core can connect directly to the server infrastructure over copper, or to server aggregation switches in each rack. That decision usually comes down to the distance limits of copper cabling at your site.
Either way, connecting the servers in each rack to a top-of-rack aggregation switch and running only a few fiber links back to the core is much tidier than plugging every cable into a handful of large switches. Server aggregation switches also make it practical to connect each rack to both redundant core switches, so that communication with the servers is not lost when one core switch fails. If your budget and planning permit, server aggregation switches are a good choice.
Whatever the physical design, the core switches must be redundant in every way possible: redundant power supplies, redundant interconnects, and redundant routing protocols. Ideally they should also have redundant supervisor (control) modules, though you can get by without them if the budget does not allow it.
The core switches handle almost every packet in the infrastructure, so they must be stable. Make full use of HSRP (Hot Standby Router Protocol) or VRRP (Virtual Router Redundancy Protocol), which let the two switches effectively share an IP address and a MAC address that serve as the default gateway for a VLAN. If one core switch fails, those VLANs remain reachable.
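The election behind HSRP and VRRP can be illustrated with a short Python sketch: the highest-priority live router answers for the shared virtual IP, and the backup takes over when the master dies. The names, addresses, and priorities below are illustrative assumptions, not values from any particular deployment.

import ipaddress

VIRTUAL_IP = "10.0.0.1"   # the shared gateway address clients point at

routers = [
    {"name": "core-sw-1", "ip": "10.0.0.2", "priority": 200, "alive": True},
    {"name": "core-sw-2", "ip": "10.0.0.3", "priority": 100, "alive": True},
]

def elect_master(routers):
    """Highest priority wins; ties fall back to the highest interface IP."""
    live = [r for r in routers if r["alive"]]
    return max(live, key=lambda r: (r["priority"],
                                    ipaddress.ip_address(r["ip"])))

print(elect_master(routers)["name"], "holds", VIRTUAL_IP)

routers[0]["alive"] = False   # simulate losing a core switch
print(elect_master(routers)["name"], "holds", VIRTUAL_IP)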
Correct use of the Spanning Tree Protocol (STP) is also crucial to proper network operation. A full discussion of these technologies is beyond the scope of this article, but configuring both of them correctly has a major impact on the smooth operation and resilience of any layer-3 switched network.
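The root-bridge election at the heart of STP is simple to sketch: every bridge advertises an ID made of a configurable priority plus its MAC address, and the lowest ID wins. This is why core switches are normally given a low priority, so the tree stays rooted where the traffic flows. The names and MAC addresses below are made up for illustration.

bridges = [
    # (name, priority, MAC address); lower bridge ID wins the election
    ("core-sw-1", 4096,  "00:1a:2b:00:00:01"),
    ("core-sw-2", 8192,  "00:1a:2b:00:00:02"),
    ("edge-sw-1", 32768, "00:1a:2b:00:00:03"),  # factory-default priority
]

root = min(bridges, key=lambda b: (b[1], b[2]))
print("root bridge:", root[0])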
Storage considerations
Once the network core is in place, you can turn to the storage network. Many technologies are available, but for connecting servers to a storage array, the viable options usually come down to a familiar question: Fibre Channel or iSCSI?
Fibre Channel is generally faster than iSCSI and delivers lower latency, but for most applications that extra performance is unnecessary. Fibre Channel requires dedicated Fibre Channel switches, and every server needs an expensive FC HBA (Fibre Channel host bus adapter), preferably two for redundancy. iSCSI, by contrast, delivers very good performance over standard gigabit copper ports. Unless you have transaction-heavy applications, such as large databases serving many users, you can choose iSCSI with no practical impact on performance and save a great deal of money.
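A back-of-the-envelope calculation supports the claim that gigabit iSCSI performs well. The Python sketch below estimates usable payload throughput on one gigabit port using standard Ethernet, IP, and TCP header sizes; it deliberately ignores jumbo frames and iSCSI PDU header amortization, so treat the result as a ballpark figure.

LINK_BPS = 1_000_000_000          # gigabit line rate
MTU = 1500                        # standard Ethernet payload size
ETH_OVERHEAD = 14 + 4 + 8 + 12    # header + FCS + preamble + inter-frame gap
IP_TCP = 20 + 20                  # IPv4 + TCP headers

payload = MTU - IP_TCP            # TCP payload bytes per full-size frame
wire = MTU + ETH_OVERHEAD         # bytes actually consumed on the wire
efficiency = payload / wire
print(f"TCP payload efficiency: {efficiency:.1%}")
print(f"usable throughput: {LINK_BPS * efficiency / 8 / 1e6:.0f} MB/s per port")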
A Fibre Channel network is separate from the rest of the network: it essentially exists on its own, linked to the main network only by a management connection that carries no production traffic. An iSCSI network, on the other hand, can be built from Ethernet switches that also handle ordinary network traffic. The iSCSI network should at minimum be confined to its own VLAN, and it may warrant its own dedicated set of Ethernet switches to keep storage traffic separate for performance reasons.
Choose the switches for the storage network carefully. Some vendors' switches perform well under normal loads but disappoint under iSCSI traffic because of their internal architecture. As a rule, if a vendor advertises a switch as "enhanced for iSCSI," it is likely to handle iSCSI loads well.
In any case, your storage network should mirror the main network in achieving as much redundancy as possible: redundant switches and redundant links to the servers (whether FC HBAs, standard Ethernet ports, or iSCSI accelerators). Servers react badly when their storage suddenly disappears, so redundancy here is at least as important as in the main network.
Focus on Virtualization
Speaking of storage networks: if your organization plans to run enterprise-level virtualization, it will need some form of one. Virtualization hosts (in this article, the physical servers running the hypervisor) need fast, centralized storage so that virtual servers can be migrated reliably between members of the virtualization cluster. In most cases this storage can be presented over Fibre Channel, iSCSI, or even NFS; the key point is that every host server can reach a reliable, centralized storage network.
Virtualization hosts, however, are not networked like ordinary servers. Where a normal server might have two connections, front-end and back-end, a virtualization host may have six or more Ethernet interfaces. One reason is performance: a virtualization host generates far more traffic than an ordinary server, simply because many virtual machines run on the same hardware. The other is redundancy: with so many virtual machines on one physical box, you do not want a failed NIC to suddenly knock a large number of virtual servers offline.
To deal with this, a virtualization host should have at least two dedicated front-end links and two dedicated back-end links, and ideally a separate management link as well. If the infrastructure will host services on "semi-secure" networks such as a DMZ, there is also a case for adding physical links to those networks, unless you are comfortable carrying the semi-trusted packets across the core network as a VLAN. Physical separation is still the safest approach and the least vulnerable to human error; if you can add interfaces to a virtualization host to physically separate that traffic, do so.
Each pair of interfaces should be bonded with some form of link aggregation, such as LACP (Link Aggregation Control Protocol) or static 802.3ad. Either will do the job, although your switch may support only one. Bonding the links provides load balancing and link-level failover, so it should be considered a hard requirement; it is also difficult to find a switch that does not support it.
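To see why bonding gives both load balancing and failover, consider how a bonded pair actually distributes traffic: the switch or operating system hashes each flow's headers to pick one member link, so a given flow never reorders across links, and a dead link simply drops out of the pool. The Python sketch below simplifies the hash inputs; real devices hash combinations of MAC, IP, and port fields.

import zlib

links = ["eth0", "eth1"]   # the bonded member links

def pick_link(src_ip, dst_ip, active=links):
    """Deterministically map a flow to one active member link."""
    flow = f"{src_ip}->{dst_ip}".encode()
    return active[zlib.crc32(flow) % len(active)]

print(pick_link("10.0.0.5", "10.0.1.9"))            # normal operation
print(pick_link("10.0.0.5", "10.0.1.9", ["eth1"]))  # eth0 failed: failover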
In addition to being bonded, the front-end links should be trunked with 802.1Q, so that multiple VLANs ride on a single logical interface. This makes deploying and managing the virtualization equipment much easier: you can place a virtual server on any VLAN, or combination of VLANs, available to the host without touching the physical interface configuration, and you never need to add physical interfaces to a host just to reach a different VLAN.
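The mechanism that lets one trunk carry many VLANs is the 4-byte 802.1Q tag inserted after the source MAC address: a TPID of 0x8100 followed by the tag control field holding the VLAN ID. The sketch below builds such a frame by hand; the MAC addresses and VLAN number are arbitrary examples.

import struct

def tag_frame(dst_mac: bytes, src_mac: bytes, vlan_id: int,
              ethertype: int, payload: bytes) -> bytes:
    """Build an Ethernet frame with an 802.1Q tag (priority/DEI zero)."""
    assert 0 < vlan_id < 4095, "valid VLAN IDs are 1-4094"
    tci = vlan_id & 0x0FFF                  # PCP=0, DEI=0, 12-bit VLAN ID
    tag = struct.pack("!HH", 0x8100, tci)   # TPID + tag control information
    return dst_mac + src_mac + tag + struct.pack("!H", ethertype) + payload

frame = tag_frame(b"\xff" * 6, b"\x00\x1a\x2b\x00\x00\x01",
                  vlan_id=42, ethertype=0x0800, payload=b"\x00" * 46)
print(frame.hex())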
The host's storage links generally do not need to be bonded or trunked unless the virtualization server will talk to a large number of back-end storage arrays. In most cases only one array is involved, and bonding these interfaces is unlikely to improve the performance of a single server.
If your environment involves heavy server-to-server traffic, however, such as between front-end web servers and back-end database servers, it is worth carrying that traffic on a dedicated bonded link. Trunking may not be necessary there, but bonding the links provides load balancing and redundancy between hosts.
A dedicated management interface is not strictly required, but it makes virtualization hosts easier to manage, especially when changing network parameters: modifying the link that carries the management traffic itself can cut off communication with the host.
If you count them up, a busy virtualization host can easily have seven or more interfaces. That clearly increases the number of switch ports the virtualization infrastructure needs, so plan carefully. As 10 Gigabit networking becomes more common and 10G interfaces get cheaper, cabling requirements will shrink: a host could get by with a single bonded, trunked pair of 10G interfaces plus a management interface, budget permitting.
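A rough planning sketch, with an assumed host count, shows how quickly the ports add up and how a 10G design collapses them:

def switch_ports_needed(hosts, ports_per_host_1g=7, ports_per_host_10g=3):
    """Total access ports needed for an all-1G design versus a 10G design.

    The 10G figure assumes a bonded 10G pair plus one 1G management port;
    the host count and per-host figures are illustrative assumptions.
    """
    return {"all-1G design":      hosts * ports_per_host_1g,
            "bonded 10G design":  hosts * ports_per_host_10g}

for design, ports in switch_ports_needed(hosts=8).items():
    print(f"{design}: {ports} switch ports")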
