Discussions of cloud computing adoption tend to dwell on sharing and managing compute resources and improving the application experience; far less is said about how to optimize the network underneath the cloud. Yet when enterprises adopt cloud computing at scale, the strategies, implementation plans, and sweeping changes in application management they focus on all have an impact on the network. Dealing with that impact requires a new approach: the cloud network.
Today, network planners recognize that the "enterprise network" is really a combination of a "resource network" and an "access network." The former connects the IT components that make up the data center; the latter lets users reach the applications running in those data centers. The move to cloud computing changes both the resource and access networks, and adds a new category: the federation network, which connects one cloud to another.
Cloud computing performance is the sum of network connection performance and IT resource performance. In a cloud network, administrators therefore have two distinct tasks: build a pool of server and storage resources that behaves as much as possible like a single virtual resource with consistent performance, and connect users to that pool so that performance varies as little as possible wherever they happen to be located. Tackling the problems in that order is the easiest way to get both tasks done.
Cloud networks in the data center: solving loss and delay
In the cloud computing model, a resource pool is only uniform and effective if every resource in it offers the same performance and availability. That makes getting the network connection to the resource pool right the most important task.
Almost every cloud starts as a "data center cloud," built on the local networks inside those data centers. Two variables largely determine whether a data center network succeeds: loss and delay, the two "L"s. All network protocols retransmit damaged messages to prevent data loss; for storage protocols in particular, a lost packet risks corrupting a file or leaving the storage device in a bad operating state. The catch is that retransmitting lost packets takes time, and that latency is a special problem in data center and storage networks because it accumulates quickly over tens of millions of operations.
Cloud computing networks: a flatter network means fewer hops
Network experts know that delay accumulates in rough proportion to the number of devices a packet must transit from source to destination, and every switch that handles the packet adds both a risk of loss and a bit more latency. The best remedy is to reduce the number of hops between source and destination, which in practice means reducing the number of switches.
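As a rough illustration of how those per-hop effects add up, here is a minimal sketch in Python; the per-switch latency and loss figures are assumptions chosen for illustration, not measurements from the article:

    # How per-switch latency and loss accumulate with the number of hops
    # between source and destination. Per-hop figures are illustrative.

    def path_latency_us(hops: int, per_switch_latency_us: float = 5.0) -> float:
        """Switching latency grows roughly linearly with hop count."""
        return hops * per_switch_latency_us

    def delivery_probability(hops: int, per_switch_loss: float = 1e-5) -> float:
        """Each switch adds an independent chance of dropping the packet."""
        return (1.0 - per_switch_loss) ** hops

    for hops in (2, 4, 6):
        print(f"{hops} hops: {path_latency_us(hops):.0f} us switching latency, "
              f"delivery probability {delivery_probability(hops):.6f}")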
Most data center network planners recognize that the best network is as "flat" as possible, meaning it should not contain many layers of devices whose only job is to create connectivity. A few very large switches deliver better performance than many small ones, but concentrating the network on a handful of devices also concentrates the risk of failure. For those switches, the highest possible MTBF is critical, along with redundant components that support automatic failover during operation.
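The availability arithmetic behind that advice can be sketched as follows; the MTBF and repair-time figures are hypothetical, chosen only to show the effect of redundancy:

    # Why MTBF and redundant, auto-failover components matter when the
    # network is concentrated on a few large switches. Figures are hypothetical.

    def availability(mtbf_hours: float, mttr_hours: float) -> float:
        """Steady-state availability of a single device."""
        return mtbf_hours / (mtbf_hours + mttr_hours)

    def redundant_pair(a_single: float) -> float:
        """Two devices in parallel with automatic failover, assuming
        independent failures."""
        return 1.0 - (1.0 - a_single) ** 2

    a = availability(mtbf_hours=200_000, mttr_hours=4)
    print(f"single large switch: {a:.6f}")
    print(f"redundant pair:      {redundant_pair(a):.9f}")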
If you cannot move to a flat network: managing trunk and port connections in a layered cloud network
When multiple switch layers are unavoidable, the general traffic-management rule is that the trunk (inter-switch) connections should be ten times faster than the port connections they aggregate. For Gigabit Ethernet ports, that means 10 Gbps trunks. Obviously this ratio cannot be maintained for very fast switch-to-server or switch-to-storage port connections, and in those cases the flat topology created by a so-called "fabric" switch (InfiniBand is one example) provides better performance.
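A minimal sketch of that 10:1 sizing rule; the port speeds and the 100 Gbps cutoff below are assumptions for illustration, not figures from the article:

    # Trunk links should be roughly ten times faster than the access ports
    # they aggregate; beyond some speed that stops being practical and a
    # fabric switch is the better answer. The cutoff here is illustrative.

    def required_trunk_gbps(port_speed_gbps: float, ratio: float = 10.0) -> float:
        return port_speed_gbps * ratio

    for port_speed in (1, 10, 25):
        trunk = required_trunk_gbps(port_speed)
        verdict = "feasible trunk" if trunk <= 100 else "consider a fabric switch"
        print(f"{port_speed} GbE ports -> ~{trunk:.0f} Gbps trunk ({verdict})")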
Inter-data-center connections in the cloud network
Creating a cloud usually, though not always, means connecting multiple data centers to form a seamless resource pool. These connections must be as fast as possible, and they are unquestionably the key place to control packet loss.
Storage network protocols and other protocols that recover from packet errors may be necessary in any case, but no protocol compensates for an over-utilized trunk between data centers. Once utilization rises past about 50%, both loss and delay increase, and cloud performance suffers. This must be taken into account when managing traffic routes between cloud data centers.
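One way to see why roughly 50% is the danger line is a simple queueing approximation; the M/M/1 model used here is an assumption for illustration, not something the article specifies:

    # Mean time spent in an M/M/1 queue, relative to the unloaded service
    # time, is 1 / (1 - utilization); delay climbs steeply past ~50% load.

    def delay_factor(utilization: float) -> float:
        if not 0.0 <= utilization < 1.0:
            raise ValueError("utilization must be in [0, 1)")
        return 1.0 / (1.0 - utilization)

    for rho in (0.3, 0.5, 0.7, 0.9):
        print(f"utilization {rho:.0%}: delay ~{delay_factor(rho):.1f}x the unloaded value")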
Cloud network traffic management: start with the user connection
Connecting users to the cloud is a good place to start thinking about cloud traffic management. If a user enters through one particular data center but the resources they need sit in another, their traffic must cross the trunks between your data centers to reach them, and that can degrade performance quickly.
The best approach is to home users (their devices and branch networks) directly onto multiple data centers and to control the allocation of cloud resources so that the applications serving a user run in a data center directly connected to that user. This reserves the inter-data-center backbone for the data exchanged between application components.
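A minimal sketch of that placement policy; the branch names, data center names, and capacity numbers are hypothetical:

    # Run a user's application in a data center the user is directly homed to,
    # so inter-data-center trunks carry only component-to-component traffic.

    direct_homes = {
        "branch-london": ["dc-eu-1", "dc-eu-2"],
        "branch-tokyo": ["dc-ap-1"],
    }
    spare_capacity = {"dc-eu-1": 40, "dc-eu-2": 75, "dc-ap-1": 10}

    def place_application(branch: str) -> str:
        """Prefer the directly connected data center with the most spare capacity."""
        candidates = direct_homes.get(branch, [])
        if not candidates:
            raise ValueError(f"{branch} has no directly connected data center")
        return max(candidates, key=lambda dc: spare_capacity[dc])

    print(place_application("branch-london"))  # -> dc-eu-2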
Virtual component address management in the cloud network
The network connections that must support cloud computing are much like those that support traditional client/server computing, with one exception: when resources are elastic, there must be a mechanism for reaching an application or component after its resources have been allocated or moved.
It is best to ask your data center network and IT vendors about their policies for virtual-component address management in the cloud. Current solutions either manage the Domain Name System (DNS), resolving the URL of a logical application into an IP address, or use a form of Network Address Translation (NAT) similar to the "elastic IP address" used in Amazon EC2. Amazon's elastic IP addresses belong to the user's account rather than to any particular instance, and they can be used to map a public IP address to whichever instance in that account should receive the traffic.
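As an illustration of the elastic-IP approach, the sketch below uses the boto3 Python SDK; the region and instance IDs are placeholders, not values from the article:

    # Remapping an Amazon EC2 elastic IP between instances with boto3.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # The elastic IP is allocated to the account, not to any particular instance.
    eip = ec2.allocate_address(Domain="vpc")

    # Map the public address to the instance currently running the application.
    ec2.associate_address(AllocationId=eip["AllocationId"],
                          InstanceId="i-0123456789abcdef0")

    # If the application later moves, remap the same public IP to the new
    # instance; clients keep using one stable address.
    ec2.associate_address(AllocationId=eip["AllocationId"],
                          InstanceId="i-0fedcba9876543210",
                          AllowReassociation=True)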
Connecting public and private clouds
Almost inevitably, companies will end up running both private cloud networks and public cloud services, and they may need to blend the two into a resource pool that makes the public and private clouds look like one. To achieve this, you can make both clouds part of the same VPN, or rely on cloud management and cloud interconnection standards to federate the networks. Unfortunately, there are no settled federation standards yet, so check with your cloud providers, your internal network teams, and your IT vendors to make sure you have the connectivity and compatibility you need.