Discussions of how to implement cloud computing cover the many compute resources that must be managed and shared, and the applications that must be adapted, but they say very little about how to optimize the network for cloud computing. Yet when companies decide to adopt cloud computing at scale, the recommendations, application management, and major planning changes involved will all have an impact on the network. This calls for a new approach to cloud networking.
Today's network planners recognize that the "corporate network" is really a combination of a "resource network" and an "access network." The former connects IT components to create data centers, while the latter lets users reach the applications running in those data centers. The shift to cloud computing will change both the resource and access networks, and will add a new category: the federation network, a cloud-to-cloud network.
Cloud computing performance is the sum of network connectivity and IT resource performance. The network administrator's job in a cloud network is therefore twofold: build a pool of server and storage resources that behaves as much as possible like a single virtual resource with fixed performance, and connect that pool to users so that performance varies as little as possible regardless of where a user is located. Tackling these problems in a specific order is the easiest way to accomplish both tasks.
Cloud networking in the data center: solving loss and delay
In the cloud computing model, a resource pool is only effective if all of the resources in it exhibit the same performance and availability. That makes establishing the pool's network connectivity the most important task.
Almost all clouds are built by first creating a "data center cloud" over local network connections, then interconnecting those data centers. Two variables largely determine whether a data center network succeeds, known as the two L's: loss and delay. Network protocols protect data against loss by retransmitting corrupted or dropped messages, and this matters especially for storage protocols, where a loss risks corrupting a file or leaving a storage device in a bad operational state. The problem is that retransmitting a lost packet takes time, and latency is a particular concern in data center and storage networks because it accumulates quickly over tens of millions of operations.
Cloud computing networks: a flat network means fewer interfaces in the path
Network experts know that latency accumulates roughly in proportion to the number of devices a packet traverses from source to destination, and that every switch handling the packet adds both a risk of loss and an increment of delay. The best way to reduce both L's is to reduce the number of interfaces in the path from source to destination; in practice, that means reducing the number of switches.
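The relationship between hop count and the two L's can be sketched numerically. This is a minimal illustration, not a model of any real network: the per-hop latency and loss figures are hypothetical defaults chosen only to show that delay grows linearly with hops while loss compounds.

```python
# Illustrative sketch only: per-hop figures below are hypothetical,
# not measured values from any real switch.

def path_latency_us(hops, per_hop_latency_us=5.0):
    """End-to-end latency grows roughly linearly with the number of
    switch hops between source and destination."""
    return hops * per_hop_latency_us

def path_loss_probability(hops, per_hop_loss=1e-5):
    """Each switch in the path adds an independent chance of dropping
    the packet, so loss probability compounds per hop."""
    return 1 - (1 - per_hop_loss) ** hops

# Halving the number of switches halves latency and (nearly) halves loss.
for hops in (2, 4, 8):
    print(hops, path_latency_us(hops), path_loss_probability(hops))
```

Flattening the network from eight hops to two cuts the illustrative latency by a factor of four, which is exactly the argument for fewer switch layers.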
Most data center network planners recognize that the best network is as "flat" as possible, meaning it should not contain many layers of equipment just to create connectivity. A few very large switches will deliver better performance than many small ones, but concentrating switching in a few devices also raises the stakes of any single failure. For these switches, that means the highest possible mean time between failures (MTBF) is important, along with redundant components and support for automatic failover during operation.
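The trade-off between concentrating switching and tolerating failures can be quantified with the standard availability formula. The MTBF and repair-time figures below are hypothetical, used only to show why redundancy plus automatic failover compensates for putting more traffic on fewer devices.

```python
def availability(mtbf_hours, mttr_hours):
    """Steady-state availability = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Hypothetical figures: 100,000 h MTBF, 4 h mean time to repair.
single = availability(100_000, 4)

# With a redundant pair and automatic failover, service is lost only
# when both switches are down at once (assuming independent failures).
redundant_pair = 1 - (1 - single) ** 2
```

Under these assumed numbers a redundant pair pushes unavailability from roughly hours per year to seconds, which is why redundancy and in-service failover matter once switching is concentrated.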
If you cannot flatten the network: managing trunk and port connections in a hierarchical cloud network
When multiple switch layers are required, a common traffic management rule is to make the trunk (inter-switch) connections ten times faster than the port connections. For Gigabit Ethernet ports, you need 10G trunks. Obviously, this ratio becomes impossible to sustain for very fast switch-to-server or storage port connections; in those cases, the flat topology created by so-called "fabric" switches (InfiniBand is one example) will deliver better performance.
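The 10:1 rule of thumb described above can be written down directly; the helper below is a sketch of that rule, not a vendor sizing tool, and it also shows where the rule breaks down at high port speeds.

```python
def required_trunk_gbps(port_gbps, ratio=10):
    """Rule-of-thumb trunk sizing for hierarchical networks:
    trunk speed should be `ratio` times the port speed."""
    return port_gbps * ratio

# 1 GbE access ports -> 10G trunks, as the rule prescribes.
# But at 40 or 100 GbE server ports the rule demands 400G or 1000G
# trunks, which is where a flat fabric topology becomes the answer.
for port in (1, 10, 40, 100):
    print(f"{port}G ports -> {required_trunk_gbps(port)}G trunk")
```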
The cloud network between data centers
Building a cloud generally means connecting data centers to create a seamless pool of resources, though not always. These connections must be as fast as possible, and controlling packet loss on them will certainly be critical.
Storage networking protocols and other protocols that provide packet error recovery may be necessary in any case, but no protocol can compensate for high utilization of the backbone between data centers. When utilization exceeds 50%, both loss and latency increase, and cloud performance suffers with them. This must be considered when managing traffic routing between cloud data centers.
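The claim that latency climbs past 50% utilization follows from basic queueing theory. As a hedged illustration, the classic M/M/1 model (which assumes random arrivals and a single queue, a simplification of real backbone links) gives a mean queueing wait of ρ/(1−ρ) service times at utilization ρ:

```python
def mm1_wait_factor(utilization):
    """Mean queueing wait in an M/M/1 queue, expressed in multiples of
    one service time: rho / (1 - rho). This is a textbook idealization,
    not a model of any specific backbone link."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return utilization / (1 - utilization)

# At 50% utilization the wait equals one service time; at 90% it is
# nine service times, which is why backbones are kept lightly loaded.
for rho in (0.3, 0.5, 0.7, 0.9):
    print(f"{rho:.0%} utilized -> {mm1_wait_factor(rho):.2f}x wait")
```

The non-linear blow-up past 50% is the quantitative reason to manage inter-data-center routing so that no backbone link runs hot.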
Cloud network traffic management: start with the user connection
Connecting users to the cloud is a good place to start thinking about cloud traffic management. If users enter through a single data center, traffic bound for resources in other data centers must flow over the inter-data-center backbone, which quickly degrades performance.
The best approach is to connect users (devices and branch networks) directly to multiple data centers, and to control cloud resource allocation so that the applications serving a user run in a data center that user connects to directly. This reserves the data center backbone for data exchange between application components.
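The allocation policy described above can be sketched as a simple placement function. This is a hypothetical illustration (the data center names and capacity model are invented for the example), showing only the preference order: a directly attached data center first, the backbone-crossing fallback last.

```python
# Hypothetical sketch of the placement policy: prefer a data center the
# user is directly attached to, so traffic avoids the inter-DC backbone.

def place_app(user_dcs, dc_capacity):
    """user_dcs: data centers the user connects to directly.
    dc_capacity: free application slots per data center (mutated)."""
    # First choice: a directly attached data center with free capacity.
    for dc in user_dcs:
        if dc_capacity.get(dc, 0) > 0:
            dc_capacity[dc] -= 1
            return dc
    # Fallback: any data center; this traffic will cross the backbone.
    for dc, free in dc_capacity.items():
        if free > 0:
            dc_capacity[dc] -= 1
            return dc
    return None  # no capacity anywhere
```

A real cloud scheduler would weigh load, latency, and affinity, but the ordering shown here is the essence of keeping user traffic off the backbone.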
Virtual Components Address Management in Cloud Networks
Cloud computing requires network connections of roughly the same type as those that support traditional client/server computing, with one exception: because resource locations are flexible, there must be a mechanism for applications or components to reach a resource once it has been allocated.
For virtual component address management in the cloud, it is best to look at data center networking practice and the tactics IT vendors recommend. Current solutions are often based on managing the Domain Name System (DNS) to resolve logical application URLs into IP addresses, or on Network Address Translation (NAT), similar to the "Elastic IP addresses" used in Amazon EC2. Amazon's Elastic IP addresses are associated with the user's account rather than with a single instance, and they can be used to map a public IP address to any instance associated with that account.
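The account-owned address idea can be illustrated with a small mapping table. This is a conceptual sketch of the elastic-IP pattern, not Amazon's implementation; the class name and addresses are invented for the example.

```python
# Conceptual sketch of elastic-IP-style address management: a public
# address belongs to the account and can be re-pointed at any instance,
# so clients keep using one stable address while resources move.

class ElasticIPTable:
    def __init__(self):
        self._map = {}  # public_ip -> instance_id

    def associate(self, public_ip, instance_id):
        """Point the public address at an instance. Remapping after a
        failure or migration is just an update to this table."""
        self._map[public_ip] = instance_id

    def lookup(self, public_ip):
        """Resolve a public address to its current backing instance."""
        return self._map.get(public_ip)
```

DNS-based schemes achieve the same decoupling at the name level instead of the address level; both give applications a stable handle for a resource whose physical location can change.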
Connecting public and private clouds
Almost inevitably, companies will deploy both private cloud networks and public cloud services. This may require blending the two, creating a way to make the public and private clouds look like a single resource pool. To do this, make both clouds part of the same VPN, or use cloud management and cloud connectivity standards in a network federation. Unfortunately, there is no single, settled federation standard at this point, so it is worth checking with your cloud providers, your internal network team, and your IT vendors to make sure the conditions for a compatible interconnection are in place.
[Editor: Zhi Jing]