With the advent of the cloud computing era, data centers have been pushed to the crest of the wave: how to make data centers better support ever-growing cloud computing services has become the focus of data center operators.
To achieve this goal, we build larger data centers, buy more and better servers, and develop richer applications. So how should the data center network change, and in what way? The network must never become the Achilles' heel of the data center.
1 In the cloud computing era, the network must also evolve on demand
With the rise of cloud computing, the data center, as the core of the cloud, carries more and more business and applications. The richness of these businesses and applications has in turn set off a wave of data center construction.
Compared with the past, data centers in the cloud computing era differ greatly in demand and planning, and these differences have directly changed data center networks. The first change is in the data center traffic model, which imposes new requirements on the network.
According to forecasts, in the cloud computing era data center network traffic will shift from the early pattern of "80% north-south traffic" to one in which "70% is east-west traffic."
Fig. 1 Evolution of the data center network traffic model
Why is there such a big change?
In early data centers, the business consisted mainly of clients outside the data center accessing it, so traffic was predominantly north-south. Based on these business characteristics, and constrained by export bandwidth, networks were generally designed with layer-by-layer convergence at a fixed ratio: the total access-side bandwidth of the data center network is several times the bandwidth of the aggregation/core layer. Common convergence ratios range from 1:3 to 1:20.
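As an illustration of such a convergence ratio, the following sketch (with hypothetical port counts, not figures from the article) computes the oversubscription factor of a converging access layer:

```python
# Hypothetical example of a layer-by-layer converging design:
# 48 access ports at 10 Gbps share 4 uplinks at 40 Gbps.

def convergence_ratio(access_gbps: float, uplink_gbps: float) -> float:
    """Access:uplink oversubscription factor (3.0 means a 1:3 convergence)."""
    return access_gbps / uplink_gbps

ratio = convergence_ratio(48 * 10, 4 * 40)
print(f"convergence ratio 1:{ratio:g}")  # 480/160 gives a 1:3 convergence
```

A ratio near the 1:20 end would correspond to far fewer (or slower) uplinks for the same access capacity.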
With the advent of cloud computing, more and more businesses have reshaped the data center traffic model. Services such as search, parallel computing, and big data require large clusters of servers working together to complete a task, which makes server-to-server traffic very large.
In addition, the complexity of demand in the cloud computing era brings traffic uncertainty: we can no longer accurately predict server traffic, nor plan network bandwidth by design. At the same time, the dynamic virtual machine migration enabled by virtualization further complicates the traffic model and increases east-west traffic.
As the traffic model changes, the traditional convergent network no longer meets the data center's business requirements. We instead need to deploy a non-blocking network inside the data center, that is, one in which any two servers can communicate at line rate.
2 The fat-tree architecture keeps the data center network congestion-free
Currently, the technology universally recognized in the industry for building non-blocking networks is the fat-tree architecture (fat-tree, proposed by Charles E. Leiserson in the 1980s). Its basic idea is to use a large number of low-performance switches to build a large-scale non-blocking network.
2.1 Under the fat-tree structure, network bandwidth does not converge
In a traditional tree topology, bandwidth converges layer by layer, and the bandwidth at the root is far smaller than the sum of the bandwidths of all the leaves.
A fat-tree network, by contrast, is more like a real tree: the closer to the root, the thicker the branches. That is, from the leaves to the root, network bandwidth does not converge. This is the foundation on which the fat-tree architecture supports non-blocking networks.
Fig. 2 Comparison of the logical topologies of a fat-tree network and a traditional network
As shown in the figure above, to keep network bandwidth from converging, every node in the fat-tree network (except the root) must ensure that its uplink bandwidth equals its downlink bandwidth, and every node must be able to forward its access bandwidth at line rate.
The following figure shows an example of the physical structure of a 2-ary, 4-layer fat tree (2-ary: each leaf switch connects 2 terminals; 4-layer: the switches in the network are divided into 4 tiers). All the physical switches it uses are identical.
Fig. 3 A physical-topology instance of the fat-tree architecture
As the diagram shows, each leaf node is a single physical switch connected to 2 terminals; at the next layer up, each logical node consists of 2 physical switches; at the layer above that, each logical node consists of 4 physical switches; and the root node contains 8 physical switches.
In this way, every logical node has exactly equal downlink and uplink bandwidth, which ensures that bandwidth does not converge anywhere in the network.
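The switch counts above can be checked with a short sketch. It assumes, consistent with the figure, that every switch has 4 ports and that layer 0 holds the leaves:

```python
# Sketch of the 2-ary, 4-layer fat tree described above, built from
# identical 4-port switches (an assumption consistent with the figure).

ARITY = 2             # each logical node has 2 children
LAYERS = 4            # leaves at layer 0, root at layer 3
PORTS_PER_SWITCH = 4  # all physical switches are identical

total_switches = 0
for layer in range(LAYERS):
    logical_nodes = ARITY ** (LAYERS - 1 - layer)  # 8, 4, 2, 1
    switches_per_node = ARITY ** layer             # 1, 2, 4, 8
    downlinks = ARITY ** (layer + 1)               # links below: 2, 4, 8, 16
    uplinks = downlinks                            # non-converging: up == down
    # each logical node must supply enough physical ports for both directions
    assert downlinks + uplinks == switches_per_node * PORTS_PER_SWITCH
    total_switches += logical_nodes * switches_per_node
    print(f"layer {layer}: {logical_nodes} logical nodes x "
          f"{switches_per_node} switches, {downlinks} downlinks each")
```

Each layer contributes 8 physical switches, so the whole 16-terminal fabric uses 32 identical switches, with the root's 16 uplink ports left free for expansion.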
We can also see that, at the root node, half of the bandwidth is not used for downlink access. This is the uplink bandwidth the fat tree reserves at the root to support elastic expansion: by extending the fat tree upward at the root, the network can be scaled out elastically.
2.2 To fit data center applications, the fat tree needs tailoring
In the fat-tree architecture, the root node reserves uplink bandwidth equal to its downlink access capacity in order to allow elastic expansion. In actual data center construction, the scale of the whole network can be foreseen and planned in advance (for example, room space is limited, so the network cannot expand indefinitely), so the root generally does not need to reserve such a large uplink bandwidth.
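As a hypothetical illustration of this tailoring, reusing the 2-ary, 4-layer numbers above: if the final scale is known, the root's reserved uplink ports can be repurposed as extra downlinks, so the root needs half as many switches:

```python
# Hypothetical sketch (numbers follow the 2-ary, 4-layer example above):
# trimming the root's reserved uplinks halves the root switch count.

ROOT_DOWNLINKS = 16   # downlink bandwidth the root must provide, in unit links
PORTS_PER_SWITCH = 4  # identical 4-port switches, as before

# Untailored root: every downlink port is matched by a reserved uplink port.
with_reserve = (ROOT_DOWNLINKS * 2) // PORTS_PER_SWITCH
# Tailored root: all ports face downward, no expansion headroom kept.
without_reserve = ROOT_DOWNLINKS // PORTS_PER_SWITCH

print(with_reserve, without_reserve)  # 8 root switches vs 4 after tailoring
```

The trade-off is explicit: the tailored root is cheaper but gives up the ability to grow the fabric by adding another layer above it.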
Figure 4 Reducing the number of layers in a fat tree network
(Responsible editor: Lu Guang)