Research on Network Architecture Features of Data Centers


The network is a core component of the data center: it is the bridge that connects large numbers of servers for large-scale distributed computing, so its importance is self-evident. As data center traffic shifts from traditional "north-south" traffic to "east-west" traffic, bandwidth and performance demands rise sharply, and virtualization adds further requirements; together these pressures are forcing the network to change, which has made data center network architecture a hot research area. The traditional three-layer architecture (access, aggregation, and core) is no longer suited to the new generation of data centers. This article introduces several emerging network architecture technologies so that you can see the most cutting-edge research in this field. Because researchers approach the problems of the data center network from different angles, many architectures have been proposed — Fat-Tree, VL2, Monsoon, PortLand, Helios, c-Through, OSA, Flyway, WDCN, and others — and competition in this design space is fierce. These terms are probably unfamiliar to most readers, so this article explains several mainstream design ideas in detail and outlines the trends behind the latest architecture concepts.

Fat-Tree Switching Network

The Fat-Tree network architecture is a classic design. It keeps the traditional three layers — edge, aggregation, and core — and forms a tree topology, but it differs from a conventional three-layer tree. The edge and aggregation switches are partitioned into clusters (pods); within a cluster, every edge switch connects to every aggregation switch, forming a complete bipartite graph. Each aggregation switch connects to a fixed subset of the core switches, so that every cluster is connected to every core switch. The architecture is called a "fat tree" because link bandwidth grows toward the core, like a real tree whose branches get thicker toward the trunk: from the leaves to the root, the network bandwidth does not converge (oversubscribe), which is the basis for Fat-Tree's support of non-blocking networking. To achieve this non-convergence, every node except the root must provide uplink bandwidth equal to its downlink bandwidth, and every node must be able to forward at the line rate of its access bandwidth. At the forwarding layer, layer-2 switching is used from the edge to the aggregation layer, while the core forwards at layer 3; traffic between clusters must therefore pass through layer-3 forwarding at the core.
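The text above does not give concrete switch counts, so as an illustration the sketch below assumes the commonly cited k-ary Fat-Tree parameterization: identical k-port switches produce k pods (clusters), (k/2)² core switches, and k³/4 hosts, with each pod holding k/2 edge and k/2 aggregation switches.

```python
def fat_tree_size(k):
    """Topology counts for a k-ary Fat-Tree built from identical k-port switches.

    Each of the k pods has k/2 edge and k/2 aggregation switches; each edge
    switch serves k/2 hosts; (k/2)^2 core switches connect the pods.
    """
    assert k % 2 == 0, "k must be even"
    half = k // 2
    return {
        "pods": k,
        "core_switches": half * half,
        "edge_switches": k * half,
        "agg_switches": k * half,
        "hosts": k * half * half,  # = k^3 / 4
    }

print(fat_tree_size(4))
# {'pods': 4, 'core_switches': 4, 'edge_switches': 8, 'agg_switches': 8, 'hosts': 16}
print(fat_tree_size(48)["hosts"])  # 27648 hosts from 48-port switches
```

Note how the host count grows cubically in the port count k, which is why Fat-Tree scales to large data centers using only commodity switches.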

VL2 Switching Network

The VL2 network architecture is designed to improve data center agility, that is, the ability to allocate any number of servers' compute and storage resources to any upper-layer cloud service or application. VL2 connects all servers through a virtual layer-2 Ethernet, so that every server can be assigned to any upper-layer service: all servers sit in a single shared server pool, which eliminates resource fragmentation. The VL2 topology is especially well suited to Valiant Load Balancing (VLB): by indirectly forwarding traffic through an intermediate switch at the top of the network, it can guarantee bandwidth for any traffic matrix that conforms to the hose model. Routing is also simple and flexible: a flow takes a random path up to a random intermediate switch, then a random path down to the destination access switch. VL2 uses 10 Gbit/s ports between devices at every level to reduce cabling overhead; as devices with higher forwarding rates appear, 40 Gbit/s interconnects are gradually becoming mainstream. In a VL2 network, a group of servers connects to one access (top-of-rack) switch, each access switch connects to two aggregation switches, and each aggregation switch connects to all intermediate (core) switches, forming a complete bipartite graph that provides ample network capacity.
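The random two-stage routing described above can be sketched in a few lines. The switch names and the tiny topology below are hypothetical, invented for illustration; the point is only the VLB pattern of bouncing each flow off a randomly chosen intermediate switch.

```python
import random

# Hypothetical topology: each top-of-rack (access) switch uplinks to two
# aggregation switches, and every aggregation switch reaches every core
# (intermediate) switch, so any core is a valid bounce point.
CORE = ["c1", "c2", "c3", "c4"]
AGG = {"tor1": ["a1", "a2"], "tor2": ["a3", "a4"]}

def vlb_path(src_tor, dst_tor, rng=random):
    """Valiant Load Balancing: random path up to a random intermediate
    core switch, then a random path down to the destination ToR."""
    up_agg = rng.choice(AGG[src_tor])     # random uplink aggregation switch
    core = rng.choice(CORE)               # random intermediate (bounce) switch
    down_agg = rng.choice(AGG[dst_tor])   # random downlink aggregation switch
    return [src_tor, up_agg, core, down_agg, dst_tor]

print(vlb_path("tor1", "tor2"))  # e.g. ['tor1', 'a2', 'c3', 'a4', 'tor2']
```

Spreading flows uniformly over intermediate switches in this way is what lets VL2 honor the hose-model bandwidth guarantee without per-flow traffic engineering.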

Helios Container Network

Helios is a two-layer multi-rooted tree that divides all servers into clusters. The servers in each cluster connect to an access switch, and each access switch connects both to electrical packet switches and to optical circuit switches at the top layer. This topology lets traffic between servers use either packet links or fiber links. The Helios management program dynamically reconfigures network resources so that high-rate data streams are carried over the fiber links while low-rate streams remain on the packet links, making optimal use of the network. Helios is intended for interconnection between containers: it combines the characteristics of optical switching and electrical switching to build a hybrid electrical/optical container-interconnect structure. This hybrid design means Helios requires fewer connections, so its construction cost is lower.
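The manager's flow-placement decision can be sketched as a simple classifier. The threshold value and flow identifiers below are invented for illustration (the text does not specify how Helios measures or thresholds flow rates); the sketch only shows the idea of steering high-rate flows onto circuit links and leaving low-rate flows on packet links.

```python
# Assumed cutoff between "elephant" and "mouse" flows, in Mbit/s;
# this number is illustrative, not taken from the Helios design.
ELEPHANT_THRESHOLD_MBPS = 100.0

def assign_links(flows, threshold=ELEPHANT_THRESHOLD_MBPS):
    """Map each flow id to the link type it should use.

    flows: dict of flow_id -> measured rate in Mbit/s.
    High-rate flows go to the optical circuit links; the rest stay on
    the electrical packet links.
    """
    return {
        flow_id: ("optical" if rate >= threshold else "packet")
        for flow_id, rate in flows.items()
    }

flows = {"f1": 950.0, "f2": 3.2, "f3": 410.0}
print(assign_links(flows))
# {'f1': 'optical', 'f2': 'packet', 'f3': 'optical'}
```

In a real system this decision would be re-evaluated periodically, since reconfiguring an optical circuit switch takes time and only pays off for flows that stay large.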

DCell Network

A DCell network is an architecture that recursively defines the network from mini-switches and servers with multiple network ports. In a DCell, each server connects through bidirectional links both to a small switch and to servers in other cells, and a higher-level DCell is built recursively from lower-level DCells. When a higher-level DCell is formed from lower-level DCells, the number of lower-level DCells used must equal the number of servers in each lower-level DCell plus one, and the interconnection rule is that every server in each lower-level DCell connects to one server in every other lower-level DCell. The advantage of DCell is its excellent scalability: it can accommodate a huge number of servers even when the switch port count and recursion depth are modest. Because servers have multiple ports, they can participate in routing and use multipath transmission to increase aggregate network capacity, which supports many-to-many traffic well. However, each level of expansion requires adding a network interface to every server, which raises server cost and makes the network more complex to scale. DCell is also not perfect: it suffers from a load-imbalance problem in which the low-level links carry a disproportionate share of the transmission work, and, again, more interfaces must be configured on each server to grow the network.
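The scalability claim follows directly from the recursive construction rule stated above: if a level-(k-1) DCell holds t servers, a level-k DCell is built from t+1 copies of it, giving t·(t+1) servers. A minimal sketch of that recurrence:

```python
def dcell_servers(n, k):
    """Number of servers in a DCell_k built from n-port mini-switches.

    DCell_0 is n servers on one mini-switch. A DCell_k is assembled from
    (t_{k-1} + 1) copies of DCell_{k-1} (one more than the servers per
    copy), so t_k = t_{k-1} * (t_{k-1} + 1): roughly doubly exponential.
    """
    t = n  # servers in DCell_0
    for _ in range(k):
        t = t * (t + 1)
    return t

print(dcell_servers(4, 1))  # 20
print(dcell_servers(4, 2))  # 420
print(dcell_servers(6, 3))  # 3263442
```

With only 6-port switches and three levels of recursion, the structure already exceeds three million servers, which is why DCell can grow so large while the port count and recursion depth stay small.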

In addition to the four network architectures described above, there are more than a dozen mainstream architecture proposals, such as the network-centric architectures Monsoon, Jellyfish, OSA, WDCN, ElasticTree, PortLand, and SecondNet, and the server-centric architectures BCube, FiConn, the snowflake structure, CayleyDC, MDCube, and others. Each is suited to different, specific scenarios and has its own characteristics. Data center network architecture has become a hot topic in recent years: academia, international standards organizations, network equipment vendors, and cloud computing providers have all paid close attention to it, which is why so many architecture proposals have emerged. Research on the network architecture of data centers can be expected to remain a focus for the next several years.
