With the rise of new businesses such as cloud computing, the mobile Internet, and Web 2.0, traditional data centers can no longer meet demand, and the industry is setting off a new wave of construction centered on next-generation cloud computing data centers. The network is one of the key components of the cloud data center, and the network architecture of the cloud data center must have five new technical features: 100G Ethernet technology, surge caching, network virtualization, unified switching, and green energy-saving technology.
100G Ethernet Technology
Bandwidth pressure is the core problem of the cloud data center network. In data centers, high-bandwidth applications such as video on demand, 10G FCoE, and high-performance computing already require 10 Gigabit Ethernet interfaces. Over the past two years of rapid data center growth, as 10 Gigabit Ethernet has become widespread on servers and access devices, demand for 100G Ethernet at the aggregation and core layers of the data center has grown ever stronger. In 2010 the IEEE officially ratified the 40G/100G standard (IEEE 802.3ba), paving the way for 100G high-speed Ethernet applications. The arrival of the 100G era means more than a port bandwidth upgrade; beyond a data rate ten times that of 10G, it more importantly brings a great increase and enrichment of functionality. Next-generation cloud data centers will deploy 100G Ethernet at the aggregation or core layer to meet application requirements, bringing the cloud data center into the 100G era.
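As a rough illustration of the rate jump from 10G to 100G, the serialization delay of a single frame drops tenfold. The helper and frame size below are illustrative, not taken from the text:

```python
FRAME_BYTES = 1500  # a typical MTU-sized Ethernet frame (illustrative)

def serialization_delay_ns(rate_gbps: float, frame_bytes: int = FRAME_BYTES) -> float:
    """Time to put one frame on the wire, in nanoseconds.

    bits / (Gbit/s) conveniently yields nanoseconds directly.
    """
    return frame_bytes * 8 / rate_gbps

# A 1500-byte frame occupies the wire for 1200 ns at 10G
# but only 120 ns at 100G.
```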
Surge Cache
A surge (generally called burst traffic, or simply a burst) is instantaneous high-speed traffic. The situation is particularly evident in data centers where Internet companies host search services. A search request is typically handled by a single server, which, through a series of algorithms, fans the query out to the business servers in the data center that hold the search index, usually thousands of them; those thousands of servers then send their results back to the requesting server almost simultaneously. This many-to-one traffic model is typical, so surges occur frequently in such data centers. The traditional data center network uses a per-port caching mechanism: bursts on all data streams are buffered at the egress port, so the buffer must be sized for the largest possible burst on the network.
The application characteristics of the cloud data center therefore demand large caches, so cloud data center network equipment generally must have a large buffer (more than 1 GB). Per-port egress caching alone is no longer used; instead, ingress-side caching is combined with virtual output queue (VOQ) technology: a large buffer is configured on the ingress side of each port, a small buffer on the egress side, and a dedicated traffic manager (TM) handles internal traffic management. A credit mechanism controls bursts from ingress ports toward each egress port: every egress port allocates a number of credits to each ingress port. While the egress port forwards data onto the line, if traffic toward it reaches or exceeds the configured burst threshold, the egress port stops allocating credits to the ingress ports, so incoming data is parked in the large local buffer on the ingress side. When the egress queue drops back below the threshold, credits are again granted to the ingress ports, and the cached data is forwarded.
This surge caching technique automatically absorbs instantaneous congestion pressure from any direction, and is a key technology of the cloud data center network.
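The credit mechanism described above can be sketched as a toy model. Class names, the threshold, and the queue sizes are illustrative assumptions, not any vendor's implementation:

```python
from collections import deque

class EgressPort:
    """Toy egress port: grants credits only while its queue is below
    the burst threshold, modeling the credit-based backpressure."""
    def __init__(self, burst_threshold: int):
        self.queue = deque()
        self.burst_threshold = burst_threshold

    def grant_credit(self) -> bool:
        # Stop granting credits once the egress queue hits the threshold.
        return len(self.queue) < self.burst_threshold

    def drain(self, n: int) -> None:
        # Forward up to n packets onto the line, freeing queue space.
        for _ in range(min(n, len(self.queue))):
            self.queue.popleft()

class IngressPort:
    """Toy ingress port with a large local buffer organized as a VOQ."""
    def __init__(self):
        self.voq = deque()

    def offer(self, packet, egress: EgressPort) -> None:
        if egress.grant_credit():
            egress.queue.append(packet)   # credit granted: forward now
        else:
            self.voq.append(packet)       # no credit: park the burst locally

    def retry(self, egress: EgressPort) -> None:
        # Once credits flow again, drain the local buffer toward egress.
        while self.voq and egress.grant_credit():
            egress.queue.append(self.voq.popleft())

# A 10-packet burst against a threshold of 4: the first 4 packets reach
# the egress queue, the remaining 6 wait in the ingress VOQ; after the
# egress drains, credits resume and more cached packets move forward.
egress = EgressPort(burst_threshold=4)
ingress = IngressPort()
for i in range(10):
    ingress.offer(i, egress)
egress.drain(4)
ingress.retry(egress)
```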
Network Virtualization
Because of many factors, such as multi-tier structure, security zones, security levels, policy deployment, routing control, VLAN partitioning, Layer 2 loops, and redundancy design, the traditional data center network architecture is complicated, which makes maintaining and managing the data center network difficult. Cloud data centers need to manage far more network devices, so virtualization technology must be introduced for device management. Virtualization lets users "horizontally integrate" multiple interconnected devices into a single logical device that is managed and used as one. Users can also split a single device into multiple virtual devices that are completely independent and can be managed separately. This greatly simplifies cloud data center network management.
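The two directions of device virtualization just described, merging several devices into one logical device and partitioning one device into several, can be sketched at the management-plane level. This is a toy model under assumed names; real stacking and partitioning protocols are far more involved:

```python
class Switch:
    """Toy management-plane view of a switch: a name and a port count."""
    def __init__(self, name: str, ports: int):
        self.name, self.ports = name, ports

def stack(switches: list) -> Switch:
    # "Horizontal integration": several physical boxes appear to the
    # operator as one logical device with the combined port count.
    return Switch("+".join(s.name for s in switches),
                  sum(s.ports for s in switches))

def partition(switch: Switch, shares: list) -> list:
    # Split one device into fully independent virtual devices.
    if sum(shares) > switch.ports:
        raise ValueError("shares exceed physical port count")
    return [Switch(f"{switch.name}/v{i}", n) for i, n in enumerate(shares)]

# Two 48-port switches managed as a single 96-port logical device,
# then carved into two independent virtual devices.
logical = stack([Switch("A", 48), Switch("B", 48)])
virtuals = partition(logical, [32, 64])
```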
Unified Switching
The cloud data center places greater demands on the network than a conventional application data center, so the performance considerations for the network platform differ from traditional practice. The cloud data center network needs a non-blocking, full-wire-speed switching architecture: "unified switching."
Wire speed means the actual transmission rate of a line reaches its nominal value; for example, a Gigabit port can actually sustain Gigabit throughput. Full wire speed means all ports of a switch can forward at wire speed simultaneously, which reflects the switch's overall performance. Non-blocking full wire speed means packets of every size are switched at full wire speed on all ports at once, without loss or added delay. The architecture that realizes non-blocking full wire speed is the unified switching technology.
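By one common convention (counting both directions of full-duplex traffic, which is an assumption of this sketch), the fabric capacity required for non-blocking full-wire-speed forwarding is simply ports × rate × 2:

```python
def nonblocking_capacity_gbps(ports: int, rate_gbps: float) -> float:
    """Minimum switching capacity for every port to forward at wire
    speed simultaneously, counting both directions (full duplex)."""
    return ports * rate_gbps * 2

# e.g. a 48-port 10G line card needs 960 Gbit/s of fabric capacity
# to be non-blocking under this convention.
```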
The switching architecture of the traditional data center network is built on a high-performance crossbar fabric, in which data takes fixed internal routes: the path of a given data flow is determined by a hash algorithm. In certain cases, blocking can therefore still occur between different traffic classes. With growing applications, expanding business scale, and rapid growth of actual bandwidth consumption in recent years, traditional switching architectures have proven hard-pressed to meet the performance requirements of Internet data centers. The unified switching architecture instead uses dynamic path selection: packets received by the business line card are segmented into fixed-length cells, and each cell carries dynamic routing information. When a path becomes unavailable, or a fabric board or line card fails, the routing information is changed dynamically and the hardware automatically switches to a healthy path. Through the unified switching architecture, the network truly achieves non-blocking full wire speed.
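The cell-based dynamic path selection described above can be sketched as follows. The cell size, fabric path names, and round-robin spraying policy are illustrative assumptions:

```python
CELL_BYTES = 64  # illustrative fixed cell length

def segment(packet: bytes, cell_bytes: int = CELL_BYTES) -> list:
    """Chop a packet into fixed-length cells; the last cell is padded."""
    return [packet[off:off + cell_bytes].ljust(cell_bytes, b"\x00")
            for off in range(0, len(packet), cell_bytes)]

def spray(cells: list, paths: list, healthy: dict) -> dict:
    """Pick a fabric path for each cell, skipping failed boards/links."""
    live = [p for p in paths if healthy[p]]
    if not live:
        raise RuntimeError("no fabric path available")
    # Round-robin over the healthy paths; a failed path is simply never
    # chosen, which models the automatic hardware switchover.
    return {i: live[i % len(live)] for i in range(len(cells))}

# A 200-byte packet becomes four 64-byte cells, sprayed across the two
# healthy fabric paths while the failed one is avoided.
cells = segment(b"\xab" * 200)
plan = spray(cells, ["fabric0", "fabric1", "fabric2"],
             {"fabric0": True, "fabric1": False, "fabric2": True})
```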
Green Energy-Saving Technology
The cloud data center network is one of the main consumers of energy in the data center; only by reducing the network's energy consumption can the efficiency of the cloud data center improve. The cloud data center network must therefore adopt green energy-saving technology. A network device's power draw is the total consumption of all the components inside it, so selecting low-power components is the source of energy saving; the benefit is not merely the reduction that comes from summing lower component power, but also a lower thermal design cost. The power system of network equipment should adopt fully flexible, intelligently managed power supplies that adjust power distribution automatically. The cloud data center should use network equipment that carries green energy-saving certification.
In 2013, in the new round of centralized procurement tests of data center network equipment by China Telecom, China Mobile, and other carriers, equipment power consumption became an important test index, and energy-efficient network equipment will win out. Future cloud data centers will adopt low-power network equipment across the board; only a green, energy-saving network is an efficient network.
Although the network part accounts for only about 15% of the overall data center, more and more work depends on the network, which has driven unprecedented development in network technology and given the cloud data center network many characteristics that earlier data centers lacked.
Under the ingress caching mechanism, data congesting toward any egress port can be cached locally at the ingress, so cache capacity grows linearly with the number of ports. This linearly scaling cache capacity can absorb the omnidirectional traffic surges of cloud computing. For equipment vendors and the many carrier users, a 100G network should also incorporate new technical features such as IPv6, network security, and network virtualization. In other words, 100G, as a key link in the next-generation data center network architecture, will drive a technological revolution inside the IDC, replacing the old architecture with a greener, more efficient one.
Inside the IDC network there is a large amount of burst traffic, so carriers need at least 100G non-blocking switching together with a large-capacity caching platform. The ultimate goal of cloud-enabling the IDC is to improve equipment utilization and business flexibility, and the IDC's flexibility and scalability must be realized through the IDC network. Unified switching of the network, including technologies such as FCoE and CEE, exists to better support the convergence of heterogeneous computing, storage, and other resources.
The switching capacity of the network and its ability to absorb surges are the two main performance concerns of the cloud computing (or large data center) network.
Above wire speed: for example, a Gigabit port whose throughput exceeds 1000 Mbit/s (generally by only a small margin). When such a switch is connected to standard wire-speed equipment, packets are easily lost, and the standard switch may even be "blocked." In the data center environment, the rare cases that do require the network to run above wire speed place special requirements on the switch.