Promoting the new generation of data centers


Innovation and change

In the past, all roads led through the data center.

The enterprise data center, as the provisioning point for network, storage, and computing services, sat at the center of mission-critical operations for a simple reason: the cost of delivering large-scale, centralized services to tens of thousands of users was so high that it could only be managed by physically consolidating the required resources.

Now all of this has changed. As demand grows for operational efficiency, sustainable business operations, dynamic business diversification, and cost competitiveness, the role of the data center is evolving rapidly, even if it still resembles its former role on the surface.

The data center will undergo three stages of evolution. The first is the evolution of the computing substrate, including the virtualization of servers and the live migration of virtual machines to move loads and applications. The second is the evolution of the storage substrate, such as the globalization and virtualization of storage resources and the functional convergence of storage area networks (SANs) and network-attached storage (NAS). Finally, in the third phase, the underlying network substrate itself must evolve.

To realize these three phases of change and reap their benefits, the evolution of the network substrate must proceed in step with the transformation of the computing and storage domains. Enterprise data centers must find ways to improve operational efficiency, storage capacity, and processing speed while reducing costs. A new generation of data centers can meet all of these requirements and provide the growth and scalability needed to ensure their long-term relevance.

Optimize network substrate

When data centers were first designed, they were intended to provide a large number of users with access to a relatively small set of common applications. These applications were based on a simple command-response interaction model and handled text-based data.

As a result, the underlying network was designed relatively early and relies on the Layer 2 (switching) and Layer 3 (routing) devices typically used by traditional service providers and enterprises. For today's media-intensive traffic, however, the limitations of this architecture quickly become apparent: it not only uses the available transport and switching resources inefficiently, it also hampers the mobility of virtual machines, which is critical to the availability of diverse, dynamically provisioned applications and services. This legacy network architecture drives up the CAPEX, OPEX, and power consumption of data centers, all of which run counter to the design goals of the next generation of data centers.

Unified fabric

Today, a typical server deployed in a data center rack is equipped with two or three adapters that connect it to three disparate fabrics: network (Ethernet), storage (Fibre Channel), and cluster (InfiniBand or a proprietary interconnect). The interconnect requirements of these three fabrics vary widely: network traffic can tolerate packet loss and high latency, storage traffic must be guaranteed lossless, and cluster traffic requires minimal latency to support interprocess communication. These disparate fabrics have led to "network sprawl", in which tens of thousands of cables connect thousands of servers and storage devices through hundreds of network switches, routers, and application appliances. All of this inevitably increases power and cooling costs, as well as the CAPEX and OPEX needed to manage these fabrics.

With the advent of 10G Ethernet and key industry extensions that address the requirements of storage and cluster interconnects, the existing fabrics will be merged into a single, converged fabric based on Ethernet. This fabric provides seamless access to the storage and processing resources it supports. From a purely physical standpoint, it consolidates adapters, cables, and switches, reducing the operating and power costs of a new generation of data centers.
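
As a rough illustration only, the sketch below shows how traffic that once traveled over three separate fabrics might be mapped onto traffic classes of a single converged Ethernet fabric. The fabric names, priority values, and policy fields are assumptions made for this example and are not taken from any particular standard or vendor implementation.

```python
# Illustrative sketch: mapping three formerly separate fabrics onto traffic
# classes of one converged Ethernet fabric. All names and values are hypothetical.
from dataclasses import dataclass
from enum import Enum

class FabricType(Enum):
    NETWORK = "lan"   # tolerates loss and higher latency
    STORAGE = "fcoe"  # must be lossless
    CLUSTER = "ipc"   # needs minimal latency

@dataclass
class TrafficClass:
    priority: int          # e.g. an 802.1p-style priority value (assumed)
    lossless: bool         # whether flow control keeps the class drop-free
    latency_sensitive: bool

# One converged fabric replaces three separate ones: each legacy fabric type
# is mapped to a traffic class carried over the same Ethernet link.
CONVERGED_FABRIC_POLICY = {
    FabricType.NETWORK: TrafficClass(priority=0, lossless=False, latency_sensitive=False),
    FabricType.STORAGE: TrafficClass(priority=3, lossless=True,  latency_sensitive=False),
    FabricType.CLUSTER: TrafficClass(priority=5, lossless=False, latency_sensitive=True),
}

def classify(fabric: FabricType) -> TrafficClass:
    """Return the traffic class a frame from this legacy fabric would use."""
    return CONVERGED_FABRIC_POLICY[fabric]

print(classify(FabricType.STORAGE))  # storage traffic rides a lossless class
```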

Flat Network

Today's data center networks are built from traditional L2 and L3 network devices (switches and routers) developed for enterprise and service provider networks. These traditional devices cannot scale to meet the stringent requirements of a new generation of data centers. Poorly scaling control protocols lead to inefficient utilization of resources (links, switches, networks); restrictive topologies such as L2 VLANs and L3 subnets, together with heavily oversubscribed hierarchies, severely limit the mobility of virtual machines and, with it, the ability to move workloads and provision applications dynamically. These issues degrade network performance and power efficiency, and they interfere with the critical optimizations needed for computing and storage virtualization.

Over the past few years, we have seen the advent of commodity chips dedicated to L2 and L3 Ethernet switching. These switching chips can meet the unique requirements of a new generation of data centers and provide enhancements such as multipath load balancing, active congestion management, and scalable topologies, which can significantly improve resource utilization at the link, switch, and network levels.

The complexity and proprietary implementations of control-plane protocols will be replaced by an extensible, open control-protocol management stack that can even be completely decoupled from the data plane. A control plane that scales to tens of thousands of nodes makes seamless, live virtual machine migration possible throughout the data center.

All of these data-plane and control-plane enhancements will lead to large, flat L2 network fabrics spanning the entire data center. This "flattening" of the data center network substrate enables the complete commoditization of the data center network and fundamentally changes the economics of the data center.

Distributed intelligence at the edge

Like traditional L2/L3 switching devices, the data center's various "intelligent" appliances (load balancers, security devices, application accelerators, and so on) follow a vertically scaled architecture that has become a serious scalability bottleneck. These devices create choke points in the network and further limit the ultimate goal of seamless, live migration of virtual machines throughout the data center.

Solutions such as load balancers, application accelerators, and security devices (authentication, encryption, access control, intrusion prevention, and so on) also need to adapt to the virtual data center architecture. They must support mobile users, mobile virtual machines, and dynamic network configurations. Instead of relying on proprietary, closed, vertically scaled hardware implementations, the data center can deploy these solutions as software running on standard servers at the edge of the large, flat network, which lets intelligence scale out at very low cost.

Energy-matched network

Traditional network equipment was not designed with energy awareness or energy proportionality in mind. Even industry-leading switches and routers consume close to peak power when running at low utilization. At the network level, data center operators cannot steer traffic onto a subset of the network, nor can they optimize energy consumption when data center utilization is low.

A new generation of data centers requires energy-aware, energy-matched design in devices, circuits, components, boards, systems, software, and network management. These techniques can significantly reduce power consumption and the corresponding cooling costs, giving data center operators control over energy optimization across the data center and matching power consumption to the actual use of data center services.

The new-generation data center network

A new generation of data center networks will adopt a "flat, converged, and environmentally friendly" structure. It will not only prevent large-scale network sprawl and reduce the cost of managing separate, independent network, storage, and cluster fabrics; it will also use commoditized, energy-matched switches to build large, flat topologies that scale to thousands of nodes and deliver seamless virtual machine mobility.

Most importantly, it hands control back to data center operators, enabling policy-based traffic routing, intelligent application delivery, and management of overall power consumption. The result is the commoditization of the network substrate, which not only greatly reduces procurement costs but also spurs innovation higher up the "technology food chain".

Examples of new-generation data center practice

One example of this new data center network concept is Microsoft's Monsoon. Monsoon is a network architecture that uses low-cost L2 equipment; it achieves scalability by modifying the source routing performed in the control plane and executing multipath routing in the data plane. The model uses Valiant Load Balancing, a technique applied by Stanford University researchers to build a logical, fully meshed topology across a wide-area backbone, routing data from source to destination in no more than two hops. By adopting this simple load-balancing technique, it can spread load evenly across the network and support any network topology.
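
The core idea behind Valiant Load Balancing is simple enough to sketch: each flow is first sent to a randomly chosen intermediate node and then on to its destination, so no path exceeds two logical hops and load spreads evenly regardless of the traffic pattern. The Python sketch below is a minimal illustration of that idea; the node names and fully meshed topology are hypothetical and do not reproduce Monsoon's actual implementation.

```python
# Minimal sketch of Valiant Load Balancing over a hypothetical full mesh.
import random

NODES = ["s1", "s2", "s3", "s4"]  # switches in an assumed logical full mesh

def vlb_route(src: str, dst: str) -> list[str]:
    """Return a path of at most two hops: src -> random intermediate -> dst."""
    intermediate = random.choice(NODES)
    hops = [src, intermediate, dst]
    # Collapse the path if the random intermediate happens to be an endpoint.
    return [n for i, n in enumerate(hops) if i == 0 or n != hops[i - 1]]

# Each call may bounce through a different intermediate switch, which is what
# spreads load evenly across the mesh independent of the traffic pattern.
for _ in range(3):
    print(vlb_route("s1", "s3"))
```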

In another example, Amazon's Elastic Compute Cloud (EC2) and its elastic load-balancing capability automatically distribute incoming traffic across multiple Amazon EC2 instances. Incoming traffic can be spread across EC2 instances located in a single Availability Zone or in multiple Availability Zones. The load balancer can scale automatically with the volume of incoming application traffic and can detect "unhealthy" instances; once an unhealthy instance is detected, traffic is no longer routed to it and is instead redistributed to the instances that are working properly.
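
The health-check behavior described above can be illustrated with a toy load balancer: instances that fail their health checks are taken out of rotation, and traffic is redistributed to the remaining healthy ones. This is a generic sketch, not the actual AWS Elastic Load Balancing API; the instance names and check functions are hypothetical.

```python
# Toy round-robin load balancer that skips instances failing their health check.
from typing import Callable, Dict

class ToyLoadBalancer:
    """Round-robin over whichever instances currently pass their health check."""

    def __init__(self, instances: Dict[str, Callable[[], bool]]):
        # Map each instance id to a health-check callable returning True if healthy.
        self.instances = instances
        self._next = 0

    def route(self) -> str:
        healthy = [name for name, check in self.instances.items() if check()]
        if not healthy:
            raise RuntimeError("no healthy instances available")
        choice = healthy[self._next % len(healthy)]
        self._next += 1
        return choice

# Hypothetical instances: "i-b" fails its health check, so it is skipped and
# its share of traffic is redistributed to the healthy instances.
lb = ToyLoadBalancer({"i-a": lambda: True, "i-b": lambda: False, "i-c": lambda: True})
print([lb.route() for _ in range(4)])  # only "i-a" and "i-c" receive traffic
```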

A recent example is Stanford University's OpenFlow project. The project is designed to add a feature set to commercial Ethernet switches, routers, and wireless access points. OpenFlow is an open standard that provides a standardized "API", allowing researchers to run experiments on a network without requiring vendors to expose the proprietary internals of their network devices. It is a pragmatic protocol in that it makes full use of, and inherits the advantages of, existing technology. In a traditional network architecture, packet forwarding and higher-level routing decisions occur in the same network device; with OpenFlow, these functions are separated. Routing decisions move to a separate server while the data path remains on the switch. This enables a distributed division of labor and improves data-processing efficiency.
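
The separation OpenFlow introduces can be pictured as a flow table on the switch that a controller, possibly running on a separate server, populates with match-action rules, while the data path only performs lookups. The sketch below illustrates that split conceptually; it is not the OpenFlow wire protocol or any real controller API, and the field names and actions are assumptions.

```python
# Conceptual sketch of a control/data-plane split: the controller installs
# match-action rules, the switch only matches packets against its flow table.
from dataclasses import dataclass

@dataclass(frozen=True)
class Match:
    dst_ip: str  # hypothetical single-field match for illustration

@dataclass
class FlowEntry:
    match: Match
    action: str  # e.g. "forward:port2" or "drop" (assumed action encoding)

class ToySwitch:
    """Data path: look packets up in controller-installed rules."""

    def __init__(self):
        self.flow_table: dict[Match, FlowEntry] = {}

    def install(self, entry: FlowEntry):
        # Called by the control plane (the controller), not by the switch itself.
        self.flow_table[entry.match] = entry

    def forward(self, dst_ip: str) -> str:
        entry = self.flow_table.get(Match(dst_ip))
        # Unmatched packets would be referred to the controller for a decision.
        return entry.action if entry else "send-to-controller"

# The controller decides routing policy; the switch merely executes it.
sw = ToySwitch()
sw.install(FlowEntry(Match("10.0.0.5"), "forward:port2"))
print(sw.forward("10.0.0.5"))   # forward:port2
print(sw.forward("10.0.0.9"))   # send-to-controller
```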

The future of a new generation of data centers

So what does all this mean for the evolution of the data center, and who stands to benefit from the development of a new generation of data centers?

First of all, we must recognize that this three-part evolution of the environment is a response to changing market demand. The evolution of the computing substrate, including server virtualization and the introduction of mobile virtual machines, occurred in the first phase and is largely complete. The next step is the globalization and virtualization of storage resources and the functional convergence of SAN and NAS. The last phase, the evolution of the network substrate, is still underway and is the key to the other two phases.

As the network evolves from a multi-tier hierarchical model to a more converged, single-layer, single-network model, we will see operating costs and energy consumption decline while the enduring advantages of centralizing data center resources are preserved. At the same time, we will rely more heavily on connectivity technologies, including Fibre Channel over Ethernet (FCoE) and traditional high-speed (Gigabit and 10 Gigabit) Ethernet, to meet the growing demand for rich media content.

Eventually, as this model takes root and becomes the standard for data center resource connectivity, a number of advantages will emerge. These advantages include:

Ability to seamlessly and efficiently deploy and manage cloud-based services and applications

Economies of scale generated by the use of Ethernet as a primary means of storage access

Greatly improved network performance, owing to more efficient network operation and lower latency between network and storage resources

Enablement of web-based services, service-oriented architecture models, and Web 2.0 application environments

Expanded mobility support over the next few years, working toward the ultimate goal of delivering most content to mobile users

Increased energy efficiency, which not only reduces costs but is also more environmentally friendly

A new generation of data centers is neither a distant option nor a theory that stays on paper; it is a reality, already available and still evolving.

 
