Data center design: Start small and keep innovating


The traditional three-tier data center network design can no longer meet the needs of enterprises that must keep innovating while staying financially viable. Instead, enterprises have begun to shift to a design philosophy of "start small and keep innovating." In practice this usually means selecting a cloud model provided by an external service provider while retaining sensitive functions in-house. Implementing such a strategy involves managing the enterprise's own private data centers and procuring the right communication services and cloud computing solutions. This article focuses on three approaches to building a private cloud and combining it with public cloud services to create a hybrid cloud model, including transport technologies and commercial colocation space.

Three-tier data center network design that has been running for many years

For years, data center network design has followed a three-tier blueprint. A Layer 3 core moves traffic into and out of the data center. The core is fed by an aggregation layer that brings together service modules and Layer 2 switching functions. Finally, the access layer physically connects servers to the network. Servers are typically 1RU devices stacked in racks and connected through top-of-rack (ToR) or end-of-row (EoR) switches, with various forms of patch cables and cross-connects completing the architecture. In an environment of predictable business growth and equipment costs, this model delivered good, reliable, scalable service.
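To make the tiers and link counts concrete, here is a minimal sketch (illustrative numbers only, not from the article) that models the classic core/aggregation/access hierarchy and counts the inter-switch links it requires:

```python
# Minimal sketch of the classic three-tier hierarchy: access (ToR) switches
# dual-home to an aggregation pair, and every aggregation switch connects
# to every core switch.  All figures are hypothetical.

def three_tier_links(core_switches: int, agg_pairs: int, access_per_pair: int) -> int:
    """Count inter-switch links in a simple three-tier build."""
    agg_switches = agg_pairs * 2
    access_switches = agg_pairs * access_per_pair
    access_uplinks = access_switches * 2            # each ToR dual-homed to its pair
    agg_uplinks = agg_switches * core_switches      # aggregation meshed to the core
    return access_uplinks + agg_uplinks

# Example: 2 core switches, 4 aggregation pairs, 12 ToR switches per pair
print(three_tier_links(core_switches=2, agg_pairs=4, access_per_pair=12))  # 112 links
```

Growing such a design means adding devices and links at every tier, which is where the bottlenecks discussed below come from.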

 

When Layer 3 routing was more expensive than Layer 2 switching, and enterprises preferred to retain full ownership of their physical data center facilities, building a three-tier network in the data center made good sense. But applications now change rapidly, storage requirements keep growing, and virtualized cloud computing resources are widely available. Building a modern, flexible, cost-effective enterprise data center requires a new approach that combines the enterprise's private elements with outsourced, virtualized ones.

Hyperscale data centers require a different design blueprint

Hyperscale operators recognized the excess hops, potential congestion, and latency inherent in the three-tier model and moved to new architectures, typically a leaf-spine topology or fat-tree switching fabric. A spine core connects to pods, each containing multiple aggregation and access switches, so capacity can be added in increments as needed. Pods can be designed around specific requirements; for example, a multi-tenant data center (MTDC) may need a core network that accommodates aggregation and access pods supporting the storage and compute assets of different customers. Other data center network designs are being researched or trialed, including Jellyfish, DCell, BCube, and FiConn. Jellyfish is one of the most interesting: top-of-rack (ToR) switches are wired together in a random graph topology, and the inherently loose structure of this design makes it considerably more flexible than its predecessors.
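To contrast the two approaches, here is a rough sketch (Python, all parameters assumed): leaf-spine capacity grows in predictable steps because every leaf connects to every spine, while Jellyfish simply wires ToR switches into a random graph until their ports are used up:

```python
import random
from itertools import combinations

def leaf_spine_links(spines: int, leaves: int) -> int:
    """Every leaf connects to every spine, so adding a leaf (or a pod of
    leaves) grows capacity in predictable increments."""
    return spines * leaves

def jellyfish_links(tor_count: int, ports_per_tor: int, seed: int = 0) -> set:
    """Very rough sketch of Jellyfish's idea: connect ToR switches at
    random until no free ports remain.  The published construction is
    more careful; this only illustrates the random-graph principle."""
    rng = random.Random(seed)
    free = {tor: ports_per_tor for tor in range(tor_count)}
    candidates = list(combinations(range(tor_count), 2))
    rng.shuffle(candidates)
    links = set()
    for a, b in candidates:
        if free[a] and free[b]:
            links.add((a, b))
            free[a] -= 1
            free[b] -= 1
    return links

print(leaf_spine_links(spines=4, leaves=16))                  # 64 links
print(len(jellyfish_links(tor_count=20, ports_per_tor=4)))    # close to 40 links
```

The point is not the exact numbers but the scaling behavior: leaf-spine pods and random-graph fabrics both let capacity grow incrementally instead of through wholesale upgrades of a three-tier core.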

 

As the name suggests, hyperscale means enormous size, but size is not necessarily the most important difference. A large enterprise can build an over-provisioned three-tier internal network. Hyperscale data centers such as those built by Google and Amazon need more than the ability to create larger storage clusters or add computing capacity: they must be able to configure applications quickly and adjust compute, storage, and connectivity as new services are introduced. Despite their eventual size, hyperscale data centers can be built very small and scaled up quickly.

What should enterprises do?

How an enterprise develops its data center depends on its business model, growth expectations, financial culture, staffing, and expertise. A small enterprise may need large storage capacity and secure data transmission, yet find operating a fully private data center prohibitively expensive. A large enterprise may prefer to own its data center while still needing greater flexibility and room to grow.

Data centers interconnected over the public Internet or private connections are the foundation of cloud computing. Cloud services can provide compute and storage for enterprises of any size. Some small and medium-sized enterprises will simply purchase the services they need from cloud service providers and reach them through their communication service providers. Others, especially larger enterprises, will form a virtual private data center: private enterprise resources supplemented by cloud solutions.

 

A private cloud can be built around a core campus data center with a base of server and storage pods connected by a local core network. The facility can then be extended: additional compute or storage pods in the enterprise data center grow the private cloud, or a virtual private cloud can be built in a remote hosted data center.

Data centers are typically interconnected through a dedicated data center interconnect (DCI) transport service, such as an Ethernet VPN (EVPN), or through a private optical connection over leased dark fiber. Locating enterprise-owned assets remotely lets the enterprise add or remove data pods as needed while controlling costs. This flexibility makes it easier to adopt innovative digital technologies, serve customers better, grow market share, and maintain strong internal control.

Start small and keep innovating

The modularity of a core pod plus outsourced architecture is what gives a data center its flexibility and agility. A data center that houses large server clusters and high storage capacity becomes very complex; its internal network requires significant physical space, power, and planning to operate and grow. New architectures such as leaf-spine or Jellyfish help, but they assume all resources live inside the data center.

DCI enables the virtualization of remote resources, which is the foundation of cloud computing. A private cloud is formed by linking an enterprise's different geographic sites through DCI. A virtual private cloud is formed by reaching a managed MTDC through DCI. A hybrid cloud is formed when DCI extends across management boundaries to include public cloud services, usually over the Internet but increasingly through private connections within an MTDC. Together, these DCI methods provide a cloud interconnection solution that lets an enterprise add or remove compute and storage over its business cycle. An enterprise can build a traditional data center core sized for its initial needs and then add capacity with cloud options: a smaller core supplemented by incremental cloud resources.
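As a toy illustration of that last point (all numbers hypothetical, not from the article), the sketch below sizes a modest private core and pushes any demand above it to cloud capacity reached over DCI, rather than over-building the core up front:

```python
# "Start small" capacity sketch: a fixed private core handles the baseline,
# and anything above it is met with public-cloud resources over DCI.
# Demand figures are arbitrary compute units, purely for illustration.

def capacity_plan(demand_by_quarter, core_capacity):
    """For each quarter, split demand between the private core and cloud."""
    plan = []
    for demand in demand_by_quarter:
        cloud = max(0, demand - core_capacity)
        plan.append({"demand": demand,
                     "core": min(demand, core_capacity),
                     "cloud": cloud})
    return plan

for quarter in capacity_plan([80, 120, 150, 210], core_capacity=100):
    print(quarter)
```

The design choice this illustrates is the one the article argues for: size the core for steady-state needs and let cloud increments absorb growth and peaks.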

The idea of "starting from scratch and innovating constantly" was put forward by Forster and Lapukhov in their article on InfoWorld in 2014. They introduced in the article how very large-scale data center operators such as Microsoft or eBay use this practice in the data center network, this allows them to use pods as a unit for design, deployment, operation, and elimination to match the incremental demand for storage and computing capabilities. This concept does not have to be limited to data centers, but should be extended to a private or virtual private architecture of virtual resources.

An enterprise that starts its data center design small and plans to grow its private data center with cloud deployments and virtualized resources can take full advantage of powerful digital tools, such as analytics, content distribution, network load balancing, identity management, security optimization, and customer relationship management, with great flexibility.

Some of these tools may be out of reach for small enterprises, and even for large enterprises they represent capital-intensive investments. Rapid innovation requires that compute and storage resources be easy to obtain. For an enterprise that tries to change its scale and scope by building a large, scalable, on-demand data center on its own, the cost is high, and so is the risk that it cannot adapt to future change.
