What are the new changes in the data center market in 2018?
With the deepening of cloud computing technology, going to the cloud has become the norm for enterprises. The emergence of new technologies such as IoT, artificial intelligence, and 5G is driving a new round of revolution in data centers. From the mainframe data center to the standardized data center, then to the modular data center and the cloud data center, the data center has been evolving for the past 30 years. It is foreseeable that, as we enter the big data era, data centers will play an important role in data computing, storage, and transmission. So what are the new changes in the data center market in 2018?
2018 is the year of the Edge Data Center
With the development of high-speed mobile Internet and IoT, every connection is generating data. Applications such as IoT, 4K video, AR/VR, AI/ML, and self-driving place ever greater demands on massive data processing, high-speed transmission, and real-time response, and it is too inefficient to ship all of these information streams to cloud services or central data centers for processing. Edge computing appears to be the best solution. This will reshape the computing model of the data center, and the edge data center is bound to be an important part of its development.
Edge computing is an open platform that integrates core network, computing, storage, and application capabilities close to the object or data source, providing services at the nearest edge. Applications are launched on the edge side to produce faster network responses, meeting the industry's basic needs for real-time business, application intelligence, security, and privacy protection.
Compared with cloud computing, edge computing starts from the data source and complements cloud computing with real-time, fast processing. Cloud computing focuses more on the "Cloud": it performs the final data analysis and application, and all of its data must be aggregated to a back-end data center. Edge computing emphasizes the physical region where the "terminals" are located. In general, edge computing provides users with nearby network, computing, and storage resources, which better meets users' real-time business needs.
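To make the division of labor concrete, here is a minimal sketch (in Python, with invented class and function names) of an edge node that aggregates raw sensor readings locally and ships only a compact summary across the WAN to the cloud:

```python
from statistics import mean

class EdgeNode:
    """Hypothetical edge node: processes readings locally, forwards summaries."""

    def __init__(self, window=100):
        self.window = window      # readings per summary
        self.buffer = []

    def ingest(self, reading):
        """Buffer a raw reading; return a summary once per window."""
        self.buffer.append(reading)
        if len(self.buffer) < self.window:
            return None           # handled locally, nothing crosses the WAN
        summary = {"count": len(self.buffer),
                   "mean": mean(self.buffer),
                   "max": max(self.buffer)}
        self.buffer.clear()
        return summary            # only this small payload goes to the cloud

def cloud_ingest(summary):
    """Stand-in for the core data center's analytics pipeline."""
    print("cloud received:", summary)

edge = EdgeNode(window=5)
for reading in [21.0, 21.3, 45.9, 21.1, 20.8]:
    s = edge.ingest(reading)
    if s is not None:
        cloud_ingest(s)
```

The raw stream never leaves the edge; only the aggregate does, which is exactly the bandwidth and latency saving edge computing promises.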
The edge data center sits between the core data center and the user, closest to the user. It only needs to keep its data synchronized in real time with the core data center over the WAN, and can then serve users directly. This avoids repeated data transmission, gives local users service indistinguishable from accessing the core data center, and, more importantly, delivers a better user experience.
According to IDC, more than 50 billion terminals and devices will be connected by 2020, and more than 50% of data will need to be analyzed, processed, and stored at the edge of the network, so edge computing faces a huge market. Edge data centers are already widespread, especially among Internet companies, where they are what makes an excellent network access experience possible.
All-flash storage will become the best storage solution for data centers
In 2017, the storage market was transformed by SSD, NVMe, and other flash technologies. As mentioned above, data centers need to excel at data storage and processing, but traditional hard drives are not up to the task: due to their performance limitations, data centers built on mechanical hard disks cannot meet these needs. More and more new data centers are choosing all-flash storage solutions.
As data centers develop, they need not only larger storage capacity but also ever higher performance. Applications such as mobile payment and mobile social networking require lower latency and faster execution to guarantee rapid response. Although the CPU and memory performance of data center servers keeps improving, overall execution efficiency gains little because disk performance drags it down; an all-flash solution readily removes this storage bottleneck.
Measured by cost per GB, flash is more expensive than mechanical disks, but measured by overall cost, all-flash is actually relatively cheap. First, because flash is the storage medium of the future, manufacturers have invested heavily in it and its price keeps falling; it will eventually reach a reasonable level. Second, flash has clear advantages when purchase cost and running cost are considered together: although flash costs more up front, its failure rate is much lower than that of mechanical disks, and it costs less in energy, O&M, and hardware and software. Third, in terms of performance per dollar, flash is without doubt more cost-effective.
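The "overall cost" argument is easy to make concrete. Below is a back-of-the-envelope total-cost-of-ownership sketch; every figure (prices, wattages, failure rates) is an illustrative assumption, not real vendor data:

```python
def tco(price_per_gb, capacity_gb, watts, annual_failure_rate,
        replace_cost, years=5, kwh_price=0.10):
    """Rough 5-year TCO: purchase + energy + expected failure replacements."""
    purchase = price_per_gb * capacity_gb
    energy = watts / 1000 * 24 * 365 * years * kwh_price
    failures = annual_failure_rate * years * replace_cost
    return purchase + energy + failures

# Illustrative numbers only: flash costs more per GB but draws less power
# and fails far less often, which narrows the gap over the device lifetime.
hdd = tco(price_per_gb=0.03, capacity_gb=4000, watts=8,
          annual_failure_rate=0.02, replace_cost=150)
ssd = tco(price_per_gb=0.12, capacity_gb=4000, watts=3,
          annual_failure_rate=0.005, replace_cost=300)
print(f"HDD 5-year TCO: ${hdd:.0f}, SSD 5-year TCO: ${ssd:.0f}")
```

With these assumed inputs the per-GB gap shrinks once energy and failures are priced in, and any performance-per-dollar comparison tilts further toward flash.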
Gartner predicts that by 2020, the proportion of data centers using only SSAs (solid-state, i.e. all-flash, arrays) will grow from almost zero to 25%. It is foreseeable that all-flash will become the best data center storage solution and be applied to enterprises' key core businesses.
Restructuring the network architecture of the data center
The network is the "highway" that carries the data center's traffic, and these "highways" must be planned and designed in a unified manner to fully exploit the advantages of network interconnection. A well-architected network saves costs, avoids frequent failures, and eases O&M, which is crucial to a data center's development. With the development of cloud computing, big data, and virtualization, the data center network architecture has fresh impetus to innovate.
First, the evolution from the classic network to the VPC network. In a classic network, all users share a public resource pool with no logical isolation between them; intranet IP addresses are allocated uniformly by the system, so the same intranet IP address cannot be given to different users. A VPC (Virtual Private Cloud) gives each user a logically isolated virtual network space in which they can freely define network segments, IP addresses, and routing policies, create subnets, and place multiple VM instances in each subnet. For security, VPCs provide network ACLs and security groups for access control.
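This model maps directly onto public-cloud APIs. As one concrete instantiation, here is roughly what creating a VPC, a subnet, and a security group looks like with AWS's boto3 SDK (the CIDR blocks and names are arbitrary examples, and configured credentials and region are assumed):

```python
import boto3

ec2 = boto3.client("ec2")

# A logically isolated address space, freely chosen by the user.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

# User-defined subnet division inside the VPC.
subnet_id = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.1.0/24")["Subnet"]["SubnetId"]

# Security groups provide instance-level access control within the VPC.
sg_id = ec2.create_security_group(
    GroupName="web", Description="allow http", VpcId=vpc_id)["GroupId"]
ec2.authorize_security_group_ingress(GroupId=sg_id, IpProtocol="tcp",
                                     FromPort=80, ToPort=80,
                                     CidrIp="0.0.0.0/0")
print(vpc_id, subnet_id, sg_id)
```

Other clouds expose the same concepts under different names; the point is that segmentation, addressing, and access control are all user-defined rather than system-assigned.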
Next, the evolution from stacked devices to networks of independent devices. Stacking virtualizes multiple network devices into one, which eases network management and simplifies the network structure; it is an innovation that has been widely used over the past decade. However, stacking is proprietary: devices from different network vendors cannot be stacked together, which limits its evolution. And although stacking improves network redundancy, a fault in one stacked device can bring down the whole stack, so it is less safe than independent devices; in particular, software upgrades on stacked devices require interrupting network service. Stacking is therefore increasingly unsuitable for data centers with high reliability requirements, and data centers are moving toward de-stacking.
Third, the underlying network is shifting from L2 forwarding to L3 forwarding. L2 networks often suffer broadcast storms and loops; a layer-3 approach not only reduces loops but also improves network bandwidth utilization. Although layer-3 configuration is more complex, it can be deployed automatically through a controller: once the device ports are cabled, all network configuration is pushed automatically.
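Part of the bandwidth gain comes from equal-cost multipath (ECMP): an L3 fabric hashes each flow's 5-tuple to spread traffic across all uplinks, whereas L2 spanning tree blocks redundant links. A minimal sketch of per-flow path selection (the uplink names and flows are made up):

```python
import zlib

uplinks = ["spine1", "spine2", "spine3", "spine4"]  # all active at L3

def pick_uplink(src_ip, dst_ip, src_port, dst_port, proto="tcp"):
    """Hash the 5-tuple so each flow sticks to one path (no reordering),
    while different flows spread across all equal-cost uplinks."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    return uplinks[zlib.crc32(key) % len(uplinks)]

for flow in [("10.0.1.5", "10.0.9.7", 41000, 443),
             ("10.0.1.5", "10.0.9.8", 41001, 443),
             ("10.0.2.9", "10.0.9.7", 52000, 80)]:
    print(flow, "->", pick_uplink(*flow))
```

Every uplink carries traffic, instead of standing idle as a blocked backup link would in a loop-free L2 topology.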
Fourth, network bandwidth keeps climbing, with single-port speeds jumping a full generation at a time. 40G interconnection inside the data center is now very popular, almost the standard for cloud data centers, and high-speed egress links that were rare three years ago are now common. Network bandwidth is developing rapidly, and this picture will keep changing.
Fifth, reducing the complexity of the underlying network so that connecting and configuring it becomes foolproof. This calls for two powerful tools: SDN, an innovation in network architecture, and NFV, an innovation in device architecture. Together they make configuring and deploying network devices extremely simple: the controller can distribute forwarding tables or push VXLAN configuration directly, without manual intervention. Operators only need to click in the controller to complete business deployment, daily monitoring, and troubleshooting.
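As a rough illustration of controller-driven configuration, the sketch below pushes one VXLAN segment definition to an SDN controller over a REST API. The URL and JSON schema are hypothetical; real controllers (OpenDaylight, ONOS, vendor products) each define their own API:

```python
import json
import urllib.request

CONTROLLER = "https://controller.example.net/api/v1/vxlan"  # hypothetical URL

def push_vxlan(vni, vtep_ip, vlan):
    """POST one VXLAN segment definition to the controller."""
    payload = json.dumps({"vni": vni, "vtep": vtep_ip, "vlan": vlan}).encode()
    req = urllib.request.Request(CONTROLLER, data=payload,
                                 headers={"Content-Type": "application/json"},
                                 method="POST")
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print("controller replied:", resp.status)
    except OSError as exc:  # no real controller behind the example URL
        print("would have sent:", payload.decode(), "-", exc)

push_vxlan(vni=10100, vtep_ip="192.0.2.11", vlan=100)
```

The operator describes intent once; the controller translates it into per-device configuration, which is the whole point of removing manual box-by-box work.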
Network reconstruction has become a hot topic in data center technology, and data centers have entered an era of network reform. As the architecture evolves, the network is changing from closed to open and is better suited to the future development of data center services.
The AI era: self-driving data centers
Network, server, storage, and virtualization equipment is usually managed by different teams using different tools, which is why most management cost goes to deployment, monitoring, updates, and troubleshooting. With limited IT budgets and a growing number of applications, enterprises need automation, which lets machines take over repetitive tasks that are done manually today.
Just as cars are gaining self-driving technology, data centers are entering their own self-driving age. Data center management is becoming centralized and automated, managed through software rather than hardware. Machine learning, cloud computing, virtualization, and other technologies are being combined to improve O&M efficiency across the entire data center. In particular, with the introduction of artificial intelligence and machine learning, IT teams can focus on more important work rather than trivial daily O&M.
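In miniature, the "self-driving" idea is a closed loop that watches telemetry and remediates routine faults without a human. The checks and actions below are placeholders for whatever monitoring and orchestration stack is actually in use:

```python
import time

def read_telemetry():
    # Placeholder: in practice, pulled from SNMP/IPMI/monitoring systems.
    return {"web-01": {"cpu": 0.42, "healthy": True},
            "web-02": {"cpu": 0.97, "healthy": False}}

def remediate(host):
    # Placeholder action: restart a service, migrate VMs, open a ticket...
    print(f"remediating {host}")

def control_loop(iterations=1, interval=0.0):
    """Observe -> decide -> act, with no operator in the loop."""
    for _ in range(iterations):
        for host, stats in read_telemetry().items():
            if not stats["healthy"] or stats["cpu"] > 0.9:
                remediate(host)
        time.sleep(interval)

control_loop()
```

The human's job shifts from executing these steps to defining the policy the loop enforces.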
The data center is naturally a massive source of data, and the data it generates and forwards grows exponentially every day. Analyzing this data with big data techniques yields a great deal of meaningful information, and artificial intelligence trained on it often produces unexpected gains. For example, Google has begun using artificial intelligence to cut its data centers' power costs: shaving even a few percent off power consumption saves hundreds of millions of dollars.
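A toy version of that energy-optimization idea: fit a model that predicts facility power from IT load and cooling setpoint, then choose the setpoint that minimizes predicted power. The data and the linear model here are invented for illustration; Google's production system is far more sophisticated:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic telemetry: power rises with IT load and with aggressive cooling.
rng = np.random.default_rng(0)
load = rng.uniform(0.3, 0.9, 500)        # IT load fraction
setpoint = rng.uniform(18, 27, 500)      # cooling setpoint, deg C
power = 2.0 * load + 0.05 * (27 - setpoint) + rng.normal(0, 0.02, 500)

# Learn the relationship from historical data.
model = LinearRegression().fit(np.column_stack([load, setpoint]), power)

# Sweep candidate setpoints at a fixed 60% load, pick the cheapest.
candidates = np.linspace(18, 27, 10)
pred = model.predict(np.column_stack([np.full(10, 0.6), candidates]))
print("best setpoint at 60% load:", candidates[pred.argmin()])
```

Even this crude loop shows the pattern: learn from telemetry, then optimize a control knob that humans previously set by rule of thumb.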
At the same time, demand for AI will become the main driving force behind future data center growth and innovation, while AI technology itself will help data centers improve operational efficiency and become truly intelligent.
While automation levels rise, the trend toward integration is also clear. Hyper-converged infrastructure (HCI) greatly simplifies the design, deployment, and O&M of data centers, and data centers are integrating with cloud service capabilities, especially hybrid clouds. All of this is reshaping how data centers are designed and operated.