Data management tends to be intelligent in the era of cloud computing

Keywords: Ethernet storage system, cloud computing, data center

Data pervades every enterprise IT system and is the core of enterprise development: all IT systems depend on data, and data management serves the needs of the business. As the IT industry develops and data management requirements advance, the trend toward "intelligent" data management has become unstoppable, and intelligent data management is becoming a common development goal for enterprises.

The server's status in the data center is declining, while the storage system's status is steadily rising thanks to the growing importance of data. The core of enterprise informatization and IT systems is changing: computing power is no longer the main criterion of evaluation, and control over information and data has become the CIO's most urgent demand.

According to Gartner, global revenue for external controller-based (ECB) disk storage in 2010 exceeded the previous high set in 2008. ECB storage revenue reached more than $19.4 billion in 2010, 18.1% higher than the $16.5 billion of 2009. By comparison, in the fourth quarter of 2010 global server shipments grew 6.5% and server revenue grew 16.4%, also according to Gartner. Since 2010, the average annual growth in enterprise investment in servers has been smaller than that in external disk storage (mainly networked storage systems in the data center); storage has begun to overtake the server as the main growth point of IT investment.

This is why the topic of data management provokes so much debate, and why it has spawned a series of "smart trends in data management," in which storage management, virtualization, data protection, and cloud computing are the immediate issues that intelligent data management must address. But looking further ahead, is the trajectory of data management immutable? Two or three years from now, when the storage system and the data management architecture form the core value of the data center, will there be new changes?

The answer is yes. SAN and NAS have dominated the storage market for almost 30 years, but all of this is beginning to change: object storage and converged appliances are emerging; storage-system life cycles are shrinking, with the traditional 5-7-year life cycle progressively compressed by the data explosion into 3-5 years; the connections within the data center appear to be shifting from Fibre Channel to new forms of connectivity; even the data itself seems to be changing ...

Just as China's 12th Five-Year Plan sets goals, guidance, and an implementation road map for the next five years, IT planning for an enterprise data center also needs to look five years ahead, or even longer. The need for long-term planning for data, as the most important element and the only one that runs through everything, is self-evident.

However, in such a complex and volatile world of information and data, long-term data management planning poses a major challenge for enterprise CIOs. Intelligence is the inevitable direction of development: liberating human resources to reduce costs and increasing the proportion of automation. Intelligent data management has demonstrated its key advantage as a "future-oriented" approach, making information, data, storage systems, and the entire data center smarter, and that will remain true.

Changing how data storage connects: 10GbE, FCoE, DCB

After Fibre Channel (FC) became the mainstream of data storage around 2007, FC entered a plateau in the data center market. Although FC still holds a very high share of the data center market, especially for external storage connectivity, its growth has been greatly limited by the shackles of price, cost of ownership, and the pace of its own technology development. Now that 10Gb/s Ethernet has matured, FC has responded with next-generation 8Gb/s products, but looking at the next 3-5 years of development and at the rapid momentum of 10Gb/s Ethernet, the outcome of the competition is already clear, however much many Fibre Channel vendors insist otherwise.

Figure: Migration to 10GbE has begun, and 2012 will be a significant inflection point

Alongside the development of 10Gb/s Ethernet, FCoE and DCB were also born. With the advent of 10Gb Ethernet, Fibre Channel over Ethernet (FCoE) and the new lossless 10Gb/s Ethernet technology known as Data Center Bridging (DCB) began to gain popularity in the data center. The foundation for future data center network convergence will be built on the 10Gb Ethernet architecture.


FCoE brings the features of Fibre Channel to Ethernet: it encapsulates Fibre Channel frames inside Ethernet packets, so that Fibre Channel requests and data between servers and SAN storage devices can travel over an Ethernet connection. DCB provides a network infrastructure that minimizes packet loss, further extending Ethernet. Because DCB delivers a "lossless" network, it maximizes the predictability of network performance and makes it feasible to run storage systems in an Ethernet environment at low cost.
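The encapsulation idea can be illustrated with a minimal sketch. This is not the actual FC-BB-5 frame layout (the real FCoE header carries a version field, reserved bytes, and specific SOF/EOF delimiter encodings); only the EtherType 0x8906 and the "FC frame inside an Ethernet frame" structure are taken as given, and the rest is simplified for illustration:

```python
import struct

FCOE_ETHERTYPE = 0x8906  # IEEE-assigned EtherType for FCoE


def encapsulate_fcoe(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap a raw Fibre Channel frame in a simplified FCoE Ethernet frame.

    Layout (simplified): Ethernet header | FCoE header | FC frame | EOF trailer.
    """
    # Standard 14-byte Ethernet header: dst MAC, src MAC, EtherType.
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    # Simplified 14-byte FCoE header: version + reserved bytes, then a
    # start-of-frame delimiter byte (0x2E is the SOFi3 encoding).
    fcoe_header = bytes(13) + bytes([0x2E])
    # End-of-frame delimiter (0x41 is the EOFn encoding) plus padding.
    eof_trailer = bytes([0x41]) + bytes(3)
    return eth_header + fcoe_header + fc_frame + eof_trailer


frame = encapsulate_fcoe(
    dst_mac=b"\x0e\xfc\x00\x00\x00\x01",   # illustrative fabric MAC
    src_mac=b"\x00\x11\x22\x33\x44\x55",   # illustrative server MAC
    fc_frame=bytes(32),                    # placeholder FC frame payload
)
```

Because the FC frame rides inside an ordinary Ethernet payload, any switch that forwards the 0x8906 EtherType can carry SAN traffic; DCB's lossless guarantees are what make this safe for storage, since Fibre Channel assumes frames are not dropped.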

10Gb/s Ethernet, FCoE, and DCB: these three connection technologies are changing data storage, and with it, data management. With Ethernet about to reach 40Gb/s in its next generation while the next generation of FC is only 16Gb/s, and with DCB and FCoE greatly expanding the scope of Ethernet's applications, the ratio of Ethernet storage systems to Fibre Channel storage systems in the market will change substantially, and Ethernet may well surpass Fibre Channel as the mainstream storage interconnect.

Introducing, or gradually transitioning to, an Ethernet storage system is one path for future data management. EqualLogic has been riding double-digit annual growth in the Ethernet storage market, while the Compellent storage system implements unified FC + IP SAN storage and is FCoE-ready, preparing for a transition and replacement period over the next 5-7 years. Now is the time to prepare for the Ethernet age, since Ethernet overtaking Fibre Channel storage in overall share is likely to happen within the next 3 years.

Data begins to move, and system upgrades pick up pace

Before tiered storage technology matured, data sat statically on unchanging media, and was archived to lower-cost disk or tape only when it was unused, or no longer used frequently, at the cost of a huge resource footprint on the archive server. This stillness of data created enormous waste in storage systems.

With the maturing of automatic tiered storage technology, led by Compellent, and the scale-out architecture brought by storage systems such as EqualLogic and Compellent, even successive generations of storage systems can form a unified storage pool. Data now flows between different media and between different storage nodes, and with disk-based backup systems beginning to replace tape as an efficient backup alternative, data is more fluid than ever before.

If your data still sits entirely on the primary storage system, or stays on expensive online disk, the high cost and inefficiency of that storage will make improving efficiency and cost-effectiveness urgent. This is especially true after the advent of virtualization: not only does the data itself change dynamically, but the virtual machine life cycle of creation, operation, shutdown, and deletion now turns over faster than ever. A piece of data may be important working material today, and tomorrow become an object that needs to be "demoted" to a low-cost system or even an archive.

As a result, automated tiered storage and scale-out must be goals for future storage systems, and the data management architecture should be able to intelligently identify at least three things:

1. Which media are high-performance and which are low-cost: it should distinguish SAS disk, SATA disk, and SSD;

2. Which data needs to be backed up and archived in time, and which data needs to be reduced (data deduplication);

3. Which system nodes are new, higher-performance nodes, and which should carry data for less demanding applications.
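The first of these, matching data to media by activity, can be sketched as a simple promotion/demotion policy. This is a toy model, not any vendor's actual algorithm: the tier names, the access-count metric, and the thresholds are all assumptions chosen for illustration (real systems such as automated tiering arrays track access heat per extent and move data between tiers on a schedule):

```python
from dataclasses import dataclass

# Hypothetical tiers, ordered fast/expensive -> slow/cheap.
TIERS = ["ssd", "sas", "sata"]


@dataclass
class Extent:
    """A chunk of stored data whose placement the policy decides."""
    name: str
    accesses_last_week: int
    tier: str = "sas"  # assume new extents land on the middle tier


def retier(extent: Extent, hot_threshold: int = 100, cold_threshold: int = 5) -> str:
    """Promote hot extents to SSD, demote cold ones to SATA, leave the rest."""
    if extent.accesses_last_week >= hot_threshold:
        extent.tier = "ssd"    # hot: serve from flash
    elif extent.accesses_last_week <= cold_threshold:
        extent.tier = "sata"   # cold: park on cheap capacity disk
    return extent.tier
```

A real implementation would run this kind of sweep periodically over every extent in the pool; the point of the sketch is simply that the policy, not an administrator, decides where each piece of data lives.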

If your data management architecture does not implement this kind of intelligent data management, then it is not just your storage system that is outdated: your data center and your entire IT architecture are, too.

From the cloud computing era into the big data era

The momentum of big data and cloud computing is hard to resist. The big data problem is the new scale of data we now face: enterprise data moving from TB to PB, personal data from GB to TB. The pressure to solve the big data problem comes from three sources:

1. The growing number of devices that generate data: personal digital devices and enterprise computing systems produce far more data than 10 years ago, and CommVault's Xu Yongxing has said that 2011 produced 180 times the data of 1996;

2. The size of individual files (unstructured data) has changed, from a 600MB RMVB file to a 30GB Blu-ray 1080p video;

3. The growth of enterprise data, resulting in very large databases.

As the glitz of this wave fades, cloud computing is also showing its true nature as an industry trend, whether or not it is a genuinely new form of computing (some argue it is merely distributed computing plus pay-as-you-go). The goals we want cloud computing to achieve, cross-region availability, high reliability, pay-on-demand, what-you-see-is-what-you-get, rapid deployment, and so on, are things the IT industry has been chasing for the past 20 years. We can now say that cloud computing will be the new IT ecosystem, though it still faces the tests of big data and hybrid cloud requirements.

Naturally, cloud computing and big data will be major challenges for future data management. For cloud computing, we have covered Ethernet storage (IP SANs), scale-out, automated tiering, data deduplication, and seamless scaling in previous articles, but we also need solutions for big data. Unlike most of the marketing, I believe big data exists not only in large enterprises but also in small and medium-sized ones, because big data may be a single large file (unstructured data) or a large volume of structured data from applications such as data warehouses.
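Of the techniques listed above, data deduplication is the most self-contained to illustrate. The sketch below shows the basic idea, fingerprint each chunk and store each unique chunk only once, using fixed-size chunking for simplicity; production systems typically use variable-size (content-defined) chunking and far more elaborate index structures, so treat this purely as a conceptual model:

```python
import hashlib


def dedup_chunks(data: bytes, chunk_size: int = 4096):
    """Split data into fixed-size chunks, keep one copy of each unique chunk.

    Returns (store, recipe): store maps SHA-256 fingerprint -> chunk bytes,
    recipe is the ordered list of fingerprints needed to rebuild the stream.
    """
    store = {}
    recipe = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        fp = hashlib.sha256(chunk).hexdigest()
        store.setdefault(fp, chunk)  # only the first copy is stored
        recipe.append(fp)
    return store, recipe


def rehydrate(store, recipe):
    """Reassemble the original byte stream from the chunk store and recipe."""
    return b"".join(store[fp] for fp in recipe)
```

For repetitive data, backup streams and virtual machine images are the classic cases, the chunk store ends up far smaller than the original stream, which is exactly why deduplication matters for the backup and archive tiers discussed earlier.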
