How to deal with storage I/O bottlenecks in the age of mass data

Source: Internet
Author: User
Keywords: disk writing

The age of mass data has arrived, and data volumes are growing rapidly. According to a recent study by the analyst firm IDC, global data volume broke through 1.8 ZB in 2011, having grown roughly nine-fold in five years, and the volume of data to be managed is expected to grow more than 50-fold.

At the same time, the cloud computing market has matured and is developing rapidly. Under the double assault of massive data and the cloud, data storage faces severe challenges. Storage has become a weak link in the enterprise, and companies that neglect it stand to lose more than just opportunities.

One of the biggest challenges of cloud computing is addressing the storage I/O bottlenecks associated with large-scale virtual machine deployments. As the number of virtual machines grows rapidly, the surge in random read and write I/O inevitably becomes a problem for NAS and SAN arrays as well as DAS, because disk I/O or target-side CPUs become the bottleneck.

Public and hosted clouds drive small and midsize enterprises to outsource their platform infrastructure for cost-effectiveness and resource elasticity. As a result, virtualized multi-tenant OLTP databases and network servers have multiplied rapidly, generating large volumes of random read and write I/O.

In a private or enterprise cloud, data centers must contend with the virtual machine scalability pain points caused by server consolidation and by new technologies that increase virtual machine density, such as VDI. Storage administrators tend to overcome these pain points by purchasing more storage array hardware, but this approach is inefficient in procurement cost, management overhead, and maintenance expense.

The initial goal of virtualization technology was to increase CPU and other resource utilization, since many data centers run at only 10%~20% CPU utilization. Such low utilization wastes not only the hardware budget but also carries hidden costs in space, power, cooling, and recurring maintenance. Today, many enterprises deploy hypervisors and have successfully consolidated their servers, with substantial cost savings. The commoditization of CPU and network bandwidth makes it possible to increase the number of virtual machines rapidly, which further increases the storage I/O burden.
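
As a rough illustration of the consolidation math (the utilization and headroom figures below are assumptions for the example, not numbers from the article), a fleet running at 10%~20% CPU utilization can shrink its physical footprint dramatically under a hypervisor:

```python
import math

# Rough consolidation estimate; all figures are hypothetical.
physical_servers = 50        # standalone servers before consolidation
avg_cpu_utilization = 0.15   # midpoint of the 10%~20% utilization cited above
target_utilization = 0.70    # leave headroom on the consolidated hosts

# Total CPU demand expressed in "fully busy server" units.
cpu_demand = physical_servers * avg_cpu_utilization

# Hosts needed after consolidation, rounded up to whole machines.
hosts_needed = math.ceil(cpu_demand / target_utilization)

print(f"CPU demand: {cpu_demand:.1f} server-equivalents")
print(f"Hosts needed at {target_utilization:.0%} target utilization: {hosts_needed}")
# ~11 hosts instead of 50, which is exactly why per-host storage I/O rises sharply.
```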

In the past, 50 physical servers attached to a NAS array meant 50 host connections. Now, with an average of 10 to 30 virtual machines per host, 50 physical hosts are equivalent to 500 to 1,500 virtual servers connected to a single storage array. Because each virtual machine reads and writes independently of the other virtual machines on the same physical server, data access patterns become far more random, whether reading or writing.
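
To see how quickly that multiplication outruns a disk array, here is a minimal back-of-the-envelope sketch; the per-VM demand, array size, and per-spindle IOPS figures are assumptions chosen for illustration, not measurements from the article:

```python
# Hypothetical IOPS comparison; all numbers are illustrative assumptions.
vms_per_host = 30         # upper end of the 10-30 VMs/host range above
physical_hosts = 50
iops_per_vm = 50          # assumed modest random-I/O demand per VM

total_vms = vms_per_host * physical_hosts
demanded_iops = total_vms * iops_per_vm

disks_in_array = 48       # assumed array size
iops_per_disk = 180       # typical figure for a 15k RPM spindle under random I/O

delivered_iops = disks_in_array * iops_per_disk

print(f"{total_vms} VMs demand ~{demanded_iops} random IOPS")
print(f"Array of {disks_in_array} spindles delivers ~{delivered_iops} IOPS")
print(f"Shortfall: ~{demanded_iops - delivered_iops} IOPS")
```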

A large volume of random read and write I/O requests can overwhelm the NAS array. The storage CPU and disk I/O quickly saturate, and applications can stall. When the NAS array can no longer serve read and write requests fast enough for the application server virtual machines, the result is lost end customers.

Second, higher virtual server latency means that application server virtual machines face inconsistent performance: some virtual machines wait too long for read or write I/O to complete. These virtual machines desperately need IOPS, and the shortfall undermines performance service-level agreements and reduces productivity. In the worst case, slow I/O responses can cause application timeouts and unplanned outages.
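
The latency collapse described above can be approximated with a simple M/M/1 queueing model, where response time is roughly service time / (1 - utilization); the service time and load points below are assumed values used only to illustrate the effect:

```python
# M/M/1 approximation of I/O response time versus array utilization.
# Service time and utilization values are illustrative assumptions.
service_time_ms = 5.0   # assumed average time for the array to serve one I/O

for utilization in (0.50, 0.80, 0.90, 0.95, 0.99):
    # Response time = service time / (1 - utilization) in an M/M/1 queue.
    response_ms = service_time_ms / (1.0 - utilization)
    print(f"utilization {utilization:.0%}: ~{response_ms:.0f} ms per I/O")
# At 99% utilization the same I/O takes ~500 ms instead of ~10 ms,
# which is how slow I/O responses turn into application timeouts.
```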

One of the key features of Intel's I/O acceleration solution is storage I/O acceleration, which uses hardware-based acceleration to move data to and from applications faster.

This includes adding RAID 6 technology to correct errors during data transfer. It not only ensures faster data transfers but also prevents loss or tampering as data moves between disks and across disk storage systems. Byte-level parity is used to ensure the integrity of data as it passes through the storage subsystem. Parity written to the disk drives prevents data loss in the event of multiple hard disk failures or bad data blocks encountered during a rebuild. This increases system availability and reliability, shortens backup windows, accelerates disk rebuilds, and protects data.
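
As a sketch of how parity protects data, the example below shows the XOR-based P parity that RAID 5 and RAID 6 use to rebuild a single failed disk; RAID 6 additionally stores a second, Reed-Solomon-based Q parity (not shown here) so that two simultaneous failures can be survived. The data blocks are invented for the example:

```python
# XOR parity sketch: rebuild one lost data block from the survivors plus parity.
# Block contents are invented for illustration.
from functools import reduce

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks))

data_blocks = [b"AAAA", b"BBBB", b"CCCC"]   # stripes on three data disks
parity = xor_blocks(data_blocks)            # P parity written to the parity disk

# Simulate losing the second disk and rebuilding it from the rest plus parity.
surviving = [data_blocks[0], data_blocks[2], parity]
rebuilt = xor_blocks(surviving)

assert rebuilt == data_blocks[1]
print("rebuilt block:", rebuilt)   # -> b'BBBB'
```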

At the same time, Intel Rapid Storage Technology (Intel RST) is a Windows-based application. It provides higher performance and reliability for desktop, mobile, and server platforms equipped with SATA disks. With one or more SATA disks, you benefit from improved performance and reduced power consumption. With multiple disks, you gain added protection against data loss in the event of a disk failure.

(Responsible editor: Lu Guang)
