The storage design used in the host server architecture has a significant impact on host and guest performance. Storage performance is a complex mix of drives, interfaces, controllers, caches, protocols, SANs, HBAs, drivers, and operating-system considerations. Typically, the overall performance of a storage architecture is measured by maximum throughput, maximum I/O operations per second (IOPS), and latency (response time). Although all three factors are important, IOPS and latency are the most relevant to server virtualization.
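The relationship among these three metrics can be illustrated numerically: for a given number of outstanding I/Os, Little's law ties IOPS to latency, and throughput is simply IOPS times the I/O size. The following sketch uses assumed example figures (4 KiB random I/O at 5 ms latency), not measurements:

```python
def iops(queue_depth: int, latency_s: float) -> float:
    """Little's law: sustained IOPS = outstanding I/Os / average latency."""
    return queue_depth / latency_s

def throughput_mb_s(iops_value: float, io_size_bytes: int) -> float:
    """Throughput is just IOPS multiplied by the I/O size."""
    return iops_value * io_size_bytes / 1e6

# Assumed example: one outstanding 4 KiB random I/O at 5 ms average latency.
ops = iops(queue_depth=1, latency_s=0.005)      # 200 IOPS
mb = throughput_mb_s(ops, io_size_bytes=4096)   # ~0.82 MB/s
print(f"{ops:.0f} IOPS -> {mb:.2f} MB/s")
```

This is why a random-I/O virtualization workload can saturate a drive's IOPS capacity while using only a tiny fraction of its maximum throughput.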
Storage connectivity: The Hyper-V host server has three different ways to access stand-alone disks and storage arrays: direct-attached storage, iSCSI storage area networks (SANs), and Fibre Channel SANs.
1) Direct-attached storage: Direct-attached storage typically means a hard drive inside the host server, or a SCSI, SAS, or eSATA connection that links the server directly to drives in a dedicated storage array. The host server uses an internal SCSI, SAS, or SATA controller card to access the storage and to support a variety of RAID levels. Such storage arrays are typically dedicated to a single server.
2) iSCSI storage area network: iSCSI has become an increasingly popular storage network architecture that carries the SCSI protocol over a TCP/IP network infrastructure. iSCSI allows storage area networks to be built from standard Ethernet components such as NICs, switches, and routers. In general, iSCSI SANs are less expensive to implement than traditional Fibre Channel SANs. The storage arrays used in an iSCSI architecture are typically low- to mid-tier arrays shared by multiple host servers. It is recommended that you use redundant, dedicated Gigabit Ethernet NICs for iSCSI connections.
3) Fibre Channel storage area network: A Fibre Channel storage area network provides high-speed, low-latency connectivity to storage arrays. Host servers use host bus adapters (HBAs) to connect to Fibre Channel SANs through switches and controllers. Fibre Channel SANs are typically used with midrange to high-end storage arrays, which provide features such as RAID, disk snapshots, and multipath I/O.
Drive type: The type of hard drive used in the host server or storage array has the greatest impact on overall storage architecture performance. Key performance factors of a hard drive include the interface architecture (for example, U320 SCSI, SAS, SATA), drive speed (7200, 10k, or 15k rpm), and average latency in milliseconds. Advanced features, such as on-drive caching and native command queuing (NCQ), can further improve performance. For Hyper-V host server tuning and guest performance, as with storage connectivity, high IOPS and low latency matter much more than maximum sustained throughput. When selecting drives, this means choosing the drives with the highest rotational speed and lowest latency whenever possible. Replacing 10k rpm drives with 15k rpm drives can increase IOPS per drive by up to 35%.
1) SCSI: SCSI drives are rapidly being replaced by SATA, SAS, and Fibre Channel drives, and they are not recommended for new host server architectures; however, existing servers with U320 SCSI drives can still deliver good performance.
2) SATA: SATA drives are a low-cost, relatively high-performance storage option. The main SATA variants are the 1.5 Gb/s and 3.0 Gb/s standards (SATA I and SATA II), with a rotational speed of 7200 rpm and an average latency of approximately 4 milliseconds. A small number of SATA drives run at 10k rpm with an average latency of 2 milliseconds; these drives provide an excellent low-cost storage solution.
3) SAS: SAS drives are usually much more expensive than SATA drives, but they offer significantly better throughput and, more importantly, lower latency. SAS drives typically spin at 10k or 15k rpm with an average latency of 2 to 3 milliseconds.
4) Fibre Channel: Fibre Channel drives are usually the most expensive, with performance characteristics similar to SAS drives but a different interface. The choice between Fibre Channel and SAS drives is usually determined by the storage array you select. Like SAS, they are typically offered in 10k and 15k rpm models with similar average latency. If you use a Fibre Channel SAN, make sure the switch and controller infrastructure is sized to handle the large amount of storage I/O generated by the consolidated servers.
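The rpm-to-IOPS relationship cited above can be estimated with a simple model: a random I/O costs roughly one average seek plus half a rotation. The seek times below are assumed typical values for illustration, not figures from any specific drive:

```python
def est_iops(rpm: int, avg_seek_ms: float) -> float:
    """Estimate random-I/O IOPS from seek time plus rotational latency.

    Average rotational latency is half a revolution: 0.5 * 60000/rpm ms.
    """
    rotational_ms = 0.5 * 60_000 / rpm
    return 1000 / (avg_seek_ms + rotational_ms)

# Assumed typical average seeks: ~4.6 ms (10k rpm), ~3.5 ms (15k rpm).
iops_10k = est_iops(10_000, 4.6)   # ~132 IOPS
iops_15k = est_iops(15_000, 3.5)   # ~182 IOPS
print(f"10k: {iops_10k:.0f} IOPS, 15k: {iops_15k:.0f} IOPS, "
      f"gain: {iops_15k / iops_10k - 1:.0%}")
```

With these assumed seek times the model predicts a gain of roughly 35-40% per drive, consistent with the figure quoted above.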
Disk redundancy architecture: It is recommended that you use a Redundant Array of Inexpensive Disks (RAID) for all Hyper-V host storage. According to Microsoft's official guidance, a Hyper-V host runs, and stores data for, multiple workloads, so RAID is essential to ensure availability in the event of a disk failure. In addition, a well-chosen and well-configured RAID array can improve overall performance.
1) RAID 1: RAID 1 refers to disk mirroring: two drives store the same information, so each drive is a mirror of the other. For every disk write, the system must write the same information to both disks. Because these dual write operations can degrade performance, many such systems use duplexing, where each mirrored drive has its own host adapter. Mirroring provides excellent fault tolerance but is relatively expensive to implement, because only half of the available disk space can be used for storage; the other half is used for the mirror.
2) RAID 5: Also known as striping with parity, this level is popular in low- to mid-tier storage systems. RAID 5 divides data into chunks that are distributed across the disks in the array and writes parity information across all disks in the RAID 5 set. Data redundancy is provided by this parity information. The data and parity are arranged on the array so that a data chunk and its corresponding parity are always on different disks. Because of the way the parity algorithm works, each write request results in several physical disk operations (the old data and parity must be read, and the new data and parity written), which degrades write performance. Striping with parity can provide better performance than disk mirroring (RAID 1); however, read performance degrades when a stripe is incomplete (for example, after a disk failure). RAID 5 is a low-cost option that uses drive space more efficiently than RAID 1.
3) RAID 10: This level is also known as mirroring with striping. RAID 10 uses a striped disk array that is then mirrored to a second, identical striped set. For example, five disks can form a striped array, which is then mirrored with a second set of five striped disks. RAID 10 combines the performance benefits of disk striping with the redundancy of disk mirroring. It provides the highest read and write performance of any of these RAID levels, at the cost of using twice the number of disks.
4) RAID 50: This is a nested RAID level that combines the block-level striping of RAID 0 with RAID 5 parity. It can be thought of as a RAID 0 array striped across multiple RAID 5 arrays. This level improves on RAID 5 write performance and provides better fault tolerance than a single RAID level. The exact configuration and number of disks determine the actual availability and performance characteristics of this RAID level. This RAID type has become a common feature even on low-end storage devices.
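The parity that underpins RAID 5 (and RAID 50) is a simple XOR across the data blocks of a stripe, which is what allows the array to reconstruct a lost block from the survivors. A minimal sketch with illustrative block contents:

```python
from functools import reduce

def xor_parity(blocks: list[bytes]) -> bytes:
    """XOR the corresponding bytes of equal-sized blocks."""
    return bytes(reduce(lambda a, b: a ^ b, group) for group in zip(*blocks))

# One stripe spread over three data disks, plus its parity block.
stripe = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_parity(stripe)

# The disk holding stripe[1] fails: XOR of the survivors and the
# parity block reconstructs the lost data.
recovered = xor_parity([stripe[0], stripe[2], parity])
print(recovered)  # → b'BBBB'
```

The reconstruction works because XOR is its own inverse: XOR-ing the parity with all remaining data blocks cancels them out, leaving only the missing block.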
When designing storage for Hyper-V host servers, it is generally recommended to use RAID on the system volumes in all host server architecture patterns, using RAID 1 or RAID 10. For data volumes, RAID 5 and RAID 50 carry inherent write performance penalties; therefore, those levels are generally not recommended for virtualized environments.
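The write penalty behind this recommendation can be quantified: each logical write costs extra back-end disk operations depending on the RAID level (commonly cited as 2 for RAID 1/10, and 4 for RAID 5: read old data, read old parity, write new data, write new parity). A sketch using those commonly cited penalty figures and an assumed example workload:

```python
# Commonly cited back-end operations per logical write for each level.
WRITE_PENALTY = {"RAID 0": 1, "RAID 1": 2, "RAID 10": 2, "RAID 5": 4}

def backend_iops(front_iops: int, write_fraction: float, level: str) -> float:
    """Back-end disk IOPS needed to serve a given front-end load."""
    reads = front_iops * (1 - write_fraction)
    writes = front_iops * write_fraction * WRITE_PENALTY[level]
    return reads + writes

# Assumed example: 1000 front-end IOPS with a 40% write mix.
for level in ("RAID 10", "RAID 5"):
    print(level, backend_iops(1000, 0.4, level))
```

For this write-heavy mix, RAID 5 needs 2200 back-end IOPS where RAID 10 needs 1400, which is why write-intensive virtualization workloads favor RAID 1 or RAID 10.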
Storage controller architecture: A storage controller can be a server add-in card, such as a SCSI or SAS controller, or a component of a mid-tier to high-end storage array. The storage controller provides the interface between the disk drives and the server or storage area network. Design features that affect storage controller performance include the interface or HBA type, the amount of cache, and the number of independent channels.
1) Disk controller or HBA interface: The disk controller interface determines the available drive types as well as the speed and latency of storage I/O. The following table summarizes the most commonly used disk controller interfaces.
Architecture | Throughput (theoretical max)
iSCSI (Gigabit Ethernet) | 125 MB/s
Fibre Channel (2 GFC) | 212.5 MB/s
SATA (SATA II) | 300 MB/s
SCSI (U320) | 320 MB/s
SAS | 375 MB/s
Fibre Channel (4 GFC) | 425 MB/s
Fibre Channel (8 GFC) | 850 MB/s
iSCSI (10 Gigabit Ethernet) | 1250 MB/s
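The Fibre Channel figures in the table follow directly from each link's line rate and its 8b/10b encoding, which carries 8 data bits in every 10 line bits. A quick check, assuming the standard 2.125/4.25/8.5 Gbaud line rates for 2/4/8 GFC:

```python
def fc_throughput_mb_s(gbaud: float) -> float:
    """Theoretical max payload MB/s of an 8b/10b-encoded Fibre Channel link."""
    data_bits_per_s = gbaud * 1e9 * 8 / 10  # strip 8b/10b encoding overhead
    return data_bits_per_s / 8 / 1e6        # bits -> bytes -> MB

for name, gbaud in (("2 GFC", 2.125), ("4 GFC", 4.25), ("8 GFC", 8.5)):
    print(f"{name}: {fc_throughput_mb_s(gbaud):.1f} MB/s")
```

Running this reproduces the 212.5, 425, and 850 MB/s entries in the table; real-world throughput will be lower once protocol framing and command overhead are included.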
2) Controller cache: The storage controller cache can hold data during write bursts or when the same data is accessed frequently. Because cache memory is much faster than physical disk I/O, it can improve performance.
3) Controller channels: The number of internal and external channels on the storage controller has a significant impact on overall storage performance. Multiple channels increase the number of simultaneous read/write I/O operations (IOPS) that can be performed, which is especially important when using advanced RAID arrays.
This article is from the "Xu Ting blog"; please keep this source: http://ericxuting.blog.51cto.com/8995534/1588087
Introduction to Hyper-V server storage