New advances in storage for Hyper-V


The introductory articles so far have looked fairly deeply at Hyper-V's scalability features: NUMA, Hyper-V Replica, and virtual machine monitoring. Now let's focus on a new Hyper-V topic, the improvements in Hyper-V storage: the new VHDX format, storing virtual machines on file shares, the improved Cluster Shared Volumes (CSV), SMB Direct (RDMA), guest Fibre Channel, and Offloaded Data Transfer (ODX).

Let's focus on file sharing. If you had to pick the fundamental technology in Windows Server 2012 that represents the biggest leap forward, it would be SMB 3.0. File sharing has existed for about as long as Windows itself, but the implementation in Server 2012 (and Windows 8) is a different generation of technology. Its performance can reach 97 to 98 percent of direct-attached storage (DAS), and it is optimized for server application workloads such as SQL Server 2012 databases and Hyper-V virtual machine disks, which can now run from an ordinary file share. The flexibility this offers is unprecedented.
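
As a rough sketch of what provisioning such a share looks like (the server, domain, and share names below are illustrative, not from this article), the Hyper-V host computer accounts need Full Control at both the share and the NTFS level:

```powershell
# Minimal sketch, assuming a folder D:\Shares\VMs on the file server and two
# Hyper-V hosts named HV01 and HV02 in the CONTOSO domain (all names illustrative).
New-SmbShare -Name "VMs" -Path "D:\Shares\VMs" `
    -FullAccess "CONTOSO\HV01$", "CONTOSO\HV02$", "CONTOSO\Domain Admins"

# Copy the share permissions onto the NTFS ACL of the underlying folder.
(Get-SmbShare -Name "VMs").PresetPathAcl | Set-Acl
```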

Virtual machines that store their VHD(X) files on an SMB 3.0 file share can be live migrated between Hyper-V hosts, but don't throw away your cluster architecture: this feature by itself does not provide high availability. If a Hyper-V host fails, the other hosts are not notified (because they are not in the same cluster), and the virtual machines are not automatically restarted.
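
Creating a virtual machine whose files live on the share is then just a matter of pointing the paths at the UNC location. A minimal sketch, assuming a file server FS01 with an SMB 3.0 share named VMs (both names are made up):

```powershell
# Create a VM whose configuration and virtual disk both live on the SMB 3.0 share.
New-VM -Name "Test-VM" `
       -MemoryStartupBytes 2GB `
       -Path "\\FS01\VMs" `
       -NewVHDPath "\\FS01\VMs\Test-VM\Test-VM.vhdx" `
       -NewVHDSizeBytes 60GB
```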

Microsoft does outline a converged cluster design in which some nodes are storage nodes providing shared storage to the Hyper-V host nodes, but note that having one machine act as both a file share host and a Hyper-V host is not supported. If the connection between a Hyper-V host and its storage is temporarily interrupted, Hyper-V caches the input/output traffic at both ends for up to one minute. SMB 3.0 Multichannel takes full advantage of all available network paths between the host and the file share with no additional configuration (the NIC teaming configuration described earlier is needed only if you also want to aggregate other protocols), so an accidentally unplugged network cable does no harm.
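
You can see Multichannel at work without configuring anything. A quick sketch of the checks you might run on a Hyper-V host:

```powershell
# Verify that SMB Multichannel is spreading traffic to the file server
# across the available network paths (nothing needs to be switched on).
Get-SmbMultichannelConnection

# Show which client interfaces SMB considers usable, with their RSS/RDMA capability.
Get-SmbClientNetworkInterface
```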

The potential data center designs become more interesting when you consider using a storage area network (SAN) as the back-end storage for the SMB 3.0 file shares, together with the new version of Microsoft's Cluster Shared Volumes (CSV) file system. This architecture, known as a Scale-Out File Server (SOFS), is intended for server application workloads (SQL Server and Hyper-V) but not for general document file sharing (the traditional file server cluster role remains appropriate for that). In this scenario, every file share host can access the same back-end data and serve it to the Hyper-V nodes; if one file share host fails, clients fail over transparently to another.
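
As a sketch of how the SOFS role and its shares are created, assuming a failover cluster of file server nodes with CSV-backed storage already in place (the names are illustrative):

```powershell
# Add the Scale-Out File Server role to the existing failover cluster.
Add-ClusterScaleOutFileServerRole -Name "SOFS01"

# Create a continuously available share on a CSV for the Hyper-V hosts.
New-Item -ItemType Directory -Path "C:\ClusterStorage\Volume1\VMs"
New-SmbShare -Name "VMs" -Path "C:\ClusterStorage\Volume1\VMs" `
    -FullAccess "CONTOSO\HV01$", "CONTOSO\HV02$" -ContinuouslyAvailable $true
```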

CHKDSK has also been improved in Windows Server 2012: the time-consuming disk analysis is separated from the repair phase, so the analysis can run while the disk remains online. This means CHKDSK can now check and repair very large volumes with extremely short downtime (minutes rather than hours). Better still, on an SOFS the quick repair phase can be performed by one node while the other nodes continue to access the volume, so downtime during disk checking is effectively zero.
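
The split between analysis and repair is exposed through the Storage module cmdlets, which wrap the new CHKDSK modes. A minimal sketch (the drive letter is illustrative):

```powershell
# Online analysis: corruptions are logged while the volume stays available.
Repair-Volume -DriveLetter D -Scan

# Brief, targeted repair of only the corruptions recorded by the scan.
Repair-Volume -DriveLetter D -SpotFix
```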

While we're discussing storage for Hyper-V in Windows Server 2012, it's important to mention Storage Spaces, which lets you build a "SAN-like," scalable storage environment from commodity servers and disk enclosures, with built-in data protection. Storage Spaces won't replace high-end SANs, but this type of storage is effective in many scenarios, from small and midsize businesses up to large enterprise environments. Another option for highly available Hyper-V storage is the built-in support for shared SAS enclosures in which the RAID controllers in each host synchronize their information, a technique known as clustered PCI RAID.
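
A minimal sketch of building such a space from commodity disks, assuming the server has unused physical disks eligible for pooling (the pool and disk names are illustrative):

```powershell
# Gather the disks that are eligible to be pooled and create a storage pool.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Pool1" `
    -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks $disks

# Carve a mirrored virtual disk out of the pool so the data survives a disk failure.
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "VMData" `
    -ResiliencySettingName "Mirror" -Size 500GB
```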

The last piece of the high-speed file sharing story is SMB Direct, which uses Remote Direct Memory Access (RDMA) to deliver exceptional storage access performance. Improvements inside Hyper-V also raise the performance ceiling: one input/output channel per 16 virtual processors (Windows Server 2008 R2 provided only a single input/output channel for the entire virtual machine), an input/output queue per SCSI disk (Windows Server 2008 R2 was limited to one queue per controller), and input/output interrupts that are spread dynamically across virtual processors rather than using just one as in previous versions. Microsoft has demonstrated a single virtual machine achieving one million input/output operations per second.
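
If you want to check whether SMB Direct can be used, look at the RDMA capability of the adapters involved. A minimal sketch; these cmdlets only report status, because SMB switches to RDMA automatically when both ends support it:

```powershell
# RDMA state of the local network adapters.
Get-NetAdapterRdma

# Client interfaces that SMB can use for RDMA.
Get-SmbClientNetworkInterface | Where-Object { $_.RdmaCapable }
```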

The Volume Shadow Copy Service (VSS) has also been improved, so you can take consistent backups of data on a remote file share just as you could with local storage in Windows Server 2008 R2.

CSV: Another file system?

If you look at a CSV-enabled shared volume in Server Manager on Windows Server 2012, you'll see the file system reported as CSVFS, although it still relies on NTFS underneath. Beyond the zero-downtime CHKDSK mentioned earlier, CSV 2.0 brings other improvements, including better compatibility with backup software. Older versions mounted shared storage using custom reparse points, which forced backup vendors to customize their applications to understand how to back up shared storage. Windows Server 2012 uses standard mount points, which should make life easier for independent software vendors (ISVs).
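
You can see this for yourself on a cluster node; a quick sketch:

```powershell
# List the Cluster Shared Volumes and their current owner nodes.
Get-ClusterSharedVolume

# CSV volumes report the CSVFS file system, even though NTFS sits underneath.
Get-Volume | Where-Object { $_.FileSystem -eq "CSVFS" }
```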

CSV 2.0 also has a new feature called the CSV cache, which uses system memory on the file share host to cache read content. This can greatly improve performance, especially in Virtual Desktop Infrastructure (VDI) scenarios, where input/output is often the bottleneck. The recommended starting cache size is 512 MB, but you should test against the actual load in your environment. There is no limit on how many CSV volumes you can have, nor on how many files each volume holds, so any restriction comes down to what your hardware supports.
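
As a sketch of how the cache is enabled on a Windows Server 2012 cluster (the size and disk name are illustrative, and the property names shown are the Windows Server 2012 ones):

```powershell
# Reserve 512 MB of RAM on each node for the CSV read cache.
(Get-Cluster).SharedVolumeBlockCacheSizeInMB = 512

# Turn the cache on for a specific CSV.
Get-ClusterSharedVolume "Cluster Disk 1" |
    Set-ClusterParameter CsvEnableBlockCache 1
```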
