VMware Storage: SAN Configuration Basics


VMware storage is not as simple as mapping LUNs to physical servers. VMware vSphere allows system administrators to create multiple virtual machines on a single physical machine.

The underlying hypervisor, vSphere ESXi, enables guest virtual machines to use both internal and external storage devices at the same time. This article discusses the basics of building SANs on vSphere and what administrators should consider when deploying shared SAN storage.

VMware Storage: SAN Basics

vSphere supports internally connected disk devices, including JBODs, hardware RAID arrays, SSDs, and PCIe SSD cards. However, a major drawback of these storage formats is that they are directly attached to a single server.

SAN storage, by contrast, provides a shared, highly available, and resilient storage platform that can scale across an entire server deployment. In addition, storage vendors build vSphere support into their products, offering better performance and scalability than local storage deployments.

vSphere can be deployed with both NAS and SAN storage, but this article covers only SANs, that is, block devices accessed over the iSCSI, Fibre Channel, and FCoE protocols.

VMware File Systems and Datastores:

One of the main architectural features of vSphere block storage is the VMware File System (VMFS). Just as a traditional server formats a block device with a file system, vSphere formats a block LUN with VMFS to store virtual machines.

The vSphere unit of storage is the datastore, which consists of one or more concatenated LUNs. In many deployments there is a 1:1 correspondence between LUN and datastore, but this is not a configuration requirement.
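To see the datastore-to-LUN relationship on a live system, the following minimal sketch uses the pyVmomi Python SDK to list each VMFS datastore and the LUN extents backing it; the vCenter address and credentials are placeholders for illustration.

```python
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl

# Lab-only shortcut: skip certificate verification; use a trusted context in production.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

# Walk all datastores and print the LUN extents that back each VMFS volume.
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
for ds in view.view:
    info = ds.info
    if isinstance(info, vim.host.VmfsDatastoreInfo):
        extents = [e.diskName for e in info.vmfs.extent]
        print(f"{ds.name}: VMFS {info.vmfs.version}, extents: {extents}")
view.DestroyView()
Disconnect(si)
```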

As vSphere has moved through successive releases, VMFS has been updated and improved along with it; ESXi 5.1, the current release, uses VMFS version 5. Improvements in scalability and performance allow a single datastore to host many virtual machines.

Within a datastore, virtual machines are stored as virtual machine disk files (VMDKs). vSphere also allows direct connections to LUNs without VMFS formatting; these devices are called raw device mappings (RDMs). An RDM gives a virtual machine a direct connection to a LUN in the virtual environment. With the RDM feature, applications with heavy I/O demands can achieve significant performance gains, because commands are issued directly against the existing SAN LUN.

RDM also lets you mount an existing LUN. Suppose Exchange Server is already running on a SAN: when you virtualize it, you use VMware Converter, a Microsoft converter, or another third-party product to turn the physical machine into a virtual machine. If you convert only the C: drive, you can mount the original data store in its existing location. The server keeps its data where it is, with no downtime and no additional space allocated to migrate the data into a VMDK.
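For illustration only, the following pyVmomi sketch shows one way to build a reconfigure spec that attaches an existing SAN LUN to a virtual machine as an RDM. The device path, SCSI unit number, and disk mode are assumptions for the example; a production workflow would pick a free unit number, handle errors, and verify mapping-file placement.

```python
from pyVmomi import vim

def add_rdm_disk(vm, lun_device_name, physical_mode=True):
    """Return a task that adds an RDM disk backed by lun_device_name to vm (sketch)."""
    # Reuse the VM's existing SCSI controller.
    controller_key = next(d.key for d in vm.config.hardware.device
                          if isinstance(d, vim.vm.device.VirtualSCSIController))

    backing = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo()
    backing.deviceName = lun_device_name          # e.g. "/vmfs/devices/disks/naa.600..." (placeholder)
    backing.compatibilityMode = "physicalMode" if physical_mode else "virtualMode"
    backing.diskMode = "independent_persistent"

    disk = vim.vm.device.VirtualDisk()
    disk.backing = backing
    disk.controllerKey = controller_key
    disk.unitNumber = 1                           # assumption: SCSI unit 1 is free
    disk.key = -1

    disk_spec = vim.vm.device.VirtualDeviceSpec()
    disk_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    disk_spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
    disk_spec.device = disk

    spec = vim.vm.ConfigSpec(deviceChange=[disk_spec])
    return vm.ReconfigVM_Task(spec)               # monitor the returned task for completion
```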

VMware-SAN Connectivity:

vSphere supports the Fibre Channel, FCoE, and iSCSI block storage protocols.

The Fibre Channel protocol provides a multipathed, resilient infrastructure, but requires additional spending on dedicated storage networking, such as Fibre Channel switches and HBAs.

iSCSI, by contrast, offers a relatively inexpensive route to shared storage, because NICs are typically far cheaper than Fibre Channel HBAs and converged network adapters.

Multipathing was difficult to configure before the latest versions of vSphere, but this has improved. iSCSI connection speeds are currently limited to 1 Gbps or 10 Gbps. Finally, securing iSCSI devices takes more effort from administrators, because the protocol's built-in security features are more basic and less suited to a highly scalable environment.
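When redundancy matters, it is worth verifying how many paths each device actually has and which path selection policy is in effect. The hedged pyVmomi sketch below does this for a single host; it assumes a vim.HostSystem object `host` has already been retrieved, for example via a container view as in the earlier example.

```python
from pyVmomi import vim

def multipath_report(host):
    """Print the multipathing policy and path count for each device on one host (sketch)."""
    mp = host.config.storageDevice.multipathInfo
    for lun in mp.lun or []:
        policy = lun.policy.policy if lun.policy else "unknown"   # e.g. a PSP such as round robin
        print(f"{lun.id}: policy={policy}, paths={len(lun.path)}")
```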

Configuration restrictions:

VMware imposes some limits on the size of a block storage configuration. The following apply to both iSCSI and Fibre Channel:

LUNs per ESXi host: 256

Maximum volume size: 64 TB

Maximum file size: 2 TB minus 512 bytes

These limits are high enough for most users, but the number of LUNs in a large shared deployment can become a problem, so the number and type of datastores deployed within the vSphere architecture are critical.
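To keep an eye on the per-host LUN limit in a larger deployment, a simple report like the hedged pyVmomi sketch below can help; it assumes a ServiceInstance `si` connected as in the earlier example, and counts only SCSI disk devices.

```python
from pyVmomi import vim

LUN_LIMIT = 256  # per-host limit noted above

def report_lun_counts(si):
    """Print how many SCSI disks each ESXi host sees, relative to the per-host limit (sketch)."""
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.HostSystem], True)
    for host in view.view:
        luns = host.config.storageDevice.scsiLun or []
        disks = [l for l in luns if isinstance(l, vim.host.ScsiDisk)]
        print(f"{host.name}: {len(disks)} SCSI disks "
              f"({LUN_LIMIT - len(disks)} below the per-host limit)")
    view.DestroyView()
```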

Hypervisor Features:

The vSphere hypervisor includes several features for managing external storage.

Storage vMotion enables a virtual machine's files to be moved between datastores without downtime. This is a good way to balance load or migrate data off aging hardware.
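A Storage vMotion can also be triggered programmatically. The pyVmomi sketch below is a minimal version that relocates a named virtual machine's files to a named target datastore; the names are placeholder assumptions and error handling is omitted.

```python
from pyVmomi import vim

def storage_vmotion(si, vm_name, target_datastore_name):
    """Relocate a VM's files to another datastore without downtime (sketch)."""
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.VirtualMachine, vim.Datastore], True)
    vm = next(o for o in view.view
              if isinstance(o, vim.VirtualMachine) and o.name == vm_name)
    ds = next(o for o in view.view
              if isinstance(o, vim.Datastore) and o.name == target_datastore_name)
    view.DestroyView()

    spec = vim.vm.RelocateSpec(datastore=ds)   # move all VM files to the target datastore
    return vm.RelocateVM_Task(spec)            # returns a Task; monitor it for completion
```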

Storage DRS (SDRS) enables policy-based storage. New virtual machines can be placed according to service-based policies such as IOPS and capacity, and once a virtual machine is deployed and in use, SDRS keeps capacity and performance load-balanced across a group of similar datastores.

Storage Features:

vSphere also provides APIs that integrate the hypervisor with external storage arrays.

The vStorage APIs for Array Integration (VAAI) are a set of additional SCSI commands, introduced in ESXi 4.1, that allow the host to offload specific virtual machine and storage management operations to compatible storage hardware. With the array's help, the host performs these operations faster while consuming less CPU, memory, and storage network bandwidth.

These features are implemented by vSphere "primitives" that map directly to new SCSI commands. They include atomic hardware-assisted locking, which provides finer-grained file locking within VMFS; it offers an alternative way to protect the VMFS clustered file system's metadata and improves scalability for large clusters of ESXi servers sharing the same datastore.

Full copy offloads data replication to the array. The storage array completes the entire copy internally, so the ESXi server no longer has to read the data and write it back.

Block zeroing offloads the zeroing of VMFS blocks in a thin-provisioned environment to the array. The array can rapidly zero large numbers of storage blocks, speeding up the deployment and provisioning of virtual machines.

VAAI was later extended with SCSI UNMAP, which lets the hypervisor instruct the array to release freed space in a thin-provisioned environment.
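Whether a given device actually benefits from these offloads depends on the array. As a rough check, the hedged pyVmomi sketch below reads the vStorage (VAAI) support status that each host reports for its SCSI disks; it assumes a ServiceInstance `si` connected as in the earlier example, and the exact values reported can vary by release.

```python
from pyVmomi import vim

def vaai_support_report(si):
    """Print the reported vStorage (VAAI) hardware-acceleration status per device (sketch)."""
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.HostSystem], True)
    for host in view.view:
        for lun in host.config.storageDevice.scsiLun or []:
            if isinstance(lun, vim.host.ScsiDisk):
                # Typical values: "vStorageSupported", "vStorageUnsupported", "vStorageUnknown"
                print(host.name, lun.canonicalName, lun.vStorageSupport)
    view.DestroyView()
```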

Hardware-assisted locking:

The VMFS file system allows multiple hosts to access the same shared logical volume concurrently, which is necessary for vMotion to work. VMFS has a built-in safety mechanism to prevent a virtual machine from being run or modified by more than one host at the same time. vSphere's traditional file-locking mechanism is the SCSI reservation, which locks the entire logical volume with the SCSI RESERVE command during storage-related operations such as the creation or growth of an incremental snapshot. This prevents conflicts, but it also delays storage work, because other hosts must wait for the SCSI RELEASE command to unlock the volume before they can continue writing. The Atomic Test and Set (ATS) command is a hardware-assisted locking mechanism that offloads locking to the storage array and locks individual disk blocks rather than the entire logical volume. The rest of the logical volume remains accessible to other hosts while the lock is held, which avoids performance degradation. This also allows more hosts to be deployed in the same cluster and more virtual machines on the same logical volume backed by a VMFS datastore.

Full copy:

With full copy, virtual machine deployment is greatly accelerated, because the copy is processed within the storage array, or between arrays where the vendor's arrays support XCOPY between systems; operations that used to take minutes complete in seconds. At the same time, the ESXi server's CPU load drops, because far less data passes through it. The benefit is most relevant to virtual desktop infrastructure environments, where hundreds of virtual machines may be deployed from a single template.

For Storage vMotion, migration times shrink in the same way: the copy no longer has to pass through the ESXi server on its way to the array, which frees up storage I/O and server CPU cycles.

Full copy saves not only processing time but also server CPU, memory, network bandwidth, and storage front-end controller I/O. For most of these metrics, the reduction can be as much as 95%.

Block Zeroing:

Having the array perform bulk zeroing of the disk speeds up the standard initialization process. One use of block zeroing is creating virtual disks in eager-zeroed thick format. Without block zeroing, the creation command must wait until the array has finished zeroing the disk, which can take a long time for large disks. With block zeroing (implemented with the SCSI WRITE SAME command), the array immediately acknowledges the request as if the zeroing were already complete, and then finishes zeroing the blocks in the background.

The vStorage APIs for Storage Awareness (VASA) are another set of APIs that allow vSphere to learn more about the underlying storage resources in the array, including characteristics such as RAID level, thin provisioning, and deduplication. Another thin-provisioning problem addressed in this area is space reclamation. When you delete a file on Windows or Linux, the file is not physically removed from disk; it is marked as deleted and only overwritten later as new files are created. In most cases this is not a problem, but for thin virtual disks on a thin-provisioned datastore it can make the growth of thin volumes impossible to control, because the freed disk space is never returned to the storage array after files are deleted.

Key steps for deploying SAN storage:

Storage administrators should consider the following steps when deploying SAN storage:

Vendor and feature Support

Most, but not all, storage vendors support advanced vSphere features such as VAAI and VASA. If you plan to use these features, confirm support carefully. Currently, the vStorage APIs for Array Integration apply only to block-based storage arrays (Fibre Channel or iSCSI) and are not supported for NFS storage. Vendor support for VAAI varies considerably: some vendors, such as EMC, adopted these features quickly, while others take much longer to integrate them across all of their array models. You can check which storage arrays support specific vStorage API features in VMware's storage compatibility guide: search for your array to see whether it supports VAAI and, if so, which other APIs are supported.

HBA support and dedicated iSCSI connectivity

If the administrator plans to deploy Fibre Channel, the HBA must be on the VMware Hardware Compatibility List. The number of HBAs per server depends on the expected workload, with at least two required for hardware redundancy. For iSCSI, dedicated network cards are needed, again with redundant links.

Datastore size

Where possible, create datastores as large as the storage product's limits allow, especially when thin provisioning is in use. This reduces the likelihood that data will have to be moved later.
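A periodic capacity report makes it easier to size datastores and watch thin-provisioned growth. The sketch below, again assuming a connected pyVmomi ServiceInstance `si`, prints total space, free space, and used percentage per datastore.

```python
from pyVmomi import vim

def datastore_capacity_report(si):
    """Print capacity, free space, and used percentage for each datastore (sketch)."""
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.Datastore], True)
    for ds in view.view:
        s = ds.summary
        used_pct = 100.0 * (s.capacity - s.freeSpace) / s.capacity if s.capacity else 0.0
        print(f"{s.name}: {s.capacity / 2**40:.1f} TB total, "
              f"{s.freeSpace / 2**40:.1f} TB free ({used_pct:.0f}% used)")
    view.DestroyView()
```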

Datastore type

The datastore is the finest granularity at which virtual machine storage performance can currently be managed. Administrators should therefore match datastores to workload types; for example, test and development data should be placed on lower-performance storage. When mapping datastores to LUNs, storage administrators should also create separate datastores for LUNs protected by array-based synchronous replication.

The future of VMware and storage:

VMware has outlined the future evolution of vSphere block storage in the form of Virtual Volumes (VVols). Today, a virtual machine consists of multiple files on a datastore that maps to physical LUNs. VVols offer the chance to abstract the virtual machine's files into a VVol container, enabling quality-of-service features at the level of the individual virtual machine. Currently, QoS can only be a property of an entire datastore, which can force data migrations just to ensure that a virtual machine receives the service level it requires.

Alongside VMware's own work, other vendors have built platforms specifically for VMware. Tintri is a good example, although it uses NFS rather than block protocols. The Tintri VMstore platform understands the file types that make up a virtual machine, so quality of service, performance tracking, and flash utilization can be targeted accurately at the virtual machine level.
