VMware connects to shared storage in three different ways: FC SAN, IP SAN (that is, iSCSI), and NAS (NFS). This article is not about the differences between the three, or between SAN and NAS in general; for that, please consult Google.
Second, this article does not cover FC SANs: if your environment already has a SAN that supports MPIO and has enough storage capacity, by all means make the most of it. Many articles point out that NAS/NFS is an ideal medium for low-I/O-demand workloads, while Fibre Channel is ideal for high-demand workloads. (Reference, translated: http://storage.chinabyte.com/387/8638887.shtml)
So, for VMware, is NFS better or iSCSI better? Many documents, including those from NetApp and VMware, do not give a definitive answer. The more common advice is to go with whichever environment you are already familiar with. A quick way to take stock of what your hosts currently mount is sketched below.
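As a starting point, it helps to inventory the datastore types already in use. The following is a minimal Python sketch using pyVmomi (the original article names no tooling, so the library and the connection details are assumptions; the host name and credentials are placeholders) that lists each datastore and its backing type: VMFS for block storage such as FC or iSCSI LUNs, NFS for NAS mounts.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details; substitute your own vCenter/ESXi host.
# The unverified SSL context is for lab use only.
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="secret", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        # summary.type is "VMFS" for block storage (FC/iSCSI LUNs)
        # and "NFS" (or "NFS41") for NAS mounts.
        print(f"{ds.summary.name}: {ds.summary.type}")
finally:
    Disconnect(si)
```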
Cost
First, the factor that cannot be ignored is cost. If you are building a storage network from scratch, the cost ordering should be FC SAN > HW iSCSI > SW iSCSI = NFS. We do not consider 10G Ethernet or FCoE here, because 10G equipment and accessories are still too expensive.
Real-world case: in our organization's environment, cost considerations ruled out a SAN (a SAN is usually more expensive: you have to buy dedicated HBA cards and at least two SAN switches, and your storage must have FC SAN interfaces). Our internal storage network is mainly of two kinds: iSCSI and NFS. Microsoft Cluster Service workloads (such as email and MS SQL) use iSCSI, and Oracle DB uses NFS. As a result, our VMware environment is left with the same two options, iSCSI and NFS. (Hardware iSCSI was also ruled out on cost grounds: CPU resources are not a bottleneck for us, so its throughput offload is not an advantage.)
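For completeness, here is a minimal sketch of mounting an NFS export as a datastore on an ESXi host, the kind of setup described above. Again this assumes pyVmomi; the NFS server address, export path, and datastore name are placeholders, not values from the original article.

```python
from pyVmomi import vim

def mount_nfs_datastore(host, server, remote_path, local_name):
    """Mount an NFS export as a datastore on an ESXi host.

    `host` is a vim.HostSystem (for example, obtained from a container
    view as in the listing sketch above); the other arguments are
    placeholders.
    """
    spec = vim.host.NasVolume.Specification(
        remoteHost=server,        # e.g. "nfs.example.com"
        remotePath=remote_path,   # e.g. "/exports/vmware"
        localPath=local_name,     # datastore name as seen by ESXi
        accessMode="readWrite",
        type="NFS",               # NFSv3
    )
    return host.configManager.datastoreSystem.CreateNasDatastore(spec)
```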
Cost: SW iSCSI and NFS are tied.
Performance
NFS is IP-based, but it is a file-level rather than a block-level storage protocol, and it appears to carry somewhat more network overhead. However, performance testing shows that NFS and iSCSI are almost equal: read performance is comparable, and NFS write performance is slightly worse. (Reference: http://www.vmware.com/files/pdf/perf_vsphere_storage_protocols.pdf)
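If you want a rough sanity check of these numbers in your own environment, a simple sequential-throughput probe is easy to write. Below is a minimal Python sketch (not from the original article) that times sequential writes and reads against a file on a datastore-backed mount; the test path is a placeholder, and a purpose-built tool such as fio or Iometer will give far more rigorous results.

```python
import os
import time

BLOCK = 1 << 20  # 1 MiB blocks

def write_throughput(path, total_mb=1024):
    """Sequentially write total_mb MiB and return MB/s."""
    buf = os.urandom(BLOCK)
    start = time.monotonic()
    with open(path, "wb") as f:
        for _ in range(total_mb):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())  # push data to the datastore, not just the cache
    return total_mb / (time.monotonic() - start)

def read_throughput(path):
    """Sequentially read the file back and return MB/s."""
    total = 0
    start = time.monotonic()
    with open(path, "rb") as f:
        while chunk := f.read(BLOCK):
            total += len(chunk)
    return (total / BLOCK) / (time.monotonic() - start)

if __name__ == "__main__":
    # Placeholder path on an NFS- or iSCSI-backed mount.
    test_file = "/mnt/datastore1/throughput.tmp"
    print(f"write: {write_throughput(test_file):.1f} MB/s")
    print(f"read:  {read_throughput(test_file):.1f} MB/s")
    os.remove(test_file)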
Read performance (note that FC stands apart here because it runs at 4 Gb/s, while the other three protocols are on 1 Gb/s Gigabit Ethernet):

[chart: read throughput by protocol]

Write performance:

[chart: write throughput by protocol]