Fault symptom:
A new version of the VNX OE (Operating Environment) has been officially released. The corresponding versions are VNX OE for Block v05.32.000.5.006 and VNX OE for File v7.1.47.5. This article summarizes the new content in this VNX OE update.
Solution:
New features and enhancements
· VNX Block-to-Unified (Block to Unified) plug-and-play upgrade service: allows users to upgrade a Block-only system to Unified storage.
· Upgrade Readiness Checker wizard: checks whether a system is ready for the Block-to-Unified upgrade.
· Pre-installed VNX OE for File software: a manufacturing-process improvement. All VNX systems (Block, File, and Unified) are shipped with a new File image pre-installed as a quick-install package on private File LUNs. The format of this new quick-install File image is the same as that of the Block-to-Unified upgrade package (PBU) used by field services. Compared with the current quick-install image format, the new format reduces deployment time.
· On-site installation of VNX OE for File during a Block-to-Unified upgrade: supports field upgrades from Block to Unified storage. The new Block-to-Unified upgrade package (PBU) can be downloaded with the software-upgrade tool in the existing Unisphere Service Manager (USM).
· Support for in-family model upgrades: allows Data-In-Place (DIP) upgrades within the VNX family. A DIP upgrade lets you keep the installed SAS DAEs, their drives, and the SLIC I/O modules.
· Windows 2008 R2 BranchCache support: supports Windows Server 2008 R2 BranchCache, which allows a Windows client in a branch office to cache a file from a server locally; the cached copy is served whenever the same file is requested again. The CIFS server in VNX for File can act as the central-office content server for BranchCache.
· VMware VAAI for NFS, snaps of snaps: supports VMDK snapshots of at least one level (source > snapshot) on an NFS file system. Although this capability is initially exposed only through the VAAI for NFS interfaces, it may later be offered through VSI plug-ins as well. To let VMware View 5.0 use VAAI for NFS on a VNX system, the file system must be created with "VMware VAAI nested clone support" enabled, and the emcnasplugin-1-0.10.zip plug-in must be installed on the ESX server.
· Tier-based load balancing: redistributes data slices across the drives within a tier of a given storage pool to improve performance, extending the existing automatic load-balancing capability. For example, slices are relocated across the RAID groups within a tier, based on their activity level, to balance load within that tier. This relocation takes place during a user-defined relocation window.
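To make the relocation idea in this bullet concrete, here is a minimal toy sketch (invented names and numbers, not EMC code) of the balancing criterion: greedily move the hottest slice from the most-loaded RAID group in a tier to the least-loaded one, as long as doing so narrows the load gap.

```python
# Toy sketch of activity-based slice rebalancing within one storage tier.
# RaidGroup, slice ids, and activity scores are all invented for this example.

from dataclasses import dataclass, field

@dataclass
class RaidGroup:
    name: str
    slices: dict = field(default_factory=dict)   # slice id -> activity score

    @property
    def load(self):
        return sum(self.slices.values())

def rebalance(groups):
    """Greedily move the hottest slice from the most-loaded group to the
    least-loaded one while the move narrows the load gap."""
    while True:
        groups.sort(key=lambda g: g.load)
        cold, hot = groups[0], groups[-1]
        gap = hot.load - cold.load
        if not hot.slices:
            break
        sid = max(hot.slices, key=hot.slices.get)
        if hot.slices[sid] >= gap:
            break        # moving this slice would not narrow the gap
        cold.slices[sid] = hot.slices.pop(sid)

rg0 = RaidGroup("rg-0", {"s1": 10, "s2": 8, "s3": 2})
rg1 = RaidGroup("rg-1", {"s4": 1, "s5": 1})
rebalance([rg0, rg1])
print(rg0.load, rg1.load)    # loads end up close: 10 and 12
```

A production relocator would additionally respect the user-defined relocation window and bound the number of concurrent moves; the sketch only captures the activity-based placement decision.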
· Improved block compression performance: provides improved block compression performance, including a faster compression rate, less impact on the storage system, and less impact on host response time.
· Deeper file compression algorithm: provides an additional option to apply a deeper compression algorithm for greater capacity savings when file-level compression is used. Third-party application servers, such as a FileMover Appliance Server, can select this option for data types (for example, per metadata definitions) that suit a deeper algorithm.
Compared with the default compression algorithm, the deeper algorithm is expected to save about 15% more capacity. The trade-off is that it takes the system longer to complete the initial compression (no more than 4 times the default) and to decompress data for reads (no more than 2 times the default).
This option can be enabled on the Compression tab of the file system; the default setting is the standard compression algorithm.
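Rough arithmetic for the trade-off quoted above. Only the "+15% capacity saved", "4x initial compression time", and "2x read decompression time" ratios come from the release notes; the baseline savings ratio and data-set size below are assumed example figures, and "15% more" is read here as 15 additional percentage points.

```python
# Back-of-the-envelope comparison of the two file compression options.
# logical_gb and standard_savings are assumed example values.

def compressed_size(logical_gb, savings_ratio):
    """On-disk size after compression saves `savings_ratio` of the data."""
    return logical_gb * (1 - savings_ratio)

logical_gb = 1000.0        # assumed data set size
standard_savings = 0.40    # assumed savings of the default algorithm
deep_savings = 0.55        # standard + the quoted 15 percentage points

print(f"standard algorithm: {compressed_size(logical_gb, standard_savings):.0f} GB on disk")
print(f"deep algorithm:     {compressed_size(logical_gb, deep_savings):.0f} GB on disk")
print("deep costs up to 4x the initial compression time and 2x the read decompression time")
```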
· Rebalancing when drives are added to a storage pool: redistributes data slices across all drives of the expanded storage pool, including the newly added ones, to improve performance.
· DLU-to-TLU conversion when block compression is enabled: provides an internal (not user-initiated) mechanism that, when compression is enabled on a Thick pool LUN, converts the Thick (Direct) LUN to a Thin LUN in place rather than migrating it. When compression is later disabled, the LUN is converted back to Thick, again without user action.
· Mixed RAID types in a storage pool: allows users to define the RAID type per storage tier in a pool, rather than requiring the whole pool to use a single RAID type.
· Improved TLU performance, to no worse than 115% of FLU: improves Thin LUN (TLU) performance so that host response time stays within 115% of that of a fully provisioned LUN (FLU), while also reducing the impact on the storage system.
· Reportable compression capacity savings: shows the capacity saved by compression, separately from the savings attributable to thin provisioning, so that users can quantify the incremental benefit of compression. This matters because compression affects performance, and users need enough information for a cost-benefit analysis.
· New tiering policy: adds a "Start High, then Auto-Tier" option for storage pools. When this policy is selected for a LUN, its initial data is allocated to the highest tier, and subsequent tiering is driven by activity level.
· New storage-pool RAID options: adds two storage-pool RAID options, 8+1 RAID 5 and 14+2 RAID 6, for better capacity efficiency. These options apply only to new storage pools and can be managed from both the GUI and the CLI.
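The efficiency gain from the wider RAID widths is simple parity arithmetic. The 8+1 and 14+2 widths come from the text; the 4+1 and 6+2 rows below are the classic narrower widths, added here only for contrast.

```python
# Capacity efficiency of a RAID group = data drives / total drives.

def usable_fraction(data_drives, parity_drives):
    """Fraction of raw RAID-group capacity available for user data."""
    return data_drives / (data_drives + parity_drives)

print(f"RAID 5  4+1 : {usable_fraction(4, 1):.1%}")    # 80.0%
print(f"RAID 5  8+1 : {usable_fraction(8, 1):.1%}")    # 88.9%
print(f"RAID 6  6+2 : {usable_fraction(6, 2):.1%}")    # 75.0%
print(f"RAID 6 14+2 : {usable_fraction(14, 2):.1%}")   # 87.5%
```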
· E-Trace enhancement, active files and other per-file-system statistics: allows users to view the active files in a file system or quota tree; files can be identified by path name as well as by ID.
o It also provides APIs so that DPA, Unisphere, or other management tools can build graphical interfaces on E-Trace, including the ability to save data for trend analysis.
· Unisphere Quality of Service Manager support for VNX Snapshots: provides Unisphere Quality of Service Manager (UQM) support for both the source LUNs and the snapshot LUNs introduced by the VNX Snapshots feature.
· Unisphere Analyzer support for new VNX features: supports all new VNX features in Unisphere Analyzer, including but not limited to:
o VNX Snapshots
o 64-bit counters
· Unified network services: provides several enhancements that improve the user experience, such as:
o DNS support: host names, in addition to static IPv4 addresses, can be used for external network services such as iSNS, NTP, and LDAP
o Support for IPv6 DNS, NTP, and iSNS servers
o Support for LDAP nested groups and for specifying server certificates
o Support for NTP authentication on VNX for File (Control Station)
o Unified DNS, NTP, and LDAP settings across the Unisphere domain
· Continuous monitoring: monitors key storage performance information so that appropriate action can be taken when a specified condition is detected. Multiple monitoring metrics can be configured; by default, CPU usage, memory usage, and NFS I/O latency are monitored. When a monitoring event is triggered, the system takes a series of actions, including logging the event, starting detailed log collection for a specified period, and sending an email or SNMP trap notification.
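The monitor-then-act pattern this bullet describes can be sketched as follows. The metric names, thresholds, stubbed sampler values, and the single log-event action are all illustrative assumptions, not the product's actual defaults.

```python
# Sketch of a threshold monitor: sample each metric, and when a limit is
# crossed, run the configured actions (log, start collection, notify).

import logging

logging.basicConfig(level=logging.INFO)

# metric name -> (threshold, sampler); samplers are stubbed constants here.
THRESHOLDS = {
    "cpu_pct":        (90.0, lambda: 95.0),
    "memory_pct":     (85.0, lambda: 60.0),
    "nfs_latency_ms": (20.0, lambda: 12.0),
}

def log_event(name, value, limit):
    logging.info("metric %s=%.1f exceeded limit %.1f", name, value, limit)

def check_once(actions):
    """One monitoring pass; returns the names of metrics that triggered."""
    triggered = []
    for name, (limit, sample) in THRESHOLDS.items():
        value = sample()
        if value > limit:
            triggered.append(name)
            for act in actions:
                act(name, value, limit)
    return triggered

print(check_once([log_event]))   # only the stubbed CPU sample exceeds its limit
```

In the real feature, the action list would also include starting detailed log collection and sending email or SNMP trap notifications; those are just additional callables in this pattern.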
· Customizable VNX Unisphere
· VNX Snapshots: provides VNX Snapshots, a write-in-place, pointer-based snapshot capability. The initial release supports only block LUNs and requires pool LUNs; file-system support will follow in a later release.
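To make the "write-in-place, pointer-based" idea concrete, here is a toy model (invented names throughout, not EMC internals): taking a snapshot copies only the block-pointer table, and a later write is redirected to a freshly allocated block, so the original data stays in place and the snapshot keeps seeing it.

```python
# Toy model of a pointer-based snapshot where original data stays in place.

class ToyLun:
    def __init__(self, nblocks):
        self.store = {}     # physical block id -> contents
        self.next_pb = 0    # next free physical block
        self.table = {}     # logical block -> physical block
        for lb in range(nblocks):
            self.write(lb, b"\x00")

    def write(self, lb, data):
        # Redirect the write to a fresh physical block; the previously
        # mapped block is left untouched, so snapshots still reference it.
        self.store[self.next_pb] = data
        self.table[lb] = self.next_pb
        self.next_pb += 1

    def snapshot(self):
        # No data is copied -- only the pointer table is duplicated.
        return dict(self.table)

    def read(self, lb, table=None):
        mapping = self.table if table is None else table
        return self.store[mapping[lb]]

lun = ToyLun(4)
snap = lun.snapshot()          # cheap: copies 4 pointers, zero data blocks
lun.write(2, b"new")
print(lun.read(2), lun.read(2, snap))   # b'new' b'\x00'
```

Because snapshot creation copies no data, snapshots are fast and space-efficient; the cost is paid gradually as new writes allocate fresh blocks.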
· NDMP (Network Data Management Protocol) v4 IPv6 extension: provides support for the NDMP v4 IPv6 extension, authored and verified by Symantec, NetApp, and EMC, for two-way and three-way NDMP backups in IPv6 network environments.
· NDMP access time: preserves the last access time (atime) of files. Earlier versions did not retain this information during an NDMPCopy, so inactive data could not be identified and archived after a data migration.
· SRDF (Symmetrix Remote Data Facility) interoperability of the Control Station: supports failover between local and remote VNX gateways.
· File-Level Retention (FLR) automatic locking and deletion: provides an internal policy engine on the file system that can automatically set a retention period and apply it to files as they are written. It can also automatically delete files once their retention period expires.
· Default minimum/maximum file-level retention periods
· File-level retention date: a retention date as far out as January 1, 2100 can be entered, to meet internal governance requirements.
· Increased bandwidth for a single drive
· Unified VNX configuration files saved in XML format
· Unified support channels: adds Live Chat to Unisphere Service Manager (USM)
· UDoctor Control Station integration: integrates the UDoctor tools into the VNX Control Station. UDoctor comprises AHA (Array Health Analyzer), TRT (TRiiAGE Real Time), TOMS (TRiiAGE On Management Station), and other tools; previously, UDoctor was available only as a standalone Windows management tool.
· Support for VASA (VMware vStorage APIs for Storage Awareness)
· Extended VASA support
· FIPS 140-2 security compliance