Chapter I. EMC Product Introduction
1.1. Terminology
DAE--Disk Array Enclosure. The chassis used to mount the disks.
DPE--Disk Processor Enclosure. A controller unit with disks, and the main component of the storage system. It contains the system disks, controllers, power modules, cache, and CPUs, and is the core device that reads and processes the storage system's data.
SPS--Standby Power Supply. In the event of an unexpected power outage, the SPS continues to supply power until the data in the controllers' cache has been written to the hard disks, protecting it from loss. The SPS can sustain roughly 3-5 minutes of full-load operation, which is enough time to flush the cache to disk.
1.2. Storage System Views
Front view
Rear View
Chapter II. Initialization of the Storage System
2.1. Initialization Steps
Initialization is performed only once, at installation time, and is not covered in detail here.
During initialization, the storage management IP addresses, user name, and password are configured.
Device Name | Number of Cables | User Name/Password | IP Address   | Note
VNX5100-A   | 2                | admin/***          | 192.168.*.41 | SPA
            |                  |                    | 192.168.*.42 | SPB
DS-300B-1   | 1                | admin/****         | 192.168.*.43 | A
DS-300B-2   | 1                | admin/****         | 192.168.*.44 | B
2.2. Logging In to the VNX5100 Storage System
(1) First verify that Java (version 1.6.0 or later) is installed on the client computer.
(2) After initialization is complete, open Internet Explorer and enter either of the IP addresses just configured for controller A or B. Enter the user name and password, then click Login.
User name: admin  Password: *
Click Accept for Session.
(3) The management interface is displayed.
2.3. Setting the VNX5100 Cache
Click System Options, find the Manage Cache option on the right, and click it.
On the SP Memory tab, set the read and write caches as needed; a write:read cache ratio of 4:1 is typical. In this project, set the read cache to 100 MB and the write cache to 780 MB, then click Apply and OK.
Click Yes.
The cache is assigned successfully.
Then enable the storage system's read and write caches on the SP Cache tab and click Apply.
Click Yes to confirm.
Click OK; the read and write caches are now enabled.
If you need to adjust the cache sizes later, disable the read and write caches before adjusting them.
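The 4:1 write:read rule of thumb above can be sketched as a quick planning helper. This is a hypothetical aid, not an EMC tool; note that this project's 780 MB/100 MB split deliberately uses a higher write ratio than 4:1.

```python
def split_cache(total_mb, write_parts=4, read_parts=1):
    """Split a total cache budget (MB) at the given write:read ratio."""
    unit = total_mb // (write_parts + read_parts)
    write_mb = unit * write_parts
    read_mb = total_mb - write_mb
    return write_mb, read_mb

# A 1000 MB budget at 4:1 gives 800 MB write cache and 200 MB read cache.
print(split_cache(1000))
```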
Chapter III. Configuration of VNX5100 Storage
3.1. Planning Storage-Tier Space
This system has a total of fifteen 600 GB 15K RPM disks, planned as follows:
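As a planning aid for the fifteen 600 GB disks, usable capacity per RAID level can be estimated. This is a minimal sketch with assumed overhead rules (RAID 1/0 mirrors half the disks, RAID 5 yields n-1 disks of data, hot spares hold no user data) in raw GB, ignoring formatted-capacity loss:

```python
def usable_gb(disk_gb, n_disks, raid):
    """Rough usable capacity of a RAID group, before formatting overhead."""
    if raid == "raid10":
        if n_disks % 2 or n_disks < 2:
            raise ValueError("RAID 1/0 needs an even number of disks (>= 2)")
        return disk_gb * n_disks // 2
    if raid == "raid5":
        if n_disks < 3:
            raise ValueError("RAID 5 needs at least 3 disks")
        return disk_gb * (n_disks - 1)
    if raid == "hotspare":
        return 0  # spare disks store no user data
    raise ValueError("unknown RAID level: %s" % raid)

# Example: a 4-disk RAID 1/0 group of 600 GB disks yields 1200 GB.
print(usable_gb(600, 4, "raid10"))
```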
3.2. Creating RAID Groups
This document uses RAID Group 1, RAID Group 2, and RAID Group 499 as examples.
Example 1: Create RAID Group 1.
Under Storage, click Storage Pools.
Select the RAID Groups tab next to Pools, then click Create.
Set Storage Pool ID: 1
Set RAID Configuration: RAID 1/0
For disk selection, Manual is recommended: select Manual and click Select.
Select the disks to configure on the left, click the right arrow to move them to the right, and click OK.
Confirm the selected disks and click OK.
Click Yes.
RAID Group 1 is created successfully.
Example 2: Create RAID Group 2.
Set Storage Pool ID: 2
Set RAID Configuration: RAID 5
For disk selection, Manual is recommended: select Manual and click Select.
Select the disks to configure on the left, click the right arrow to move them to the right, and click OK.
Confirm the selected disks and click OK.
Click Yes.
RAID Group 2 is created successfully. RAID Groups 3-5 are configured in the same way as RAID Group 2 and are not repeated here.
Example 3: Create a hot spare.
Set Storage Pool ID: 499
Set RAID Configuration: Hot Spare
For disk selection, Manual is recommended: select Manual, click Select, and choose the disk to configure.
RAID Group 499 is created successfully. Follow the same procedure to create RAID Group 498 and RAID Group 497.
When you are done, you can view the RAID groups you created: select a RAID group and click Properties to view its attributes.
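Before clicking through the GUI, it can help to sanity-check the plan so that no physical disk is assigned to two RAID groups. A minimal sketch; the bus_enclosure_slot disk names and the example plan below are hypothetical:

```python
from collections import Counter

def duplicated_disks(groups):
    """groups maps RAID group ID -> list of disk slots; return reused slots."""
    counts = Counter(slot for slots in groups.values() for slot in slots)
    return sorted(slot for slot, n in counts.items() if n > 1)

plan = {
    1: ["0_0_4", "0_0_5"],            # RAID 1/0
    2: ["0_0_6", "0_0_7", "0_0_8"],   # RAID 5
    499: ["0_0_14"],                  # hot spare
}
print(duplicated_disks(plan))  # an empty list means no disk is reused
```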
3.3. Partitioning LUNs
Create LUNs on the RAID groups just built: right-click the RAID group and choose Create LUN.
Set User Capacity to the desired size (MAX builds one LUN from the entire RAID group); the unit can be GB, MB, and so on.
Set the LUN ID, which must be unique; you can also give the LUN a name below it.
Set Number of LUNs to Create to create multiple LUNs at once.
Click Advanced to assign the Default Owner to an SP. It is recommended to distribute LUNs evenly between SPA and SPB; if you create multiple LUNs at once, you can select Auto and the storage system distributes them evenly.
Click Yes.
Click OK. Follow this method to create the remaining LUNs.
When you are done, you can view the created LUNs: select a LUN and click Properties to view its attributes.
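The even SPA/SPB distribution recommended above (what the Auto option effectively does) amounts to alternating the default owner across LUNs; a minimal sketch:

```python
from itertools import cycle

def assign_default_owners(lun_ids):
    """Alternate the Default Owner between SP A and SP B across LUNs."""
    return dict(zip(lun_ids, cycle(["SP A", "SP B"])))

print(assign_default_owners([0, 1, 2, 3]))
```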
3.4. Host HBA Card Registration and PowerPath Installation
In this project, Windows hosts are registered automatically by the agent, while Linux hosts are registered manually.
(1) Windows platform HBA card registration and PowerPath installation.
Install NaviHostAgent-Win-32-x86-en_US-6.29.6.0.35-1.exe and EMCPower.X64.signed.5.5.SP1.b512.exe on the server; the server must be restarted after installation.
(2) Linux platform HBA card registration.
Under Hosts, click Initiators.
The devices connected to the storage system are listed. This example manually registers the server UAFDB01; other Linux servers are registered the same way.
Log in to the fibre switch and run switchshow to find the HBA WWN for UAFDB01. Locate that WWN in the list, select it, and click Register.
Set Initiator Type: CLARiiON/VNX
Set Failover Mode: Active-Active Mode (ALUA) - failovermode 4
Set the host agent information:
Host Name: UAFDB01
IP Address: 168.33.2.80
Click Yes.
The first link is registered successfully. Each server has four links, so the remaining three must also be registered to this host. The second link is registered as follows.
Select the second link and click Register.
In the pop-up window:
Set Initiator Type: CLARiiON/VNX
Set Failover Mode: Active-Active Mode (ALUA) - failovermode 4
For the host agent information, select Existing Host and click Browse Host.
Select the UAFDB01 host you just registered and click OK.
Click Yes.
The second link is registered successfully; the third and fourth links are registered in the same way as the second.
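When matching the WWNs reported by switchshow against the storage system's initiator list, formatting differences (case, colons) are a common stumbling block. A small hypothetical helper to normalize a WWPN to the usual colon-separated form:

```python
import re

def normalize_wwn(wwn):
    """Normalize a Fibre Channel WWPN to 'xx:xx:...:xx' (16 hex digits)."""
    digits = re.sub(r"[^0-9a-fA-F]", "", wwn).lower()
    if len(digits) != 16:
        raise ValueError("a WWPN has 16 hex digits, got %r" % wwn)
    return ":".join(digits[i:i + 2] for i in range(0, 16, 2))

print(normalize_wwn("10000000C92B1234"))  # 10:00:00:00:c9:2b:12:34
```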
3.5. Mapping LUNs to Hosts
Under Hosts, choose Storage Groups, then click Create.
Give the storage group a name.
Click OK.
Click Yes.
Click Yes to add LUNs.
Expand the + in front of SPA and SPB, select the LUNs to assign to the server, and click Add.
Add LUN 0 through LUN 8 and click OK.
Click Yes.
Click OK.
Add the hosts:
Select the two hosts UAFDB01 and UAFDB02 on the left and click the right arrow.
Click OK.
Click Yes.
The hosts are added successfully; click OK.
3.6. Host Disk Recognition
(1) Windows platform
Right-click My Computer > Manage > Disk Management, then right-click Disk Management and choose Rescan Disks.
(2) Linux hosts must be restarted.
Run powermt to configure the PowerPath devices:
# powermt config
Check the disk paths; normally each LUN has two paths in the alive state:
# powermt display dev=all
Pseudo name=emcpower0a
CLARiiON ID=CK200065100269 [XDH]
Logical device ID=600601609E031B009A09A96C2CE7DB11 [LUN 1]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP A, current=SP A
==============================================================================
---------------- Host ---------------  - Stor -  -- I/O Path --  -- Stats ---
###  HW Path          I/O Paths    Interf.   Mode     State    Q-IOs  Errors
==============================================================================
   0 fscsi0           hdisk4       SP A1     active   alive        0       0
   1 fscsi1           hdisk7       SP B0     active   alive        0       0

# powermt save
View with the operating system command: fdisk -l
Note: when you create a PV, use the EMC pseudo name, such as emcpower0a.
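The two-alive-paths check described above can be automated by parsing the powermt output. A minimal sketch; the sample text is abbreviated from the output shown, and the field layout may vary across PowerPath versions:

```python
def alive_path_count(powermt_output):
    """Count I/O path lines reporting the 'alive' state."""
    count = 0
    for line in powermt_output.splitlines():
        if "state=" in line:  # skip the device summary line
            continue
        if "alive" in line.lower().split():
            count += 1
    return count

SAMPLE = """\
Pseudo name=emcpower0a
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
   0 fscsi0  hdisk4  SP A1  active  alive  0  0
   1 fscsi1  hdisk7  SP B0  active  alive  0  0
"""
print(alive_path_count(SAMPLE))  # expect 2 for a healthy LUN
```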
Chapter IV. VNX5100 System Maintenance
4.1. Storage System Power-On/Off Sequence
Power-on order: fibre switch → storage device → host system
Shutdown order: host system → storage device → fibre switch
4.2. VNX5100 Power-On Steps
- Check that all power cables are connected properly; reseat any loose or shifted connections.
- Starting from the top of the cabinet, switch on the power for all DAE disk enclosures from top to bottom.
- Switch on the two standby power supply (SPS) switches at the bottom of the enclosure.
- Power-on is complete.
Precautions before powering on: before powering up the disk array, make sure the enclosure cooling is working properly and that every slot in each enclosure is filled with either a hard drive or an airflow baffle. Before powering on, the SPE or DPE must have at least one working SP, and each DAE must have at least one working LCC.
4.3. VNX5100 Shutdown Steps
- Turn off the two standby power supply (SPS) switches at the bottom of the cabinet and wait about 3 minutes until the data in the VNX5100 write cache has been fully written to the hard disks. When the SPS stops supplying power, the battery lights turn off completely and the power LEDs on the SPE chassis and the DAE-OS disk chassis go out.
- Turn off the power switches of the SPE chassis and the DAE-OS disk chassis.
- From bottom to top, switch off the power for all DAE disk enclosures.
- Turn off the main switch at the rear of the cabinet.
- Shutdown is complete.
Precautions before shutting down:
- Warning: do not turn off power to any SPE, DPE, or DAE enclosure before turning off the two SPS power switches; doing so is likely to cause serious errors and data loss. Even after the two SPS switches are off, wait 3 minutes for the data in the storage write cache to be fully written to the hard disks before powering off the DAE, SPE, or DPE enclosures.
- Stop all applications that access the VNX disk array so that all I/O in the SPs' write cache can be written back to the first five disks.
- If a UNIX server is connected to the VNX disk array, you must umount all file systems associated with the disk array on that server.
EMC Storage Management