Preface: While cleaning up my computer recently, I found several test documents on production operations, so I am sharing them here.
This article was written against a virtual machine; a simulator download link is provided at the end of the text.
I. Introduction to NetApp
NetApp systems give users on a variety of platforms seamless access to all enterprise data. NetApp's full range of Fibre Channel networked storage systems supports NFS and CIFS for file access and FCP and iSCSI for block access, so NetApp storage systems can easily be integrated into NAS or SAN environments while protecting existing data.
NetApp systems run the efficient Data ONTAP microkernel operating system, which consolidates UNIX, Windows, NAS, Fibre Channel and iSCSI SAN, and web data in a central location. NetApp enterprise storage systems are a scalable, proven family of highly available networked storage systems that are easy to install, configure, and manage, and offer among the industry's lowest total cost of ownership (TCO) and highest return on investment (ROI).
II. Basic Concepts
1. Filer
The storage array head, equivalent to the controller on other vendors' arrays.
2. Filerview
FilerView is NetApp's web-based array and disk management tool. When the management terminal runs Windows 2000, a Java Virtual Machine must be installed to open the interface.
3. RAID
A RAID group consists of one or more data disks plus one or more parity disks.
4. RAID 4 and RAID-DP
Both RAID 4 and RAID 5 compute parity as the bitwise XOR of the corresponding bits on the data disks; the difference is that RAID 4 keeps all parity data on a single dedicated parity disk, while RAID 5 distributes the parity across all disks. For example, with data bits 1, 0, and 1 on three data disks, the row parity bit is 1 XOR 0 XOR 1 = 0. RAID-DP (double parity) uses two parity disks holding different data: the first parity disk holds the same row parity as RAID 4, while the second holds diagonal parity, computed by XORing bits along diagonals.
5. Plex
A plex is a collection of one or more RAID groups.
6. Aggr
An aggregate (aggr) is a collection of one or more plexes: if the RAID groups are mirrored, the aggregate contains two plexes, otherwise only one. Aggregates are used to manage plexes and RAID groups, because those entities exist only as part of an aggregate.
7. Volume
Data volumes are NetApp's unit of disk management. A volume contains at least one RAID group and may contain several. The volume holding system data is called the root volume; other volumes that hold user data are called regular volumes. Each head has exactly one root volume. LUNs on a NetApp array are created inside volumes. Volumes are divided into traditional volumes and flexible volumes. Traditional volume: lives in exactly one aggregate whose RAID groups belong to it alone, can only be expanded by adding whole new disks, and cannot be shrunk. Flexible volume: can occupy just part of an aggregate.
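A minimal sketch of the difference between the two volume types, assuming Data ONTAP 7-mode; the names trad_vol and flex_vol and the sizes are hypothetical:
Create a traditional volume directly from three disks
netapp> vol create trad_vol 3
Create a 50m flexible volume inside aggregate aggr1
netapp> vol create flex_vol aggr1 50m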
8. /vol
Each NetApp system has exactly one root volume, whose data is read at boot time. The root volume is the only volume with the root attribute, and its /etc directory holds the configuration files. It is like any other volume, except that it also stores ONTAP configuration information, logs, firmware, and so on. /vol itself is not a directory; it is a special virtual root path the storage system uses to mount other volumes. You cannot see all volumes by mounting /vol; each volume must be mounted individually.
9. Qtree
A qtree is a logical management unit defined as a subdirectory of a traditional or flexible volume; up to 4,995 qtrees can be created per volume. The main functions of qtrees are to simplify data management and allocation and to apply soft and hard usage quotas. A qtree differs from a volume in that a qtree cannot be taken offline on its own and does not support space reservation or space reclamation.
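A minimal sketch, assuming a volume named vol2 already exists; the qtree name qt1 and the security style are only illustrative:
Create a qtree under vol2
netapp> qtree create /vol/vol2/qt1
Set its security style to unix
netapp> qtree security /vol/vol2/qt1 unix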
10. Quotas
Quotas limit the amount of disk space and the number of files that users, groups, or qtrees can consume.
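A minimal sketch of how quotas are typically applied in 7-mode; the qtree path (from the sketch above) and the 100M limit are hypothetical:
Add a tree quota line such as the following to /etc/quotas (edit it with rdfile/wrfile as shown in section V)
/vol/vol2/qt1 tree 100M
Activate quotas on the volume
netapp> quota on vol2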
11. LUN
A logical unit (LUN) is a block storage unit on the storage system that is accessed by client hosts.
12. Snapshot
Snapshot is NetApp's point-in-time copy technology; its advantages are low space consumption, negligible performance impact, easy creation, and fast data recovery.
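A minimal sketch of manual snapshot use, assuming volume vol2; the snapshot name before_upgrade is hypothetical, and snap restore additionally requires the SnapRestore license:
Create and list snapshots on vol2
netapp> snap create vol2 before_upgrade
netapp> snap list vol2
Roll the whole volume back to that snapshot
netapp> snap restore -s before_upgrade vol2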
III. Managing Aggregates (aggr)
1. Create an Aggr
View the aggr status; only aggr0 exists, made up of disks v0.16, v0.17, v0.18, v0.19, v0.20, v0.21, and v0.22
netapp> aggr status -r
View disk information
netapp> sysconfig -d
Create aggr1 and aggr2, where aggr1 is built from three manually specified disks and aggr2 from three automatically selected disks
Create aggr1
netapp> aggr create aggr1 -d v0.24 v0.25 v0.26
Create aggr2
netapp> aggr create aggr2 3
2. Extend an Aggr
Add 3 disks to aggr1
First do a dry run; with -n the system only prints the command it would execute
netapp> aggr add aggr1 -n 3
aggr add aggr1 -d v0.35 v0.28 v0.34
Or specify the exact disks directly
netapp> aggr add aggr1 -d v0.34 v0.35 v0.36
Add 3 disks to aggr2 in a new RAID group
netapp> aggr add aggr2 -g new 3
3. Remove the Aggr Snapshot Reserve
View the aggr sizes; the snapshot reserve on aggr1 and aggr2 occupies space (20% by default)
netapp> df -A
Set the snapshot reserve on aggr1 and aggr2 to zero, respectively
netapp> snap reserve -A aggr1 0
netapp> snap reserve -A aggr2 0
4. Delete an Aggr
First take it offline
netapp> aggr offline aggr1
Then destroy it
netapp> aggr destroy aggr1
The aggr status output no longer shows any information for aggr1
IV. Managing Volumes
1. Create a Volume
View volume information
netapp> vol status
Create a new volume vol2 on aggr2 with a size of 50m
netapp> vol create vol2 aggr2 50m
2. Remove the Snapshot Reserve
View the size of vol2
netapp> df -h
Set the snapshot reserve on vol2 to zero
netapp> snap reserve -V vol2 0
3. Disable Scheduled Snapshots
By default the snapshot schedule is enabled when a volume is created. When LUNs are carved out of a volume over FCP for a host database, snapshots are not needed, so the schedule can be disabled as follows:
First look at the existing snapshots and the schedule; the vol0 snapshot schedule is still enabled
ys-nphd2> snap list
ys-nphd2> snap sched
To remove snapshot space, see step 2 above
Turn off the vol0 snapshot schedule
ys-nphd2> snap sched vol0 0
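As a hedged aside, the schedule takes up to three numbers (weekly, nightly, hourly); to disable all of them explicitly, the usual form is:
ys-nphd2> snap sched vol0 0 0 0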
4. Extend and Reduce Volume Space
Grow vol2 by 20m
netapp> vol size vol2 +20m
Shrink vol2 by 10m
netapp> vol size vol2 -10m
5. Modify Parameters of a New Volume
NetApp recommends setting three options to on for a newly created volume: minimal read ahead, create unicode, and convert unicode.
View the option values of vol2
netapp> vol options vol2
Set minra to on
netapp> vol options vol2 minra on
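The other two recommended options can be set the same way; a minimal sketch, assuming create_ucode and convert_ucode are the option names behind 'Create Unicode' and 'Convert Unicode':
netapp> vol options vol2 create_ucode on
netapp> vol options vol2 convert_ucode on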
6. Delete a Volume
First take it offline
netapp> vol offline vol2
Then destroy it
netapp> vol destroy vol2
V. NFS exports
1. Storage-side configuration
Use the rdfile command to display the /etc/exports file, copy its contents into a text editor, finish editing, then write the whole file back with wrfile and re-export
netapp> rdfile /etc/exports
netapp> wrfile /etc/exports
netapp> exportfs -a
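A minimal sketch of what an /etc/exports entry might look like; the subnet and options below are purely an example, not from the original text:
/vol/vol2 -sec=sys,rw=192.168.17.0/24,root=192.168.17.20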
2. Host-side configuration
Display which clients currently mount NFS shares from the storage system
ha01:/# showmount -a 192.168.17.51
This address is the storage system's IP
View the RPC services and ports registered on the storage system
ha01:/# rpcinfo -p 192.168.17.51
Create a mount point on the host
ha01:/# mkdir /fs-nfs
Mount the storage (run as the root user)
ha01:/# mount 192.168.17.51:/vol/vol2 /fs-nfs
Edit fstab to mount /fs-nfs automatically at boot (on AIX the equivalent file is /etc/filesystems)
192.168.17.51:/vol/vol2  /fs-nfs  nfs  defaults  0 0
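A quick check that the mount is active (a small addition, assuming the Linux host ha01 from above):
ha01:/# df -h /fs-nfs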
VI. Exporting LUNs
1. Create a Qtree
Create a qtree named tree-dropme under volume dropme
ys-nphd2> qtree create /vol/dropme/tree-dropme
2. Create an Igroup
ys-nphd2> igroup create -f -t aix igroup-dropme
3. Add the Host WWPN (World Wide Port Name)
ys-nphd1> igroup add igroup-dropme 10:00:00:00:c9:c0:f5:bb
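To confirm the WWPN was registered, the igroup can be listed (a small extra check, not in the original steps):
ys-nphd2> igroup show igroup-dropme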
4. Create the LUN
Create a LUN named lun-test1 under the qtree tree-dropme
ys-nphd2> lun create -s 20g -t aix /vol/dropme/tree-dropme/lun-test1
5. LUN Map
View the LUN status before mapping
ys-nphd2> lun show -m
ys-nphd2> lun show
Map the LUN to the igroup
ys-nphd2> lun map /vol/dropme/tree-dropme/lun-test1 igroup-dropme
6. Host-Side Recognition
Run cfgmgr -v
Run lspv; the newly mapped disks appear as physical volumes
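As a hedged extra check on AIX, the new disks' type and attributes can be inspected; hdisk2 below is hypothetical:
# lsdev -Cc disk
# lsattr -El hdisk2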
VII. Common Operations and Precautions
1. How to identify a disk
Take 0a.41 as an example: a disk ID has the form path_id.device_id, and the disk's physical location can be found quickly from it.
path_id identifies the slot holding the adapter card and the port on that adapter; for example, 0a means port a of the adapter in slot 0. Slot 0 is usually integrated on the motherboard and generally has four ports (a, b, c, d); additional adapter cards can be installed in other slots and are typically dual-port (ports a and b).
device_id is the disk's loop ID or SCSI ID, determined by the type of disk shelf, the shelf ID, and the disk's position (bay number) within the shelf.
In addition, in the disk information listed by sysconfig -r, HA corresponds to the path_id, shelf is the shelf number, and bay is the disk's position within the shelf.
2. Modify the Head's IP Address
First modify the /etc/hosts file, changing the IP from 192.168.17.51 to 192.168.17.151
netapp> rdfile /etc/hosts
netapp> wrfile /etc/hosts
The new IP does not take effect immediately; reboot the system for it to take effect
netapp> reboot
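As a hedged aside: on 7-mode systems the interface address is normally set by an ifconfig line in /etc/rc, so that file may also need to be checked and edited with rdfile/wrfile; the interface name e0a below is hypothetical:
netapp> rdfile /etc/rc
ifconfig e0a 192.168.17.151 netmask 255.255.255.0   (example line inside /etc/rc)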
3. Check Whether the Cluster Status Is Normal
ys-nphd2> cf status
Cluster enabled, ys-nphd1 is up. Interconnect status: up.
This output indicates normal status
4. Command Modes
The NetApp CLI has two modes, admin mode and advanced mode; the default is admin mode. To switch to advanced mode:
netapp> priv set advanced
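To return to the default mode afterwards (a small addition, not in the original):
netapp> priv set admin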
5. Power-On and Power-Off Sequence
Power-on
Power-on sequence: power on both power supplies of each disk shelf in shelf-number order, then power on the controller about 10 seconds later.
Shutdown
Enter halt at the controller command line to shut down the system.
For a dual-controller system, enter halt -f at each controller's command line separately; -f means the partner will not take over.
Shutdown sequence: power off the controllers first, then power off the disk shelves.
6. Serial Connection to NetApp
Connect a console cable (RJ45 on one end, DB9 on the other) between the array's console port and the serial port of a Windows terminal, and connect the Windows terminal and the array's network port to a switch with network cables. Set the Windows terminal's IP in the same subnet as the IP to be assigned to the array; the array can then be configured from the Windows terminal.
Open HyperTerminal in Windows and keep the default settings: 9600 baud, 8 data bits, no parity, 1 stop bit.
Press Enter to get the login prompt, log in with the user name root, and enter the password.
7. NetApp Management Approach
Serial connection
Remote Telnet
FilerView (http://xxx.xxx.xx.xx/na_admin)
NetApp OnCommand System Manager management software
Simulator: https://pan.baidu.com/s/1nBWivjFjFXgAT-I5Y5bsNw Password: 50aj
Management software: https://pan.baidu.com/s/1wyBa_NqL4sBwZrb-zQ3vnA Password: Srno
Operations Documentation Sharing (1): NetApp Operations User Manual