Linux 11g RAC: Expanding an ASM Disk Group's Capacity Online

1. Operating system version: OEL 6.1

[[email protected] ~]# more /etc/redhat-release
Red Hat Enterprise Linux Server release 6.1 (Santiago)

2. Database version: Oracle 11g RAC

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE    11.2.0.3.0      Production
TNS for Linux: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production

3. Storage environment: VMware ESXi 5.0 virtualizing EMC storage

4. View all disks currently visible on the node

[[email protected] ~]# fdisk -l | grep sd*
Disk /dev/sda: 128.8 GB, 128849018880 bytes
/dev/sda1   *        1        14    103424   83  Linux
/dev/sda2           14      4191  33554432   82  Linux swap / Solaris
/dev/sda3         4191     15666  92170240   8e  Linux LVM
Disk /dev/sdb: 2147 MB, 2147483648 bytes
/dev/sdb1            1       261   2096451   83  Linux
Disk /dev/sdc: 2147 MB, 2147483648 bytes
/dev/sdc1            1       261   2096451   83  Linux
Disk /dev/sdd: 2147 MB, 2147483648 bytes
/dev/sdd1            1       261   2096451   83  Linux
Disk /dev/sde: 536.9 GB, 536870912000 bytes
/dev/sde1            1     65270 524281243+  83  Linux
Disk /dev/sdf: 536.9 GB, 536870912000 bytes
/dev/sdf1            1     65270 524281243+  83  Linux
Disk /dev/sdg: 536.9 GB, 536870912000 bytes
/dev/sdg1            1     65270 524281243+  83  Linux
Disk /dev/sdh: 214.7 GB, 214748364800 bytes
Disk /dev/sdi: 107.4 GB, 107374182400 bytes
5. The system's disks currently run from sda through sdi. A new 500 GB shared disk now needs to be added to expand the ASM disk group; once the system recognizes it, it should appear as sdj. The pseudo-filesystem file /proc/scsi/scsi shows three SCSI hosts: scsi1, scsi2, and scsi3.

[[email protected] ~]# more /proc/scsi/scsi
Attached devices:
Host: scsi1 Channel: 00 Id: 00 Lun: 00
  Vendor: NECVMWar Model: VMware IDE CDR10  Rev: 1.00
  Type:   CD-ROM                    ANSI SCSI revision: 05
Host: scsi2 Channel: 00 Id: 00 Lun: 00
  Vendor: VMware   Model: Virtual disk      Rev: 1.0
  Type:   Direct-Access             ANSI SCSI revision: 02
Host: scsi3 Channel: 00 Id: 00 Lun: 00
  Vendor: VMware   Model: Virtual disk      Rev: 1.0
  Type:   Direct-Access             ANSI SCSI revision: 02
Host: scsi3 Channel: 00 Id: 01 Lun: 00
  Vendor: VMware   Model: Virtual disk      Rev: 1.0
  Type:   Direct-Access             ANSI SCSI revision: 02
Host: scsi3 Channel: 00 Id: 02 Lun: 00
  Vendor: VMware   Model: Virtual disk      Rev: 1.0
  Type:   Direct-Access             ANSI SCSI revision: 02
Host: scsi3 Channel: 00 Id: 03 Lun: 00
  Vendor: VMware   Model: Virtual disk      Rev: 1.0
  Type:   Direct-Access             ANSI SCSI revision: 02
Host: scsi3 Channel: 00 Id: 04 Lun: 00
  Vendor: VMware   Model: Virtual disk      Rev: 1.0
  Type:   Direct-Access             ANSI SCSI revision: 02
Host: scsi3 Channel: 00 Id: 05 Lun: 00
  Vendor: VMware   Model: Virtual disk      Rev: 1.0
  Type:   Direct-Access             ANSI SCSI revision: 02
Host: scsi3 Channel: 00 Id: 06 Lun: 00
  Vendor: VMware   Model: Virtual disk      Rev: 1.0
  Type:   Direct-Access             ANSI SCSI revision: 02
Host: scsi3 Channel: 00 Id: 08 Lun: 00
  Vendor: VMware   Model: Virtual disk      Rev: 1.0
  Type:   Direct-Access             ANSI SCSI revision: 02

6. Trigger a SCSI rescan on each node

[[email protected] ~]# echo "- - -" > /sys/class/scsi_host/host3/scan

The current maximum target Id is 08. After the echo command runs, a new entry with Id 09 appears in /proc/scsi/scsi:

Host: scsi3 Channel: 00 Id: 09 Lun: 00
  Vendor: VMware   Model: Virtual disk      Rev: 1.0
  Type:   Direct-Access             ANSI SCSI revision: 02
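The single-host rescan above works because we know the new LUN hangs off host3. As a generalization, a minimal sketch (not from the article; the SYS_ROOT variable is an assumption added so the logic can be exercised outside a live system, where it defaults to /sys) that rescans every SCSI host:

```shell
#!/bin/sh
# Hypothetical helper: rescan all SCSI hosts so a newly presented LUN
# is detected, instead of targeting host3 only.
# SYS_ROOT defaults to /sys on a real node; it is parameterized here
# only so the sketch can be tested against a mock directory tree.
SYS_ROOT="${SYS_ROOT:-/sys}"

rescan_all_scsi_hosts() {
    for scan in "$SYS_ROOT"/class/scsi_host/host*/scan; do
        [ -w "$scan" ] || continue
        # Writing "- - -" means: scan all channels, all targets, all LUNs.
        echo "- - -" > "$scan"
    done
}

rescan_all_scsi_hosts
```

After the rescan, /proc/scsi/scsi can be re-checked for the new target, exactly as in the step above.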
7. Check /proc/scsi/scsi on each node; both nodes now recognize the target Host: scsi3 Channel: 00 Id: 09 Lun: 00.

[[email protected] ~]# more /proc/scsi/scsi | grep 09
Host: scsi3 Channel: 00 Id: 09 Lun: 00
[[email protected] ~]# more /proc/scsi/scsi | grep 09
Host: scsi3 Channel: 00 Id: 09 Lun: 00

8. On node 1, partition sdj

[[email protected] ~]# fdisk /dev/sdj
[[email protected] ~]# partprobe

9. Verify on each node that the sdj1 partition is recognized

[[email protected] ~]# fdisk -l | grep sdj
Disk /dev/sdj: 536.9 GB, 536870912000 bytes
/dev/sdj1            1     65270 524281243+  83  Linux
[[email protected] ~]# fdisk -l | grep sdj
Disk /dev/sdj: 536.9 GB, 536870912000 bytes
/dev/sdj1            1     65270 524281243+  83  Linux

10. On node 1, log in as the grid user and view the current ASM disk groups

[[email protected] ~]# su - grid
[[email protected] ~]$ sqlplus / as sysasm

SQL> select group_number, name, total_mb, free_mb from v$asm_diskgroup;

GROUP_NUMBER NAME                TOTAL_MB    FREE_MB
------------ ----------------- ---------- ----------
           1 CQDATA               1547972     525323
           2 FRA                   511993     477058
           3 OCR                     6141       5741

11. View the ASM disks

[[email protected] ~]# oracleasm listdisks
DATA1
DATA2
FRA
OCR_VOT1
OCR_VOT2
OCR_VOT3

12. To expand CQDATA with the 500 GB disk, a new ASM disk must be created first. On node 1, execute:

[[email protected] ~]# oracleasm createdisk DATA3 /dev/sdj1
[[email protected] ~]# oracleasm scandisks
[[email protected] ~]# oracleasm listdisks
DATA1
DATA2
DATA3
FRA
OCR_VOT1
OCR_VOT2
OCR_VOT3

On node 2, execute:

[[email protected] ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
[[email protected] ~]# oracleasm listdisks
DATA1
DATA2
DATA3
FRA
OCR_VOT1
OCR_VOT2
OCR_VOT3

At this point node 2 has also recognized the new ASM disk DATA3.
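One caveat on the per-node check in step 7: `grep 09` matches "09" anywhere on a line (it would also match an Id of 19 at Lun 09, for example). A small sketch (not from the article; SCSI_PROC is an assumption added for testability, defaulting to /proc/scsi/scsi on a live node) that anchors on the full Host/Channel/Id/Lun string instead:

```shell
#!/bin/sh
# Hypothetical helper: check whether a specific SCSI target is visible,
# matching the complete "Host: ... Channel: ... Id: ... Lun: ..." line
# rather than a bare "09" anywhere in the file.
SCSI_PROC="${SCSI_PROC:-/proc/scsi/scsi}"

target_visible() {
    # $1 = host (e.g. scsi3), $2 = two-digit target id (e.g. 09)
    grep -q "Host: $1 Channel: 00 Id: $2 Lun: 00" "$SCSI_PROC" 2>/dev/null
}

if target_visible scsi3 09; then
    echo "target scsi3 Id 09 is visible"
else
    echo "target scsi3 Id 09 is NOT visible"
fi
```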
13. On node 1, view the ASM disks and confirm the path of the new disk

[[email protected] ~]# su - grid
[[email protected] ~]$ sqlplus / as sysasm

SQL> select name, path, mode_status, state, disk_number, failgroup from v$asm_disk;

NAME      PATH               MODE_STATUS  STATE    DISK_NUMBER FAILGROUP
--------  -----------------  -----------  -------  ----------- ----------
DATA1     ORCL:DATA1         ONLINE       NORMAL             0 DATA1
DATA2     ORCL:DATA2         ONLINE       NORMAL             1 DATA2
DATA3     ORCL:DATA3         ONLINE       NORMAL             2 DATA3
FRA       ORCL:FRA           ONLINE       NORMAL             0 FRA
OCR_VOT1  ORCL:OCR_VOT1      ONLINE       NORMAL             0 OCR_VOT1
OCR_VOT2  ORCL:OCR_VOT2      ONLINE       NORMAL             1 OCR_VOT2

14. On node 1, with the grid user logged in to the ASM instance as SYSASM, add the new ASM disk DATA3 to the CQDATA disk group

SQL> alter diskgroup cqdata add disk 'ORCL:DATA3' rebalance power 10;

SQL> select * from v$asm_operation;

GROUP_NUMBER OPERATION STATE   POWER ACTUAL  SOFAR  EST_WORK EST_RATE EST_MINUTES ERROR_CODE
------------ --------- ----- ------- ------ ------ --------- -------- ----------- ----------
           1 REBAL     RUN         1      1 130396    340324     1224         171

Check the rebalance progress again later; a 500 GB rebalance takes roughly one hour:

SQL> select * from v$asm_operation;

GROUP_NUMBER OPERATION STATE   POWER ACTUAL  SOFAR  EST_WORK EST_RATE EST_MINUTES ERROR_CODE
------------ --------- ----- ------- ------ ------ --------- -------- ----------- ----------
           1 REBAL     RUN         1      1  13327    332978     1242          57

15. When a query against v$asm_operation returns no rows, the ASM rebalance has finished. Then set the rebalance power back to the default:

SQL> select * from v$asm_operation;

no rows selected

SQL> alter diskgroup cqdata rebalance power 1;
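The EST_MINUTES column in V$ASM_OPERATION is simply the remaining work divided by the current rate. A small offline sketch of that arithmetic (a helper added for illustration, not part of the procedure), using the figures from the first query in step 14:

```shell
#!/bin/sh
# Estimate remaining rebalance minutes the way V$ASM_OPERATION reports
# it: (EST_WORK - SOFAR) / EST_RATE, where EST_RATE is allocation units
# per minute. Pure arithmetic, so it can be checked offline.
est_minutes() {
    # $1 = SOFAR, $2 = EST_WORK, $3 = EST_RATE
    awk -v sofar="$1" -v work="$2" -v rate="$3" \
        'BEGIN { printf "%d\n", (work - sofar) / rate }'
}

# Figures from the first V$ASM_OPERATION query in step 14:
est_minutes 130396 340324 1224   # prints 171, matching EST_MINUTES
```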
16. Run the following query on each node; the results are:

SQL> select name, free_mb, total_mb from v$asm_diskgroup;

NAME                    FREE_MB   TOTAL_MB
-------------------- ---------- ----------
CQDATA                  1025323    2047972
FRA                      477058     511993
OCR                        5741       6141

The CQDATA disk group has been successfully expanded.

This article is based on testing under VMware, so storage multipathing was hidden by VMware and has been left out of this experiment. Comments and corrections are welcome if anything is missing or unclear.
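As a final sanity check, the growth of CQDATA can be computed from the TOTAL_MB figures in steps 10 and 16. A small offline sketch (added for illustration, not part of the procedure):

```shell
#!/bin/sh
# Difference between a disk group's TOTAL_MB before and after the
# expansion; for CQDATA this should match the usable capacity the new
# disk contributed.
added_mb() {
    # $1 = TOTAL_MB before, $2 = TOTAL_MB after
    awk -v old="$1" -v new="$2" 'BEGIN { print new - old }'
}

# CQDATA: 1547972 MB before (step 10), 2047972 MB after (step 16):
added_mb 1547972 2047972   # prints 500000, i.e. the ~500 GB added
```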
This article is from the "O Record" blog; please keep this source when reposting: http://evils798.blog.51cto.com/8983296/1420928