1. First, check whether the diskgroup (DG) has sufficient space
SQL> select name, total_mb, free_mb, usable_file_mb from v$asm_diskgroup;
NAME TOTAL_MB FREE_MB USABLE_FILE_MB
--------------------------------- ---------- ---------- --------------
DATADG 4198790 230531 230531
OCRDG 15360 14434 4657
RECODG 512078 497578 497578
REDODG 204800 42117 42117
Description:
The first numeric column (TOTAL_MB) is the total space of the diskgroup.
The second column (FREE_MB) is the remaining raw space; because of redundancy it may be 1x, 2x, or 3x the actually usable space, so it is a virtual value.
The third column (USABLE_FILE_MB) also shows remaining space, but unlike FREE_MB it represents the space actually available for files.
So focus on the third column: it is the real usable space.
Special reminder: check both the primary and the disaster-recovery sites, otherwise serious problems can result.
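As a quick sanity check, the usable percentage can be computed directly from the query output. A minimal shell sketch, using the DATADG numbers from the listing above (the values are illustrative):

```shell
# Values taken from the V$ASM_DISKGROUP output above (DATADG row).
total_mb=4198790
usable_file_mb=230531
# Integer percentage of the diskgroup actually usable for files.
pct_usable=$(( usable_file_mb * 100 / total_mb ))
echo "DATADG usable: ${pct_usable}%"
# prints: DATADG usable: 5%
```

A low percentage here is the signal that a new disk must be added to the DG before adding datafiles.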
2. Plan the tablespace addition
--First check the current size of the tablespace to decide how much to add this time
As a rule of thumb: if the tablespace is only around 100G overall, adding 20G at a time is fine; for a smaller tablespace, adding 10G at a time also works.
If the tablespace is larger overall, add around 30G at a time.
Analyze each case on its own merits; size the added file so that overall tablespace usage stays around 80%.
Special note: every time you add space, make sure the DG has at least twice the amount being added free, and verify this on both the primary and disaster-recovery sides.
If a disk needs to be added to the DG first, proceed as in the next section.
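The "DG must hold twice the amount being added" rule above can be checked mechanically rather than by eye. A small sketch; the input values are hypothetical, in the style of the step-1 query output:

```shell
# Hypothetical inputs: planned addition (GB) and USABLE_FILE_MB from v$asm_diskgroup.
add_gb=30
usable_file_mb=230531
# Require the DG to have at least twice the space being added.
need_mb=$(( add_gb * 1024 * 2 ))
if [ "$usable_file_mb" -ge "$need_mb" ]; then
    echo "OK: ${usable_file_mb} MB usable >= ${need_mb} MB required"
else
    echo "INSUFFICIENT: only ${usable_file_mb} MB usable, need ${need_mb} MB"
fi
```

Run the same check against the DR site's diskgroup before proceeding.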
3. Add disk to DG
--First ask the storage administrator to map the corresponding LUN to the specified machines, marking it as shared
--Scan for the new disk (execute on both nodes)
[root@node1 ~]# echo "- - -" > /sys/class/scsi_host/host1/scan
Note that some machines have multiple fibre-channel interfaces, in which case several SCSI hosts appear:
[root@node1 scsi_host]# ls -a
.  ..  host0  host1  host10  host2  host3  host4  host5  host6  host7  host8  host9
If there are 10 of them, the scan must be executed 10 times, so pre-write a script.
[root@node1 ~]# echo "- - -" > /sys/class/scsi_host/host2/scan
[root@node1 ~]# echo "- - -" > /sys/class/scsi_host/host3/scan
Perform the same operation on the other node after the scan is complete.
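The per-host scans above can be pre-written as a loop over every SCSI host, as the note suggests. A sketch; the directory is passed as a parameter only so the function can be exercised outside a live node, where it would be /sys/class/scsi_host:

```shell
# Rescan every SCSI host under the given sysfs directory.
# On a live node you would call: rescan_scsi_hosts /sys/class/scsi_host
rescan_scsi_hosts() {
    dir="$1"
    for h in "$dir"/host*; do
        [ -d "$h" ] || continue          # skip if the glob matched nothing
        echo "- - -" > "$h/scan"         # full channel/target/LUN rescan
        echo "scanned $h"
    done
}
```

On a machine with 10+ fibre interfaces this replaces typing the echo line once per host; remember to run it on both nodes.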
--After the scan is complete, check for the newly added disk
[root@node1 scsi_host]# for i in `cat /proc/partitions | awk '{print $4}' | grep sd`; do echo "## $i: `scsi_id --whitelist /dev/$i`"; done
## sda: 361866da04f1063001e9e8c2811e75cc8
## sda1: 361866da04f1063001e9e8c2811e75cc8
## sda2: 361866da04f1063001e9e8c2811e75cc8
## sdb: 3600a098038303742665d49316b78327a
## sdc: 3600a098038303742665d49316b78327a
## sde: 3600a098038303742665d49316b783279
## sdd: 3600a098038303742665d49316b783279
## sdf: 3600a098038303742665d49316b783278
## sdh: 3600a098038303742665d49316b783330
## sdg: 3600a098038303742665d49316b783278
## sdj: 3600a098038303742665d49316b783331
.......................
## sdbv: 3600a098038303742695d4933306e7a51
## sdbw: 3600a098038303742695d4933306e7a51
## sdbx: 3600a098038303742695d4933306e7a51
## sdby: 3600a098038303742695d4933306e7a51
## sdbz: 3600a098038303742695d4933306e7a51
## sdca: 3600a098038303742695d4933306e7a51
## sdcb: 3600a098038303742695d4933306e7a51
## sdcc: 3600a098038303742695d4933306e7a51
From this listing we can see that the last WWID (3600a098038303742695d4933306e7a51) belongs to the newly added LUN.
--Edit multipathing
[root@node1 scsi_host]# vi /etc/multipath.conf
.........................
multipath {
    wwid 3600a098038303742665d49316b783278
    alias ocrdisk1
}
multipath {
    wwid 3600a098038303742665d49316b783279
    alias ocrdisk2
}
multipath {
    wwid 3600a098038303742665d49316b783333
    alias data4
}
multipath {
    wwid 3600a098038303742695d4933306e7a51
    alias data5
}
This time we add the data5 entry.
Note that both nodes must be edited.
--Reconfigure multipathing
[root@node1 scsi_host]# multipathd -k
multipathd> reconfigure
ok
multipathd> quit
[root@node1 scsi_host]# multipath -l
data5 (3600a098038303742695d4933306e7a51) dm-11 NETAPP,LUN C-Mode
size=500G features='4 queue_if_no_path pg_init_retries retain_attached_hw_handler' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=0 status=active
| |- 3:0:2:9 sdcb 68:240 active undef running
| |- 3:0:3:9 sdcc 69:0   active undef running
| |- 1:0:2:9 sdbx 68:176 active undef running
| `- 1:0:3:9 sdby 68:192 active undef running
`-+- policy='round-robin 0' prio=0 status=enabled
  |- 3:0:1:9 sdca 68:224 active undef running
  |- 1:0:0:9 sdbv 68:144 active undef running
  |- 1:0:1:9 sdbw 68:160 active undef running
  `- 3:0:0:9 sdbz 68:208 active undef running
data4 (3600a098038303742665d49316b783333) dm-8 NETAPP,LUN C-Mode
size=500G features='4 queue_if_no_path pg_init_retries retain_attached_hw_handler' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=0 status=active
| |- 3:0:0:6 sdn  8:208  active undef running
| |- 1:0:0:6 sdo  8:224  active undef running
| |- 3:0:1:6 sdaf 65:240 active undef running
| `- 1:0:1:6 sdag 66:0   active undef running
`-+- policy='round-robin 0' prio=0 status=enabled
  |- 1:0:3:6 sdba 67:64  active undef running
  |- 1:0:2:6 sdar 66:176 active undef running
  |- 3:0:2:6 sdbj 67:208 active undef running
  `- 3:0:3:6 sdbs 68:96  active undef running
You can see that the just-added data5 is in a normal state.
--Edit udev rules
[root@node1 rules.d]# pwd
/etc/udev/rules.d
[root@node1 rules.d]# ls -a
.                            60-pcmcia.rules         90-hal.rules               99-fuse.rules
..                           60-raw.rules            97-bluetooth-serial.rules
60-fprint-autosuspend.rules  70-persistent-cd.rules  98-kexec.rules
60-openct.rules              90-alsa.rules           99-asm-multipath.rules
We are using 99-asm-multipath.rules.
[root@node1 rules.d]# vi 99-asm-multipath.rules
.........................
ENV{DM_NAME}=="data5", OWNER:="grid", GROUP:="oinstall", MODE:="660", SYMLINK+="iscsi/oraasm-$env{DM_NAME}"
ENV{DM_NAME}=="data4", OWNER:="grid", GROUP:="oinstall", MODE:="660", SYMLINK+="iscsi/oraasm-$env{DM_NAME}"
Add the data5 line as shown; perform this step on both nodes.
Restart udev:
[root@node1 etc]# start_udev
Starting udev: [OK]
Execute on both nodes.
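After start_udev it is worth verifying that the rule actually produced the symlink. A hedged sketch: the /dev/iscsi/oraasm-data5 path is what the SYMLINK+= rule above would create, and the demo at the end uses a temporary link only because the real path exists just on the cluster nodes:

```shell
# Verify that a udev-managed symlink exists and show its target.
check_asm_link() {
    link="$1"
    if [ -L "$link" ]; then
        echo "ok: $link -> $(readlink "$link")"
    else
        echo "missing: $link"
        return 1
    fi
}
# On a real node: check_asm_link /dev/iscsi/oraasm-data5
# Demo on a temporary symlink standing in for the udev-created one:
tmp=$(mktemp -d)
ln -s /dev/dm-11 "$tmp/oraasm-data5"
check_asm_link "$tmp/oraasm-data5"
```

A "missing" result on either node means the rule did not match; re-check the DM_NAME spelling against the multipath alias.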
--Check disk permissions
[root@node1 mapper]# pwd
/dev/mapper
[root@node1 mapper]# ls -lrt
total 0
crw-rw---- 1 root root 10, 236 Jan 17:34 control
lrwxrwxrwx 1 root root 8 Jan 17:34 reco1 -> ../dm-10
lrwxrwxrwx 1 root root 7 Jan 17:34 redo1 -> ../dm-9
lrwxrwxrwx 1 root root 7 Jan 17:34 ocrdisk3 -> ../dm-2
lrwxrwxrwx 1 root root 7 Jan 17:34 ocrdisk1 -> ../dm-4
lrwxrwxrwx 1 root root 7 Jan 17:34 data1 -> ../dm-5
lrwxrwxrwx 1 root root 7 Jan 17:34 VolGroup-lv_swap -> ../dm-1
lrwxrwxrwx 1 root root 7 Jan 17:34 VolGroup-lv_root -> ../dm-0
lrwxrwxrwx 1 root root 7 Jan 17:34 ocrdisk2 -> ../dm-3
lrwxrwxrwx 1 root root 7 Jan 17:34 data2 -> ../dm-6
lrwxrwxrwx 1 root root 7 Jan 17:35 data3 -> ../dm-7
lrwxrwxrwx 1 root root 8 Jan 17:35 data5 -> ../dm-11
lrwxrwxrwx 1 root root 7 Jan 17:35 data4 -> ../dm-8
data5 corresponds to dm-11; check its permissions:
[root@node1 dev]# ls -lrt | grep dm
brw-rw---- 1 root disk 8, 192 Jan 17:34 sdm
crw-rw---- 1 root root 1, Jan 17:34 oldmem
crw-rw---- 1 root root , Jan 17:34 cpu_dma_latency
brw-rw---- 1 root disk 252, 1 Jan 17:34 dm-1
lrwxrwxrwx 1 root root 4 Jan 17:34 root -> dm-0
brw-rw---- 1 root disk 252, 0 Jan 17:34 dm-0
brw-rw---- 1 grid oinstall 252, 5 Jan 17:36 dm-5
brw-rw---- 1 grid oinstall 252, 11 Jan 17:36 dm-11
brw-rw---- 1 grid oinstall 252, 4 Jan 17:36 dm-4
brw-rw---- 1 grid oinstall 252, 3 Jan 17:36 dm-3
brw-rw---- 1 grid oinstall 252, 2 Jan 17:36 dm-2
brw-rw---- 1 grid oinstall 252, 10 Jan 17:36 dm-10
brw-rw---- 1 grid oinstall 252, 8 Jan 17:36 dm-8
brw-rw---- 1 grid oinstall 252, 7 Jan 17:36 dm-7
brw-rw---- 1 grid oinstall 252, 6 Jan 17:36 dm-6
brw-rw---- 1 grid oinstall 252, 9 Jan 17:36 dm-9
You can see that dm-11 is now owned by grid:oinstall.
Check both nodes to make sure the permissions are correct, otherwise adding the disk to the DG will fail.
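The ownership check on both nodes can be scripted rather than eyeballed. A sketch: on the cluster the call would be `check_owner /dev/dm-11 grid:oinstall`, while the demo at the end uses a temporary file owned by the current user, since grid:oinstall does not exist outside the cluster:

```shell
# Compare a device's owner:group against the expected value.
check_owner() {
    dev="$1"; want="$2"
    got=$(stat -c '%U:%G' "$dev")
    if [ "$got" = "$want" ]; then
        echo "OK $dev is $got"
    else
        echo "WRONG $dev is $got, expected $want"
        return 1
    fi
}
# On a real node: check_owner /dev/dm-11 grid:oinstall
# Demo against a temp file owned by whoever runs this:
f=$(mktemp)
check_owner "$f" "$(id -un):$(id -gn)"
```

Running the same one-liner over ssh on the second node gives both checks in one pass.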
--Expand the DG
SQL> select name, path from v$asm_disk;
NAME PATH
-----------------------------------------------------
DATADG_0000 /dev/mapper/data3
DATADG_0011 /dev/mapper/data2
REDODG_0000 /dev/mapper/data1
SQL> alter diskgroup DATADG add disk '/dev/mapper/data5' rebalance power 8;
Diskgroup altered.
After the statement succeeds, verify the result:
SQL> select name, total_mb, free_mb, usable_file_mb from v$asm_diskgroup;
NAME TOTAL_MB FREE_MB USABLE_FILE_MB
------------------------------ ---------- ---------- --------------
DATADG 4198790 230530 230530
OCRDG 15360 14434 4657
RECODG 512078 507865 507865
REDODG 204800 42117 42117
At this point, adding the disk to the DG is complete.
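Note that `rebalance power 8` runs asynchronously: the ALTER DISKGROUP statement returns while ASM is still moving extents in the background. Progress can be watched with a query along these lines (standard V$ASM_OPERATION columns); the rebalance is finished when it returns no rows:

```sql
SQL> select group_number, operation, state, power, est_minutes
     from v$asm_operation;
```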
4. Add a datafile to the tablespace
Log in to the database instance where the datafile is to be added:
[oracle@node1 ~]$ export ORACLE_SID=TESTDB1
[oracle@node1 ~]$ sqlplus "/as sysdba"
SQL*Plus: Release 11.2.0.4.0 Production on Wed Jan 11 17:55:28 2017
Copyright (c) 1982, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
SQL> alter tablespace testdb_blob add datafile '+DATADG' size 30720M;
Tablespace altered.
Be sure to use the "+" diskgroup notation here; otherwise the file is created as a local file on one node and the other node cannot use it.
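To confirm the new file really landed in ASM and not on a local filesystem, the datafile path can be checked afterwards; a sketch, assuming the tablespace name TESTDB_BLOB from the statement above:

```sql
SQL> select file_name, bytes/1024/1024 as mb
     from dba_data_files
     where tablespace_name = 'TESTDB_BLOB';
```

An ASM-resident file shows a +DATADG/... path; a local OS path here means the "+" was missed and the file must be moved before the other node needs it.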
Oracle DB Cluster Add Table Space Operations specification