List all configured databases
[root@node1 ~]# srvctl config database
novadb
Status of all instances and services
[root@node1 ~]# srvctl status database -d novadb
Instance novadb1 is running on node node1
Instance novadb2 is running on node node2
Status of a single instance
[root@node1 ~]# srvctl status instance -d novadb -i novadb1
Instance novadb1 is running on node node1
Status of a named service in the database
$ srvctl status service -d orcl -s orcltest
Service orcltest is running on instance(s) orcl2, orcl1
Node application status on a specific node
[root@node1 ~]# srvctl status nodeapps -n node1
VIP is running on node: node1
GSD is running on node: node1
Listener is running on node: node1
ONS daemon is running on node: node1
ASM instance status
[root@node1 ~]# srvctl status asm -n node1
ASM instance +ASM1 is running on node node1.
[root@node1 ~]# srvctl status asm -n node2
ASM instance +ASM2 is running on node node2.
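All of the status commands above emit "Instance X is running on node Y" style lines. As a small sketch of my own (the `parse_running` helper is not part of the Oracle tooling), you can turn that output into plain "instance node" pairs that are easier to consume from other scripts:

```shell
#!/bin/sh
# parse_running: hypothetical helper that extracts "instance node" pairs
# from "Instance X is running on node Y" lines.
parse_running() {
  awk '/is running on node/ { print $2, $NF }'
}

# Fed here from a saved copy of the output above; normally you would pipe
# `srvctl status database -d novadb` straight into it.
printf 'Instance novadb1 is running on node node1\nInstance novadb2 is running on node node2\n' | parse_running
# prints:
# novadb1 node1
# novadb2 node2
```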
Display the configuration of the RAC database
[root@node1 ~]# srvctl config database -d novadb
node1 novadb1 /opt/ora10g/product/10.2.0/db_1
node2 novadb2 /opt/ora10g/product/10.2.0/db_1
Display all services of the specified cluster database
[root@node1 ~]# srvctl config service -d novadb
novadb PREF: novadb1 novadb2 AVAIL:
Display node application configurations (VIP, GSD, ONS, listener)
[root@node1 ~]# srvctl config nodeapps -n node1 -a -g -s -l
VIP exists.: /node1-vip/192.168.150.htm/255.255.255.0/eth0
GSD exists.
ONS daemon exists.
Listener exists.
Use the srvctl config command to view the configuration information of the existing database:
[root@node1 ~]# srvctl config database -d novadb -a
node1 novadb1 /opt/ora10g/product/10.2.0/db_1
node2 novadb2 /opt/ora10g/product/10.2.0/db_1
DB_NAME: novadb
ORACLE_HOME: /opt/ora10g/product/10.2.0/db_1
SPFILE: +rac_disk/novadb/spfilenovadb.ora
DOMAIN: NULL
DB_ROLE: NULL
START_OPTIONS: NULL
POLICY: AUTOMATIC
ENABLE FLAG: DB ENABLED
Display the configuration of the ASM instances
[root@node1 ~]# srvctl config asm -n node1
+ASM1 /opt/ora10g/product/10.2.0/db_1
[root@node1 ~]# srvctl config asm -n node2
+ASM2 /opt/ora10g/product/10.2.0/db_1
All running instances in the cluster
SQL> r
  1  select
  2     inst_id
  3   , instance_number inst_no
  4   , instance_name inst_name
  5   , parallel
  6   , status
  7   , database_status db_status
  8   , active_state state
  9   , host_name host
 10  from gv$instance
 11* order by inst_id

(host column truncated)
INST_ID INST_NO INST_NAME PAR STATUS DB_STATUS STATE
------- ------- --------- --- ------ --------- ------
      1       1 novadb1   YES OPEN   ACTIVE    NORMAL
      2       2 novadb2   YES OPEN   ACTIVE    NORMAL
SQL>
All database files in the disk group
SQL> select name from v$datafile
     union
     select member from v$logfile
     union
     select name from v$controlfile
     union
     select name from v$tempfile;

NAME
---------------------------
+rac_disk/novadb/controlfile/current.260.685491565
+rac_disk/novadb/datafile/nova_test.268.686337643
+rac_disk/novadb/datafile/sysaux.257.685491407
+rac_disk/novadb/datafile/system.256.685491401
+rac_disk/novadb/datafile/undotbs1.258.685491411
+rac_disk/novadb/datafile/undotbs2.264.685491733
+rac_disk/novadb/datafile/users.259.685491413
+rac_disk/novadb/onlinelog/group_1.261.685491571
+rac_disk/novadb/onlinelog/group_2.262.685491575
+rac_disk/novadb/onlinelog/group_3.265.685491915
+rac_disk/novadb/onlinelog/group_4.266.685491921

NAME
---------------------------
+rac_disk/novadb/tempfile/temp.263.685491617

12 rows selected.
SQL>
All ASM disks belonging to the "rac_disk" disk group
SQL> select path from v$asm_disk where group_number in
     (select group_number from v$asm_diskgroup where name = 'RAC_DISK');

PATH
---------------------------
/dev/raw/raw3
/dev/raw/raw4
ORCL:nova3
Start/stop a RAC cluster
Make sure you are logged on as the oracle OS user. We will run all of the commands from node1:
# su - oracle
[oracle@node1 ~]$ hostname
node1
Stop the Oracle RAC 10g environment
The first step is to stop the Oracle instance. Once the instance (and its related services) is down, stop the ASM instance. Finally, stop the node applications (virtual IP, GSD, TNS listener, and ONS).
[oracle@node1 ~]$ export ORACLE_SID=novadb1
[oracle@node1 ~]$ emctl stop dbconsole
[oracle@node1 ~]$ srvctl stop instance -d novadb -i novadb1
[oracle@node1 ~]$ srvctl stop asm -n node1
[oracle@node1 ~]$ srvctl stop nodeapps -n node1
Start the Oracle RAC 10g environment
The first step is to start the node applications (virtual IP, GSD, TNS listener, and ONS). Once they are up, start the ASM instance. Finally, start the Oracle instance (and its related services) and the Enterprise Manager Database Console.
[oracle@node1 ~]$ export ORACLE_SID=novadb1
[oracle@node1 ~]$ srvctl start nodeapps -n node1
[oracle@node1 ~]$ srvctl start asm -n node1
[oracle@node1 ~]$ srvctl start instance -d novadb -i novadb1
[oracle@node1 ~]$ emctl start dbconsole
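The stop and start sequences above are mirror images of each other, so the ordering is easy to get wrong by hand. As a sketch (the `rac_plan` function is hypothetical, not an Oracle tool), the following only *prints* the srvctl commands in the correct order; pipe its output to sh on the node to actually execute them:

```shell
#!/bin/sh
# rac_plan: print the srvctl commands for one node in the required order.
# stop:  instance -> ASM -> nodeapps; start: the reverse.
rac_plan() {
  action=$1 db=$2 inst=$3 node=$4
  case $action in
    stop)
      echo "srvctl stop instance -d $db -i $inst"
      echo "srvctl stop asm -n $node"
      echo "srvctl stop nodeapps -n $node"
      ;;
    start)
      echo "srvctl start nodeapps -n $node"
      echo "srvctl start asm -n $node"
      echo "srvctl start instance -d $db -i $inst"
      ;;
  esac
}

# Dry run: show the stop plan for node1.
rac_plan stop novadb novadb1 node1
```

To execute rather than preview, something like `rac_plan start novadb novadb1 node1 | sh` would run the start sequence (emctl is left out here and can be run separately).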
Start/stop all instances using srvctl
Start or stop all instances and their enabled services. I have included this step simply because it is a handy way to shut down (or bring up) every instance at once.
[oracle@node1 ~]$ srvctl start database -d novadb
[oracle@node1 ~]$ srvctl stop database -d novadb
Start and stop a listener
[oracle@node1 ~]$ lsnrctl start listener_hostb
[oracle@node1 ~]$ lsnrctl stop listener_hostb
I have not tested the steps below yet; they are still pending verification.
Back up the voting disk
dd if=voting_disk_name of=backup_file_name
dd if=/dev/rdsk/keys of=votingdisk.bak
# dd if=/dev/zero of=/dev/rdsk/c4t600c0ff00000098ade240330a000d0s4 bs=512 count=261120
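The dd backup above always writes to the same file name. A small sketch of my own (the `backup_voting_disk` function and the dated file name are assumptions, not from the original) that copies the voting disk to a dated destination so repeated backups do not overwrite each other:

```shell
#!/bin/sh
# backup_voting_disk: hypothetical helper that copies a voting disk device
# to a dated .bak file and prints the destination path on success.
backup_voting_disk() {
  vd=$1 dest_dir=$2
  dest="$dest_dir/votingdisk_$(date +%Y%m%d).bak"
  dd if="$vd" of="$dest" bs=512 2>/dev/null || return 1
  echo "$dest"
}

# e.g. (as root, device path is the example from the text):
# backup_voting_disk /dev/rdsk/c4t600c0ff00000098ade240330a000d0s4 /data/backup/RAC
```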
Test
# dd if=/dev/rdsk/c4t600c0ff000000000098ade240330a000d0s4 of=/data/backup/RAC/vd_backup040000bak
261120+0 records in
261120+0 records out
# cd /data/backup/RAC
# ls
ocr041_bak  ocrdisk  vd_backup041_bak  votingdisk.bak  votingdisk041_bak
# dd if=/data/backup/RAC/vd_backup041_bak of=/dev/rdsk/c4t600c0ff000000000098ade240330a000d0s4
261120+0 records in
261120+0 records out

Back up the OCR disk
View backups
$ ocrconfig -showbackup
Back up
/data/oracle/CRS/bin/ocrconfig -export /data/backup/RAC/ocrdisk.bak
Restore (the Oracle Clusterware software must be stopped on all of the nodes first)
/data/oracle/CRS/bin/ocrconfig -import file_name
Restore from an automatic backup
# /data/oracle/CRS/bin/ocrconfig -showbackup
# /data/oracle/CRS/bin/ocrconfig -restore /data/oracle/CRS/cdata/db168crs/backup00.ocr
hosta$ cluvfy comp ocr -n all    // check
OCR check
The OCR configuration lives under /var/opt/oracle; if you need to change the path of the OCR disk, edit the ocrconfig_loc setting there.
OCR disk space check
# /data/oracle/CRS/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     399752
         Used space (kbytes)      :       3784
         Available space (kbytes) :     395968
         ID                       :  148562961
         Device/File Name         : /dev/rdsk/c4t600c0ff000000000098ade240330a000d0s5
                                    Device/File integrity check succeeded
         Device/File not configured
         Cluster registry integrity check succeeded
#
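For monitoring, the free-space figure in the ocrcheck output can be pulled out with a one-line filter. This is a sketch of my own (the `ocr_avail_kb` helper is hypothetical); here it reads a saved sample that mirrors the output above, but normally you would pipe ocrcheck into it directly:

```shell
#!/bin/sh
# ocr_avail_kb: hypothetical filter that prints the available-space value
# (in kbytes) from ocrcheck output, digits only.
ocr_avail_kb() {
  awk -F: '/Available space/ { gsub(/[^0-9]/, "", $2); print $2 }'
}

# Sample input mirroring the ocrcheck output shown above:
ocr_avail_kb <<'EOF'
Total space (kbytes)     :     399752
Used space (kbytes)      :       3784
Available space (kbytes) :     395968
EOF
# prints 395968
```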