Oracle 11g RAC Shutdown and Startup Sequence, with Status Check Commands
Shutdown sequence:
1. Shut down the database. As the oracle user, run the srvctl command:
[oracle@node1 ~]$ srvctl stop database -d ORCL    ---- stop the instances on all nodes
[oracle@node1 ~]$ srvctl status database -d ORCL
Or log on to the database on each node and run SQL> shutdown immediate, as in the sketch below.
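A minimal per-node shutdown sketch (the instance name orcl1 on node1 is an assumption; set ORACLE_SID to match your environment):
[oracle@node1 ~]$ export ORACLE_SID=orcl1
[oracle@node1 ~]$ sqlplus / as sysdba
SQL> shutdown immediate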
2. Stop the cluster service. This must be done as the root user (note: this step and step 3 can be combined, since stopping HAS also brings down the cluster stack on that node):
[root@node1 oracle]# cd /u01/app/11.2.0/grid/bin
[root@node1 bin]# ./crsctl stop cluster -all    ---- stop the cluster services on all nodes
Or [root@node1 bin]# ./crsctl stop cluster    ---- stop the cluster service on the local node only; execute it on each node
You can also name the nodes to stop explicitly:
[root@node1 bin]# ./crsctl stop cluster -n rac1 rac2
View the node status:
[grid@node1 ~]$ crs_stat -t -v    (or: crsctl status resource -t)
[grid@node1 ~]$ srvctl status nodeapps
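After the stack has been stopped, a cluster check on that node reports the daemons as unreachable, for example:
[root@node1 bin]# ./crsctl check cluster
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4530: Communications failure contacting Cluster Synchronization Services daemon
CRS-4534: Cannot communicate with Event Manager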
3. Stop HAS (High Availability Services). This must also be done as the root user:
[root@node1 oracle]# cd /u01/app/11.2.0/grid/bin
[root@node1 bin]# ./crsctl stop has -f
The preceding HAS stop command must be executed on each node separately.
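To confirm HAS is down on a node, check it; CRS-4639 is the expected message once it has stopped:
[root@node1 bin]# ./crsctl check has
CRS-4639: Could not contact Oracle High Availability Services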
Startup sequence:
11g R2 RAC starts automatically by default. If you need to start it manually, start the components in the order HAS, cluster, database. The specific commands are as follows:
1. Start HAS (High Availability Services). This must be done as the root user:
[root@node1 bin]# ./crsctl start has
The preceding HAS start command must be executed on each node separately.
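Once HAS has started on a node, a quick check should report it online:
[root@node1 bin]# ./crsctl check has
CRS-4638: Oracle High Availability Services is online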
View the node status:
[grid@node1 ~]$ crs_stat -t -v    (or: crsctl status resource -t)
2. Start the cluster:
[root@node1 ~]# ./crsctl start cluster -all    ---- start the cluster services on all nodes at the same time
Or start only specific nodes:
[root@node1 ~]# ./crsctl start cluster -n rac1 rac2    ---- start the two named nodes
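To verify the full stack on every node, crsctl check cluster -all can be used (node names as above):
[root@node1 ~]# ./crsctl check cluster -all
**************************************************************
rac1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
rac2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************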
3. Start the database with the srvctl command (assuming the database name is ORCL):
[oracle@node1 ~]$ srvctl start database -d ORCL    ---- start the instances on all nodes
Or log on to the database on each node and run SQL> startup, as in the sketch below.
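A minimal per-node startup sketch (again, the instance name orcl1 is an assumption):
[oracle@node1 ~]$ export ORACLE_SID=orcl1
[oracle@node1 ~]$ sqlplus / as sysdba
SQL> startup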
Run the crs_stat command to verify the resources.
Check the node status:
[grid@node1 ~]$ crs_stat -t -v    (or: crsctl status resource -t)
[grid@node1 ~]$ srvctl status nodeapps
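With the stack up, srvctl status nodeapps reports the VIP, network, and ONS state per node; illustrative output (VIP and node names are assumed):
VIP node1-vip is enabled
VIP node1-vip is running on node: node1
VIP node2-vip is enabled
VIP node2-vip is running on node: node2
Network is enabled
Network is running on node: node1
Network is running on node: node2
ONS is enabled
ONS daemon is running on node: node1
ONS daemon is running on node: node2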
Check the database status:
[grid@node1 ~]$ srvctl status database -d ORCL
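Typical output when both instances are up (instance and node names are illustrative):
Instance orcl1 is running on node node1
Instance orcl2 is running on node node2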
Check the ASM status:
[grid@node1 ~]$ srvctl status asm
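Expected output when ASM is up on both nodes (node names are illustrative):
ASM is running on node1,node2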
The following autostart commands are run as the root user.
Disable automatic start of CRS at system reboot:
[root@node1 bin]# ./crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
Check whether CRS is configured to start automatically:
[root@node1 bin]# ./crsctl config crs
CRS-4621: Oracle High Availability Services autostart is disabled.
Enable automatic start of CRS at system reboot (execute on each node separately):
[root@node1 bin]# ./crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
[root@node1 bin]# ./crsctl config crs
CRS-4622: Oracle High Availability Services autostart is enabled.
OCR and voting disk checks (grid user)
Run these commands as the grid user: crsctl query css votedisk and ocrcheck.
[grid@node1 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                   File Name    Disk group
--  -----    -----------------                   ---------    ----------
 1. ONLINE   6312056f545c4fc7bf9f0a9b56a5aba0    (ORCL:VOL1)  [OCR]
Located 1 voting disk(s).
[grid@node1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       2964
         Available space (kbytes) :     259156
         ID                       : 1801821488
         Device/File Name         :       +OCR
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check bypassed due to non-privileged user