Oracle 11g RAC Common Commands

1) Check the cluster status:
[grid@rac02 ~]$ crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
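The same check can be run on every node at once by adding the -all option (a sketch, assuming the same grid environment on each node):
[grid@rac02 ~]$ crsctl check cluster -all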

2) Check the status of all Oracle instances (database status):
[grid@rac02 ~]$ srvctl status database -d racdb
Instance racdb1 is running on node rac01
Instance racdb2 is running on node rac02
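To also see the services running on each instance, srvctl accepts a verbose flag (a variant sketch; the extra detail depends on which services are configured):
[grid@rac02 ~]$ srvctl status database -d racdb -v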

3) Check the status of a single instance:
[grid@rac02 ~]$ srvctl status instance -d racdb -i racdb1
Instance racdb1 is running on node rac01

4) Check the node application status:
[grid@rac02 ~]$ srvctl status nodeapps
VIP rac01-vip is enabled
VIP rac01-vip is running on node: rac01
VIP rac02-vip is enabled
VIP rac02-vip is running on node: rac02
Network is enabled
Network is running on node: rac01
Network is running on node: rac02
GSD is disabled
GSD is not running on node: rac01
GSD is not running on node: rac02
ONS is enabled
ONS daemon is running on node: rac01
ONS daemon is running on node: rac02
eONS is enabled
eONS daemon is running on node: rac01
eONS daemon is running on node: rac02
(GSD being disabled is normal in 11gR2; it is only required to support Oracle 9i databases.)
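The node applications can also be stopped and started per node; a minimal sketch, run as a suitably privileged user (node name rac01 assumed):
[root@rac01 ~]# srvctl stop nodeapps -n rac01
[root@rac01 ~]# srvctl start nodeapps -n rac01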

5) List all configured databases:
[grid@rac02 ~]$ srvctl config database
racdb

6) Check the database configuration:
[grid@rac02 ~]$ srvctl config database -d racdb -a
Database unique name: racdb
Database name: racdb
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +racdb_data/racdb/spfileracdb.ora
Domain: xzxj.edu.cn
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: racdb
Database instances: racdb1,racdb2
Disk Groups: racdb_data,fra
Services:
Database is enabled
Database is administrator managed

7) Check the ASM status and configuration:
[grid@rac02 ~]$ srvctl status asm
ASM is running on rac01,rac02
[grid@rac02 ~]$ srvctl config asm -a
ASM home: /u01/app/11.2.0/grid
ASM listener: LISTENER
ASM is enabled.

8) Check the TNS listener status and configuration:
[grid@rac02 ~]$ srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): rac01,rac02
[grid@rac02 ~]$ srvctl config listener -a
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
/u01/app/11.2.0/grid on node(s) rac02,rac01
End points: TCP:1521

9) Check the SCAN status and configuration:
[grid@rac02 ~]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node rac02
[grid@rac02 ~]$ srvctl config scan
SCAN name: rac-scan.xzxj.edu.cn, Network: 1/192.168.1.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /rac-scan.xzxj.edu.cn/192.168.1.55

10) Check the VIP status and configuration for each node:
[grid@rac02 ~]$ srvctl status vip -n rac01
VIP rac01-vip is enabled
VIP rac01-vip is running on node: rac01
[grid@rac02 ~]$ srvctl status vip -n rac02
VIP rac02-vip is enabled
VIP rac02-vip is running on node: rac02
[grid@rac02 ~]$ srvctl config vip -n rac01
VIP exists.: rac01
VIP exists.: /rac01-vip/192.168.1.53/255.255.255.0/eth0
[grid@rac02 ~]$ srvctl config vip -n rac02
VIP exists.: rac02
VIP exists.: /rac02-vip/192.168.1.54/255.255.255.0/eth0

11) Check the node application configuration (VIP, GSD, ONS, listener):
[grid@rac02 ~]$ srvctl config nodeapps -a -g -s -l
-l option has been deprecated and will be ignored.
VIP exists.: rac01
VIP exists.: /rac01-vip/192.168.1.53/255.255.255.0/eth0
VIP exists.: rac02
VIP exists.: /rac02-vip/192.168.1.54/255.255.255.0/eth0
GSD exists.
ONS daemon exists. Local port 6100, remote port 6200
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
/u01/app/11.2.0/grid on node(s) rac02,rac01
End points: TCP:1521

12) Verify clock synchronization across all cluster nodes:
[grid@rac02 ~]$ cluvfy comp clocksync -verbose
Verifying Clock Synchronization across the cluster nodes
Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed
Checking if CTSS Resource is running on all nodes...
Check: CTSS Resource running on all nodes
  Node Name                             Status
  ------------------------------------  ------------------------
  rac02                                 passed
Result: CTSS resource check passed
Querying CTSS for time offset on all nodes...
Result: Query of CTSS for time offset passed
Check CTSS state started...
Check: CTSS state
  Node Name                             State
  ------------------------------------  ------------------------
  rac02                                 Active
CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Reference Time Offset Limit: 1000.0 msecs
Check: Reference Time Offset
  Node Name     Time Offset               Status
  ------------  ------------------------  ------------------------
  rac02         0.0                       passed
Time offset is within the specified limits on the following set of nodes:
"[rac02]"
Result: Check of clock time offsets passed
Oracle Cluster Time Synchronization Services check passed
Verification of Clock Synchronization across the cluster nodes was successful.
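Only rac02 appears in the output above because, without a node list, cluvfy checks just the local node; a sketch that verifies every node in the cluster instead:
[grid@rac02 ~]$ cluvfy comp clocksync -n all -verbose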

13) List all running instances in the cluster (SQL):
SELECT inst_id, instance_number inst_no, instance_name inst_name, parallel, status,
       database_status db_status, active_state state, host_name host
  FROM gv$instance
 ORDER BY inst_id;
14) List all database files and their ASM disk group (SQL):
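The query itself did not survive in the source; a minimal sketch that lists every database file (the leading +DISKGROUP component of each path identifies the ASM disk group):
SELECT name FROM v$datafile
UNION
SELECT member FROM v$logfile
UNION
SELECT name FROM v$controlfile
UNION
SELECT name FROM v$tempfile;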
15) Check the ASM disk volumes (SQL):
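This query is also missing from the source; a minimal sketch against v$asm_diskgroup showing each group's size and free space (per-disk detail is available from v$asm_disk):
SELECT group_number, name, total_mb, free_mb, state, type FROM v$asm_diskgroup;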
16) Start and stop the cluster:
The following operations must be performed as the root user.
(1) Stop the Oracle Clusterware stack on the local server:
[root@rac01 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster

Note: After running the crsctl stop cluster command, if any resource managed by Oracle Clusterware is still running, the entire command fails. Use the -f option to stop all resources unconditionally and bring down the Oracle Clusterware stack, as shown below.
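A sketch of the forced stop (same grid home as above assumed):
[root@rac01 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster -f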
Also note that the Oracle Clusterware stack can be stopped on all servers in the cluster by specifying the -all option. To stop the Oracle Clusterware stack on rac01 and rac02:
[root@rac02 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster -all
(2) Start the Oracle Clusterware stack on the local server:
[root@rac01 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster
Note: You can start the Oracle Clusterware stack on all servers in the cluster by specifying the -all option:
[root@rac02 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster -all
You can also start the Oracle Clusterware stack on one or more named servers in the cluster by listing the servers, separated by spaces:
[root@rac01 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster -n rac01 rac02
(3) Start and stop all instances with srvctl:
[oracle@rac01 ~]$ srvctl stop database -d racdb
[oracle@rac01 ~]$ srvctl start database -d racdb