Common Commands for Oracle 11g RAC Management

1) Check cluster status:

[grid@rac02 ~]$ crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

2) Status of all Oracle instances (database status):

[grid@rac02 ~]$ srvctl status database -d racdb
Instance racdb1 is running on node rac01
Instance racdb2 is running on node rac02

3) Check the status of a single instance:

[grid@rac02 ~]$ srvctl status instance -d racdb -i racdb1
Instance racdb1 is running on node rac01

4) Node application status:

[grid@rac02 ~]$ srvctl status nodeapps
VIP rac01-vip is enabled
VIP rac01-vip is running on node: rac01
VIP rac02-vip is enabled
VIP rac02-vip is running on node: rac02
Network is enabled
Network is running on node: rac01
Network is running on node: rac02
GSD is disabled
GSD is not running on node: rac01
GSD is not running on node: rac02
ONS is enabled
ONS daemon is running on node: rac01
ONS daemon is running on node: rac02
eONS is enabled
eONS daemon is running on node: rac01
eONS daemon is running on node: rac02

5) List all configured databases:

[grid@rac02 ~]$ srvctl config database
racdb

6) Database configuration:

[grid@rac02 ~]$ srvctl config database -d racdb -a
Database unique name: racdb
Database name: racdb
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +RACDB_DATA/racdb/spfileracdb.ora
Domain: xzxj.edu.cn
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: racdb
Database instances: racdb1,racdb2
Disk Groups: RACDB_DATA,FRA
Services:
Database is enabled
Database is administrator managed

7) ASM status and ASM configuration:

[grid@rac02 ~]$ srvctl status asm
ASM is running on rac01,rac02
[grid@rac02 ~]$ srvctl config asm -a
ASM home: /u01/app/11.2.0/grid
ASM listener: LISTENER
ASM is enabled.

8) TNS listener status and configuration:

[grid@rac02 ~]$ srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): rac01,rac02
[grid@rac02 ~]$ srvctl config listener -a
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
/u01/app/11.2.0/grid on node(s) rac02,rac01
End points: TCP:1521

9) SCAN status and configuration:

[grid@rac02 ~]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node rac02
[grid@rac02 ~]$ srvctl config scan
SCAN name: rac-scan.xzxj.edu.cn, Network: 1/192.168.1.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /rac-scan.xzxj.edu.cn/192.168.1.55

10) VIP status and configuration for each node:

[grid@rac02 ~]$ srvctl status vip -n rac01
VIP rac01-vip is enabled
VIP rac01-vip is running on node: rac01
[grid@rac02 ~]$ srvctl status vip -n rac02
VIP rac02-vip is enabled
VIP rac02-vip is running on node: rac02
[grid@rac02 ~]$ srvctl config vip -n rac01
VIP exists.: rac01
VIP exists.: /rac01-vip/192.168.1.53/255.255.255.0/eth0
[grid@rac02 ~]$ srvctl config vip -n rac02
VIP exists.: rac02
VIP exists.: /rac02-vip/192.168.1.54/255.255.255.0/eth0

11) Node application configuration (VIP, GSD, ONS, listener):

[grid@rac02 ~]$ srvctl config nodeapps -a -g -s -l
-l option has been deprecated and will be ignored.
VIP exists.: rac01
VIP exists.: /rac01-vip/192.168.1.53/255.255.255.0/eth0
VIP exists.: rac02
VIP exists.: /rac02-vip/192.168.1.54/255.255.255.0/eth0
GSD exists.
ONS daemon exists. Local port 6100, remote port 6200
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
/u01/app/11.2.0/grid on node(s) rac02,rac01
End points: TCP:1521
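The per-component checks in items 1-11 can also be cross-checked with a few cluster-wide commands. A minimal sketch (standard 11gR2 Clusterware utilities, run as the grid user like the examples above; output omitted here):

[grid@rac02 ~]$ crsctl status resource -t     (tabular view of every resource Clusterware manages)
[grid@rac02 ~]$ crsctl check crs              (health of the CRS stack on the local node)
[grid@rac02 ~]$ olsnodes -n                   (nodes known to the cluster, with their node numbers)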
12) Verify clock synchronization across all cluster nodes:

[grid@rac02 ~]$ cluvfy comp clocksync -verbose

Verifying Clock Synchronization across the cluster nodes

Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed

Checking if CTSS Resource is running on all nodes...
Check: CTSS Resource running on all nodes
  Node Name     Status
  ------------  ------------------------
  rac02         passed
Result: CTSS resource check passed

Querying CTSS for time offset on all nodes...
Result: Query of CTSS for time offset passed

Check CTSS state started...
Check: CTSS state
  Node Name     State
  ------------  ------------------------
  rac02         Active
CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Reference Time Offset Limit: 1000.0 msecs
Check: Reference Time Offset
  Node Name     Time Offset               Status
  ------------  ------------------------  ------------------------
  rac02         0.0                       passed

Time offset is within the specified limits on the following set of nodes: "[rac02]"
Result: Check of clock time offsets passed

Oracle Cluster Time Synchronization Services check passed
Verification of Clock Synchronization across the cluster nodes was successful.

13) All running instances in the cluster (SQL):

SELECT inst_id, instance_number inst_no, instance_name inst_name, parallel,
       status, database_status db_status, active_state state, host_name host
  FROM gv$instance
 ORDER BY inst_id;

14) All database files and their ASM disk groups (SQL): see the first query sketch after item 16.

15) ASM disk volumes (SQL): see the second query sketch after item 16.

16) Start and stop the cluster (the following operations must be performed as the root user):

(1) Stop the Oracle Clusterware stack on the local server:

[root@rac01 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster

Note: if any resources managed by Oracle Clusterware are still running when you run "crsctl stop cluster", the entire command fails. Use the -f option to stop all resources unconditionally and shut down the Oracle Clusterware stack. You can also specify the -all option to stop the Oracle Clusterware stack on all servers in the cluster. To stop it on rac01 and rac02:

[root@rac02 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster -all

(2) Start the Oracle Clusterware stack on the local server:

[root@rac01 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster

Note: you can specify the -all option to start the Oracle Clusterware stack on all servers in the cluster:

[root@rac02 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster -all

You can also start the Oracle Clusterware stack on one or more named servers in the cluster (separate the server names with spaces):

[root@rac01 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster -n rac01 rac02

(3) Use SRVCTL to stop and start all instances:

[oracle@rac01 ~]$ srvctl stop database -d racdb
[oracle@rac01 ~]$ srvctl start database -d racdb
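For item 14, one common query that lists every database file (datafiles, online redo logs, controlfiles, tempfiles); in an ASM configuration the disk group appears as the leading +DISKGROUP component of each path. A minimal sketch, not necessarily the exact query the author used:

SELECT name FROM v$datafile
UNION
SELECT member FROM v$logfile
UNION
SELECT name FROM v$controlfile
UNION
SELECT name FROM v$tempfile;

For item 15, a matching sketch that lists the ASM disk volumes (run it against the ASM instance):

SELECT path FROM v$asm_disk;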
