Oracle RAC: Action List for Correctly Deleting a Single Node
1. Node 2: use dbca to delete the node 2 instance.

2. Node 1: disable the redo thread of the deleted instance:

   alter database disable thread 2;

3. Node 1: verify that the node's instance has been removed from the database configuration:

   [03:49:06 oracle (db) @ rac1 ~]$ srvctl config database -d prod
   Database unique name: prod
   Database name: prod
   Oracle home: /u01/app/oracle/product/11.2.0/db
   Oracle user: oracle
   Spfile: +DATA/prod/spfileprod.ora
   Domain:
   Start options: open
   Stop options: immediate
   Database role: PRIMARY
   Management policy: AUTOMATIC
   Server pools: prod
   Database instances: prod1
   Disk Groups: DATA
   Mount point paths:
   Services:
   Type: RAC
   Database is administrator managed

4. Node 1: disable and stop the node 2 listener:

   srvctl disable listener -l LISTENER -n rac2
   srvctl stop listener -l LISTENER -n rac2

5. Node 2: update the node list in the local Oracle inventory:

   /u01/app/oracle/product/11.2.0/db/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0/db "CLUSTER_NODES={rac2}" -local

6. Node 2: delete the Oracle software of node 2:

   /u01/app/oracle/product/11.2.0/db/deinstall -local

7. Node 1: update the node list on the remaining node:

   /u01/app/oracle/product/11.2.0/db/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0/db "CLUSTER_NODES={rac1}"

   This completes removal of the Oracle database software. After the Oracle software is deleted, delete the Grid software.

8. Node 1: check the cluster node status:

   olsnodes -s -t
   rac1 Active Unpinned
   rac2 Active Unpinned

9. Node 2: as the root user, run the rootcrs.pl script to deconfigure the Grid installation:

   /u01/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -deinstall -force

10. Node 1: re-confirm the node status:

    olsnodes -s -t
    rac1 Active Unpinned
    rac2 Inactive Unpinned

11. Node 1: as the root user, delete the other node from the cluster:

    crsctl delete node -n rac2
    olsnodes -s -t
    rac1 Active Unpinned

12. Node 2: as the oracle user, update the local Grid inventory:

    /u01/app/11.2.0/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid/ "CLUSTER_NODES=rac2" CRS=TRUE -silent -local

13. Node 2: uninstall the Grid software as the oracle user:

    /u01/app/11.2.0/grid/deinstall -local

    Then, as root:

    rm -rf /etc/oraInst.loc
    rm -rf /opt/ORCLfmap
    rm -rf /etc/oratab

14. Node 1: update the information of the remaining node, as the oracle user:

    /u01/app/11.2.0/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid/ "CLUSTER_NODES=rac1" CRS=TRUE -silent

15. Node 1: check whether the node has been deleted:

    [04:24:28 oracle (db) @ rac1 ~]$ cluvfy stage -post nodedel -n rac2 -verbose
    Performing post-checks for node removal
    Checking CRS integrity...
    Clusterware version consistency passed
    The Oracle Clusterware is healthy on node "rac1"
    CRS integrity check passed
    Result: Node removal check passed
    Post-check for node removal was successful.

The node has been successfully deleted.
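Steps 8, 10, and 11 all hinge on reading `olsnodes -s -t` output to confirm whether the removed node still appears. The check can be scripted; below is a minimal sketch that parses such output from a string. The `node_removed` function name and the canned sample output are illustrative assumptions, not part of any Oracle tool; on a live cluster you would capture `output="$(olsnodes -s -t)"` instead.

```shell
#!/bin/sh
# Sketch: return success (0) if the given node name is absent from
# olsnodes -s -t style output, i.e. the node has been removed.
# node_removed and the sample output below are illustrative only.
node_removed() {
    node="$1"
    output="$2"
    # Look for the node name in the first column only, then negate:
    # absent from the listing means the removal succeeded.
    ! printf '%s\n' "$output" | awk -v n="$node" '$1 == n { found = 1 } END { exit !found }'
}

# Canned output as it would look after step 11 (on a real cluster,
# use: output="$(olsnodes -s -t)").
output="rac1 Active Unpinned"
if node_removed rac2 "$output"; then
    echo "rac2 removed"
else
    echo "rac2 still present"
fi
```

A check like this is convenient in cleanup scripts, since `olsnodes` itself exits 0 whether or not the node is listed.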