How to manually kill all CRS processes in Oracle 11.2 without causing a host reboot


As we all know, in a RAC environment, killing the ocssd.bin process can cause the host to reboot.


But sometimes the system is already in an abnormal state and CRS cannot be shut down cleanly, and the host may be an old system that has not been restarted for years, one that nobody dares to reboot. What do we do then?


In that case we can only try to kill the processes manually and then repair CRS by hand (note that a 10.2 RAC has only 3 d.bin processes).


Test environment: the operating system is OEL 6.6.

[root@lunar1 ~]# cat /etc/oracle-release
Oracle Linux Server release 6.6
[root@lunar1 ~]# 
[root@lunar1 ~]# uname -a
Linux lunar1 3.8.13-44.1.1.el6uek.x86_64 #2 SMP Wed Sep 10 06:10:25 PDT 2014 x86_64 x86_64 x86_64 GNU/Linux
[root@lunar1 ~]# 

The CRS version of this set of RAC is 11.2.0.4:


[root@lunar1 ~]# crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.4.0]
[root@lunar1 ~]# crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [11.2.0.4.0]
[root@lunar1 ~]# crsctl query crs softwareversion
Oracle Clusterware version on node [lunar1] is [11.2.0.4.0]
[root@lunar1 ~]# 

Note that an ordinary 12.1 RAC (non-Flex Cluster) behaves the same way, so the approach and procedure described here apply to it as well.


To view the status of the current CRS:


[root@lunar1 ~]# crsctl status res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRSDG.dg
               ONLINE  ONLINE       lunar1                                       
               ONLINE  ONLINE       lunar2                                       
ora.DATADG1.dg
               ONLINE  ONLINE       lunar1                                       
               ONLINE  ONLINE       lunar2                                       
ora.DATADG2.dg
               ONLINE  ONLINE       lunar1                                       
               ONLINE  ONLINE       lunar2                                       
ora.LISTENER.lsnr
               ONLINE  ONLINE       lunar1                                       
               ONLINE  ONLINE       lunar2                                       
ora.asm
               ONLINE  ONLINE       lunar1                   Started             
               ONLINE  ONLINE       lunar2                   Started             
ora.gsd
               OFFLINE OFFLINE      lunar1                                       
               OFFLINE OFFLINE      lunar2                                       
ora.net1.network
               ONLINE  ONLINE       lunar1                                       
               ONLINE  ONLINE       lunar2                                       
ora.ons
               ONLINE  ONLINE       lunar1                                       
               ONLINE  ONLINE       lunar2                                       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       lunar2                                       
ora.cvu
      1        ONLINE  ONLINE       lunar2                                       
ora.lunar.db
      1        ONLINE  ONLINE       lunar1                   Open                
      2        ONLINE  OFFLINE                               STARTING            
ora.lunar1.vip
      1        ONLINE  ONLINE       lunar1                                       
ora.lunar2.vip
      1        ONLINE  ONLINE       lunar2                                       
ora.oc4j
      1        ONLINE  ONLINE       lunar1                                       
ora.scan1.vip
      1        ONLINE  ONLINE       lunar2                                       
[root@lunar1 ~]# 


To view all current CRS processes:
[root@lunar1 ~]# ps -ef|grep d.bin
root      3860     1  0 19:31 ?        00:00:12 /u01/app/11.2.0.4/grid/bin/ohasd.bin reboot
grid      3972     1  0 19:31 ?        00:00:04 /u01/app/11.2.0.4/grid/bin/oraagent.bin
grid      3983     1  0 19:31 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/mdnsd.bin
grid      3994     1  0 19:31 ?        00:00:02 /u01/app/11.2.0.4/grid/bin/gpnpd.bin
root      4004     1  0 19:31 ?        00:00:15 /u01/app/11.2.0.4/grid/bin/orarootagent.bin
grid      4007     1  0 19:31 ?        00:00:12 /u01/app/11.2.0.4/grid/bin/gipcd.bin
root      4019     1  0 19:31 ?        00:00:05 /u01/app/11.2.0.4/grid/bin/osysmond.bin
root      4032     1  0 19:31 ?        00:00:02 /u01/app/11.2.0.4/grid/bin/cssdmonitor
root      4051     1  0 19:31 ?        00:00:02 /u01/app/11.2.0.4/grid/bin/cssdagent
grid      4063     1  0 19:31 ?        00:00:12 /u01/app/11.2.0.4/grid/bin/ocssd.bin 
root      4157     1  0 19:31 ?        00:00:06 /u01/app/11.2.0.4/grid/bin/octssd.bin reboot
grid      4180     1  0 19:31 ?        00:00:06 /u01/app/11.2.0.4/grid/bin/evmd.bin
grid      4343  4180  0 19:32 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/evmlogger.bin -o /u01/app/11.2.0.4/grid/evm/log/evmlogger.info -l /u01/app/11.2.0.4/grid/evm/log/evmlogger.log
root      5385     1  1 19:39 ?        00:00:17 /u01/app/11.2.0.4/grid/bin/crsd.bin reboot
grid      5456     1  0 19:39 ?        00:00:04 /u01/app/11.2.0.4/grid/bin/oraagent.bin
root      5473     1  0 19:39 ?        00:00:07 /u01/app/11.2.0.4/grid/bin/orarootagent.bin
grid      5475     1  0 19:39 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/scriptagent.bin
grid      6535     1  0 19:50 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/tnslsnr LISTENER -inherit
oracle    7132     1  0 20:04 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/oraagent.bin
root      7350  7273  0 20:04 pts/2    00:00:00 grep d.bin
[root@lunar1 ~]# 
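A small aside, not from the original article: `ps -ef|grep d.bin` always matches the grep process itself (the last line of the listing above). A character class in the pattern is the classic shell trick to avoid that:

```shell
# Demonstrated on canned input: the grep process's own command line shows
# the literal text "grep [d].bin", which no longer contains "d.bin",
# so the bracketed pattern excludes it.
printf 'root 3860 ohasd.bin reboot\nroot 7350 grep [d].bin\n' \
    | grep '[d]\.bin'
# On a live system:  ps -ef | grep '[d]\.bin'    (or: pgrep -fl 'd\.bin')
```

Only the ohasd.bin line is printed; the grep line filters itself out.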

OK, let's start the kill simulation. First we kill /u01/app/11.2.0.4/grid/bin/ohasd.bin (it is restarted automatically; see the 11.2 RAC startup process).

Then we kill cssdmonitor:

[root@lunar1 ~]# kill -9 4032
-bash: kill: (4032) - No such process
[root@lunar1 ~]# 

There is no such process, which means that the cssdmonitor process has already been restarted with a new PID:


(See 11.2 RAC startup process)


[root@lunar1 ~]# ps -ef|grep d.bin
grid      3983     1  0 19:31 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/mdnsd.bin
grid      3994     1  0 19:31 ?        00:00:03 /u01/app/11.2.0.4/grid/bin/gpnpd.bin
grid      4007     1  0 19:31 ?        00:00:13 /u01/app/11.2.0.4/grid/bin/gipcd.bin
root      4019     1  0 19:31 ?        00:00:05 /u01/app/11.2.0.4/grid/bin/osysmond.bin
grid      4063     1  0 19:31 ?        00:00:13 /u01/app/11.2.0.4/grid/bin/ocssd.bin 
root      4157     1  0 19:31 ?        00:00:06 /u01/app/11.2.0.4/grid/bin/octssd.bin reboot
grid      4180     1  0 19:31 ?        00:00:07 /u01/app/11.2.0.4/grid/bin/evmd.bin
grid      4343  4180  0 19:32 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/evmlogger.bin -o /u01/app/11.2.0.4/grid/evm/log/evmlogger.info -l /u01/app/11.2.0.4/grid/evm/log/evmlogger.log
root      5385     1  1 19:39 ?        00:00:19 /u01/app/11.2.0.4/grid/bin/crsd.bin reboot
grid      5456     1  0 19:39 ?        00:00:05 /u01/app/11.2.0.4/grid/bin/oraagent.bin
root      5473     1  0 19:39 ?        00:00:07 /u01/app/11.2.0.4/grid/bin/orarootagent.bin
grid      5475     1  0 19:39 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/scriptagent.bin
grid      6535     1  0 19:50 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/tnslsnr LISTENER -inherit
oracle    7132     1  0 20:04 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/oraagent.bin
grid      7490     1  0 20:06 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/tnslsnr LISTENER_SCAN1 -inherit
root      7534  2487  3 20:07 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/ohasd.bin restart
grid      7571     1  0 20:07 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/oraagent.bin
root      7575     1  1 20:07 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/orarootagent.bin
root      7578     1  0 20:07 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/cssdagent
root      7588     1  0 20:07 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/cssdmonitor
root      7740  7273  0 20:07 pts/2    00:00:00 grep d.bin
[root@lunar1 ~]# 


The processes above with start times between 20:04 and 20:07 were restarted automatically in the background by the respawned /u01/app/11.2.0.4/grid/bin/ohasd.bin process.


Now we kill mdnsd.bin, gpnpd.bin, gipcd.bin, and osysmond.bin.


Of these four processes, the first three are among the earliest processes that OHASD starts during CRS startup.


If you kill these processes, OHASD will restart them:


[root@lunar1 ~]# kill -9 3983 3994 4007 4019
[root@lunar1 ~]# ps -ef|grep d.bin
grid      4063     1  0 19:31 ?        00:00:13 /u01/app/11.2.0.4/grid/bin/ocssd.bin 
grid      6535     1  0 19:50 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/tnslsnr LISTENER -inherit
grid      7490     1  0 20:06 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/tnslsnr LISTENER_SCAN1 -inherit
root      7534  2487  2 20:07 ?        00:00:01 /u01/app/11.2.0.4/grid/bin/ohasd.bin restart
grid      7571     1  0 20:07 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/oraagent.bin
root      7575     1  1 20:07 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/orarootagent.bin
root      7578     1  0 20:07 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/cssdagent
root      7588     1  0 20:07 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/cssdmonitor
grid      7756     1  1 20:07 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/gpnpd.bin
grid      7758     1  1 20:07 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/mdnsd.bin
root      7776  7273  0 20:07 pts/2    00:00:00 grep d.bin
[root@lunar1 ~]# 


Here we see that the four processes we just killed have not all come back up yet. What is going on?


Don't worry, it just isn't time yet: OHASD needs to run its check cycle before it restarts them, haha~
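Rather than re-running ps by hand until OHASD's next check fires, a tiny poll helper can wait for a daemon to reappear. This is my own sketch, not something from the article; the function name and timings are made up:

```shell
# Hypothetical helper: return 0 as soon as some process's command line
# matches $1, or return 1 after roughly $2 seconds.
wait_for_proc() {
    pattern=$1 timeout=$2 i=0
    while [ "$i" -lt "$timeout" ]; do
        # pgrep -f matches against the full command line
        pgrep -f "$pattern" >/dev/null 2>&1 && return 0
        sleep 1
        i=$((i + 1))
    done
    return 1
}
# e.g. after killing the lower-stack daemons:
#   wait_for_proc 'gipcd\.bin' 120 && echo "gipcd is back"
```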


Then we kill the listeners:


[root@lunar1 ~]# kill -9 6535 7490 
[root@lunar1 ~]# ps -ef|grep d.bin
grid      4063     1  0 19:31 ?        00:00:13 /u01/app/11.2.0.4/grid/bin/ocssd.bin 
root      7534  2487  2 20:07 ?        00:00:01 /u01/app/11.2.0.4/grid/bin/ohasd.bin restart
grid      7571     1  0 20:07 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/oraagent.bin
root      7575     1  0 20:07 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/orarootagent.bin
root      7578     1  0 20:07 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/cssdagent
root      7588     1  0 20:07 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/cssdmonitor
grid      7756     1  1 20:07 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/gpnpd.bin
grid      7758     1  0 20:07 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/mdnsd.bin
grid      7783     1  2 20:07 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/gipcd.bin
root      7785     1  2 20:07 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/osysmond.bin
root      7844     1  1 20:07 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/ologgerd -m lunar2 -r -d /u01/app/11.2.0.4/grid/crf/db/lunar1
root      7853     1  0 20:07 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/octssd.bin
grid      7873     1  1 20:07 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/evmd.bin
root      7874     1 14 20:07 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/crsd.bin reboot
grid      7944  7873  0 20:08 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/evmlogger.bin -o /u01/app/11.2.0.4/grid/evm/log/evmlogger.info -l /u01/app/11.2.0.4/grid/evm/log/evmlogger.log
grid      7979     1  9 20:08 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/oraagent.bin
grid      7982     1  3 20:08 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/scriptagent.bin
oracle    7986     1  4 20:08 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/oraagent.bin
root      8001     1  3 20:08 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/orarootagent.bin
grid      8025  7979  0 20:08 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/lsnrctl status LISTENER
grid      8028  7979  0 20:08 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/lsnrctl status LISTENER_SCAN1
root      8083  7273  0 20:08 pts/2    00:00:00 grep d.bin
[root@lunar1 ~]# 

OK, look: the killed processes have all been restarted. 11.2 RAC is really tough.


Now we kill the /etc/init.d/init.ohasd process:
[root@lunar1 ~]# ps -ef|grep ohasd
root      2487     1  0 19:20 ?        00:00:00 /bin/sh /etc/init.d/init.ohasd run
root      7534  2487  1 20:07 ?        00:00:01 /u01/app/11.2.0.4/grid/bin/ohasd.bin restart
root      8191  7273  0 20:08 pts/2    00:00:00 grep ohasd
[root@lunar1 ~]# kill -9 2487 7534
[root@lunar1 ~]# ps -ef|grep ohasd
root      8239     1  0 20:08 ?        00:00:00 /bin/sh /etc/init.d/init.ohasd run
root      8257  8239  0 20:08 ?        00:00:00 /bin/sh /etc/init.d/init.ohasd run
root      8258  8257  0 20:08 ?        00:00:00 /bin/sh /etc/init.d/init.ohasd run
root      8267  7273  0 20:08 pts/2    00:00:00 grep ohasd
[root@lunar1 ~]# ps -ef|grep ohasd
root      8239     1  0 20:08 ?        00:00:00 /bin/sh /etc/init.d/init.ohasd run
root      8299  7273  0 20:08 pts/2    00:00:00 grep ohasd
[root@lunar1 ~]# 

What we see here is the /etc/init.d/init.ohasd process being automatically restarted by the system. This is recorded in /var/log/messages:


[root@lunar1 ~]# tail -f /var/log/messages
Jan 24 19:45:31 lunar1 kernel: e1000 0000:00:03.0 eth0: Reset adapter
Jan 24 20:03:50 lunar1 kernel: e1000: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
Jan 24 20:03:52 lunar1 kernel: e1000: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
Jan 24 20:07:01 lunar1 clsecho: /etc/init.d/init.ohasd: ohasd is restarting 1/10.
Jan 24 20:07:01 lunar1 logger: exec /u01/app/11.2.0.4/grid/perl/bin/perl -I/u01/app/11.2.0.4/grid/perl/lib /u01/app/11.2.0.4/grid/bin/crswrapexece.pl /u01/app/11.2.0.4/grid/crs/install/s_crsconfig_lunar1_env.txt /u01/app/11.2.0.4/grid/bin/ohasd.bin "restart"
Jan 24 20:08:26 lunar1 init: oracle-ohasd main process (2487) killed by KILL signal
Jan 24 20:08:26 lunar1 init: oracle-ohasd main process ended, respawning
Jan 24 20:13:58 lunar1 init: oracle-ohasd main process (8239) killed by KILL signal
Jan 24 20:13:58 lunar1 init: oracle-ohasd main process ended, respawning
Jan 24 20:14:12 lunar1 root: exec /u01/app/11.2.0.4/grid/perl/bin/perl -I/u01/app/11.2.0.4/grid/perl/lib /u01/app/11.2.0.4/grid/bin/crswrapexece.pl /u01/app/11.2.0.4/grid/crs/install/s_crsconfig_lunar1_env.txt /u01/app/11.2.0.4/grid/bin/ohasd.bin "reboot"
^C
[root@lunar1 ~]# 
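The respawning recorded above comes from init rather than from Clusterware itself: on OL6/RHEL6, init.ohasd is defined as an upstart job with a respawn stanza (on RHEL5 it was a respawn entry in /etc/inittab). The sketch below writes out what that job file typically looks like; the exact path (/etc/init/oracle-ohasd.conf) and contents should be verified on your own system:

```shell
# Recreate a typical oracle-ohasd upstart job file (contents are the usual
# 11.2-on-OL6 defaults, written to /tmp purely for illustration):
cat <<'EOF' > /tmp/oracle-ohasd.conf.example
start on runlevel [35]
stop  on runlevel [!35]
respawn
exec /etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null
EOF
# The "respawn" stanza is why "kill -9" on init.ohasd never sticks:
grep -c '^respawn' /tmp/oracle-ohasd.conf.example
```

This is also why the messages log above says "main process ended, respawning".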


And that process was restarted automatically (note that the crsd.bin process here has not been restarted; it is still the one started at 20:07):


[root@lunar1 ~]# ps -ef|grep d.bin
grid      4063     1  0 19:31 ?        00:00:14 /u01/app/11.2.0.4/grid/bin/ocssd.bin 
root      7578     1  0 20:07 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/cssdagent
root      7588     1  0 20:07 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/cssdmonitor
grid      7756     1  0 20:07 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/gpnpd.bin
grid      7758     1  0 20:07 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/mdnsd.bin
grid      7783     1  0 20:07 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/gipcd.bin
root      7785     1  1 20:07 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/osysmond.bin
root      7844     1  0 20:07 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/ologgerd -m lunar2 -r -d /u01/app/11.2.0.4/grid/crf/db/lunar1
root      7853     1  0 20:07 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/octssd.bin
grid      7873     1  0 20:07 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/evmd.bin
root      7874     1  3 20:07 ?        00:00:01 /u01/app/11.2.0.4/grid/bin/crsd.bin reboot
grid      7944  7873  0 20:08 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/evmlogger.bin -o /u01/app/11.2.0.4/grid/evm/log/evmlogger.info -l /u01/app/11.2.0.4/grid/evm/log/evmlogger.log
grid      7979     1  0 20:08 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/oraagent.bin
grid      7982     1  0 20:08 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/scriptagent.bin
oracle    7986     1  0 20:08 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/oraagent.bin
root      8001     1  0 20:08 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/orarootagent.bin
grid      8119     1  0 20:08 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/tnslsnr LISTENER_SCAN1 -inherit
grid      8120     1  0 20:08 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/tnslsnr LISTENER -inherit
root      8321  8319  1 20:08 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/crsctl.bin check has
root      8325  7273  0 20:08 pts/2    00:00:00 grep d.bin
[root@lunar1 ~]# 


Now we kill evmlogger.bin, gpnpd.bin, mdnsd.bin, gipcd.bin, evmd.bin, both oraagent.bin processes, scriptagent.bin, orarootagent.bin, and then the two listeners:


[root@lunar1 ~]# kill -9 7944 7756 7758 7783 7873 7979 7982 7986 8001 8119 8120
[root@lunar1 ~]# ps -ef|grep d.bin
grid      4063     1  0 19:31 ?        00:00:14 /u01/app/11.2.0.4/grid/bin/ocssd.bin 
root      7578     1  0 20:07 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/cssdagent
root      7588     1  0 20:07 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/cssdmonitor
root      7785     1  1 20:07 ?        00:00:01 /u01/app/11.2.0.4/grid/bin/osysmond.bin
root      7844     1  0 20:07 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/ologgerd -m lunar2 -r -d /u01/app/11.2.0.4/grid/crf/db/lunar1
root      8593  8591  0 20:09 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/crsctl.bin check has
root      8597  7273  0 20:09 pts/2    00:00:00 grep d.bin
[root@lunar1 ~]# 


Then we kill osysmond.bin, ologgerd, cssdmonitor, and cssdagent:


[root@lunar1 ~]# kill -9 7785 7844 7588 7578  
[root@lunar1 ~]# 


OK, now only ocssd.bin is left:


[root@lunar1 ~]# ps -ef|grep d.bin
grid      4063     1  0 19:31 ?        00:00:14 /u01/app/11.2.0.4/grid/bin/ocssd.bin 
root      8629  7273  0 20:10 pts/2    00:00:00 grep d.bin
[root@lunar1 ~]#


Now we kill ocssd.bin, the process that, according to legend, will cause the host to reboot the moment it is killed:


[root@lunar1 ~]# kill -9 4063
[root@lunar1 ~]# 


OK, our system is still fine, there has been no reboot, and the IPC resources have been released:


[root@lunar1 ~]# ipcs -ma
 
------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status      
 
------ Semaphore Arrays --------
key        semid      owner      perms      nsems     
0x00000000 0          root       600        1         
0x00000000 65537      root       600        1         
 
------ Message Queues --------
key        msqid      owner      perms      used-bytes   messages    
 
[root@lunar1 ~]# 
[root@lunar1 ~]# 


Recovering is very simple: just start CRS again:


[root@lunar1 ~]# ps -ef | grep -v grep|grep -E 'init|d.bin|ocls|evmlogger|UID'
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 19:20 ?        00:00:01 /sbin/init
root      2486     1  0 19:20 ?        00:00:00 /bin/sh /etc/init.d/init.tfa run
root      8924     1  0 20:13 ?        00:00:00 /bin/sh /etc/init.d/init.ohasd run
[root@lunar1 ~]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
[root@lunar1 ~]# ps -ef|grep ohasd
root      8924     1  0 20:13 ?        00:00:00 /bin/sh /etc/init.d/init.ohasd run
root      8968     1  4 20:14 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/ohasd.bin reboot
root      9187  7273  0 20:14 pts/2    00:00:00 grep ohasd
[root@lunar1 ~]# 
[root@lunar1 ~]# ps -ef|grep d.bin
root      8968     1  0 20:14 ?        00:00:08 /u01/app/11.2.0.4/grid/bin/ohasd.bin reboot
grid      9090     1  0 20:14 ?        00:00:02 /u01/app/11.2.0.4/grid/bin/oraagent.bin
grid      9101     1  0 20:14 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/mdnsd.bin
grid      9112     1  0 20:14 ?        00:00:02 /u01/app/11.2.0.4/grid/bin/gpnpd.bin
root      9122     1  0 20:14 ?        00:00:09 /u01/app/11.2.0.4/grid/bin/orarootagent.bin
grid      9126     1  0 20:14 ?        00:00:08 /u01/app/11.2.0.4/grid/bin/gipcd.bin
root      9139     1  0 20:14 ?        00:00:12 /u01/app/11.2.0.4/grid/bin/osysmond.bin
root      9150     1  0 20:14 ?        00:00:01 /u01/app/11.2.0.4/grid/bin/cssdmonitor
root      9169     1  0 20:14 ?        00:00:01 /u01/app/11.2.0.4/grid/bin/cssdagent
grid      9180     1  0 20:14 ?        00:00:04 /u01/app/11.2.0.4/grid/bin/ocssd.bin 
root      9212     1  1 20:14 ?        00:00:28 /u01/app/11.2.0.4/grid/bin/ologgerd -M -d /u01/app/11.2.0.4/grid/crf/db/lunar1
root      9340     1  0 20:18 ?        00:00:02 /u01/app/11.2.0.4/grid/bin/octssd.bin reboot
grid      9363     1  0 20:18 ?        00:00:03 /u01/app/11.2.0.4/grid/bin/evmd.bin
root      9455     1  0 20:18 ?        00:00:09 /u01/app/11.2.0.4/grid/bin/crsd.bin reboot
grid      9532  9363  0 20:18 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/evmlogger.bin -o /u01/app/11.2.0.4/grid/evm/log/evmlogger.info -l /u01/app/11.2.0.4/grid/evm/log/evmlogger.log
grid      9569     1  0 20:18 ?        00:00:02 /u01/app/11.2.0.4/grid/bin/oraagent.bin
grid      9572     1  0 20:18 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/scriptagent.bin
root      9591     1  0 20:18 ?        00:00:05 /u01/app/11.2.0.4/grid/bin/orarootagent.bin
grid      9682     1  0 20:18 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/tnslsnr LISTENER -inherit
grid      9684     1  0 20:18 ?        00:00:00 /u01/app/11.2.0.4/grid/bin/tnslsnr LISTENER_SCAN1 -inherit
oracle    9774     1  0 20:19 ?        00:00:03 /u01/app/11.2.0.4/grid/bin/oraagent.bin
root     10642  7273  0 20:38 pts/2    00:00:00 grep d.bin
[root@lunar1 ~]#
[root@lunar1 ~]# crsctl status res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRSDG.dg
               ONLINE  ONLINE       lunar1                                       
ora.DATADG1.dg
               ONLINE  ONLINE       lunar1                                       
ora.DATADG2.dg
               ONLINE  ONLINE       lunar1                                       
ora.LISTENER.lsnr
               ONLINE  ONLINE       lunar1                                       
ora.asm
               ONLINE  ONLINE       lunar1                   Started             
ora.gsd
               OFFLINE OFFLINE      lunar1                                       
ora.net1.network
               ONLINE  ONLINE       lunar1                                       
ora.ons
               ONLINE  ONLINE       lunar1                                       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       lunar1                                       
ora.cvu
      1        ONLINE  ONLINE       lunar1                                       
ora.lunar.db
      1        ONLINE  ONLINE       lunar1                   Open                
      2        ONLINE  OFFLINE                                                   
ora.lunar1.vip
      1        ONLINE  ONLINE       lunar1                                       
ora.lunar2.vip
      1        ONLINE  INTERMEDIATE lunar1                   FAILED OVER         
ora.oc4j
      1        ONLINE  ONLINE       lunar1                                       
ora.scan1.vip
      1        ONLINE  ONLINE       lunar1                                       
[root@lunar1 ~]# 


This shows only node 1, because I have shut down node 2.
The test proves that the host will not reboot if you first kill the cssdmonitor and cssdagent processes (see the classic large diagram of the CRS boot sequence) and only then kill ocssd.bin.


Again, for an ordinary 12.1 RAC (non-Flex Cluster) the situation is the same; the approach and procedure are identical.
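Pulling the whole experiment together, here is a hypothetical consolidation of the kill order exercised above. The key point is that cssdagent and cssdmonitor must die before ocssd.bin. This is my own sketch, not a supported procedure; use it only on a cluster that is already beyond a clean `crsctl stop crs -f`. DRYRUN=1 (the default) only prints what would be killed:

```shell
#!/bin/sh
# Hypothetical consolidation of the kill order tested above (11.2 and
# ordinary 12.1 non-Flex). NOT a supported procedure; sketch only.
# DRYRUN=1 (the default) only prints; set DRYRUN=0 to really send SIGKILL.
run() {
    if [ "${DRYRUN:-1}" = "1" ]; then
        printf 'would kill -9 processes matching: %s\n' "$1"
    else
        pkill -9 -f "$1"     # -f matches against the full command line
    fi
}

run 'ohasd\.bin'             # 1. ohasd first, so nothing is restarted mid-way
for p in 'crsd\.bin' 'evmd\.bin' 'evmlogger\.bin' 'octssd\.bin' \
         'gipcd\.bin' 'gpnpd\.bin' 'mdnsd\.bin' 'osysmond\.bin' 'ologgerd' \
         'oraagent\.bin' 'orarootagent\.bin' 'scriptagent\.bin'; do
    run "$p"                 # 2. agents and the remaining stack daemons
done
run 'cssdmonitor'            # 3. the monitors BEFORE ocssd.bin --
run 'cssdagent'              #    this ordering is what prevents the reboot
run 'ocssd\.bin'             # 4. ocssd.bin last; init.ohasd will be respawned
                             #    by init, which is expected and harmless
```

Afterwards, `crsctl start crs` should bring the stack back, as shown earlier.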

