A record of resolving CRS-0184: cannot communicate with the CRS daemon


1. Description:

Running the crs_stat -t command to check the RAC services immediately returns CRS-0184: cannot communicate with the CRS daemon.

The strange thing is that the database itself is fine: we can still log in with sqlplus / as sysdba and use it normally.
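In other words, the clusterware management layer is down while the database layer is still up. A minimal illustration of what that looks like (a sketch: the CRS-0184 line is the error from this case, while the v$instance check and its OPEN result are our own addition to show the database still responds):

$ crs_stat -t
CRS-0184: Cannot communicate with the CRS daemon.

$ sqlplus / as sysdba
SQL> select status from v$instance;

STATUS
------------
OPEN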

2. Fault Analysis:

First, look at the alert log. The problem started on 2016/07/13:

/grid/11.2.0/log/phars1/alertphars1.log

2016-07-13 16:04:49.616:
[crsd(21419)]CRS-2765:Resource 'ora.VOTDG.dg' has failed on server 'phars1'.
2016-07-13 16:04:49.702:
[crsd(21419)]CRS-2878:Failed to restart resource 'ora.VOTDG.dg'
2016-07-13 16:04:49.703:
[crsd(21419)]CRS-2769:Unable to failover resource 'ora.VOTDG.dg'.
2016-07-13 19:39:38.436:
[crsd(21419)]CRS-1006:The OCR location +VOTDG is inaccessible. Details in /grid/11.2.0/log/phars1/crsd/crsd.log.
2016-07-13 19:39:38.437:
[crsd(21419)]CRS-1006:The OCR location +VOTDG is inaccessible. Details in /grid/11.2.0/log/phars1/crsd/crsd.log.
2016-07-13 19:39:53.742:
[/grid/11.2.0/bin/oraagent.bin(30612)]CRS-5822:Agent '/grid/11.2.0/bin/oraagent_oracle' disconnected from server. Details at (:CRSAGF00117:) {0:11:9490} in /grid/11.2.0/log/phars1/agent/crsd/oraagent_oracle/oraagent_oracle.log.
2016-07-13 19:39:53.742:
[/grid/11.2.0/bin/orarootagent.bin(21814)]CRS-5822:Agent '/grid/11.2.0/bin/orarootagent_root' disconnected from server. Details at (:CRSAGF00117:) {0:3:36} in /grid/11.2.0/log/phars1/agent/crsd/orarootagent_root/orarootagent_root.log.
2016-07-13 19:39:53.743:
[/grid/11.2.0/bin/oraagent.bin(21774)]CRS-5822:Agent '/grid/11.2.0/bin/oraagent_grid' disconnected from server. Details at (:CRSAGF00117:) {0:5:10} in /grid/11.2.0/log/phars1/agent/crsd/oraagent_grid/oraagent_grid.log.
2016-07-13 19:39:53.743:
[/grid/11.2.0/bin/scriptagent.bin(1919)]CRS-5822:Agent '/grid/11.2.0/bin/scriptagent_grid' disconnected from server. Details at (:CRSAGF00117:) {0:13:12} in /grid/11.2.0/log/phars1/agent/crsd/scriptagent_grid/scriptagent_grid.log.
2016-07-13 19:39:53.745:
[ohasd(20149)]CRS-2765:Resource 'ora.crsd' has failed on server 'phars1'.
2016-07-13 19:39:55.153:
[crsd(16165)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /grid/11.2.0/log/phars1/crsd/crsd.log.
2016-07-13 19:39:55.162:
[crsd(16165)]CRS-0804:Cluster Ready Service aborted due to Oracle Cluster Registry error [PROC-26: Error while accessing the physical storage]. Details at (:CRSD00111:) in /grid/11.2.0/log/phars1/crsd/crsd.log.
2016-07-13 19:39:55.774:
[ohasd(20149)]CRS-2765:Resource 'ora.crsd' has failed on server 'phars1'.
2016-07-13 19:39:57.201:
[crsd(16185)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /grid/11.2.0/log/phars1/crsd/crsd.log.
2016-07-13 19:39:57.210:
[crsd(16185)]CRS-0804:Cluster Ready Service aborted due to Oracle Cluster Registry error [PROC-26: Error while accessing the physical storage]. Details at (:CRSD00111:) in /grid/11.2.0/log/phars1/crsd/crsd.log.
2016-07-13 19:39:57.814:
[ohasd(20149)]CRS-2765:Resource 'ora.crsd' has failed on server 'phars1'.
[... the same CRS-1013 / CRS-0804 / CRS-2765 sequence repeats roughly every two seconds as ohasd restarts crsd under new PIDs (16210, 16223, 16238, 16254, 16271, 16290, 16327) ...]
2016-07-13 19:40:13.401:
[crsd(16340)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /grid/11.2.0/log/phars1/crsd/crsd.log.
2016-07-13 19:40:13.411:
[crsd(16340)]CRS-0804:Cluster Ready Service aborted due to Oracle Cluster Registry error [PROC-26: Error while accessing the physical storage]. Details at (:CRSD00111:) in /grid/11.2.0/log/phars1/crsd/crsd.log.
2016-07-13 19:40:14.053:
[ohasd(20149)]CRS-2765:Resource 'ora.crsd' has failed on server 'phars1'.
2016-07-13 19:40:14.053:
[ohasd(20149)]CRS-2771:Maximum restart attempts reached for resource 'ora.crsd'; will not restart.
2016-07-13 19:40:14.053:
[ohasd(20149)]CRS-2769:Unable to failover resource 'ora.crsd'.

Walking through this log, the sequence of events is: the resource ora.VOTDG.dg failed => CRS tried to restart the resource => the restart failed => the OCR location +VOTDG became inaccessible => the Cluster Ready Service aborted because it could not access the physical storage => after reaching the maximum number of restart attempts, ohasd gave up restarting it => crsd stayed down.

All of the evidence above points to the VOTDG disk group being inaccessible, which is what brought the CRS stack down.
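As a cross-check, it helps to confirm that the OCR and voting files really do live in +VOTDG. A sketch using standard 11.2 clusterware tools (run as root or the Grid owner; on Linux the configured OCR location is also recorded in /etc/oracle/ocr.loc, and crsctl query css votedisk works as long as CSS itself is still up, which it was here):

# cat /etc/oracle/ocr.loc        -- configured OCR location (should show ocrconfig_loc=+VOTDG)
# ocrcheck                       -- OCR integrity check; expect errors while the disk group is dismounted
# crsctl query css votedisk      -- lists the voting files and the disk group that holds them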

Next, look at the /grid/11.2.0/log/phars1/crsd/crsd.log log:

2016-07-13 16:04:49.615: [AGFW][4118722304]{0:5:6} Agfw Proxy Server received the message: RESOURCE_STATUS[Proxy] ID 20481:162956
2016-07-13 16:04:49.615: [AGFW][4118722304]{0:5:6} Verifying msg rid = ora.VOTDG.dg phars1 1
2016-07-13 16:04:49.615: [AGFW][4118722304]{0:5:6} Received state change for ora.VOTDG.dg phars1 1 [old state = ONLINE, new state = OFFLINE]    -- here the state of ora.VOTDG.dg changes to OFFLINE
2016-07-13 16:04:49.615: [AGFW][4118722304]{0:5:6} Agfw Proxy Server sending message to PE, Contents = [MIDTo:2|OpID:3|FromA:{Invalid|Node:0|Process:0|Type:0}|ToA:{Invalid|Node:-1|Process:-1|Type:-1}|MIDFrom:0|Type:4|Pri2|Id:287142:Ver:2]
2016-07-13 16:04:49.615: [AGFW][4118722304]{0:5:6} Agfw Proxy Server replying to the message: RESOURCE_STATUS[Proxy] ID 20481:162956
2016-07-13 16:04:49.616: [CRSPE][4108216064]{0:5:6} State change received from phars1 for ora.VOTDG.dg phars1 1
2016-07-13 16:04:49.616: [CRSPE][4108216064]{0:5:6} Processing PE command id=13336. Description: [Resource State Change (ora.VOTDG.dg phars1 1) : 0x7fb470104850]
2016-07-13 16:04:49.616: [CRSPE][4108216064]{0:5:6} RI [ora.VOTDG.dg phars1 1] new external state [OFFLINE] old value: [ONLINE] on phars1 label = []
2016-07-13 16:04:49.616: [CRSD][4108216064]{0:5:6} Resource Instance ID[ora.VOTDG.dg phars1 1]. Values:
STATE=OFFLINE
TARGET=ONLINE
LAST_SERVER=phars1
CURRENT_RCOUNT=0
LAST_RESTART=0
FAILURE_COUNT=0
FAILURE_HISTORY=
STATE_DETAILS=
INCARNATION=0
STATE_CHANGE_VERS=0
LAST_FAULT=0
LAST_STATE_CHANGE=1468397089
INTERNAL_STATE=0
DEGREE_ID=1
ID=ora.VOTDG.dg phars1 1
Lock Info:
Write Locks: none
ReadLocks: |STATE INITED||ONLINE STATERECOVERED| has failed!
2016-07-13 16:04:49.616: [CRSPE][4108216064]{0:5:6} Processing unplanned state change for [ora.VOTDG.dg phars1 1]
2016-07-13 16:04:49.617: [CRSPE][4108216064]{0:5:6} Scheduled local recovery for [ora.VOTDG.dg phars1 1]
2016-07-13 16:04:49.617: [CRSRPT][4106114816]{0:5:6} Published to EVM CRS_RESOURCE_STATE_CHANGE for ora.VOTDG.dg
2016-07-13 16:04:49.617: [CRSPE][4108216064]{0:5:6} Op 0x7fb4700c89d0 has 5 WOs
2016-07-13 16:04:49.618: [CRSPE][4108216064]{0:5:6} RI [ora.VOTDG.dg phars1 1] new internal state: [STARTING] old value: [STABLE]
2016-07-13 16:04:49.618: [CRSPE][4108216064]{0:5:6} Sending message to agfw: id = 287144
2016-07-13 16:04:49.618: [CRSPE][4108216064]{0:5:6} CRS-2672: Attempting to start 'ora.VOTDG.dg' on 'phars1'

2016-07-13 16:04:49.618: [AGFW][4118722304]{0:5:6} Agfw Proxy Server received the message: RESOURCE_START[ora.VOTDG.dg phars1 1] ID 4098:287144
2016-07-13 16:04:49.619: [AGFW][4118722304]{0:5:6} Agfw Proxy Server forwarding the message: RESOURCE_START[ora.VOTDG.dg phars1 1] ID 4098:287144 to the agent /grid/11.2.0/bin/oraagent_grid
2016-07-13 16:04:49.673: [AGFW][4118722304]{0:5:6} Received the reply to the message: RESOURCE_START[ora.VOTDG.dg phars1 1] ID 4098:287145 from the agent /grid/11.2.0/bin/oraagent_grid
2016-07-13 16:04:49.673: [AGFW][4118722304]{0:5:6} Agfw Proxy Server sending the reply to PE for message: RESOURCE_START[ora.VOTDG.dg phars1 1] ID 4098:287144
2016-07-13 16:04:49.673: [CRSPE][4108216064]{0:5:6} Received reply to action [Start] message ID: 287144
2016-07-13 16:04:49.701: [AGFW][4118722304]{0:5:6} Received the reply to the message: RESOURCE_START[ora.VOTDG.dg phars1 1] ID 4098:287145 from the agent /grid/11.2.0/bin/oraagent_grid
2016-07-13 16:04:49.701: [AGFW][4118722304]{0:5:6} Agfw Proxy Server sending the last reply to PE for message: RESOURCE_START[ora.VOTDG.dg phars1 1] ID 4098:287144
2016-07-13 16:04:49.701: [CRSPE][4108216064]{0:5:6} Received reply to action [Start] message ID: 287144
2016-07-13 16:04:49.701: [CRSPE][4108216064]{0:5:6} RI [ora.VOTDG.dg phars1 1] new internal state: [STABLE] old value: [STARTING]
2016-07-13 16:04:49.701: [CRSPE][4108216064]{0:5:6} CRS-2674: Start of 'ora.VOTDG.dg' on 'phars1' failed

This log, too, centers on the failure of ora.VOTDG.dg, which is what took CRS down.

3. Resolution:

① The original symptom was that the CRS daemon could not be communicated with, so the first step was to check the alert log and the CRS logs.

② Looking through crsd.log, we also find the following line:

2016-07-15 10:17:24.000: [OCRASM][992749344]proprasmo: The ASM disk group VOTDG is not found or not mounted

This tells us that the disk group holding the voting disk was either not found or not mounted.
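For reference, a quick way to fish these entries out of the two logs (a sketch; the paths follow this environment, and the grep pattern and tail lengths are arbitrary):

$ tail -100 /grid/11.2.0/log/phars1/alertphars1.log
$ grep -i votdg /grid/11.2.0/log/phars1/crsd/crsd.log | tail -20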

③ Since the database itself was normal, the next step was to check the state of the voting-disk disk group:

SQL> select name, state from v$asm_diskgroup;

NAME                           STATE
------------------------------ -----------
BACKUPDG                       CONNECTED
DATADG                         CONNECTED
SYSDG                          CONNECTED
VOTDG                          DISMOUNTED

This shows that VOTDG is DISMOUNTED; a healthy disk group shows MOUNTED here (or CONNECTED, when the view is queried from a database instance that uses it).
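Before mounting, it can also be worth confirming that ASM still sees the underlying disks, since the mount will fail if the devices themselves are missing. A sketch (run in the ASM instance; the column choice is illustrative, and healthy member disks normally show HEADER_STATUS = MEMBER):

SQL> select name, header_status, path from v$asm_disk;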

Mount the disk group by hand:

grid@phars1:/home/grid> sqlplus / as sysasm    -- note: connect as the grid user with SYSASM

SQL*Plus: Release 11.2.0.4.0 Production on Fri Jul 15 11:38:40 2016

Copyright (c) 1982, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SQL> alter diskgroup VOTDG mount;    -- mount the voting-disk disk group by hand

Diskgroup altered.

This needs to be done on both nodes.

Then restart the cluster stack and everything comes back. Note that restarting without mounting the disk group first is useless; only after the mount does the stack stop and start cleanly.

# crsctl stop cluster -all

# crsctl start cluster -all
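Once the stack is back, verify it from the operating system as well. A short sketch (crsctl stat res -t is the non-deprecated 11.2 equivalent of crs_stat -t):

# crsctl check crs         -- CRS, CSS and EVM should all report online
# crsctl stat res -t       -- all resources, including ora.VOTDG.dg, should be ONLINE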

Summary: this CRS failure came down to the voting-disk disk group being inaccessible. The key is to analyze the logs and let them lead you to the correct fix.
