Oracle 11gR2: how to fix the "Node connectivity check failed" error
The message "Node connectivity check failed" appears during the prerequisite check in the Oracle 11gR2 GUI installer, as shown below.
There are two common causes for this problem:
1. /etc/hosts is not configured correctly.
2. The Linux firewall is not disabled.
However, in my case the firewall was already disabled and the hosts file was configured correctly.
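For reference, a typical /etc/hosts layout for a two-node 11gR2 cluster looks like the sketch below. The host names and addresses are taken from the interface listing later in this article; the -vip and -priv suffixes are assumed naming conventions, not values confirmed here.

```
# Public addresses
172.16.54.74   node74
172.16.54.76   node76
# Virtual IPs (assumed names)
172.16.54.174  node74-vip
172.16.54.176  node76-vip
# Private interconnect (assumed names)
10.10.10.1     node74-priv
10.10.10.2     node76-priv
```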
Run the runcluvfy tool to investigate further:
grid@node74:/home/software/linux_11gR2/grid> ./runcluvfy.sh stage -post hwos -n node74,node76 -verbose
Performing post-checks for hardware and operating system setup
Checking node reachability...
Check: Node reachability from node "node74"
Destination Node Reachable?
------------------------------------------------------------
node74 yes
node76 yes
Result: Node reachability check passed from node "node74"
Checking user equivalence...
Check: User equivalence for user "grid"
Node Name Comment
------------------------------------------------------------
node76 passed
node74 passed
Result: User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Node Name Status Comment
------------------------------------------------------------
node76 passed
node74 passed
Verification of the hosts config file successful
Interface information for node "node76"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
-----------------------------------------------------------------------------------------
eth0 172.16.54.76 172.16.52.0 0.0.0.0 172.16.52.254 B4:99:BA:BC:31:4A 1500
eth0 172.16.54.176 172.16.52.0 0.0.0.0 172.16.52.254 B4:99:BA:BC:31:4A 1500
eth1 10.10.10.2 10.10.10.0 0.0.0.0 172.16.52.254 B4:99:BA:BC:31:4C 1500
eth2 255.255.20.2 255.255.20.0 0.0.0.0 172.16.52.254 B4:99:BA:BC:31:4E 1500
Interface information for node "node74"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
-----------------------------------------------------------------------------------------
eth0 172.16.54.74 172.16.52.0 0.0.0.0 172.16.52.254 98:4B:E1:04:C7:B4 1500
eth0 172.16.54.174 172.16.52.0 0.0.0.0 172.16.52.254 98:4B:E1:04:C7:B4 1500
eth0 172.16.54.175 172.16.52.0 0.0.0.0 172.16.52.254 98:4B:E1:04:C7:B4 1500
eth1 10.10.10.1 10.10.10.0 0.0.0.0 172.16.52.254 98:4B:E1:04:C7:B6 1500
eth2 255.255.20.1 255.255.20.0 0.0.0.0 172.16.52.254 98:4B:E1:04:C7:B8 1500
Check: Node connectivity of subnet "172.16.52.0"
Source Destination Connected?
----------------------------------------------------------------------------
node76:eth0 node76:eth0 yes
node76:eth0 node74:eth0 yes
node76:eth0 node74:eth0 yes
node76:eth0 node74:eth0 yes
node76:eth0 node74:eth0 yes
node76:eth0 node74:eth0 yes
node76:eth0 node74:eth0 yes
node74:eth0 node74:eth0 yes
node74:eth0 node74:eth0 yes
node74:eth0 node74:eth0 yes
Result: Node connectivity passed for subnet "172.16.52.0" with node(s) node76, node74
Check: TCP connectivity of subnet "172.16.52.0"
Source Destination Connected?
----------------------------------------------------------------------------
node74:172.16.54.74 node76:172.16.54.76 passed
node74:172.16.54.74 node76:172.16.54.176 passed
node74:172.16.54.74 node74:172.16.54.174 passed
node74:172.16.54.74 node74:172.16.54.175 passed
Result: TCP connectivity check passed for subnet "172.16.52.0"
Check: Node connectivity of subnet "10.10.10.0"
Source Destination Connected?
----------------------------------------------------------------------------
node76:eth1 node74:eth1 yes
Result: Node connectivity passed for subnet "10.10.10.0" with node(s) node76, node74
Check: TCP connectivity of subnet "10.10.10.0"
Source Destination Connected?
----------------------------------------------------------------------------
node74:10.10.10.1 node76:10.10.10.2 passed
Result: TCP connectivity check passed for subnet "10.10.10.0"
Check: Node connectivity of subnet "255.255.20.0"
WARNING:
Make sure IP address "255.255.20.1" is up and is a valid IP address on node "node74"
Source Destination Connected?
----------------------------------------------------------------------------
node76:eth2 node74:eth2 no
Result: Node connectivity failed for subnet "255.255.20.0"
Check: TCP connectivity of subnet "255.255.20.0"
Source Destination Connected?
----------------------------------------------------------------------------
node74:255.255.20.1 node76:255.255.20.2 failed
Note: if each of the two nodes is configured with an IP address in the same network segment but the link between them is not actually up, the node connectivity check fails.
Result: TCP connectivity check failed for subnet "255.255.20.0"
Interfaces found on subnet "172.16.52.0" that are likely candidates for VIP are:
node76 eth0:172.16.54.76 eth0:172.16.54.176
node74 eth0:172.16.54.74 eth0:172.16.54.174 eth0:172.16.54.175
Interfaces found on subnet "10.10.10.0" that are likely candidates for a private interconnect are:
node76 eth1:10.10.10.2
node74 eth1:10.10.10.1
Result: Node connectivity check failed
Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed
Post-check for hardware and operating system setup was unsuccessful on all the nodes.
It turns out that each of the two nodes has a NIC (eth2) configured with an IP address in the same subnet (255.255.20.0), but the two NICs are not actually connected to each other, which is what made the node connectivity check fail:
Check: TCP connectivity of subnet "255.255.20.0"
Source Destination Connected?
----------------------------------------------------------------------------
node74:255.255.20.1 node76:255.255.20.2 failed
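A quick way to confirm the broken link is to ping the peer's eth2 address directly over that interface. This is a sketch: the interface name and the address 255.255.20.2 are taken from the cluvfy output above, and on the failing cluster the probe is expected to time out.

```shell
# Probe node76's eth2 address directly through the eth2 interface.
# -c 3 sends three echo requests; -I binds the probe to eth2.
if ping -c 3 -I eth2 255.255.20.2 >/dev/null 2>&1; then
    link_state=up
else
    link_state=down    # matches what cluvfy reported for this subnet
fi
echo "eth2 link: $link_state"
```

Run this on both nodes; if either side reports the link down while the addresses are configured, the connectivity check will fail exactly as shown.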
The solution is to either physically connect the two NICs, move them onto different network segments, or simply disable them. I disabled the two NICs and re-ran the installation check.
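On the Red Hat style systems where 11gR2 is usually deployed, disabling the interface amounts to running `ifdown eth2` as root on both nodes and setting ONBOOT=no in /etc/sysconfig/network-scripts/ifcfg-eth2 so it stays down after a reboot. The sketch below performs the ONBOOT edit on a temporary copy of such a file, since the real path and contents depend on the distribution:

```shell
# Demonstrate the ONBOOT edit on a temporary copy of the config file.
# On a real node: edit /etc/sysconfig/network-scripts/ifcfg-eth2 as root,
# then run `ifdown eth2` to bring the interface down immediately.
cfg=$(mktemp)
printf 'DEVICE=eth2\nONBOOT=yes\n' > "$cfg"   # sample ifcfg content (assumed)
sed -i 's/^ONBOOT=.*/ONBOOT=no/' "$cfg"       # keep eth2 down across reboots
onboot=$(grep '^ONBOOT' "$cfg")
echo "$onboot"
rm -f "$cfg"
```

After disabling eth2 on both nodes, re-running runcluvfy no longer evaluates the 255.255.20.0 subnet.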