RAC ocfs2 File System FAQs


 

Symptom 1:
mount -t ocfs2 -o datavolume,nointr /dev/sdb1 /webdata
mount.ocfs2: Transport endpoint is not connected while mounting /dev/sdb1 on /webdata. Check 'dmesg' for more information on this error.

Possible causes:
1: The firewall is enabled and is blocking the cluster heartbeat port.
2: The /etc/init.d/o2cb configure values differ between the nodes.
3: One node already has the volume mounted while another node has just been reconfigured and its ocfs2 service restarted. In this case, you only need to restart the service on each node to complete the mount.
4: SELinux is not disabled (see the checks sketched below).
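
A quick checklist for ruling these out on every node (a sketch; the service and chkconfig commands assume a RHEL/CentOS-style system, so adapt them to your distribution):

Check the firewall, or open the O2CB heartbeat port (7777 by default, see ip_port in /etc/ocfs2/cluster.conf) between the nodes:
# service iptables stop
# chkconfig iptables off
Check SELinux; it should report Disabled (or at least Permissive), otherwise edit /etc/selinux/config and reboot:
# getenforce
Re-run the O2CB configuration with identical answers on every node, then restart the cluster stack:
# /etc/init.d/o2cb configure
# /etc/init.d/ocfs2 stop
# /etc/init.d/o2cb stop
# /etc/init.d/o2cb start
# /etc/init.d/ocfs2 start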

The following is an example case:

[root@test02 ~]# mount -t ocfs2 /dev/vg_ocfs/lv_u02 /u02
mount.ocfs2: Transport endpoint is not connected while mounting /dev/vg_ocfs/lv_u02 on /u02. Check 'dmesg' for more information on this error.
This error occurred because the O2CB_HEARTBEAT_THRESHOLD values were not the same on every node when OCFS2 was configured. I re-ran /etc/init.d/o2cb configure so that each node used the same value, but forgot to restart o2cb on the first node, and it took a long time of checking before I found it. The next step, of course, was to umount the already mounted OCFS2 directory, which returned an error:
[root@test01 u02]# umount -f /u02
umount2: Device or resource busy
umount: /u02: device is busy
umount2: Device or resource busy
umount: /u02: device is busy
At this point, run /etc/init.d/ocfs2 stop and /etc/init.d/o2cb stop to stop ocfs2 and o2cb, then umount the directory. After ocfs2 and o2cb are started again, the other nodes can mount the OCFS2 volume without problems.
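
Putting the whole recovery together, the sequence on the node holding the busy mount might look like this (a sketch; the device and mount point are those from the example above, and if the volume is listed in /etc/fstab the ocfs2 stop step may already unmount it):

# /etc/init.d/ocfs2 stop
# /etc/init.d/o2cb stop
# umount /u02
# /etc/init.d/o2cb start
# /etc/init.d/ocfs2 start
# mount -t ocfs2 -o datavolume,nointr /dev/vg_ocfs/lv_u02 /u02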

Symptom 2:
# /etc/init.d/o2cb online ocfs2

Starting cluster ocfs2: Failed

Cluster ocfs2 created

o2cb_ctl: configuration error discovered while populating cluster ocfs2. None of its nodes were considered local. A node is considered local when its node name in the configuration matches this machine's host name.

Stopping cluster ocfs2: OK

For host name issues, check /etc/ocfs2/cluster.conf and /etc/hosts, and correct the host name wherever it does not match.

Note: To make sure the ocfs2 file system is mounted automatically at startup, you must add an entry with the automatic mount options to /etc/fstab, the host names and IP addresses of both nodes must be resolvable through /etc/hosts, and the host names configured in /etc/ocfs2/cluster.conf must match the machines' actual host names. A sketch of a consistent configuration follows.
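
For illustration only (the node names, IP addresses and device are made up), a consistent set of files on a two-node cluster might look like this:

/etc/hosts:
192.168.1.101   rac1
192.168.1.102   rac2

/etc/ocfs2/cluster.conf (the name values must match what hostname prints on each machine):
node:
        ip_port = 7777
        ip_address = 192.168.1.101
        number = 0
        name = rac1
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.1.102
        number = 1
        name = rac2
        cluster = ocfs2

cluster:
        node_count = 2
        name = ocfs2

/etc/fstab:
/dev/sdb1   /webdata   ocfs2   _netdev,datavolume,nointr   0 0

The _netdev option defers the mount until networking and the o2cb/ocfs2 services are up; also make sure those services are enabled at boot, for example with chkconfig o2cb on and chkconfig ocfs2 on.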

 

Symptom 3:

Starting o2cb cluster ocfs2: Failed
An error occurred while configuring o2cb after installing ocfs2:
[root@rac1 ocfs2]# /etc/init.d/o2cb configure
Processing the o2cb driver.

This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot. The current values will be shown in brackets ('[]'). Hitting
<ENTER> without typing an answer will keep that current value. Ctrl-C
will abort.

Load o2cb driver on boot (y/n) [y]:
Cluster to start on boot (enter "NONE" to clear) [ocfs2]:
Specify heartbeat dead threshold (>=7) [7]:
Writing o2cb configuration: OK
Starting o2cb cluster ocfs2: Failed
Cluster ocfs2 created
o2cb_ctl: configuration error discovered while populating cluster ocfs2. None of its nodes were considered local. A node is considered local when its node name in the configuration matches this machine's host name.
Stopping o2cb cluster ocfs2: OK
In this case, the OCFS2 node configuration has not been done yet. There is a graphical OCFS2 configuration tool; you must run it first to configure the cluster, and it is best to use an IP address rather than a host name!
In other words, the OCFS2 node configuration file must already be set up correctly before ocfs2 is started; if it is not, an error is reported. Also, when you configure through the graphical interface, /etc/ocfs2/cluster.conf should be an empty file beforehand, or an error will be reported!
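
A possible sequence on each node, assuming the graphical tool is ocfs2console and that a stale configuration file is in the way (the backup file name is only an example):

# mv /etc/ocfs2/cluster.conf /etc/ocfs2/cluster.conf.bak
# ocfs2console
(in the GUI, add every cluster node, then propagate the configuration to all nodes)
# /etc/init.d/o2cb online ocfs2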

Symptom 4:
Mounting the ocfs2 file system:
mount.ocfs2: Error when attempting to run /sbin/ocfs2_hb_ctl: "Operation not permitted"
mount -t ocfs2 -o datavolume /dev/sdb1 /u02/oradata/orcl
ocfs2_hb_ctl: Bad magic number in superblock while reading uuid
mount.ocfs2: Error when attempting to run /sbin/ocfs2_hb_ctl: "Operation not permitted"
This problem occurs because the partition intended for the ocfs2 file system has not been formatted. Before mounting an ocfs2 file system, the partition must first be formatted, as sketched below.
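
A minimal sketch, run from exactly one node (the label, node-slot count and block/cluster sizes are only illustrative values):

# mkfs.ocfs2 -b 4K -C 32K -N 4 -L oradata /dev/sdb1
# mount -t ocfs2 -o datavolume /dev/sdb1 /u02/oradata/orcl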

Symptom 5:
Configuration assistant "Oracle cluster verification utility" failed
During a 10g RAC installation (Oracle 10.2.0.1 on Solaris 5.9), the last step of the two-node CRS installation fails; how can this be solved?

Log information:
Info: configuration assistant "Oracle cluster verification utility" failed
-----------------------------------------------------------------------------
*** Starting OUICA ***
Oracle Home set to /orabase/product/10.2
Configuration directory is set to /orabase/product/10.2/cfgtoollogs. All XML files under the directory will be processed
Info: The "/orabase/product/10.2/cfgtoollogs/configToolFailedCommands" script contains all commands that failed, were skipped or were canceled. This file may be used to run these configuration assistants outside of OUI. Note that you may have to update this script with passwords (if any) before executing the same.
-----------------------------------------------------------------------------
Severe: OUI-25031: Some of the configuration assistants failed. It is strongly recommended that you retry the configuration assistants at this time. Not successfully running any "recommended" assistants means your system will not be correctly configured.
1. Check the details panel on the configuration assistant screen to see the errors resulting in the failures.
2. Fix the errors causing these failures.
3. Select the failed assistants and click the 'retry' button to retry them.
Info: User selected: yes/OK

This happens because the VIP addresses have not been started. We recommend that after running orainstroot.sh and root.sh, you open a new window and run vipca. Once all CRS services are up, run the final verification step again; it should then succeed.

Run crs_stat -t in the bin directory of the CRS home to check whether all resources have started. In this case, the VIP resources were offline.
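
A hedged sketch of that check-and-fix sequence, run as root (here $CRS_HOME stands for the CRS installation home, and vipca needs an X display):

# cd $CRS_HOME/bin
# ./crs_stat -t
# ./vipca

Once vipca has created and started the VIP, GSD and ONS resources, return to the OUI screen, select the failed "Oracle cluster verification utility" assistant and click Retry.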

Symptom 6:
Failed to upgrade Oracle Cluster Registry configuration
When CRS is installed and ./root.sh is run on the second node, the following message appears; it ran normally on the first node. Please advise, thank you!
[root@ractest2 crs]# ./root.sh
Warning: directory '/app/oracle/product/10.2.0' is not owned by root
Warning: directory '/app/oracle/product' is not owned by root
Warning: directory '/app/oracle' is not owned by root
Warning: directory '/app' is not owned by root
Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory
Setting up NS directories
PROT-1: Failed to initialize ocrconfig
Failed to upgrade Oracle Cluster Registry configuration
Error cause:

This happens because the permissions on the devices used for the CRS installation are wrong. For example, if you use raw devices to hold the OCR and the voting disk, you must set the correct ownership and permissions on those devices and on the files linked to them. The following is my environment:
[root@rac2 crs]#
lrwxrwxrwx 1 root 13 Jan 27 :49 ocr.crs -> /dev/raw/raw1
lrwxrwxrwx 1 root 13 Jan 26 13:31 vote.crs -> /dev/raw/raw2
chown root:oinstall /dev/raw/raw1
chown root:oinstall /dev/raw/raw2
chmod 660 /dev/raw/raw1
chmod 660 /dev/raw/raw2
Here /dev/sdb1 holds the OCR and /dev/sdb2 holds the voting disk.
[root@rac2 crs]# service rawdevices reload
Assigning devices:
/dev/raw/raw1 --> /dev/sdb1
/dev/raw/raw1: bound to major 8, minor 17
/dev/raw/raw2 --> /dev/sdb2
/dev/raw/raw2: bound to major 8, minor 18
done
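
On RHEL-style systems the rawdevices service reads its bindings from /etc/sysconfig/rawdevices, so to keep this mapping across reboots that file would contain entries like the following (devices as in the example above):

/dev/raw/raw1 /dev/sdb1
/dev/raw/raw2 /dev/sdb2

Because the permissions on the raw device nodes are typically reset at boot, a common approach is to reapply the chown/chmod commands shown above from /etc/rc.local as well.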

Then run root.sh again:

[root@rac2 crs]# /oracle/app/oracle/product/crs/root.sh
Warning: directory '/oracle/app/oracle/product' is not owned by root
Warning: directory '/oracle/app/oracle' is not owned by root
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
Warning: directory '/oracle/app/oracle/product' is not owned by root
Warning: directory '/oracle/app/oracle' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
assigning default hostname rac1 for node 1.
assigning default hostname rac2 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 priv1 rac1
node 2: rac2 priv2 rac2
clscfg: Arguments check out successfully.

 

 
