"Failed to upgrade Oracle Cluster Registry configuration" during 10g RAC Installation
The error "Failed to upgrade Oracle Cluster Registry configuration" was encountered during a 10g RAC installation because a bug is triggered when the DM-Multipath software (device-mapper-multipath) is used. For details on installing RAC on top of device-mapper-multipath, see "Using raw devices (multipath) for Oracle Clusterware 10g Release 2 (10.2.0) on RHEL5/OL5" (Document ID 564580.1): when using multipath, you must manually create raw devices for the OCR and voting disks in 10g; 11g begins to support DM-Multipath mapped devices directly.
3. Create Raw Devices
During the installation of Oracle Clusterware 10g Release 2 (10.2.0), the Oracle Universal Installer (OUI) is unable to verify the sharedness of block devices and therefore requires raw devices (whether bound to singlepath or multipath devices) to be specified for the OCR and voting disks. As mentioned earlier, this is no longer the case from Oracle 11g Release 1 (11.1.0), which can use multipathed block devices directly.
Manually create raw devices bound to the multipathed device partitions (/dev/mapper/*pN). Disregard device permissions for now; they will be addressed later. For example:
# raw /dev/raw/raw1 /dev/mapper/ocr1p1
/dev/raw/raw1: bound to major 253, minor 11
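Note that bindings created with the raw command do not survive a reboot. On RHEL5, one way to make them persistent is to list them in /etc/sysconfig/rawdevices, which the rawdevices init script processes at boot. A sketch of such a file (the /dev/mapper device names below are illustrative and must match your own multipath layout):

```
# /etc/sysconfig/rawdevices -- processed by the rawdevices init script at boot
# <raw device>      <block device>
/dev/raw/raw1       /dev/mapper/ocr1p1
/dev/raw/raw2       /dev/mapper/ocr2p1
/dev/raw/raw3       /dev/mapper/voting1p1
```

Enable the service with `chkconfig rawdevices on` so the bindings are re-created on every boot.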
At this point, the root.sh script is run:
[root@315rac01 ~]# sh /oracle/app/oracle/product/10.2.0/crs/root.sh
WARNING: directory '/oracle/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/oracle/app/oracle/product' is not owned by root
WARNING: directory '/oracle/app/oracle' is not owned by root
WARNING: directory '/oracle/app' is not owned by root
WARNING: directory '/oracle' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Failed to upgrade Oracle Cluster Registry configuration
This is also described in "Using raw devices (multipath) for Oracle Clusterware 10g Release 2 (10.2.0) on RHEL5/OL5" (Document ID 564580.1):
7. Install Oracle 10gR2 Clusterware
Proceed to install Oracle Clusterware 10g Release 2 (10.2.0), making sure to specify the appropriate raw devices (/dev/rawN) for the OCR and voting disks. OCR devices are initialised (formatted) as part of running the root.sh script. Before running root.sh, be aware that several known issues exist that will cause the Clusterware installation to fail, namely:
- Bug.4679769 FAILED TO FORMAT OCR DISK USING CLSFMT
- Note.414163.1 10gR2 RAC Install issues on Oracle EL5 or RHEL5 or SLES10 (VIPCA Failures)
Due to Bug.4679769, initialisation of multipathed OCR devices will fail. Therefore, before running root.sh, download and apply the patch for Bug.4679769. If root.sh was already run without the patch having been applied first, remove (null out) the failed, partially initialised OCR structures from all OCR devices.
In other words, you must apply the patch for Bug.4679769 before running root.sh, then dd the OCR disks, and then use clsfmt.bin to verify that the formatted OCR disks are usable.
The procedure is roughly as follows:
[root@315rac01 bin]# cd /home/oracle/install/4679769/
[root@315rac01 4679769]# ls
clsfmt.bin  README.txt
[root@315rac01 4679769]# ls -al
total 692
drwxrwxr-x 2 oracle oinstall   4096 Nov  9  2005 .
drwxr-xr-x 6 oracle oinstall   4096 Dec 29 ..
-rw-r--r-- 1 oracle oinstall 687320 Nov  9  2005 clsfmt.bin
-rw-r--r-- 1 oracle oinstall   4266 Nov  9  2005 README.txt
[root@315rac01 4679769]# cp clsfmt.bin /oracle/app/oracle/product/10.2.0/crs/bin/
[root@315rac01 4679769]# cd -
/oracle/app/oracle/product/10.2.0/crs/bin
[root@315rac01 bin]# chmod 755 clsfmt.bin
[root@315rac01 bin]# dd if=/dev/null of=/dev/raw/raw9 bs=1024k count=1000
0+0 records in
0+0 records out
0 bytes (0 B) copied, 3.9e-05 seconds, 0.0 kB/s
[root@315rac01 bin]# dd if=/dev/zero of=/dev/raw/raw9 bs=1024k count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 2.00336 seconds, 523 MB/s
[root@315rac01 bin]# dd if=/dev/zero of=/dev/raw/raw10 bs=1024k count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 1.99393 seconds, 526 MB/s
[root@315rac01 bin]# ./clsfmtbin ocr /dev/raw/raw9
bash: ./clsfmtbin: No such file or directory
[root@315rac01 bin]# ./clsfmt.bin ocr /dev/raw/raw9
clsfmt: successfully initialized file /dev/raw/raw9
[root@315rac01 bin]# ./clsfmt.bin ocr /dev/raw/raw10
clsfmt: successfully initialized file /dev/raw/raw10
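A side note on the two dd invocations in the session above: the first one, reading from /dev/null, copied nothing (0+0 records) because /dev/null returns end-of-file immediately; /dev/zero must be used to actually overwrite the device with zeros. A minimal sketch of the difference against a temporary file (the path /tmp/rawdemo.img is illustrative):

```shell
# /dev/null yields EOF at once, so 0 bytes are written (0+0 records)
dd if=/dev/null of=/tmp/rawdemo.img bs=1024k count=10 2>/dev/null
stat -c %s /tmp/rawdemo.img    # -> 0

# /dev/zero supplies an endless stream of zero bytes, so the full
# count (10 * 1 MiB = 10485760 bytes) is written
dd if=/dev/zero of=/tmp/rawdemo.img bs=1024k count=10 2>/dev/null
stat -c %s /tmp/rawdemo.img    # -> 10485760

rm -f /tmp/rawdemo.img
```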
Run root.sh again.
[root@315rac01 bin]# sh /oracle/app/oracle/product/10.2.0/crs/root.sh
WARNING: directory '/oracle/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/oracle/app/oracle/product' is not owned by root
WARNING: directory '/oracle/app/oracle' is not owned by root
WARNING: directory '/oracle/app' is not owned by root
WARNING: directory '/oracle' is not owned by root
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/oracle/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/oracle/app/oracle/product' is not owned by root
WARNING: directory '/oracle/app/oracle' is not owned by root
WARNING: directory '/oracle/app' is not owned by root
WARNING: directory '/oracle' is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS = 49895 CRS = 49896 EVMC = 49898 and EVMR = 49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: 315rac01 priv_rac01 315rac01
node 2: 315rac02 priv_rac02 315rac02
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw3
Now formatting voting device: /dev/raw/raw4
Now formatting voting device: /dev/raw/raw5
Format of 3 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
315rac01
CSS is inactive on these nodes.
315rac02
Local node checking complete.
Run root. sh on remaining nodes to start CRS daemons.