Linux platform Oracle 10gR2 (10.2.0.5) RAC installation Part 2: Clusterware installation and upgrade

Environment: OEL 5.7 + Oracle 10.2.0.5 RAC

3. Installing Clusterware

    • 3.1 Extracting the Clusterware installation media
    • 3.2 Starting the installation of Clusterware
    • 3.3 Root user executes script as prompted
    • 3.4 VIPCA Creation (may not be required)

4. Upgrading Clusterware

    • 4.1 Unpacking the Patchset Package
    • 4.2 Starting the Clusterware Upgrade
    • 4.3 root user executes script as prompted

Linux platform Oracle 10gR2 RAC Installation Guide:
Part 1: Preparation
Part 2: Clusterware installation and upgrade
Part 3: DB installation and upgrade

3. Installing Clusterware

3.1 Extracting the Clusterware installation media

As root, change ownership of the installation media directory to the oracle user:

[root@oradb27 media]# chown -R oracle:oinstall /u01/media/

As the oracle user, extract the installation media, then run the pre-installation check; for example:
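A minimal sketch of these two steps, assuming the 10.2.0.1 Clusterware archive name (the node names are the ones used throughout this series):

[oracle@oradb27 media]$ unzip 10201_clusterware_linux_x86_64.zip    # archive name assumed
[oracle@oradb27 media]$ cd clusterware
[oracle@oradb27 clusterware]$ ./runcluvfy.sh stage -pre crsinst -n oradb27,oradb28 -verbose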

3.2 Starting the installation of Clusterware

Start the Clusterware installer through an X11 session using Xmanager (XQuartz on macOS):
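For example, assuming the media was extracted to /u01/media/clusterware:

ssh -X oracle@oradb27           # X11 forwarding from the workstation running Xmanager/XQuartz
cd /u01/media/clusterware
./runInstaller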

3.3 Root user executes script as prompted

Node 1 execution:

# At first, the five LUNs /dev/sd{a,b,c,d,e} had not been partitioned:
[root@oradb27 rules.d]# /u01/app/oracle/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oracle/oraInventory to 770.
Changing groupname of /u01/app/oracle/oraInventory to oinstall.
The execution of the script is complete
[root@oradb27 rules.d]# /u01/app/oracle/product/10.2.0.5/crshome_1/root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Failed to upgrade Oracle Cluster Registry configuration
# After partitioning the five LUNs into /dev/sd{a,b,c,d,e}1, the scripts succeed:
[root@oradb27 10.2.0.5]# /u01/app/oracle/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oracle/oraInventory to 770.
Changing groupname of /u01/app/oracle/oraInventory to oinstall.
The execution of the script is complete
[root@oradb27 10.2.0.5]# /u01/app/oracle/product/10.2.0.5/crshome_1/root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: oradb27 oradb27-priv oradb27
node 2: oradb28 oradb28-priv oradb28
Creating OCR keys for user 'root', privgrp 'root'.
Operation successful.
Now formatting voting device: /dev/raw/raw3
Now formatting voting device: /dev/raw/raw4
Now formatting voting device: /dev/raw/raw5
Format of 3 voting devices complete.
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        oradb27
CSS is inactive on these nodes.
        oradb28
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
[root@oradb27 10.2.0.5]#
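For reference, a sketch of how the five LUNs could have been partitioned before rerunning the scripts (the device names are from the log above; the piped fdisk answers, which create one primary partition spanning each device, are an assumption about this environment):

# as root: one primary partition per LUN, then re-read the partition tables
for d in a b c d e; do
    printf 'n\np\n1\n\n\nw\n' | fdisk /dev/sd$d
done
partprobe    # after this, /dev/sd{a,b,c,d,e}1 exist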

The official solution to this error is described in the MOS document: Executing root.sh errors with "Failed To Upgrade Oracle Cluster Registry Configuration" (Doc ID 466673.1):

Before running root.sh on the first node of the cluster, do the following:

  1. Download patch 4679769 from MetaLink (it contains a patched version of clsfmt.bin).
  2. Follow the steps stated in the patch README to fix the problem; a sketch is given below.
    Note: clsfmt.bin only needs to be replaced on the first node of the cluster.
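A sketch of those README steps (the extracted patch location is an assumption; verify against the README shipped with patch 4679769):

# as root on node 1: back up the original clsfmt.bin, drop in the patched copy
cd /u01/app/oracle/product/10.2.0.5/crshome_1/bin
mv clsfmt.bin clsfmt.bin.bak
cp /u01/media/4679769/clsfmt.bin .    # path to the extracted patch is assumed
chmod 751 clsfmt.bin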

Node 2 execution:

[root@oradb28 crshome_1]# /u01/app/oracle/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oracle/oraInventory to 770.
Changing groupname of /u01/app/oracle/oraInventory to oinstall.
The execution of the script is complete
[root@oradb28 crshome_1]# /u01/app/oracle/product/10.2.0.5/crshome_1/root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>

To resolve the error with which this run ends, modify the vipca and srvctl files under /u01/app/oracle/product/10.2.0.5/crshome_1/bin:

[root@oradb28 bin]# ls -l vipca
-rwxr-xr-x 1 oracle oinstall 5343 Jan  3 09:44 vipca
[root@oradb28 bin]# ls -l srvctl
-rwxr-xr-x 1 oracle oinstall 5828 Jan  3 09:44 srvctl
# add "unset LD_ASSUME_KERNEL" to both files
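A sketch of where the line goes; the surrounding block is the LD_ASSUME_KERNEL wrapper these 10.2 scripts carry (exact contents and line numbers differ between vipca and srvctl, so verify in your copies):

if [ "$arch" = "i686" -o "$arch" = "ia64" -o "$arch" = "x86_64" ]
then
    LD_ASSUME_KERNEL=2.4.19
    export LD_ASSUME_KERNEL
fi
unset LD_ASSUME_KERNEL    # <-- added line: keeps the obsolete setting out of the JRE's environment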

Then re-run /u01/app/oracle/product/10.2.0.5/crshome_1/root.sh:

[root@oradb28 bin]# /u01/app/oracle/product/10.2.0.5/crshome_1/root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Checking to see if Oracle CRS stack is already configured
Oracle CRS stack is already configured and will be running under init(1M)

This time there is no error, but there is also no message indicating that VIPCA completed successfully.

3.4 VIPCA Creation (may not be required)

If VIPCA ran successfully as part of step 3.3, this step is not required.
If VIPCA did not run successfully in step 3.3, run vipca manually on the last node to create the VIP resources.
Running vipca manually here also produced an error:

[root@oradb28 bin]# ./vipca
Error 0(Native: listNetInterfaces:[3])
  [Error 0(Native: listNetInterfaces:[3])]

Inspect the network interface information and register it manually with oifcfg:

[root@oradb28 bin]# ./oifcfg getif
[root@oradb28 bin]# ./oifcfg iflist
eth0  192.168.1.0
eth1  10.10.10.0
[root@oradb28 bin]# ifconfig
eth0      Link encap:Ethernet  HWaddr 06:CB:72:01:07:88
          inet addr:192.168.1.28  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1018747 errors:0 dropped:0 overruns:0 frame:0
          TX packets:542075 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2196870487 (2.0 GiB)  TX bytes:43268497 (41.2 MiB)

eth1      Link encap:Ethernet  HWaddr 22:1A:5A:DE:C1:21
          inet addr:10.10.10.28  Bcast:10.10.10.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:5343 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3656 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1315035 (1.2 MiB)  TX bytes:1219689 (1.1 MiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:2193 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2193 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:65167 (63.6 KiB)  TX bytes:65167 (63.6 KiB)

[root@oradb28 bin]# ./oifcfg -h
PRIF-9: incorrect usage
Name:
        oifcfg - Oracle Interface Configuration Tool.
Usage:  oifcfg iflist [-p [-n]]
        oifcfg setif {-node <nodename> | -global} {<if_name>/<subnet>:<if_type>}...
        oifcfg getif [-node <nodename> | -global] [ -if <if_name>[/<subnet>] [-type <if_type>] ]
        oifcfg delif [-node <nodename> | -global] [<if_name>[/<subnet>]]
        oifcfg [-help]

        <nodename> - name of the host, as known to a communications network
        <if_name>  - name by which the interface is configured in the system
        <subnet>   - subnet address of the interface
        <if_type>  - type of the interface { cluster_interconnect | public | storage }

[root@oradb28 bin]# ./oifcfg setif -global eth0/192.168.1.0:public
[root@oradb28 bin]# ./oifcfg getif
eth0  192.168.1.0  global  public
[root@oradb28 bin]# ./oifcfg setif -global eth1/10.10.10.0:cluster_interconnect
[root@oradb28 bin]# ./oifcfg getif
eth0  192.168.1.0  global  public
eth1  10.10.10.0  global  cluster_interconnect
[root@oradb28 bin]#

Once oifcfg getif returns the interface information correctly, running VIPCA again succeeds.

Then return to the Clusterware installer, which now also completes successfully.
At this point, check that the status of the cluster is normal:

[oracle@oradb27 bin]$ crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
[oracle@oradb27 bin]$ crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora....b27.gsd application    0/5    0/0    ONLINE    ONLINE    oradb27
ora....b27.ons application    0/3    0/0    ONLINE    ONLINE    oradb27
ora....b27.vip application    0/0    0/0    ONLINE    ONLINE    oradb27
ora....b28.gsd application    0/5    0/0    ONLINE    ONLINE    oradb28
ora....b28.ons application    0/3    0/0    ONLINE    ONLINE    oradb28
ora....b28.vip application    0/0    0/0    ONLINE    ONLINE    oradb28
[oracle@oradb27 bin]$

[oracle@oradb28 ~]$ crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
[oracle@oradb28 ~]$ crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora....b27.gsd application    0/5    0/0    ONLINE    ONLINE    oradb27
ora....b27.ons application    0/3    0/0    ONLINE    ONLINE    oradb27
ora....b27.vip application    0/0    0/0    ONLINE    ONLINE    oradb27
ora....b28.gsd application    0/5    0/0    ONLINE    ONLINE    oradb28
ora....b28.ons application    0/3    0/0    ONLINE    ONLINE    oradb28
ora....b28.vip application    0/0    0/0    ONLINE    ONLINE    oradb28
[oracle@oradb28 ~]$
4. Upgrading Clusterware

4.1 Unpacking the Patchset Package
[oracle@oradb27 media]$ unzip p8202632_10205_Linux-x86-64.zip
[oracle@oradb27 media]$ cd Disk1/
[oracle@oradb27 Disk1]$ pwd
/u01/media/Disk1
4.2 Starting the Clusterware Upgrade

Start the Clusterware upgrade through an X11 session (XQuartz here):
ssh -X oracle@oradb27
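Then launch the installer from the unpacked patchset; a sketch, assuming the Disk1 path shown in 4.1:

cd /u01/media/Disk1
./runInstaller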

During the upgrade, the pre-installation check reports one kernel parameter that does not meet the requirement:

Checking for rmem_default=1048576; found rmem_default=262144.   Failed <<<<

Adjust the /etc/sysctl.conf configuration file accordingly, then run sysctl -p for the change to take effect.
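A sketch of the change; net.core.rmem_default is the kernel parameter behind the rmem_default value the check reports:

# append to /etc/sysctl.conf: raise the default socket receive buffer
net.core.rmem_default = 1048576

# then apply as root:
sysctl -p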

4.3 root user executes script as prompted
    1.  Log in as the root user.
    2.  As the root user, perform the following tasks:
        a.  Shutdown the CRS daemons by issuing the following command:
                /u01/app/oracle/product/10.2.0.5/crshome_1/bin/crsctl stop crs
        b.  Run the shell script located at:
                /u01/app/oracle/product/10.2.0.5/crshome_1/install/root102.sh
            This script will automatically start the CRS daemons on the
            patched node upon completion.
    3.  After completing this procedure, proceed to the next node and repeat.

That is, execute the following in turn:

/u01/app/oracle/product/10.2.0.5/crshome_1/bin/crsctl stop crs
/u01/app/oracle/product/10.2.0.5/crshome_1/install/root102.sh

Node 1 execution:

[root@oradb27 bin]# /u01/app/oracle/product/10.2.0.5/crshome_1/bin/crsctl stop crs
Stopping resources. Successfully stopped CRS resources
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
[root@oradb27 bin]# /u01/app/oracle/product/10.2.0.5/crshome_1/install/root102.sh
Creating pre-patch directory for saving pre-patch clusterware files
Completed patching clusterware files to /u01/app/oracle/product/10.2.0.5/crshome_1
Relinking some shared libraries.
Relinking of patched files is complete.
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Preparing to recopy patched init and RC scripts.
Recopying init and RC scripts.
Startup will be queued to init within 30 seconds.
Starting up the CRS daemons.
Waiting for the patched CRS daemons to start.
  This may take a while on some systems.
10205 patch successfully applied.
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully deleted 1 values from OCR.
Successfully deleted 1 keys from OCR.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>

Node 2 execution:

[root@oradb28 bin]# /u01/app/oracle/product/10.2.0.5/crshome_1/bin/crsctl stop crs
Stopping resources. Successfully stopped CRS resources
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
[root@oradb28 bin]# /u01/app/oracle/product/10.2.0.5/crshome_1/install/root102.sh
Creating pre-patch directory for saving pre-patch clusterware files
Completed patching clusterware files to /u01/app/oracle/product/10.2.0.5/crshome_1
Relinking some shared libraries.
Relinking of patched files is complete.
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Preparing to recopy patched init and RC scripts.
Recopying init and RC scripts.
Startup will be queued to init within 30 seconds.
Starting up the CRS daemons.
Waiting for the patched CRS daemons to start.
  This may take a while on some systems.
10205 patch successfully applied.
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully deleted 1 values from OCR.
Successfully deleted 1 keys from OCR.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>

After the upgrade completes successfully, confirm that the CRS active version is 10.2.0.5 and that the cluster status is normal:

[oracle@oradb27 bin]$ crsctl query crs activeversion
CRS active version on the cluster is [10.2.0.5.0]
[oracle@oradb28 ~]$ crsctl query crs activeversion
CRS active version on the cluster is [10.2.0.5.0]
[oracle@oradb28 ~]$ crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora....b27.gsd application    0/5    0/0    ONLINE    ONLINE    oradb27
ora....b27.ons application    0/3    0/0    ONLINE    ONLINE    oradb27
ora....b27.vip application    0/0    0/0    ONLINE    ONLINE    oradb27
ora....b28.gsd application    0/5    0/0    ONLINE    ONLINE    oradb28
ora....b28.ons application    0/3    0/0    ONLINE    ONLINE    oradb28
ora....b28.vip application    0/0    0/0    ONLINE    ONLINE    oradb28

At this point, the Oracle Clusterware installation (10.2.0.1) and upgrade (to 10.2.0.5) are complete.

