10gR2 Clusterware: Concise Installation Steps
1. Install the operating system (omitted). Note that I use OEL 5u5 throughout: it ships with the Oracle-supplied environment-setup package (oracle-validated), which makes preparing for the Oracle software installation very convenient, so this release is recommended.
2. Configure a local yum repository
-- Mount the installation DVD under /media
mount -t iso9660 /dev/cdrom /media

-- Create the repo file and add the following content
vi /etc/yum.repos.d/oel5.repo
[oel5]
name=oel 5 DVD
baseurl=file:///media/Server
gpgcheck=0
enabled=1
-- enabled=1 means this repo is active
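With the repo in place, the environment-setup package mentioned above can be installed straight from the DVD. A minimal sketch (oracle-validated is the package name on the OEL 5 media; adjust if your media differs):

# refresh metadata and confirm the [oel5] repo is visible
yum clean all
yum repolist
# pull in Oracle's prerequisite/environment package
yum install -y oracle-validated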
3. Configure SSH user equivalence (as the oracle user)
-- Run on node 1 and node 2 separately:
ssh-keygen -t rsa
ssh-keygen -t dsa

-- Run on node 1:
cat ~/.ssh/*.pub >> ~/.ssh/authorized_keys
ssh rac2 cat ~/.ssh/*.pub >> ~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys rac2:~/.ssh/authorized_keys
In short, this collects both nodes' public keys into the authorized_keys file, after which ssh between the nodes no longer prompts for a password. This is required because the RAC installer copies files between the nodes.
-- Verify. On node 1:
ssh rac1 date
ssh rac2 date
ssh rac1-priv date
ssh rac2-priv date

On node 2:
ssh rac1 date
ssh rac2 date
ssh rac1-priv date
ssh rac2-priv date
Verify a few times, until no password prompt appears at all. Be careful not to drop the trailing 's' in authorized_keys: I once spent ages getting nowhere, only to find the public key file name was wrong, missing that 's'.
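If ssh still asks for a password even though the file name is correct, permissions are the other usual culprit: sshd ignores keys kept in a group- or world-writable location. A quick check to run as oracle on both nodes (standard OpenSSH requirements, nothing specific to this setup):

# sshd only honors authorized_keys when these are tight
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
ls -ld ~/.ssh ~/.ssh/authorized_keys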
4. Configure the shared disks
-- Add the disks and partition each of them
fdisk -l
fdisk /dev/sdb
fdisk /dev/sdc
fdisk /dev/sdd
fdisk /dev/sde
If the new partition table would otherwise only be recognized after a reboot, use the built-in Linux partprobe command: running it makes the kernel pick up the new partitions without rebooting, and partprobe -s prints what it found.
[root@rac1 ~]# fdisk /dev/sdc
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-12, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-12, default 12):
Using default value 12

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
[root@rac1 ~]# partprobe -s
/dev/sda: msdos partitions 1 2
/dev/sdb: msdos partitions 1
/dev/sdc: msdos partitions 1
/dev/sdd: msdos partitions 1
/dev/sde: msdos partitions 1
-- Bind the disks as raw devices. The binding method differs between 4.x and 5.x systems.
4.x:
# vi /etc/sysconfig/rawdevices
/dev/raw/raw1 /dev/sdc1
/dev/raw/raw2 /dev/sdd1
/dev/raw/raw3 /dev/sde1

# vi /etc/udev/permissions.d/50-udev.permissions
# raw devices
ram*:root:disk:0660
#raw/*:root:disk:0660
raw/*:oracle:dba:0660
5.x:
# vi /etc/udev/rules.d/60-raw.rules
ACTION=="add", KERNEL=="/dev/sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="17", RUN+="/bin/raw /dev/raw/raw1 %M %m"
ACTION=="add", KERNEL=="/dev/sdc1", RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="33", RUN+="/bin/raw /dev/raw/raw2 %M %m"
ACTION=="add", KERNEL=="/dev/sdd1", RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="49", RUN+="/bin/raw /dev/raw/raw3 %M %m"
ACTION=="add", KERNEL=="/dev/sde1", RUN+="/bin/raw /dev/raw/raw4 %N"
ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="65", RUN+="/bin/raw /dev/raw/raw4 %M %m"
ACTION=="add", KERNEL=="raw*", OWNER="oracle", GROUP="oinstall", MODE="660"
Only the lines shown above need to be added. Then restart udev; if the configuration is correct, the following will be displayed:
[root@rac1 ~]$ ll /dev/raw
total 0
crw-rw---- 1 oracle oinstall 162, 1 Jan 15 22:54 raw1
crw-rw---- 1 oracle oinstall 162, 2 Jan 15 22:24 raw2
crw-rw---- 1 oracle oinstall 162, 3 Jan 15 22:56 raw3
crw-rw---- 1 oracle oinstall 162, 4 Jan 15 22:54 raw4
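For reference, udev can be reloaded and the bindings cross-checked without a reboot; a sketch using the stock RHEL/OEL 5 tools (start_udev ships with the distribution, raw with util-linux):

# re-run udev so the new rules take effect
start_udev
# list every raw binding with the major/minor numbers of its backing device
raw -qa
# compare against one of the partitions, e.g.:
ls -l /dev/sdb1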
5. Set the environment variables
Node 1:
[oracle@rac1 ~]$ vi .bash_profile
export ORACLE_SID=RAC1
export ORACLE_HOME=/u01/oracle/10.2.0/db_1
export ORA_CRS_HOME=/u01/oracle/10.2.0/crs_1
export PATH=$ORACLE_HOME/bin:$ORA_CRS_HOME/bin:$ORACLE_HOME/jdk/bin:$PATH
Node 2:
[oracle@rac2 ~]$ vi .bash_profile
export ORACLE_SID=RAC2
export ORACLE_HOME=/u01/oracle/10.2.0/db_1
export ORA_CRS_HOME=/u01/oracle/10.2.0/crs_1
export PATH=$ORACLE_HOME/bin:$ORA_CRS_HOME/bin:$ORACLE_HOME/jdk/bin:$PATH
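After logging in again (or sourcing the profile), a quick sanity check that the variables are in effect is worthwhile before launching the installer; a trivial sketch, run as oracle on each node:

source ~/.bash_profile
echo $ORACLE_SID
echo $ORACLE_HOME
echo $ORA_CRS_HOME
echo $PATH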
6. Install Clusterware
-- Run the pre-installation check first
cd ./clusterware/cluvfy
./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose
This prints a long list of checks verifying that the prerequisites for installing Clusterware are met. The compat-db package has to be installed separately, because the oracle-validated package does not pull it in. A few other compat-* package checks will also fail; those can be ignored, since only the versions differ. Anything else that does not come back as passed must be fixed, until every check passes apart from the ones noted above.
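For the one package that genuinely has to be added, a hedged one-liner using the local repo configured earlier (plain rpm/yum, nothing release-specific):

# install compat-db only if it is not already present
rpm -q compat-db || yum install -y compat-db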
-- Start the installation
cd ./clusterware
./runInstaller -ignoreSysPrereqs
(the option's case does not matter; the command name's does)
Toward the end of the OUI run, you are asked to run two scripts on each of the two nodes, in this order: script 1 on RAC1 then RAC2, followed by script 2 on RAC1 then RAC2. The first three runs go through without problems; at the fourth step, running root.sh on node 2, an error comes up:
[root@rac2 bin]# /u01/oracle/10.2.0/crs_1/root.sh
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
assigning default hostname rac1 for node 1.
assigning default hostname rac2 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        rac1
        rac2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
/u01/crs_1/jdk/jre//bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory
Be patient for several minutes (90s + 600s), after which the error above appears. It is caused by an Oracle bug in 10.2.0.1. The fix is to edit the vipca and srvctl files under $ORA_CRS_HOME/bin, adding unset LD_ASSUME_KERNEL after the lines that set the variable. Save, exit, and run root.sh on node 2 again.
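The edit can also be scripted. A sketch, assuming the stock 10.2.0.1 scripts set the variable through an "export LD_ASSUME_KERNEL" line (back up the files first; run as root on each node):

cd $ORA_CRS_HOME/bin
cp vipca vipca.bak
cp srvctl srvctl.bak
# append "unset LD_ASSUME_KERNEL" right after every line exporting it
sed -i '/export LD_ASSUME_KERNEL/a unset LD_ASSUME_KERNEL' vipca srvctl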
[root@rac2 bin]# /u01/oracle/10.2.0/crs_1/root.sh
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
Oracle CRS stack is already configured and will be running under init(1M)
[root@rac2 bin]# ./crs_stat -t
CRS-0202: No resources are registered.
At this point no resources are registered because the VIPs have not been configured yet. Run vipca on either node (provided vipca on that node has already been patched as above). If it reports the following error:
[oracle@rac1 bin]$ vipca
Error 0(Native: listNetInterfaces:[3])
[Error 0(Native: listNetInterfaces:[3])]
then the network interfaces need to be configured first:
[oracle@rac1 bin]$ ./oifcfg iflist
eth0 192.168.1.0
eth1 10.0.0.0
[oracle@rac1 bin]$ ./oifcfg getif
[oracle@rac1 bin]$ ./oifcfg setif -global eth0/192.168.1.0:public
[oracle@rac1 bin]$ ./oifcfg setif -global eth1/10.10.10.1:cluster_interconnect
[oracle@rac1 bin]$ ./oifcfg getif
eth0 192.168.1.0 global public
eth1 10.10.10.1 global cluster_interconnect
Note that you need permission to open the graphical display, and vipca must be run by the root user rather than the oracle user, otherwise it fails with insufficient privileges:
[oracle@rac1 bin]$ vipca
Insufficient privileges.
Insufficient privileges.
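If root does not own the X session, the display also has to be opened up to it first. A sketch (the DISPLAY value is an assumption; adjust it to your session):

# as the user that owns the X session, e.g. at the console
xhost +local:
# then as root
export DISPLAY=:0.0
/u01/oracle/10.2.0/crs_1/bin/vipca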
The VIP Configuration Assistant OUI screen then comes up and the VIP configuration begins; after you enter a node's VIP alias, the VIP address is filled in automatically (details omitted). When vipca finishes, exit and run crs_stat again: all the resources are now registered with CRS.
[root@rac1 bin]# ./crs_stat -t
Name           Type         Target   State    Host
------------------------------------------------------------
ora.rac1.gsd   application  ONLINE   ONLINE   rac1
ora.rac1.ons   application  ONLINE   ONLINE   rac1
ora.rac1.vip   application  ONLINE   ONLINE   rac1
ora.rac2.gsd   application  ONLINE   ONLINE   rac2
ora.rac2.ons   application  ONLINE   ONLINE   rac2
ora.rac2.vip   application  ONLINE   ONLINE   rac2