Oracle 10g RAC deployment documentation

Host network configuration considerations:
Use static IP addresses and specify the gateway for each static address.
The hostname must not appear on the loopback (127.0.0.1) line of /etc/hosts!
If the single-instance ASM service has been started, stop it first.
-----------------------------
1. Configure the IP addresses of the two machines so that they can ping each other.
2. Change the machine name: vi /etc/sysconfig/network
3. Configure a static IP: vi /etc/sysconfig/network-scripts/ifcfg-eth0
4. Copy a NIC config as eth0:2 (heartbeat IP): cp ifcfg-eth0 ifcfg-eth0:2, then change the settings inside:
   DEVICE=eth0:2
   BOOTPROTO=static
   IPADDR=10.10.10.11
   GATEWAY=10.10.10.1
   Restart the network: service network restart
5. Configure the hosts file on host A and host B as:
   192.168.10.11  oracle1        -- host static IP
   192.168.10.12  oracle2        -- host static IP
   192.168.10.21  oracle1-vip    -- floating IP
   192.168.10.22  oracle2-vip    -- floating IP
   10.10.10.11    oracle1-priv   -- heartbeat IP
   10.10.10.12    oracle2-priv   -- heartbeat IP
6. Configure hangcheck-timer: vi /etc/rc.local
7. Set the oracle user password.
8. Generate the private and public keys.
9. Send host A's keys to host B, append them to B's authorized_keys, then send the combined file back to both machines.
10. For the SSH connections, verify each node twice from every node: once against its own names (static, heartbeat) and once against its peer's (static, heartbeat).
11. -----------------------------
    On node1 & node2: run install.sh, set a password for the oracle user, and modify the oracle user's .bashrc file:
    export ORA_CRS_HOME=/u01/app/crs_1
    export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
    export ORACLE_SID=racdb
    As root (su -):
    chown oracle.oinstall /u01/app -R

vi /etc/hosts
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
127.0.0.1       localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
# Public Network - (eth0)
192.168.3.50    stu50
192.168.3.52    stu52
# Public Virtual IP (eth0:1)
192.168.3.51    stu50-vip
192.168.3.53    stu52-vip
# Private Interconnect - (eth1 -> eth0:2)
10.0.0.50       stu50-priv
10.0.0.52       stu52-priv

Configure eth0:2:
cd /etc/sysconfig/network-scripts/
cp ifcfg-eth0 ifcfg-eth0:2
DEVICE=eth0:2
BOOTPROTO=static
HWADDR=00:E0:4D:3B:0C:B2
IPADDR=10.0.0.50
NETMASK=255.255.255.0
GATEWAY=10.0.0.1
ONBOOT=yes

Configure hangcheck-timer (used to monitor whether the Linux kernel hangs):
vi /etc/modprobe.conf
options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180

Automatically load the hangcheck-timer module:
vi /etc/rc.local
modprobe hangcheck-timer
-- Run the following command to check whether the hangcheck-timer module has been loaded:
lsmod | grep hangcheck_timer

Configure the trust relationship (SSH user equivalence):
node1: 192.168.3.50
su - oracle
ssh-keygen -t rsa
ssh-keygen -t dsa
cd .ssh
cat *.pub > authorized_keys

node2: 192.168.3.52
su - oracle
ssh-keygen -t rsa
ssh-keygen -t dsa
cd .ssh
cat *.pub > authorized_keys

node1: 192.168.3.50
scp authorized_keys oracle@192.168.3.52:/home/oracle/.ssh/keys_dbs

node2: 192.168.3.52
cat keys_dbs >> authorized_keys
scp authorized_keys oracle@192.168.3.50:/home/oracle/.ssh/

Test the trust relationship on node1 (192.168.3.50) and node2 (192.168.3.52):
ssh stu50
ssh stu52
ssh stu50-priv
ssh stu52-priv
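To run the same check in one pass, here is a minimal sketch (run as oracle on each node; hostnames taken from the /etc/hosts example above). Every ssh must return without prompting for a password:

for h in stu50 stu52 stu50-priv stu52-priv; do
    ssh $h date    # a password prompt here means the equivalence is not set up yet
done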
Prepare public volumes (iSCSI):
rpm -ivh compat-db-4.2.52-5.1.i386.rpm
rpm -ivh libXp-1.0.0-8.1.el5.i386.rpm
rpm -ivh openmotif22-2.2.3-18.i386.rpm

node1: 192.168.3.50 stu50 (iSCSI server)
Create a 10G partition as the iSCSI shared disk:
/dev/sda5   5889   7105   9775521   83   Linux

iSCSI server packages:
Under the ClusterStorage directory:
rpm -ivh perl-Config-General-2.40-1.el5.noarch.rpm
rpm -ivh scsi-target-utils-0.0-5.20080917snap.el5.x86_64.rpm
Under the Server directory:
rpm -ivh iscsi-initiator-utils-6.2.0.871-0.16.el5.i386.rpm

vi /etc/tgt/targets.conf
----------------------------------------
<target iqn.2011-01.com.oracle.blues:luns1>
    backing-store /dev/sda9
    initiator-address 10.1.1.0/24
</target>
----------------------------------------

vi /etc/udev/scripts/iscsidev.sh
----------------------------------------
#!/bin/bash
BUS=${1}
HOST=${BUS%:*}
[ -e /sys/class/iscsi_host ] || exit 1
file="/sys/class/iscsi_host/host${HOST}/device/session*/iscsi_session*/targetname"
target_name=$(cat ${file})
if [ -z "${target_name}" ]; then
    exit 1
fi
echo "${target_name##*:}"
----------------------------------------
chmod +x /etc/udev/scripts/iscsidev.sh

chkconfig iscsi on
chkconfig iscsid on
chkconfig tgtd on

service iscsi start
service iscsid start
service tgtd start

tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL    -- enable the iSCSI target (open the ACL)
tgtadm --lld iscsi --op show --mode target                   -- view the LUNs
iscsiadm -m discovery -t sendtargets -p 10.1.1.103
service iscsi start
fdisk -l

Re-scan on the server:
iscsiadm -m session -u
iscsiadm -m discovery -t sendtargets -p 10.1.1.103

vi /etc/rc.local
tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL
service iscsi start

iSCSI client configuration:
client: 10.1.1.103
rpm -ivh iscsi-initiator-utils-6.2.0.871-0.16.el5.i386.rpm

vi /etc/udev/rules.d/55-openiscsi.rules
-----------------------------------------------
KERNEL=="sd*", BUS=="scsi", PROGRAM="/etc/udev/scripts/iscsidev.sh %b", SYMLINK+="iscsi/%c"
-----------------------------------------------

vi /etc/udev/scripts/iscsidev.sh (same script as on the server)
----------------------------------------
#!/bin/bash
BUS=${1}
HOST=${BUS%:*}
[ -e /sys/class/iscsi_host ] || exit 1
file="/sys/class/iscsi_host/host${HOST}/device/session*/iscsi_session*/targetname"
target_name=$(cat ${file})
if [ -z "${target_name}" ]; then
    exit 1
fi
echo "${target_name##*:}"
----------------------------------------
chmod +x /etc/udev/scripts/iscsidev.sh

service iscsi start
iscsiadm -m discovery -t sendtargets -p 10.1.1.18 -l
service iscsi start
fdisk -l
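Before switching to the raw-device setup, it is worth a quick check on the client that the login and the udev symlink worked (a minimal sketch; it assumes the imported LUN shows up as /dev/sdb, as the raw-device rules below do):

iscsiadm -m session    # the luns1 target should be listed
ls -l /dev/iscsi/      # symlink created by the 55-openiscsi.rules above
fdisk -l /dev/sdb      # the imported LUN appears as a new disk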
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Partition the iSCSI shared disk, then map the shared partitions to raw devices:
vi /etc/udev/rules.d/60-raw.rules
ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdb2", RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add", KERNEL=="sdb3", RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add", KERNEL=="sdb5", RUN+="/bin/raw /dev/raw/raw4 %N"
KERNEL=="raw[1]", MODE="0660", GROUP="oinstall", OWNER="oracle"
KERNEL=="raw[2]", MODE="0660", GROUP="oinstall", OWNER="oracle"
KERNEL=="raw[3]", MODE="0660", GROUP="oinstall", OWNER="oracle"
KERNEL=="raw[4]", MODE="0660", GROUP="oinstall", OWNER="oracle"

Start udev on node1 & node2 respectively:
start_udev

Confirm that the raw devices are present on node1 & node2:
[root@stu50 ~]# ll /dev/raw
total 0
crw-rw---- 1 root   oinstall 162, 1 01-11 raw1
crw-rw---- 1 oracle oinstall 162, 2 01-11 raw2
crw-rw---- 1 oracle oinstall 162, 3 01-11 raw3
crw-rw---- 1 oracle oinstall 162, 4 01-11 raw4

Use CVU to verify cluster installation feasibility:
./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose

Install the clusterware software (run the installer on one node only, and add the other nodes manually in the cluster configuration screen):
/mnt/clusterware/runInstaller
Note: when the installer's dialog box asks you to run the root.sh script, do not run root.sh yet. First modify the vipca and srvctl scripts; otherwise an error is reported when java is called during script execution!
su - oracle
vi +123 $CRS_HOME/bin/vipca
-- add a new line after the "fi" on line 123: unset LD_ASSUME_KERNEL
vi $CRS_HOME/bin/srvctl
-- add unset LD_ASSUME_KERNEL after "export LD_ASSUME_KERNEL"

If the following error occurs when running root.sh on the last node, fix it as shown below:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Running vipca(silent) for processing nodeapps
Error 0(Native: listNetInterfaces:[3])
[Error 0(Native: listNetInterfaces:[3])]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Adjust the commands below to your own network configuration (note the NIC names and IP addresses; do not copy them blindly!):
cd /u01/app/crs_1/bin
# ./oifcfg iflist
# ./oifcfg setif -global eth0/10.1.1.0:public
# ./oifcfg setif -global eth0:2/10.0.0.0:cluster_interconnect
# ./oifcfg getif

If the NICs of the two test machines are eth0 and eth1 respectively, modify as follows:
./oifcfg setif -node node1 eth0/10.1.1.0:public
./oifcfg setif -node node1 eth0:0/172.20.1.0:cluster_interconnect
./oifcfg setif -node node1 eth0:1/172.20.1.0:cluster_interconnect
./oifcfg setif -node node2 eth1/10.1.1.0:public
./oifcfg setif -node node2 eth1:0/172.20.1.0:cluster_interconnect
./oifcfg setif -node node2 eth1:1/172.20.1.0:cluster_interconnect
############ effect #######
[root@server bin]# ./oifcfg getif
eth0    10.1.1.0    node1  public
eth0:0  172.20.1.0  node1  cluster_interconnect
eth0:1  172.20.1.0  node1  cluster_interconnect
eth1    10.1.1.0    node2  public
eth1:0  172.20.1.0  node2  cluster_interconnect
eth1:1  172.20.1.0  node2  cluster_interconnect
############ effect #######

After setting the network interfaces, run vipca manually on the current node:
unset LANG
./vipca
After the wizard has started the resources, view the status of each resource:
cd $ORA_CRS_HOME/bin
./crs_stat -t
./crs_stat
./crs_stat -p

After the clusterware software is installed successfully, back up the OCR!
./ocrconfig -export /home/oracle/ocr.bak
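As an extra post-install sanity check (a minimal sketch; the node names stu50/stu52 are assumed from the hosts file above):

cd $ORA_CRS_HOME/bin
./crsctl check crs                   # CSS, CRS and EVM daemons should report healthy
./srvctl status nodeapps -n stu50    # VIP, GSD and ONS status per node
./srvctl status nodeapps -n stu52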
Install the database software (run the installer on one node only, and select all nodes during installation). During installation, choose to install the software only; do not create a database:
/mnt/database/runInstaller

Clusterware management:

View the location of the voting disk:
# ./crsctl query css votedisk
Back up the voting disk:
dd if=voting_disk_name of=backup_file_name bs=4k
Restore the voting disk:
dd if=backup_file_name of=voting_disk_name bs=4k
Add a new voting disk:
# crsctl add css votedisk <new voting disk path>
Delete a voting disk:
# crsctl delete css votedisk <old voting disk path>
If Oracle Clusterware is down on all nodes, use the -force option:
# crsctl add css votedisk <new voting disk path> -force
# crsctl delete css votedisk <old voting disk path> -force

View the OCR location:
# ./ocrcheck
Find the physical backups:
$ ocrconfig -showbackup
View the contents of an OCR backup:
# ocrdump -backupfile file_name
Check OCR integrity:
$ cluvfy comp ocr -n all

OCR is backed up automatically at the following times:
every 4 hours: CRS retains the last 3 copies
at the end of each day: CRS retains the last 2 copies
at the end of each week: CRS retains the last 2 copies

Change the default location of the automatic backups:
# ocrconfig -backuploc /shared/bak

Restore an OCR physical backup:
# crsctl stop crs
# ocrconfig -restore <crs home>/cdata/jfv_clus/day.ocr
# crsctl start crs

Manual (logical) backup:
/data/oracle/crs/bin/ocrconfig -export /data/backup/rac/ocrdisk.bak

Restore a logical OCR backup:
# crsctl stop crs
# ocrconfig -import /shared/export/ocrback.dmp
# crsctl start crs

Check OCR integrity:
$ cluvfy comp ocr -n all

Stop CRS:  /etc/init.d/init.crs stop
Start CRS: /etc/init.d/init.crs start
Watch the log: tail -f /var/log/messages
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Manually uninstall clusterware: if the clusterware installation fails, uninstall it with the following method. If the installation succeeded, do not uninstall it, and do not run these scripts carelessly!
cd /u01/app/crs_1/install
./rootdelete.sh
./rootdeinstall.sh
rm -fr /etc/ora*
rm -fr /etc/init.d/*.crs
rm -fr /etc/init.d/*.crsd
rm -fr /etc/init.d/*.css
rm -fr /etc/init.d/*.cssd
su - oracle
rm -fr $ORACLE_BASE/*

[node1.up.com] [node2.up.com] [storage.up.com]
node1.up.com:   192.168.0.7
node2.up.com:   192.168.0.8
storage.up.com: 192.168.0.123
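For name resolution between the three machines, each host's /etc/hosts can carry entries like the following (a sketch built from the addresses above; the short aliases are an assumption):

192.168.0.7      node1.up.com    node1
192.168.0.8      node2.up.com    node2
192.168.0.123    storage.up.com  storage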
Storage configuration (storage):
1. scsi target package:
[root@storage ~]# rpm -qa | grep scsi
scsi-target-utils-0.0-5.20080917snap.el5
2. Prepare >5G of space:
dd if=/dev/zero of=/tmp/disk.img bs=1G count=5
3. Publish this space in /etc/tgt/targets.conf:
----------------------------
<target iqn.2010-04-07.com.up.storage:sharedisk>
    backing-store /tmp/disk.img
    initiator-address 192.168.0.7
    initiator-address 192.168.0.8
</target>
----------------------------
service tgtd start
Check: tgtadm --lld iscsi --op show --mode target
4. chkconfig tgtd on

Node configuration (nodeX):
1. iSCSI client:
[root@node1 cluster]# rpm -qa | grep iscsi
iscsi-initiator-utils-6.2.0.868-0.18.el5
[root@node1 cluster]# ls /var/lib/iscsi/
ifaces  isns  nodes  send_targets  slp  static
If there are leftover records, clear them first:
[root@node1 cluster]# rm -rf /var/lib/iscsi/*
!!! Note:
service iscsid start
[root@node1 cluster]# iscsiadm -m discovery -t sendtargets -p 192.168.0.123:3260
192.168.0.123:3260,1 iqn.2010-04-07.com.up.storage:sharedisk
2. Log in to the storage:
/etc/init.d/iscsi start
3. Udev rule: based on the output of udevinfo -a -p /sys/block/sdX, write a rule such as:
[root@node1 rules.d]# cat /etc/udev/rules.d/55-iscsi.rules
SUBSYSTEM=="block", SYSFS{size}=="19551042", SYSFS{model}=="VIRTUAL-DISK", SYMLINK="iscsidisk"
Refresh: start_udev
5. Import the storage:
[root@node1 cluster]# /etc/init.d/iscsi start
iscsid (pid 5714 5713) is running...
Setting up iSCSI targets:
Logging in to [iface: default, target: iqn.2010-04-07.com.up.storage:sharedisk, portal: 192.168.0.123,3260]
Login to [iface: default, target: iqn.2010-04-07.com.up.storage:sharedisk, portal: 192.168.0.123,3260]: successful
[ OK ]
[root@node1 cluster]# ls /dev/iscsi/ -l
total 0
lrwxrwxrwx 1 root root 6 04-07 sharedisk -> ../sdb
6. Modify LVM to support clusters:
yum install lvm2-cluster
[root@node1 cluster]# lvmconf --enable-cluster
[root@node1 cluster]# ls /etc/lvm/lvm.conf
/etc/lvm/lvm.conf
7. Start clvmd (cman must already be running):
/etc/init.d/clvmd start
8. Configure your LVM as usual:
pvcreate /dev/iscsidisk
vgcreate vg /dev/iscsidisk
lvcreate -L 4G -n lv01 vg

tar -xzvf ora.tar.gz    -- decompress

alter system set LOCAL_LISTENER='(ADDRESS=(PROTOCOL=TCP)(HOST=<VIP_address>)(PORT=1521))' scope=both sid='<instance_name>';
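A worked version of the statement above, assuming instance names racdb1/racdb2 and the stu50-vip/stu52-vip addresses from the earlier hosts file (both are assumptions; substitute your own instance names and VIP addresses):

su - oracle
sqlplus / as sysdba
SQL> alter system set LOCAL_LISTENER='(ADDRESS=(PROTOCOL=TCP)(HOST=stu50-vip)(PORT=1521))' scope=both sid='racdb1';
SQL> alter system set LOCAL_LISTENER='(ADDRESS=(PROTOCOL=TCP)(HOST=stu52-vip)(PORT=1521))' scope=both sid='racdb2';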