Configure the hosts file on all three nodes; the new node and the original nodes must have identical entries:
12.16.10.5 rac1
12.16.10.6 rac2
12.16.10.4 rac3
12.16.10.7 rac1-vip
12.16.10.8 rac2-vip
12.16.10.5 rac3-vip
12.16.12.5 rac1-priv
12.16.12.6 rac2-priv
12.16.12.4 rac3-priv
12.16.10.9 scan
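As a sanity check, a small shell sketch can confirm that every name from the table exists in a hosts file. The helper name is mine, and it assumes one hostname per line, as in the table above:

```shell
# Hypothetical helper: verify that each cluster name appears in a hosts file.
# Assumes one hostname per line, as in the table above.
check_hosts() {
    _file="$1"; shift          # $1 = path to a hosts file
    _missing=0
    for _name in "$@"; do
        # match "<whitespace><name><end of line>", case-insensitively
        grep -qi "[[:space:]]$_name\$" "$_file" || { echo "MISSING: $_name"; _missing=1; }
    done
    [ "$_missing" -eq 0 ] && echo "all names present"
    return "$_missing"
}
# Example: check_hosts /etc/hosts rac1 rac2 rac3 rac1-vip rac2-vip rac3-vip scan
```

Run it on each of the three nodes so a typo on one node is caught before the cluvfy checks later on.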
Find the UID and GID used on nodes 1 and 2 (run id as the user in question, or id <username>):
uid=1200(oracle) gid=1000(oinstall) groups=1000(oinstall),(dba),1201(oper),1300(asmdba)
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1100(asmadmin),1300(asmdba),1301(asmoper)
/usr/sbin/groupadd -g 1000 oinstall
/usr/sbin/groupadd -g 1100 asmadmin
/usr/sbin/groupadd dba           # the dba GID is not given in the original; pass -g <gid> to match nodes 1 and 2
/usr/sbin/groupadd -g 1201 oper
/usr/sbin/groupadd -g 1300 asmdba
/usr/sbin/groupadd -g 1301 asmoper
useradd -u 1100 -g oinstall -G asmadmin,asmdba,asmoper grid
useradd -u 1200 -g oinstall -G dba,oper,asmdba oracle
Set passwords for the grid and oracle users.
If a user's UID differs from the existing nodes, fix it with usermod -u <uid> <user>.
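The UID comparison can be done mechanically before resorting to usermod. The helper names below are illustrative additions; they simply compare the numeric uid fields of two id-style strings:

```shell
# Hypothetical helper: extract the numeric uid from an `id`-style string
# such as "uid=1200(oracle) gid=1000(oinstall) ...".
uid_of() {
    printf '%s\n' "$1" | sed 's/^uid=\([0-9]*\).*/\1/'
}

# Succeeds only if both id strings report the same uid.
same_uid() {
    [ "$(uid_of "$1")" = "$(uid_of "$2")" ]
}
# Example:
# same_uid "$(ssh rac1 id oracle)" "$(ssh rac3 id oracle)" || echo "fix with usermod"
```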
Enable the added node to access the shared storage
Create the ORACLE_BASE and ORACLE_HOME directories, with the proper permissions, for the grid and oracle users (check the ORACLE_BASE and ORACLE_HOME settings of both users on the existing RAC nodes and create matching directories).
Create the grid directory:
mkdir -p /u01/app/11.2.0/grid
chown -R grid:oinstall /u01
chmod -R 775 /u01
Create the oracle directory:
mkdir -p /u02/app/oracle/product/11.2.0/dbhome_1
chown -R oracle:oinstall /u02
chmod -R 775 /u02
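A minimal sketch of the directory setup follows. The chown calls are omitted here because they require the grid/oracle accounts to exist and root privileges; the function name is mine:

```shell
# Hypothetical helper: create a software home and report its octal mode
# so the 775 permission can be verified. On the real system, also run
# chown -R grid:oinstall (or oracle:oinstall) as root, as shown above.
make_home() {
    _dir="$1"
    mkdir -p "$_dir"
    chmod 775 "$_dir"
    stat -c '%a' "$_dir"     # prints the octal mode of the leaf directory
}
# Example: make_home /u01/app/11.2.0/grid
```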
Configure the operating-system parameters just as you would when installing RAC (copy the values from nodes 1 and 2):
vi /etc/security/limits.conf
vi /etc/sysctl.conf
vi /etc/pam.d/login
Disable NTP time synchronization:
chkconfig ntpd off
service ntpd stop
mv /etc/ntp.conf /etc/ntp.conf.bak
Set the environment variables for the grid and oracle users.
Configure SSH trust relationships (user equivalence).
First generate a public key as the grid and oracle users:
su - grid
[grid@rac3 ~]$ /usr/bin/ssh-keygen -t rsa
[grid@rac3 ~]$ cd .ssh/
[grid@rac3 .ssh]$ ls
id_rsa  id_rsa.pub
su - oracle
[oracle@rac3 ~]$ /usr/bin/ssh-keygen -t rsa
[oracle@rac3 ~]$ cd .ssh/
[oracle@rac3 .ssh]$ ll
-rw------- 1 oracle oinstall 1675 Mar 25 21:40 id_rsa
-rw-r--r-- 1 oracle oinstall  404 Mar 25 21:40 id_rsa.pub
Grid user SSH setup (configure equivalence between 11grac1 <---> rac3 and 11grac2 <---> rac3):
[grid@11grac1 .ssh]$ scp id_rsa.pub rac3:/home/grid/.ssh/id_rsa1.pub
[grid@rac3 .ssh]$ scp id_rsa.pub 11grac1:/home/grid/.ssh/id_rsa2.pub
[grid@11grac1 .ssh]$ cat *.pub >> authorized_keys
[grid@rac3 .ssh]$ cat *.pub >> authorized_keys
[grid@11grac2 .ssh]$ scp id_rsa.pub rac3:/home/grid/.ssh/id_rsa2.pub
[grid@rac3 .ssh]$ scp id_rsa.pub 11grac2:/home/grid/.ssh/id_rsa1.pub
[grid@11grac2 .ssh]$ cat *.pub >> authorized_keys
[grid@rac3 .ssh]$ cat *.pub >> authorized_keys
The oracle user is configured the same way; it is not repeated here.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
In fact, Oracle 11g already provides an automated script for this, sshUserSetup.sh:
/g01/11201_install/grid/sshsetup (the directory where the grid package was unzipped)
[grid@rac1 sshsetup]$ ls
sshUserSetup.sh
[oracle@rac1 sshsetup]$ pwd
/u01/11201_install/database/sshsetup (the directory where the database package was unzipped)
[oracle@rac1 sshsetup]$ ls
sshUserSetup.sh
Execute the following commands:
./sshUserSetup.sh -user grid -hosts "rac1 rac2 rac3" -advanced -noPromptPassphrase
./sshUserSetup.sh -user oracle -hosts "rac1 rac2 rac3" -advanced -noPromptPassphrase
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
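Whichever method is used, it helps to verify the equivalence before running cluvfy. In the sketch below the SSH_CMD indirection is my addition, so the loop logic can be exercised without a live cluster; on the real nodes it defaults to batch-mode ssh:

```shell
# Hypothetical check: confirm passwordless SSH works to every node.
# SSH_CMD defaults to a batch-mode ssh; override it for testing.
verify_equiv() {
    _ssh="${SSH_CMD:-ssh -o BatchMode=yes}"
    _fail=0
    for _h in "$@"; do
        if $_ssh "$_h" true 2>/dev/null; then
            echo "$_h ok"
        else
            echo "$_h FAILED"
            _fail=1
        fi
    done
    return "$_fail"
}
# Example, run once as grid and once as oracle on each node:
# verify_equiv rac1 rac2 rac3 rac1-priv rac2-priv rac3-priv
```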
Check from an existing node as the oracle and grid users:
cluvfy stage -pre nodeadd -n rac3 -verbose
Perform the add operation on node 1.
As the grid user, enter the oui/bin directory under the grid ORACLE_HOME and execute:
export IGNORE_PREADDNODE_CHECKS=Y
./addNode.sh -silent "CLUSTER_NEW_NODES={rac3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac3-vip}"
Execute the scripts as the root user on rac3:
/u01/app/oraInventory/orainstRoot.sh
/u01/app/11.2.0/grid/root.sh
If /u01/app/11.2.0/grid/root.sh does not succeed, perform the following deconfiguration, find the cause, and then re-run the root.sh script:
[root@rac3 ~]# /u01/app/11.2.0/grid/crs/install/roothas.pl -deconfig -force
I encountered an error here (root.sh could not find libcap.so.1).
Workaround:
[root@rac3 ~]# cd /lib64
[root@rac3 lib64]# ls -lrt libcap*
libcap-ng.so.0  libcap-ng.so.0.0.0  libcap.so.2  libcap.so.2.16
[root@rac3 lib64]# ls -lrt libcap.so.2
lrwxrwxrwx. 1 root root 14 Dec 21:21 libcap.so.2 -> libcap.so.2.16
[root@rac3 lib64]# ln -s libcap.so.2.16 libcap.so.1
Then re-execute root.sh.
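The symlink workaround above can be wrapped defensively. The directory and file names are parameters here purely for illustration; on the real system it is /lib64 and requires root:

```shell
# Hypothetical wrapper around the ln -s workaround: only create the
# compatibility link when the real library exists and the link does not,
# then print what the link points to.
make_compat_link() {
    _dir="$1" _target="$2" _link="$3"
    if [ -e "$_dir/$_target" ] && [ ! -e "$_dir/$_link" ]; then
        ln -s "$_target" "$_dir/$_link"
    fi
    readlink "$_dir/$_link"
}
# Example: make_compat_link /lib64 libcap.so.2.16 libcap.so.1
```

The existence checks make the fix safe to re-run if root.sh has to be attempted more than once.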
Add the database software:
On node 1, execute the following as the oracle user:
[root@rac1 sshsetup]# su - oracle
[oracle@rac1 ~]$ cd $ORACLE_HOME/oui/bin
./addNode.sh -silent "CLUSTER_NEW_NODES={rac3}"
Then run /u02/app/oracle/product/11.2.0/dbhome_1/root.sh on node rac3.
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts in each cluster node
Add a DB instance:
Run the dbca command on node 1 to add the instance through the graphical interface.
Log in as sys/oracle.
If clicking Next at this step shows a "service name or instance name is not specified" error, run netca as the grid user to create the listener; that resolves the problem.
Confirm the information after adding the node:
1. [grid@rac3 ~]$ crsctl stat res -t
2. [oracle@rac3 ~]$ cd $ORACLE_HOME/OPatch/
   [oracle@rac3 OPatch]$ ./opatch lsinventory
3. [grid@rac3 ~]$ olsnodes -s
   [grid@rac3 ~]$ olsnodes -n
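The olsnodes -s output (one "name status" pair per line, with status Active or Inactive) can also be checked mechanically. The awk filter below is an illustrative addition, not part of the Oracle tooling:

```shell
# Hypothetical filter: read "name status" lines on stdin and succeed
# only when every node reports Active.
all_active() {
    awk '$2 != "Active" { bad = 1 } END { exit bad }'
}
# Example: olsnodes -s | all_active && echo "all nodes active"
```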
User equivalence check:
[oracle@rac3 ~]$ cluvfy comp admprv -o db_config -d /u02/app/oracle/product/11.2.0/dbhome_1 -n rac3 -verbose
Check the integrity of the cluster:
[grid@rac3 ~]$ cluvfy stage -post nodeadd -n rac3 -verbose
11g two node RAC add a third node