Here we will introduce the installation of Oracle RAC on Linux. Oracle Real Application Clusters (Oracle RAC for short) is Oracle's parallel cluster technology: Oracle instances on different servers access the same Oracle database at the same time, and the nodes communicate with each other over a private network. All control files, online redo logs, and data files are stored on shared storage, which every node in the cluster can read and write simultaneously.
System Configuration
1. Create the groups and the user
Create the oinstall and dba groups, then create the oracle user:
- #groupadd oinstall
- #groupadd dba
- #useradd oracle -g oinstall -G dba
Anonymous user: confirm that the anonymous user nobody exists on the system. After the installation is complete, the nobody user is needed to run some external jobs (extjob). Check whether the nobody user exists:
- #id nobody
If the user does not exist, add it.
Note: set a password for the oracle user.
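The steps above can be sketched as a script (run as root on every node; group and user IDs are left to the system defaults):

```shell
#!/bin/sh
# Create the Oracle inventory and DBA groups, then the oracle user.
groupadd oinstall
groupadd dba
useradd oracle -g oinstall -G dba

# Verify the nobody user exists; create it if missing.
id nobody >/dev/null 2>&1 || useradd nobody

# Set a password for the oracle user interactively.
passwd oracle
```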
2. Configure the network
Plan the IP addresses by modifying /etc/sysconfig/network-scripts/ifcfg-ethX (static IP address of each NIC).
Permanently modify the host name in /etc/sysconfig/network.
Local name resolution: /etc/hosts
Configure the /etc/host.conf file to specify the order of the name-resolution methods:
- order hosts,bind
This means names are resolved first through the /etc/hosts file; if the file contains no mapping between the host name and an IP address, resolution falls back to the DNS server (bind).
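For a two-node RAC, /etc/hosts typically carries public, private-interconnect, and virtual (VIP) names for each node. A sketch, assuming the host names rac1/rac2 and the public IPs used later in this document; the private and VIP addresses here are purely illustrative:

```
127.0.0.1        localhost.localdomain localhost
# Public addresses
192.168.116.121  rac1
192.168.116.129  rac2
# Private interconnect (illustrative subnet)
10.0.0.1         rac1-priv
10.0.0.2         rac2-priv
# Virtual IPs (illustrative)
192.168.116.131  rac1-vip
192.168.116.139  rac2-vip
```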
3. Configure the secure SSH channel. As the oracle user, create RSA and DSA keys on each node.
- rac1#su - oracle
- rac1#mkdir .ssh
- rac1#chmod 700 .ssh
- rac1#cd .ssh
- rac1#ssh-keygen -t rsa    (create the RSA key pair)
Perform the same operations on the other host:
- rac2#su - oracle
- rac2#mkdir .ssh
- rac2#chmod 700 .ssh
- rac2#cd .ssh
- rac2#ssh-keygen -t rsa    (create the RSA key pair)
Append the public key of node 1 to a file. Because no trust relationship exists yet, you are prompted for a password; just enter it:
- rac1#ssh rac1 cat /home/oracle/.ssh/id_rsa.pub > authorized_keys
Append the public key of node 2 to the same file (note the >> so the first key is not overwritten):
- rac1#ssh rac2 cat /home/oracle/.ssh/id_rsa.pub >> authorized_keys
View the combined public keys of node 1 and node 2, then copy the file to node 2:
- rac1#cat authorized_keys
- rac1#scp authorized_keys rac2:/home/oracle/.ssh/    (enter the password of node 2)
Change the permissions:
- rac1#chmod 600 authorized_keys
Set up the DSA keys in the same way (ssh-keygen -t dsa).
Run the date command remotely to test the key-based connection; no password should be required:
- rac1#ssh rac1 date
- rac1#ssh rac2 date
At this point the secure channel is configured successfully.
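Once both directions work, the trust can be re-checked at any time in batch mode, where ssh fails instead of prompting for a password. A sketch assuming the node names rac1 and rac2:

```shell
#!/bin/sh
# Verify passwordless SSH between all RAC node pairs.
# BatchMode=yes makes ssh fail rather than ask for a password.
for src in rac1 rac2; do
  for dst in rac1 rac2; do
    if ssh -o BatchMode=yes "$src" "ssh -o BatchMode=yes $dst date" >/dev/null 2>&1; then
      echo "OK:   $src -> $dst"
    else
      echo "FAIL: $src -> $dst"
    fi
  done
done
```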
4. Check the required software
- #rpm -qa | grep <required package name>
5. Configure Kernel Parameters
- vi /etc/sysctl.conf
- kernel.sem=250 32000 100 128
- kernel.shmmni=4096
- kernel.shmall=2097152
- kernel.shmmax=2147483648
- net.ipv4.ip_local_port_range=1024 65000
- net.core.rmem_default=1048576
- net.core.rmem_max=1048576
- net.core.wmem_default=262144
- net.core.wmem_max=262144
Then run: #sysctl -p
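The shmmax value above (2147483648, i.e. 2 GB) is only an example; a common guideline from the Oracle installation documentation is to set kernel.shmmax to half of physical RAM. A quick sketch to compute that value on the current host:

```shell
#!/bin/sh
# Compute a suggested kernel.shmmax as half of physical memory.
# MemTotal in /proc/meminfo is reported in kB.
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
shmmax=$((mem_kb * 1024 / 2))
echo "Suggested kernel.shmmax = $shmmax"
```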
6. Set shell limits for the oracle user
Perform the same operation on each node: edit /etc/security/limits.conf and add the following content:
- oracle soft nproc 2047
- oracle hard nproc 16384
- oracle soft nofile 1024
- oracle hard nofile 65536
Edit /etc/pam.d/login and add the following line:
- session required /lib/security/pam_limits.so
Edit /etc/profile and add the following content:
- if [ $USER = "oracle" ]; then
-     if [ $SHELL = "/bin/ksh" ]; then
-         ulimit -p 16384
-         ulimit -n 65536
-     else
-         ulimit -u 16384 -n 65536
-     fi
- fi
Disk configuration
The overall flow is: configure the shared disks, install CRS (Clusterware), and then install the RAC database itself.
Storage options:

| Item              | Storage system              | Storage location           |
|-------------------|-----------------------------|----------------------------|
| Clusterware       | Local (ext3) or NFS         | Local or NFS               |
| Voting disk       | OCFS2, raw device, NFS      | Shared disk or NFS         |
| OCR               | OCFS2, raw device, NFS      | Shared disk or NFS         |
| Database software | OCFS2, local, NFS           | Local or shared disk (NFS) |
| Database          | OCFS2, ASM, raw device, NFS | Shared disk or NFS         |
| Recovery files    | OCFS2, ASM, NFS             | Shared disk or NFS         |
Storage mechanisms and what each can hold:

| Storage mechanism | Clusterware | Database | Recovery files |
|-------------------|-------------|----------|----------------|
| ASM               | No          | Yes      | Yes            |
| OCFS2             | Yes         | Yes      | Yes            |
| Raw device        | Yes         | Yes      | No             |
| NFS               | Yes         | Yes      | Yes            |
OCFS2 installation: download the following three packages on both nodes. The kernel-module package must match the kernel version reported by uname -a:
- ocfs2-2.6.9-22.ELsmp-1.2.3-1.i686.rpm    (kernel module; must match the running kernel)
- ocfs2console-1.2.1-1.i386.rpm
- ocfs2-tools-1.2.1-1.i386.rpm
Installation order: tools, then the kernel module, then the console.
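The installation order above, as a sketch (file names as downloaded; rpm -Uvh installs or upgrades each package):

```shell
#!/bin/sh
# Install the OCFS2 packages in dependency order:
# tools first, then the kernel module, then the console.
rpm -Uvh ocfs2-tools-1.2.1-1.i386.rpm
rpm -Uvh ocfs2-2.6.9-22.ELsmp-1.2.3-1.i686.rpm
rpm -Uvh ocfs2console-1.2.1-1.i386.rpm
```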
Disk processing
- #fdisk -l    (view the disk partitions)
- #fdisk /dev/sdb    (create the partitions)
- #export DISPLAY=<local machine IP>:0.0
- #ocfs2console    (open the OCFS2 console for OCFS2 configuration)
Format the partitions from the Tasks menu.
Preparations before installing Oracle
- #mkdir -p /orac/orahome
- #mkdir -p /orac/oradata
- #mount -t ocfs2 /dev/sdb1 /orac/orahome
- #df -h    (view the mount status)
- #mount -t ocfs2 -o datavolume,nointr /dev/sdb2 /orac/oradata
- #df -h
- #mounted.ocfs2 -f    (check the mount status of the OCFS2 file systems)
On the other node, rac2:
- #/etc/init.d/o2cb load    (load the module)
- #/etc/init.d/o2cb status    (view the status of the loaded module)
The OCFS2 cluster status on node 2 is offline.
- #/etc/init.d/o2cb online    (bring it online)
- #/etc/init.d/o2cb status    (view the status again)
- #mount -t ocfs2 /dev/sdb1 /orac/orahome
- #df -h
- #mounted.ocfs2 -f
- #mount -t ocfs2 -o datavolume,nointr /dev/sdb2 /orac/oradata
- #mounted.ocfs2 -f
To load the OCFS2 modules and mount the file systems automatically at boot, configure both nodes as follows:
- #/etc/init.d/o2cb configure    (load the OCFS2 modules automatically at startup)
- #vi /etc/fstab    (mount the file systems automatically)
- /dev/sdb1 /orac/orahome ocfs2 _netdev 0 0
- /dev/sdb2 /orac/oradata ocfs2 _netdev,datavolume,nointr 0 0
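With the fstab entries in place, the mounts can be validated without a reboot; a minimal sketch:

```shell
#!/bin/sh
# Mount any OCFS2 entries from /etc/fstab that are not yet mounted,
# then confirm both OCFS2 file systems are present.
mount -a -t ocfs2
mount | grep ocfs2
```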
When installing Clusterware, use the crs folder; the oradata folder holds the database files, and orahome holds the Oracle database software.
Change the owners of these folders:
- #cd /orac
- #chown root.oinstall crs
- #chown oracle.oinstall orahome
- #chown oracle.oinstall oradata
- #chmod -R 775 crs
- #chmod -R 775 orahome
- #chmod -R 775 oradata
- #ls -l
Do the same on the other node.
Copy the Clusterware installation software, then launch the installer as the oracle user:
- #su - oracle
- #export DISPLAY=<local IP>:0.0
- #./runInstaller
Install the database software, selecting the Enterprise Edition installation type.
Create a database: #dbca
Test the database: modify the client connection file tnsnames.ora under NETWORK/ADMIN in the client installation path:
- ORATEST =    # the TNS service name
- (DESCRIPTION =
- (ADDRESS_LIST =
- (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.116.121)(PORT = 1521))    # node 1 IP
- (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.116.129)(PORT = 1521))    # node 2 IP
- )
- (CONNECT_DATA =
- (SERVICE_NAME = oratest.sinobest.com)    # the global database name
- )
- )
-
- EXTPROC_CONNECTION_DATA =
- (DESCRIPTION =
- (ADDRESS_LIST =
- (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
- )
- (CONNECT_DATA =
- (SID = PLSExtProc)
- (PRESENTATION = RO)
- )
- )
Test with the SQL*Plus client: open a command prompt in Windows, or open Oracle SQL*Plus directly.
- > sqlplus /nolog
- SQL> conn sys/123456@<tns name> as sysdba
- SQL> select * from v$instance;    (view the current instance)
- SQL> set wrap off    (set the display mode)
- SQL> set linesize 200
- SQL> select * from v$instance;    (view the current instance)
- SQL> select * from gv$instance;    (view all instances in the cluster through the global view)
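The interactive test above can also be scripted; a sketch that lists every instance in the cluster non-interactively (the credentials and TNS name are the example values from this document; replace them with your own):

```shell
#!/bin/sh
# Non-interactive RAC sanity check: list every instance in the cluster.
# -s runs sqlplus silently; the heredoc feeds it the commands.
sqlplus -s /nolog <<'EOF'
conn sys/123456@oratest as sysdba
set linesize 200
select inst_id, instance_name, host_name, status from gv$instance;
exit
EOF
```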