Install RAC 11.2.0.3 on RHEL 5.6

Source: Internet
Author: User

 

Operating System and storage environment

Linux version:
[root@rac1 ~]# lsb_release -a
LSB Version:    core-4.0-amd64:core-4.0-ia32:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-ia32:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-ia32:printing-4.0-noarch
Distributor ID: RedHatEnterpriseServer
Description:    Red Hat Enterprise Linux Server release 5.6 (Tikanga)
Release:        5.6
Codename:       Tikanga

Memory Condition

[root@rac1 server]# free
             total       used       free     shared    buffers     cached
Mem:       4043728     714236    3329492          0      38784     431684
-/+ buffers/cache:     243768    3799960
Swap:     33551744          0   33551744

Storage

Several LUNs are carved out of Openfiler: three 1 GB partitions for OCR and voting disks, three partitions for datafile storage, and three 60 GB partitions for the flash recovery area.

Next, we will install a two-node RAC. The 11.2.0.3 patch set can be used directly as the installation media.

 

Check whether the required package is installed in the system.

 

rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n' binutils \
        compat-libstdc++-33 \
        elfutils-libelf \
        elfutils-libelf-devel \
        gcc \
        gcc-c++ \
        glibc \
        glibc-common \
        glibc-devel \
        glibc-headers \
        ksh \
        libaio \
        libaio-devel \
        libgcc \
        libstdc++ \
        libstdc++-devel \
        make \
        sysstat \
        unixODBC \
        unixODBC-devel

 

Create users and groups

 

Create user group

 

groupadd -g 1000 oinstall
groupadd -g 1020 asmadmin
groupadd -g 1021 asmdba
groupadd -g 1031 dba
groupadd -g 1022 asmoper

 

Create user

 

useradd -u 1100 -g oinstall -G asmadmin,asmdba,dba grid

useradd -u 1101 -g oinstall -G dba,asmdba oracle

 

passwd oracle

passwd grid

 

Grid user environment variables

 

if [ -t 0 ]; then
   stty intr ^C
fi
export ORACLE_BASE=/opt/app/oracle
export ORACLE_HOME=/opt/app/11.2.0/grid
export ORACLE_SID=+ASM1
export PATH=$ORACLE_HOME/bin:$PATH
umask 022

 

Oracle user environment variables

 

if [ -t 0 ]; then
   stty intr ^C
fi
export ORACLE_BASE=/opt/app/oracle
export ORACLE_HOME=/opt/app/oracle/product/11.2.0/db_1
export ORACLE_SID=oradb_1
export PATH=$ORACLE_HOME/bin:$PATH
umask 022

 

Root User Environment Variable

 

export PATH=/opt/app/11.2.0/grid/bin:/opt/app/oracle/product/11.2.0/db_1/bin:$PATH

 

Configure the network

 

Modify the /etc/hosts file

 

# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
# Public network - (eth0, eth1 --- bond0)
192.168.106.241 rac1 rac1.wildwave.com
192.168.106.242 rac2 rac2.wildwave.com
# Private interconnect - (eth2, eth3 --- bond1)
10.10.241 rac1-priv
10.10.242 rac2-priv

# Public virtual IP (VIP) addresses - (eth0, eth1 --- bond0)
192.168.106.243 rac1-vip rac1-vip.wildwave.com
192.168.106.244 rac2-vip rac2-vip.wildwave.com

 

Configure DNS

 

Here a DNS server is configured on each of the two nodes, so that each node can use its local DNS server to resolve the SCAN and VIP names. Only the configuration on one node is shown below; on rac2, you only need to change a few entries in the two zone files used for forward and reverse resolution.

 

First, modify the /var/named/chroot/etc/named.conf file.

 

options {
        listen-on port 53 { any; };
        listen-on-v6 port 53 { ::1; };
        directory "/var/named";
        dump-file "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        memstatistics-file "/var/named/data/named_mem_stats.txt";

        // These options should be used carefully because they disable port
        // randomization
        // query-source port 53;
        // query-source-v6 port 53;

        allow-query { 192.168.106.0/24; };
};
logging {
        channel default_debug {
                file "data/named.run";
                severity dynamic;
        };
};
view localhost_resolver {
        match-clients { 192.168.106.0/24; };
        match-destinations { any; };
        recursion yes;
        include "/etc/named.rfc1912.zones";
};

controls {
        inet 127.0.0.1 allow { localhost; } keys { "rndckey"; };
};
include "/etc/rndc.key";

 

Modify the /var/named/chroot/etc/named.rfc1912.zones file

 

Zone "wildwave.com" in {
Type master;
File "wildwave. Zone ";
Allow-update {none ;};
};

Zone "106.168.192.in-ADDR. Arpa" in {
Type master;
File "named. wildwave ";
Allow-update {none ;};
};

Since the database server does not need to access the Internet, the root hints file named.ca is not configured here.

Configure forward resolution in the /var/named/chroot/var/named/wildwave.zone file

 

$TTL 86400
@ IN SOA rac1.wildwave.com. root.wildwave.com. (
        2010022101 ; serial
        3H         ; refresh
        15M        ; retry
        1W         ; expiry
        1D )       ; minimum

@        IN NS rac1.wildwave.com.
rac1     IN A 192.168.106.241
rac2     IN A 192.168.106.242
rac1-vip IN A 192.168.106.243
rac2-vip IN A 192.168.106.244
rac-scan IN A 192.168.106.245
rac-scan IN A 192.168.106.246
rac-scan IN A 192.168.106.247

 

Configure reverse resolution in the /var/named/chroot/var/named/named.wildwave file

 

$TTL 86400
@ IN SOA rac1.wildwave.com. root.wildwave.com. (
        2010022101 ; serial
        28800      ; refresh
        14400      ; retry
        3600000    ; expire
        86400 )    ; minimum
@   IN NS  rac1.wildwave.com.
241 IN PTR rac1.wildwave.com.
242 IN PTR rac2.wildwave.com.
243 IN PTR rac1-vip.wildwave.com.
244 IN PTR rac2-vip.wildwave.com.
245 IN PTR rac-scan.wildwave.com.
246 IN PTR rac-scan.wildwave.com.
247 IN PTR rac-scan.wildwave.com.

 

Then restart the named service and configure it to start automatically at boot. The server-side configuration is complete:

chkconfig named on
service named restart


Next, configure the DNS Client.

We need to configure the DNS server address in /etc/resolv.conf:

nameserver 192.168.106.241

Note that this address differs between the two nodes: each node is configured with its own address, so each uses the DNS server running on the local host.

 

Then modify /etc/nsswitch.conf, changing

hosts: files dns

to:

hosts: dns files

That is, DNS is consulted before the /etc/hosts file.
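As a quick sanity check (not part of the original article), getent follows the hosts ordering in /etc/nsswitch.conf, so a lookup through it exercises the same resolution path the cluster software will use:

```shell
# getent resolves names via the "hosts:" line of /etc/nsswitch.conf.
# After switching to "hosts: dns files" a query goes to DNS first and
# falls back to /etc/hosts; localhost (resolvable on any machine) is
# used here just to prove the lookup path works.
result="$(getent hosts localhost)"
echo "$result"
```

On the RAC nodes you would query the SCAN and VIP names instead to confirm they come back from the local DNS server.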

 

Test whether DNS is correctly configured

 

[root@rac2 ~]# nslookup 192.168.106.243
Server:         192.168.106.242
Address:        192.168.106.242#53

243.106.168.192.in-addr.arpa    name = rac1-vip.wildwave.com.

[root@rac2 ~]# nslookup rac1-vip.wildwave.com
Server:         192.168.106.242
Address:        192.168.106.242#53

Name:   rac1-vip.wildwave.com
Address: 192.168.106.243

 

Time Synchronization

 

Clusterware's CTSS will be used for time synchronization, so disable NTP:

/etc/init.d/ntpd stop
chkconfig ntpd off
mv /etc/ntp.conf /etc/ntp.conf.org


Kernel Parameter Configuration

 

Add the following to /etc/sysctl.conf and apply it with sysctl -p:

kernel.shmmax = 4294967295
kernel.shmall = 2097152
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576

 

The shmmax parameter should be set according to the actual memory size.
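As a sketch of that sizing rule (the helper name and the half-of-RAM rule of thumb are assumptions, not from the article):

```shell
#!/bin/sh
# suggest_shmmax is a hypothetical helper: given MemTotal in kB (the
# /proc/meminfo figure), it prints half of physical RAM in bytes, a
# common starting point for kernel.shmmax.
suggest_shmmax() {
    mem_kb=$1
    echo $(( mem_kb / 2 * 1024 ))
}

# MemTotal from the "free" output earlier (4043728 kB on this box):
suggest_shmmax 4043728

# On a live system you would feed it the real value:
# suggest_shmmax "$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)"
```

For the 4 GB machine above this suggests roughly 2 GB, well under the 4294967295 used in the sysctl block.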

 

Modify resource limits

 

Add the following content to /etc/security/limits.conf:

grid   soft nproc  2047
grid   hard nproc  16384
grid   soft nofile 1024
grid   hard nofile 65536
oracle soft nproc  2047
oracle hard nproc  16384
oracle soft nofile 1024
oracle hard nofile 65536

 

Modify /etc/pam.d/login, adding:

session required pam_limits.so

 

Modify /etc/profile:

if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
        if [ $SHELL = "/bin/ksh" ]; then
                ulimit -p 16384
                ulimit -n 65536
        else
                ulimit -u 16384 -n 65536
        fi
        umask 022
fi

 

Create related directories

 

mkdir -p /opt/app/oraInventory
chown -R grid:oinstall /opt/app/oraInventory
chmod -R 775 /opt/app/oraInventory

mkdir -p /opt/app/11.2.0/grid
chown -R grid:oinstall /opt/app/11.2.0/grid
chmod -R 775 /opt/app/11.2.0/grid

mkdir -p /opt/app/oracle
mkdir /opt/app/oracle/cfgtoollogs
chown -R oracle:oinstall /opt/app/oracle
chmod -R 775 /opt/app/oracle

mkdir -p /opt/app/oracle/product/11.2.0/db_1
chown -R oracle:oinstall /opt/app/oracle/product/11.2.0/db_1
chmod -R 775 /opt/app/oracle/product/11.2.0/db_1


Install and configure asmlib

 

Install the packages, selecting the versions that match your environment (official download page: http://www.oracle.com/technetwork/server-storage/linux/downloads/rhel5-084877.html):

rpm -ivh oracleasm-support-2.1.7-1.el5.x86_64.rpm \
        oracleasmlib-2.0.4-1.el5.x86_64.rpm \
        oracleasm-2.6.18-238.el5-2.0.5-1.el5.x86_64.rpm

Configuration

/etc/init.d/oracleasm configure

Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [  OK  ]
Scanning the system for Oracle ASMLib disks: [  OK  ]

Add ASM Disk

The disks to create here also depend on the actual storage layout:

/usr/sbin/oracleasm createdisk disk1 /dev/sdc1
/usr/sbin/oracleasm createdisk disk2 /dev/sdd1
/usr/sbin/oracleasm createdisk disk3 /dev/sde1
/usr/sbin/oracleasm createdisk disk4 /dev/sdf1
/usr/sbin/oracleasm createdisk disk5 /dev/sdf2
/usr/sbin/oracleasm createdisk disk6 /dev/sdf3
/usr/sbin/oracleasm createdisk disk7 /dev/sdg1
/usr/sbin/oracleasm createdisk disk8 /dev/sdg2
/usr/sbin/oracleasm createdisk disk9 /dev/sdg3

View the created ASM Disk

/usr/sbin/oracleasm scandisks
/usr/sbin/oracleasm listdisks

 

Install the cvuqdisk package

 

Install this package from the rpm directory under the grid installation files:

[root@rac rpm]# rpm -ivh cvuqdisk-1.0.9-1.rpm
Preparing...                ########################################### [100%]
Using default group oinstall to install package
   1:cvuqdisk               ########################################### [100%]

If the group that should own cvuqdisk is not oinstall, set the environment variable CVUQDISK_GRP to that group before installing.
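For illustration (the group name "dba" here is hypothetical, not from the article), the variable is simply exported before running rpm:

```shell
# Assume the disks should be owned by group "dba" instead of oinstall;
# export CVUQDISK_GRP so the rpm post-install script picks it up.
export CVUQDISK_GRP=dba

# Then install as before (commented out here; run it on the node):
# rpm -ivh cvuqdisk-1.0.9-1.rpm
```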

 

Configure SSH for grid users between two nodes

 

Switch to the grid user and run the following commands on both nodes:

ssh-keygen -t dsa
ssh-keygen -t rsa

Then, on rac1, run:

[grid@rac1 ~]$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[grid@rac1 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[grid@rac1 ~]$ ssh rac2 "cat ~/.ssh/id_dsa.pub" >> ~/.ssh/authorized_keys
[grid@rac1 ~]$ ssh rac2 "cat ~/.ssh/id_rsa.pub" >> ~/.ssh/authorized_keys
[grid@rac1 ~]$ scp ~/.ssh/authorized_keys rac2:~/.ssh/

Then ssh between rac1 and rac2 in both directions, so that each host's key is added to known_hosts on both nodes.
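One way to drive that round of connections (a sketch; the host list is assumed from the /etc/hosts entries above) is to loop over every name once and accept each host-key prompt:

```shell
# Print the ssh commands to run once from each node as grid; executing
# each one interactively (answering "yes" at the host-key prompt)
# populates ~/.ssh/known_hosts on that node.
hosts="rac1 rac2 rac1.wildwave.com rac2.wildwave.com"
for host in $hosts; do
    echo "ssh $host date"
done
```

The installer's equivalence check fails if any prompt is left unanswered, so each command should return the remote date with no interaction on a second run.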

 

Check prerequisites

 

(For detailed usage, see the Oracle Clusterware Administration and Deployment Guide 11g Release 2 (11.2).)

Log in as the grid user; the runcluvfy.sh script is in the root directory of the grid installation files.

Check the hardware and network status:

[grid@rac2 grid]$ ./runcluvfy.sh stage -post hwos -n rac1,rac2 -verbose

Check the prerequisites before CRS installation:

[grid@rac2 grid]$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose

If a check fails, follow the prompts in the failed section to resolve the problem.

 

Install Grid Infrastructure

Log in as the grid user and run runInstaller. The SCAN name you enter must match the one configured in the DNS server; otherwise the information for all nodes cannot be added.

Set the NIC that serves external traffic to Public, the interconnect NIC to Private, and the rest to Do Not Use.

Choose ASM as the storage option, as Oracle recommends.

Configure the disk group used for OCR and voting. Here we select Normal redundancy, which means selecting three ASM disks, two of which serve as failure groups.

ASM passwords: 11g enforces requirements on password length, mixing characters and digits, and case. Since this is just a test, the warning can be ignored.

There is no IPMI-capable device here, so select "Do not use...".

After the verification passes, click Install to start the installation.

Finally, run the prompted scripts as the root user on each of the two nodes in turn.

Grid Infrastructure is now installed successfully.

View the current status of the cluster:

[grid@rac1 grid]$ crs_stat -t

Except for GSD (which is deprecated in 11.2 and offline by default), all resources should be in the ONLINE state.

Install the RDBMS

Log in as the oracle user. First configure user equivalence: just as for the grid user above, set up SSH for the oracle user between the two nodes. Then check the prerequisites:

[oracle@rac1 bin]$ /opt/app/11.2.0/grid/bin/cluvfy stage -pre dbinst -n rac1,rac2 -verbose

Once the checks pass, start the installation:

[oracle@rac1 database]$ ./runInstaller

If you want to try RAC One Node, select the last option. After the installation checks pass, click Install to start, and run the script prompted for as the root user on both nodes in turn.

Once the database software is installed, run asmca as the grid user to create two disk groups for storing data. Then run dbca as the oracle user to create the database. A new feature of 11g, the policy-managed database, is available here; the rest of the configuration is similar to 10g.

After the database is created, use crs_stat to view the cluster status again; you will see the two new disk groups and the resources corresponding to the database. Finally, go into sqlplus and verify that the instances are open:

SQL> select inst_id, instance_name, status from gv$instance;

   INST_ID INSTANCE_NAME    STATUS
---------- ---------------- ------------
         1 oradb_1          OPEN
         2 oradb_2          OPEN

The RAC installation is now complete.
