"Oracle cluster" 11G RAC detailed tutorial on RAC on Linux using NFS pre-installation Preparation (vi)



RAC Pre-Installation Preparation for NFS on Linux (6)

Overview: This document grew out of the Oracle Basic Operations Manual mentioned in the previous article, which summarized the author's study of Oracle fundamentals over a vacation. The aim was to turn that study into a systematic summary, both as a review and as a convenient reference, and this article continues from it. Before walking through the Oracle RAC installation and usage tutorial, I first lay out the overall idea and structure of the series. Because readers' backgrounds differ, I start with the preparation and planning needed before deploying Oracle RAC. The cluster configuration and installation began under the guidance of Dr. Tang and took roughly two to three months of exploration; many problems were encountered along the way, and they are documented here as well. This article is original/compiled; when reproducing it, please credit the original source: Oracle 11g Release 2 RAC on Linux Using NFS Pre-Installation Preparation (6).


Bai Ningsu July 18, 2015 10:28:41


Introduction

Download software
• Oracle Enterprise Linux 5.7

• Oracle 11g Release 2 (11.2) Grid Infrastructure and Database software

Operating system installation
This article uses Oracle Enterprise Linux 5.7 with a general graphical operating system installation on each server. More specifically, the server should have at least 2 GB of swap (preferably 3-4 GB), and Linux should be installed with the firewall and secure Linux (SELinux) disabled. Oracle recommends a default server installation, but if you perform a custom installation, include the following package groups:

GNOME desktop environment, editor, graphical network, text-based network, development library, development tool, server configuration tool, management tool, base, system tool, X window system
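If the system was installed without these groups, they can be added afterwards with yum. A rough sketch follows; the group names shown are the standard Enterprise Linux 5 names and are assumptions, so confirm them with "yum grouplist" first.

yum grouplist | less                                # confirm the exact group names on your system
yum groupinstall "GNOME Desktop Environment" "Editors" "Development Libraries" "Development Tools"
yum groupinstall "Server Configuration Tools" "Administration Tools" "Base" "System Tools" "X Window System"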

In line with the rest of this article, the following information should be set during installation.

RAC1.

Host name: rac1.localdomain

IP address eth0: 192.168.0.101 (public address)

Default gateway eth0: 192.168.0.1 (public address)

IP address eth1: 192.168.1.101 (private address)

Default gateway eth1: no

RAC2.

Host name: rac2.localdomain

IP address eth0: 192.168.0.102 (public address)

Default gateway eth0: 192.168.0.1 (public address)

IP address eth1: 192.168.1.102 (private address)

Default gateway eth1: no

You are free to change the IP address to suit your network, but remember to keep the adjustments consistent throughout the rest of this article.
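If you need to adjust the addresses after installation, they live in the per-interface configuration files. Below is a minimal sketch for rac1's public interface, assuming a static configuration and a /24 netmask (illustrative values only):

# /etc/sysconfig/network-scripts/ifcfg-eth0 on rac1 (illustrative)
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.0.101
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
ONBOOT=yes

# apply the change and verify
service network restart
ifconfig eth0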

Set the Oracle installation prerequisites automatically [all nodes]
If you plan to use the "oracle-validated" package to perform all prerequisite settings, follow the instructions at http://public-yum.oracle.com to set up OL's yum repository, and then execute the following command.

# yum install oracle-validated
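For reference, the public-yum setup amounts to downloading Oracle's repository definition and enabling the section that matches your release. A sketch follows, assuming the OL5 repo file name used at the time; check http://public-yum.oracle.com for the current instructions.

cd /etc/yum.repos.d
wget http://public-yum.oracle.com/public-yum-el5.repo    # download the OL5 repository definition
vi public-yum-el5.repo                                   # set enabled=1 in the section matching your release (e.g. el5_u7_base)
yum install oracle-validated

If the servers have no Internet access, the same package can instead be installed from the OEL installation media, as shown next.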

mkdir /media/disk                                             # create the mount directory
cd /usr/local/src                                             # check the uploaded OEL image file
mv rhel-server-6.5-x86_64-dvd.iso /usr/local/src/OEL57.iso    # rename the image file
mount -t iso9660 -o loop /usr/local/src/OEL57.iso /media/disk

vim /etc/yum.repos.d/rhel-source.repo
cd /etc/yum.repos.d/
touch rhel-media.repo                                         # create a yum configuration file
vi rhel-media.repo                                            # edit the configuration file and add the following:

[OEL57]
name=Oracle Enterprise Linux 5.7                              # custom name
baseurl=file:///media/disk/Server                             # local CD mount path
enabled=1                                                     # enable this yum source (0 = disabled, 1 = enabled)
gpgcheck=1                                                    # check the GPG key (0 = do not check, 1 = check)

yum install oracle-validated                                  # install the oracle-validated package and check its installation configuration

Note: oracle-validated installs the packages required by CRS and Oracle Database and creates the oracle user.
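To confirm what oracle-validated changed, a few quick checks can be run afterwards (a sketch; the exact values it sets depend on the package version):

id oracle                                  # the oracle user plus oinstall and dba groups should now exist
sysctl kernel.shmmax kernel.sem            # kernel parameters written to /etc/sysctl.conf
grep oracle /etc/security/limits.conf      # shell limits added for the oracle user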

Extra settings
Perform the following steps while logged in to the "ol5-112 rac1" virtual machine as the root user; repeat them on both nodes.

Set the oracle user's password (temporarily set to "oracle"):

passwd oracle

Install the cvuqdisk package from the Oracle grid media (it is owned by the installation group you defined):

cd /media/rpmname          # upload the grid/rpm package directory to /media first
rpm -Uvh cvuqdisk*         # install the rpm
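The cvuqdisk package reads the owning group from the CVUQDISK_GRP environment variable and defaults to oinstall when it is unset, so export it first if you use a different group; a sketch:

export CVUQDISK_GRP=oinstall     # group that should own cvuqdisk; oinstall is the default
rpm -Uvh cvuqdisk*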
If you are not using DNS, the /etc/hosts file must contain the following information.

vi /etc/hosts

127.0.0.1 localhost.localdomain localhost
# Public
192.168.0.101 rac1.localdomain rac1
192.168.0.102 rac2.localdomain rac2
# Private
192.168.1.101 rac1-priv.localdomain rac1-priv
192.168.1.102 rac2-priv.localdomain rac2-priv
# Virtual
192.168.0.103 rac1-vip.localdomain rac1-vip
192.168.0.104 rac2-vip.localdomain rac2-vip
# SCAN
192.168.0.105 scan.localdomain scan
192.168.0.106 scan.localdomain scan
192.168.0.107 scan.localdomain scan
# NAS
192.168.0.108 nas1.localdomain nas1
Caution: the SCAN address should not really be defined in the hosts file. Instead, it should be defined in DNS to round-robin between three addresses on the same subnet as the public IPs. For this installation we compromise and use the hosts file, which may cause problems from 11.2.0.2 onward.
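Once /etc/hosts is in place on both nodes, it is worth checking that the public and private networks both work before going further; a simple verification sketch using the names defined above:

# run from rac1
ping -c 2 rac2            # public network
ping -c 2 rac2-priv       # private interconnect
# run from rac2
ping -c 2 rac1
ping -c 2 rac1-priv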

Disable SELinux by editing the /etc/selinux/config file, making sure the SELINUX flag is set as follows.

SELINUX=disabled
Alternatively, this change can be made using the GUI tool (System > Administration > Security Level and Firewall): click the SELinux tab and set it to Disabled.
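You can check the current SELinux mode and switch it off for the running session without rebooting; the config file change above is still needed so the setting persists. A quick sketch:

getenforce       # prints Enforcing, Permissive, or Disabled
setenforce 0     # switch to permissive mode immediately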

# service iptables stop #Disable firewall
# chkconfig iptables off
Stop and deconfigure the NTP service so that Oracle's Cluster Time Synchronization Service can handle time synchronization:

# service ntpd stop
Shutting down ntpd: [OK]
# chkconfig ntpd off
# mv /etc/ntp.conf /etc/ntp.conf.orig
# rm /var/run/ntpd.pid
Alternatively, keep NTP and configure it to slew the clock (the "-x" option):

vi /etc/sysconfig/ntpd
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
# service ntpd restart
Create the Oracle installation directories

mkdir -p /u01/app/11.2.0/grid
mkdir -p /u01/app/oracle/product/11.2.0/db_1
chown -R oracle:oinstall /u01
chmod -R 775 /u01/
Log in as the oracle user and append the following content to the end of the /home/oracle/.bash_profile file:

# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR

ORACLE_HOSTNAME=rac1.localdomain; export ORACLE_HOSTNAME
ORACLE_UNQNAME=RAC; export ORACLE_UNQNAME
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
GRID_HOME=/u01/app/11.2.0/grid; export GRID_HOME
DB_HOME=$ORACLE_BASE/product/11.2.0/db_1; export DB_HOME
ORACLE_HOME=$DB_HOME; export ORACLE_HOME
ORACLE_SID=RAC1; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
BASE_PATH=/usr/sbin:$PATH; export BASE_PATH
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH

if [ $USER = "oracle" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
fi

alias grid_env='. /home/oracle/grid_env'
alias db_env='. /home/oracle/db_env'
Note: on the rac2 node, modify these two lines:

ORACLE_HOSTNAME=rac2.localdomain; export ORACLE_HOSTNAME
ORACLE_SID=RAC2; export ORACLE_SID
Create a file /home/oracle/grid_env on both nodes and add the following:

ORACLE_HOME=$GRID_HOME; export ORACLE_HOME
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH
Create a file /home/oracle/db_env on both nodes and add the following (on rac2, set ORACLE_SID=RAC2):

# touch /home/oracle/db_env
# vi /home/oracle/db_env
ORACLE_SID=RAC1; export ORACLE_SID
ORACLE_HOME=$DB_HOME; export ORACLE_HOME
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH
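With the oracle profile and the grid_env/db_env files in place on both nodes, the oracle user can switch between the Grid Infrastructure and database environments using the aliases defined earlier; a usage sketch:

su - oracle
grid_env
echo $ORACLE_HOME     # /u01/app/11.2.0/grid
db_env
echo $ORACLE_HOME     # /u01/app/oracle/product/11.2.0/db_1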
Restart the server so the changes take effect:

# shutdown -r now      # restart now (or use "init 0" to power off instead)
Create the shared disks
First, we need to set up some NFS shares.

 

In this case we will do this on the RAC1 node, but you could equally do it on a NAS or a separate server. Create the following directories on the RAC1 node:

mkdir /shared_config
mkdir /shared_grid
mkdir /shared_home
mkdir /shared_data
Add the following lines at the end of the /etc/exports file:

vi /etc/exports
/shared_config *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/shared_grid *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/shared_home *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/shared_data *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
Run the following commands to start the NFS service on boot and export the shares:

chkconfig nfs on
service nfs restart
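Before touching the client side, you can confirm that the shares are actually being exported; a quick verification sketch run on the host holding the shares:

exportfs -v          # list the exported directories and the options in effect
showmount -e rac1    # query the export list as a client would (use the host that exports the shares)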
On both RAC1 and RAC2, create the directories in which the Oracle software will be installed:

mkdir -p /u01/app/11.2.0/grid
mkdir -p /u01/app/oracle/product/11.2.0/db_1
mkdir -p /u01/oradata
mkdir -p /u01/shared_config
chown -R oracle:oinstall /u01/app /u01/app/oracle /u01/oradata /u01/shared_config
chmod -R 775 /u01/app /u01/app/oracle /u01/oradata /u01/shared_config
Add the following lines to the /etc/fstab file:

# vi /etc/fstab
nas1:/shared_config /u01/shared_config nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0
nas1:/shared_grid /u01/app/11.2.0/grid nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0
nas1:/shared_home /u01/app/oracle/product/11.2.0/db_1 nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0
nas1:/shared_data /u01/oradata nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0
Mount the NFS shares on both nodes:

mount /u01/shared_config
mount /u01/app/11.2.0/grid
mount /u01/app/oracle/product/11.2.0/db_1
mount /u01/oradata
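Check that all four shares are mounted and carry the intended options; a quick verification sketch:

df -h /u01/shared_config /u01/oradata    # confirm the NFS file systems are mounted
mount | grep nas1                        # show the mount options actually in effect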
Make sure the permissions on the shared directories are correct by granting ownership to the oracle user:

chown -R oracle:oinstall /u01/shared_config
chown -R oracle:oinstall /u01/app/11.2.0/grid
chown -R oracle:oinstall /u01/app/oracle/product/11.2.0/db_1
chown -R oracle:oinstall /u01/oradata
Test: create a directory named test under /u01/oradata on RAC1, check that it appears under /u01/oradata on RAC2, then delete it on RAC2 and verify that the deletion is visible on RAC1.

RAC1# cd /u01/oradata
RAC1# mkdir test
RAC1# ls

RAC2# cd /u01/oradata
RAC2# ls
RAC2# rm -rf test
References
Oracle's three highly available cluster solutions
Introduction to Cluster Concepts: Parker Education Oracle Advanced Courses-Theoretical Textbook
Oracle 11 RAC Survival Guide
Oracle 11gR2 RAC Management and Performance Optimization
Oracle Database 11g Release 2 RAC On Linux Using NFS
Best practices for installing Oracle Database 11g Release 2 RAC on Oracle Linux 5.7 using VirtualBox
Oracle RAC installation and configuration-NFS (1)
Detailed explanation of tnsnames.ora listening configuration file (blog park)
Article navigation
Introduction to Cluster Concepts (1)
Oracle Cluster Concepts and Principles (2)
RAC Working Principles and Related Components (3)
Cache Fusion Technology (4)
RAC Special Problems and Practical Experience (5)
Oracle 11g Release 2 RAC on Linux Using NFS: Pre-Installation Preparation (6)
Database 11g RAC Cluster Installation on Oracle Enterprise Linux 5.7 (7)
Database 11g RAC Database Installation on Oracle Enterprise Linux 5.7 (8)
Basic Testing and Use of Database 11g RAC on Oracle Enterprise Linux 5.7 (9)
Note: This article is original/compiled; when reproducing it, please credit the original source. (The following articles cover Oracle RAC cluster installation, database installation, and testing in a real environment, which is the key content of this series.)

