Simulate an Oracle 10g RAC cluster on a single Linux machine (1)

Source: Internet
Author: User

I. Introduction

One of the most effective ways to become familiar with Oracle Real Application Clusters (RAC) 10g technology is to get access to an actual Oracle RAC 10g cluster. There is no better way to understand its benefits, including fault tolerance, security, load balancing, and scalability, than to experience them directly.

The core of Oracle RAC is the shared disk subsystem. All nodes in the cluster must be able to access all of the data, redo log files, control files and parameter files of every node. The data disks must be globally available so that all nodes can access the database. Each node has its own redo log files and control files, but the other nodes must be able to access them in order to recover that node after a system failure.

I hope this article will provide a useful reference for readers who have only one PC and no real multi-machine environment.

II. Practical environment of this article

1. Main PC configuration:

Celeron(R) CPU, 1.80 GHz
Maxtor 6E040L0 ATA disk drive, 40 GB
One Realtek 8139C NIC

Memory: DDR333 512 MB * 2
Display card: ATI Radeon 9200 SE
Operating system: White Box Enterprise Linux 3 (this article also applies to Red Hat Enterprise Linux; the difference is that WBEL3 is currently free for commercial use)
One remote terminal running Windows XP with remote X server software

2. Server partitioning solution

Oracle database files:
  RAC node name   Instance name   Database name   $ORACLE_BASE    File system
  dbrac           orcl1           orcl            /home/oracle    ASM

Oracle CRS shared files:
  File type                  File name            Partition    Mount point   File system
  Oracle Cluster Registry    /u01/orcl/orcfile    /dev/hda8    /u01          OCFS
  CRS voting disk            /u01/orcl/cssfile    /dev/hda8    /u01          OCFS

3. Software involved
1) Oracle 10g database software
ship.db.lnx32.cpio.gz
2) Oracle 10g Cluster Ready Services (CRS) software
ship.crs.lnx32.cpio.gz
3) OCFS file system support
ocfs-2.4.21-EL-1.0.14-1.i686.rpm
ocfs-support-1.0.10-1.i386.rpm
ocfs-tools-1.0.10-1.i386.rpm
4) ASMLib driver
oracleasm-2.4.21-EL-1.0.3-1.i686.rpm
oracleasm-support-1.0.3-1.i386.rpm
oracleasmlib-1.0.0-1.i386.rpm
The preceding software packages can be downloaded from the official Oracle website.
WBEL Linux: http://www.whiteboxlinux.org/download.html

III. Basic operations

1. Install Linux

Note the following points during installation:

1) Disk partitioning: the swap partition should be twice the memory size; here it is 2048 MB. Create a root partition /, a var partition /var, a usr partition /usr, a home partition /home and a temporary file partition /tmp. Note: do not allocate all of the hard disk space to the operating system; leave roughly half of it free for the Oracle cluster disks that will be created later. The layout used in this article is shown below.

2) File system layout (file system, capacity, mount point):
/dev/hda1   1012 MB   /
/dev/hda2   7.7 GB    /home
/dev/hda7   1012 MB   /tmp
/dev/hda3   5.8 GB    /usr
/dev/hda5   2.0 GB    /var

3) Component selection: select Development Tools and the X Window System. To save space, do not select other unneeded components.

4) Firewall: it is best not to enable the firewall.

5) Network settings (eth0):
Deselect the [Configure using DHCP] option
Select [Activate on boot]
IP address: 192.168.22.44
Netmask: 255.255.255.0
6) Host name: dbrac
2. Check the required RPM packages after installation
The following packages (or later versions) must be installed; a quick way to verify them is shown after the list:
make-3.79.1
gcc-3.2.3-34
glibc-2.3.2-95.20
glibc-devel-2.3.2-95.20
glibc-headers-2.3.2-95.20
glibc-kernheaders-2.4-8.34
cpp-3.2.3-34
compat-db-4.0.14-5
compat-gcc-7.3-2.96.128
compat-gcc-c++-7.3-2.96.128
compat-libstdc++-7.3-2.96.128
compat-libstdc++-devel-7.3-2.96.128
openmotif-2.2.2-16
setarch-1.3-1
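A quick way to confirm these packages are present is to query them with rpm (a simple check sketched here, not an official verification script; the package names come from the list above):
# rpm -q make gcc glibc glibc-devel glibc-headers glibc-kernheaders cpp compat-db openmotif setarch
Any package reported as "is not installed" can be added from the installation CDs with rpm -ivh.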

IV. Settings

1. Edit /etc/hosts

 
 
vi /etc/hosts
127.0.0.1        localhost.localdomain localhost
192.168.22.44    dbrac int-dbrac
192.168.22.244   vip-dbrac

Make sure that the RAC node name does not appear on the loopback address line.
This setting is important and cannot be skipped; follow it exactly, although you can choose the IP addresses and host aliases as needed.
Oracle 10g RAC uses virtual IP (VIP) technology, a high-availability mechanism that allows seamless failover between machines. In a single-machine simulation it is only a formality, but it still has to be configured for the installation to succeed.
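As an extra sanity check (a small sketch, assuming the /etc/hosts entries above), confirm that the node name resolves to the public address and not to 127.0.0.1:
# getent hosts dbrac
192.168.22.44   dbrac int-dbrac
# getent hosts vip-dbrac
192.168.22.244  vip-dbrac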
2. Adjust kernel network parameters
Edit /etc/sysctl.conf and add the following settings:
vi /etc/sysctl.conf
# Default setting in bytes of the socket receive buffer
net.core.rmem_default = 262144
# Default setting in bytes of the socket send buffer
net.core.wmem_default = 262144
# Maximum socket receive buffer size which may be set by using
# the SO_RCVBUF socket option
net.core.rmem_max = 262144
# Maximum socket send buffer size which may be set by using
# the SO_SNDBUF socket option
net.core.wmem_max = 262144
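These settings take effect at the next reboot; to apply them immediately, reload the file and check one of the values (a minimal check, assuming the lines above are in /etc/sysctl.conf):
# sysctl -p
# sysctl net.core.rmem_default
net.core.rmem_default = 262144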
3. Add module options
Add the following line to /etc/modules.conf:
options sbp2 sbp2_exclusive_login=0
4. Create an oracle user and directory
$ su -
# groupadd dba
# useradd -g dba -m oracle
# passwd oracle
5. Edit the oracle user's .bash_profile file and add the Oracle environment variables.
$ vi .bash_profile
export PATH
unset USERNAME
export LANG=zh_CN.EUC
ORACLE_BASE=/home/oracle; export ORACLE_BASE
export ORACLE_HOME=$ORACLE_BASE/product/10.1.0/db_1
export ORA_CRS_HOME=$ORACLE_BASE/product/10.1.0/crs_1
export ORACLE_SID=rac1
export NLS_LANG='SIMPLIFIED CHINESE_CHINA.ZHS16GBK'
PATH=$ORACLE_HOME/bin:/sbin:/usr/bin:/usr/ccs/bin:/usr/local/bin:/usr/ucb; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/network/lib:$ORACLE_HOME/oracm/lib:/usr/local/lib:/usr/lib; export LD_LIBRARY_PATH
export ORACLE_TERM=xterm
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export THREADS_FLAG=native
export TEMP=/tmp
export TMPDIR=/tmp
export LD_ASSUME_KERNEL=2.4.1
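The ORACLE_HOME and ORA_CRS_HOME directories referenced in this profile do not exist yet. The installers can create them later, but a short sketch for pre-creating them under the paths assumed above (run as root) is:
# mkdir -p /home/oracle/product/10.1.0/db_1
# mkdir -p /home/oracle/product/10.1.0/crs_1
# chown -R oracle:dba /home/oracle/product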
6. Create the CRS partition and the data file partition
1) First create the CRS partition mount point:
mkdir /u01
chown oracle:dba /u01
2) Create the CRS partition and the shared data file partition with fdisk /dev/hda.
The CRS partition only needs about 500 MB; the rest is used for data file partitions. Here the data files get a single partition, /dev/hda9, and the CRS partition is /dev/hda8.
[root@dbrac root]# fdisk /dev/hda
The number of cylinders for this disk is set to 4997.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/hda: 41.1 GB, 41109061120 bytes
255 heads, 63 sectors/track, 4997 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/hda1   *         1       131   1052226   83  Linux
/dev/hda2           132      1151   8193150   83  Linux
/dev/hda3          1152      1916   6144862+  83  Linux
/dev/hda4          1917      4998  24756165    f  Win95 Ext'd (LBA)
/dev/hda5          1917      2177   2096451   83  Linux
/dev/hda6          2178      2438   2096451   82  Linux swap
/dev/hda7          2439      2569   1052226   83  Linux

Command (m for help): n
First cylinder (2570-4998, default 2570):
Using default value 2570
Last cylinder or +size or +sizeM or +sizeK (2570-4998, default 4998): +500M

Command (m for help): n
First cylinder (2632-4998, default 2632):
Using default value 2632
Last cylinder or +size or +sizeM or +sizeK (2632-4998, default 4998): +15000M

Command (m for help): p

Disk /dev/hda: 41.1 GB, 41109061120 bytes
255 heads, 63 sectors/track, 4997 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/hda1   *         1       131   1052226   83  Linux
/dev/hda2           132      1151   8193150   83  Linux
/dev/hda3          1152      1916   6144862+  83  Linux
/dev/hda4          1917      4998  24756165    f  Win95 Ext'd (LBA)
/dev/hda5          1917      2177   2096451   83  Linux
/dev/hda6          2178      2438   2096451   82  Linux swap
/dev/hda7          2439      2569   1052226   83  Linux
/dev/hda8          2570      2631    497983+  83  Linux
/dev/hda9          2632      4456  14659281   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
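The warning simply means the kernel keeps using the old partition table until the next reboot. After rebooting, a quick way to confirm that the kernel now sees the new partitions (an extra check, not required by the procedure):
# grep 'hda[89]' /proc/partitions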
7. Edit /etc/sysctl.conf and add the following two lines to set the shared memory and semaphore parameters:
kernel.shmmax = 2147483648
kernel.sem = 250 32000 100 128
Most of the other default kernel parameters of the Linux distribution used in this article already meet Oracle's installation requirements and do not need to be changed.
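After reloading the file (or rebooting), the running values can be confirmed from /proc; a minimal check, assuming the two lines above were added:
# sysctl -p
# cat /proc/sys/kernel/shmmax
2147483648
# cat /proc/sys/kernel/sem
250     32000   100     128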
8. Configure the hangcheck-timer kernel module
Add the following line to /etc/modules.conf:
options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
To make sure the hangcheck-timer module is loaded automatically every time the system starts, append the following line to /etc/rc.local:
echo "modprobe hangcheck-timer" >> /etc/rc.local
Restart the system and check whether the hangcheck-timer module has been loaded:
[root@dbrac root]# lsmod | grep hangcheck-timer
hangcheck-timer         2616   0  (unused)
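If you would rather not reboot just to load the module, it can also be loaded by hand; modprobe picks up the parameters from the modules.conf line added above (a small sketch):
# modprobe hangcheck-timer
# lsmod | grep hangcheck-timer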
9. Configure the RAC node for remote access
When you run Oracle Universal Installer on a RAC node, it copies the Oracle software to all other nodes in the cluster using rsh/rcp or ssh/scp. Even though this is a single-machine simulation, the configuration still has to be done and cannot be skipped. Since Oracle 10g supports the ssh protocol, this article uses ssh.
Create an ssh key pair as the oracle user:
[oracle@dbrac oracle]$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
2d:09:9a:c0:40:c7:99:46:ea:43:0d:22:4b:d0:a0:26 oracle@dbrac
Copy the public key to the other node (here, the same machine):
[oracle@dbrac oracle]$ cp -v .ssh/id_dsa.pub .ssh/authorized_keys
Test whether the key works:
[oracle@dbrac oracle]$ ssh dbrac
The authenticity of host 'dbrac (192.168.22.44)' can't be established.
RSA key fingerprint is e7:ff:ce:5e:92:ac:c4:96:a8:ca:3e:20:2e:5c:75:ae.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'dbrac,192.168.22.44' (RSA) to the list of known hosts.
[oracle@dbrac oracle]$
Being able to log in without a password indicates that the key has taken effect.
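A stricter check, since Oracle Universal Installer needs the connection to work without any prompt, is to run a remote command and make sure only its output appears (a small sketch using the node name configured above):
[oracle@dbrac oracle]$ ssh dbrac date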
10. Install and configure OCFS
OCFS (Oracle Cluster File System) is a cluster file system developed by Oracle. It relieves database administrators and system administrators of the burden of managing raw devices while providing the same functionality and usage as an ordinary file system. Avoid using ordinary Linux file-manipulation commands on an OCFS volume where possible.
Version 1 currently supports the following file types:
Oracle database files
Online redo log files
Archived redo log files
Control files
Server parameter files (SPFILE)
Oracle Cluster Registry (OCR) file
CRS voting disk

Installation and configuration
1) Upload ocfs-2.4.21-EL-1.0.14-1.i686.rpm, ocfs-support-1.0.10-1.i386.rpm and ocfs-tools-1.0.10-1.i386.rpm to the /home/oracle/install/rac/ocfs directory.
2) Run rpm -ivh ocfs*.rpm to install:
[oracle@dbrac oracle]$ su -
Password:
[root@dbrac root]# cd /home/oracle/install/rac/ocfs
[root@dbrac ocfs]# rpm -ivh ocfs*.rpm
Preparing...                ########################################### [100%]
   1:ocfs-support           ########################################### [ 33%]
   2:ocfs-2.4.21-EL         ########################################### [ 67%]
   3:ocfs-tools             ########################################### [100%]
3) Generate and configure the /etc/ocfs.conf file:
[root@dbrac ocfs]# ocfstool &
4) Use the ocfstool GUI tool to perform the following steps:
5) Select [Task] -> [Generate Config]. In the "OCFS Generate Config" dialog box, enter the private interconnect interface and the DNS name of the node. Exit the application after verifying that all values on all nodes are correct.
6) Check /etc/ocfs.conf:
[root@dbrac ocfs]# cat /etc/ocfs.conf
#
# ocfs config
# Ensure this file exists in /etc
#
node_name = dbrac
ip_address = 192.168.22.44
ip_port = 7000
comm_voting = 1
guid = B907DC7945D81C0A2C8C000D61EB0166
Note that the guid is unique to each node in the cluster. If you replace the NIC, run the ocfs_uid_gen -c command to create a new one.
7) Restart the system and confirm that the ocfs module has been loaded correctly:
[oracle@dbrac oracle]$ lsmod | grep ocfs
ocfs                  299104   0  (unused)
8) Create an OCFS file system:
[oracle@dbrac oracle]$ id
uid=500(oracle) gid=500(dba) groups=500(dba)
[oracle@dbrac oracle]$ su -
Password:
[root@dbrac root]# mkfs.ocfs -F -b 128 -L crs -m /u01 -u 500 -g 500 -p 0755 /dev/hda8
Cleared volume header sectors
Cleared node config sectors
Cleared publish sectors
Cleared vote sectors
Cleared bitmap sectors
Cleared data block
Wrote volume header
Note that -u and -g are the numeric user ID of the oracle user and group ID of the dba group; be sure to enter them correctly. -p sets the access permissions on the /u01 mount point; if you want members of the dba group to be able to manage the cluster registry files, set it to 0775.
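If you prefer not to look the numeric IDs up by hand, command substitution can supply them; a sketch using the same flags as above, with 0775 assumed so that dba group members can manage the files (adjust the permission as needed):
# mkfs.ocfs -F -b 128 -L crs -m /u01 -u `id -u oracle` -g `id -g oracle` -p 0775 /dev/hda8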
9) Mount the OCFS file system:
$ su -
# mount -t ocfs /dev/hda8 /u01
10) Configure the OCFS partition to be mounted automatically at startup.
11) Add the following entry to the /etc/fstab file:
/dev/hda8    /u01    ocfs    _netdev    0 0
12) Restart the server and check that the CRS partition is mounted correctly:
[root@dbrac root]# mount | grep ocfs
/dev/hda8 on /u01 type ocfs (rw)
If it is not mounted automatically, run echo "mount -t ocfs /dev/hda8 /u01" >> /etc/rc.local and restart.
Alternatively, you can install the patched Linux kernel officially provided by Oracle to solve this problem.
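Once the partition is mounted, it is also worth confirming that the oracle user owns the mount point, since CRS and the database will write there (a quick check, assuming the oracle:dba ownership configured earlier):
# ls -ld /u01
# df -h /u01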

