1. Build an Oracle RAC 11g cluster on Oracle Enterprise Linux and iSCSI.
2. All shared disk storage for Oracle RAC will be based on iSCSI, using Openfiler Release 2.3 x86_64, which runs on a third node referred to in this article as the network storage server.
3. Each Linux node is configured with only two network interfaces: eth0 is used to connect to the public network, and eth1 is used for the Oracle RAC private interconnect "and" to connect to the network storage server for shared iSCSI access. In a production RAC implementation, the private interconnect should be at least Gigabit (or faster), should have redundant paths, and should be used "only" by Oracle to transmit data related to Cluster Manager and Cache Fusion. A third private network interface (for example, eth2) should be configured on another redundant Gigabit network for access to the NAS server (Openfiler).
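As a minimal sketch, the two interface definitions on racnode1 might look like the following; the IP addresses come from the network configuration table later in this section, while the /24 netmask is an assumption of this sketch.

    # /etc/sysconfig/network-scripts/ifcfg-eth0  (public network) -- sketch only
    DEVICE=eth0
    BOOTPROTO=static
    IPADDR=192.168.1.151
    NETMASK=255.255.255.0      # /24 netmask is assumed
    ONBOOT=yes

    # /etc/sysconfig/network-scripts/ifcfg-eth1  (private interconnect and iSCSI storage)
    DEVICE=eth1
    BOOTPROTO=static
    IPADDR=192.168.2.151
    NETMASK=255.255.255.0      # /24 netmask is assumed
    ONBOOT=yes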
4.
Networked storage server
Built on rPath Linux, Openfiler is a free, browser-based network storage management utility that delivers file-based network-attached storage (NAS) and block-based storage area networks (SANs) in a single framework. The software stack ties together a number of open source applications such as Apache, Samba, LVM2, ext3, Linux NFS, and iSCSI Enterprise Target. Openfiler combines these ubiquitous technologies into an easy-to-manage, small-footprint solution fronted by a powerful web-based management interface.
Openfiler supports CIFS, NFS, HTTP/DAV, and FTP, but we will use only its iSCSI capabilities to implement a low-cost SAN for the shared storage components required by Oracle RAC 11g. The operating system and the Openfiler application will be installed on an internal SATA disk. A second internal 73GB 15K SCSI drive will be configured as a single volume group to satisfy all shared disk storage requirements. The Openfiler server will be configured to use this volume group for iSCSI-based storage, which will be used in our Oracle RAC 11g configuration to store the shared files required by Oracle Grid Infrastructure and the Oracle RAC database.
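Openfiler's web interface drives standard LVM2 under the covers. Purely as a conceptual sketch, the volume group and iSCSI volumes carved out of the 73GB SCSI drive are roughly equivalent to the commands below; the device name /dev/sdb1 and the volume group name racdbvg are assumptions for illustration, while the logical volume names match the Openfiler volume names listed later in this section.

    # Sketch only -- in practice these volumes are created through the Openfiler web GUI
    pvcreate /dev/sdb1                       # physical volume on the 73GB SCSI disk (device name assumed)
    vgcreate racdbvg /dev/sdb1               # volume group name "racdbvg" is hypothetical
    lvcreate -L 2G  -n racdb-crs1  racdbvg   # OCR / voting disk volume
    lvcreate -L 32G -n racdb-data1 racdbvg   # database files volume
    lvcreate -L 32G -n racdb-fra1  racdbvg   # fast recovery area volume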
5.
In Oracle Grid Infrastructure 11g Release 2 (11.2), Automatic Storage Management (ASM) and the Oracle Clusterware software are packaged in a single binary distribution and installed into the same home directory, referred to as the Grid Infrastructure home. Grid Infrastructure must be installed in order to use Oracle RAC 11g Release 2. After the software installation interview completes, configuration assistants are launched to configure ASM and Oracle Clusterware. Although this combined product installation is known as Oracle Grid Infrastructure, Oracle Clusterware and Automatic Storage Management remain separate products.
After Oracle Grid Infrastructure has been installed and configured on both nodes in the cluster, the next step is to install the Oracle RAC software on both Oracle RAC nodes.
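Once the configuration assistants have finished, the health of the stack can be verified from either node before moving on to the Oracle RAC software install. A quick sanity check might look like the following (the Grid home path matches the configuration table below):

    /u01/app/11.2.0/grid/bin/crsctl check cluster -all   # Clusterware stack status on all nodes
    /u01/app/11.2.0/grid/bin/olsnodes -n                 # list cluster node names and numbers
    /u01/app/11.2.0/grid/bin/srvctl status asm           # confirm ASM is running on both nodes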
6.
The two Oracle RAC nodes and the network storage server are configured as follows:
Nodes
Node Name | Instance Name | Database Name | Processor | RAM | Operating System
racnode1 | racdb1 | racdb.idevelopment.info | 1 x Dual-Core Intel Xeon, 3.00 GHz | 4GB | OEL 5.4 (x86_64)
racnode2 | racdb2 | racdb.idevelopment.info | 1 x Dual-Core Intel Xeon, 3.00 GHz | 4GB | OEL 5.4 (x86_64)
openfiler1 | | | 2 x Intel Xeon, 3.00 GHz | 6GB | Openfiler 2.3 (x86_64)
Network Configuration
Node Name | Public IP Address | Private IP Address | Virtual IP Address | SCAN Name | SCAN IP Address
racnode1 | 192.168.1.151 | 192.168.2.151 | 192.168.1.251 | racnode-cluster-scan | 192.168.1.187
racnode2 | 192.168.1.152 | 192.168.2.152 | 192.168.1.252 | racnode-cluster-scan | 192.168.1.187
openfiler1 | 192.168.1.195 | 192.168.2.195 | | |
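For reference, a name resolution sketch that matches the table above is shown below. The racnode1-priv and racnode2-priv names are used later in this article; the -vip names and the openfiler1-priv name are naming assumptions of this sketch, and the SCAN name is not listed because it would normally be resolved through DNS.

    # /etc/hosts (sketch) -- public network (eth0)
    192.168.1.151   racnode1
    192.168.1.152   racnode2
    192.168.1.195   openfiler1
    # Private interconnect / iSCSI storage network (eth1)
    192.168.2.151   racnode1-priv
    192.168.2.152   racnode2-priv
    192.168.2.195   openfiler1-priv
    # Oracle virtual IPs
    192.168.1.251   racnode1-vip
    192.168.1.252   racnode2-vip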
Oracle Software Components
Software Component | OS User | Primary Group | Supplementary Groups | Home Directory | Oracle Base / Oracle Home
Grid Infrastructure | grid | oinstall | asmadmin, asmdba, asmoper | /home/grid | /u01/app/grid, /u01/app/11.2.0/grid
Oracle RAC | oracle | oinstall | dba, oper, asmdba | /home/oracle | /u01/app/oracle, /u01/app/oracle/product/11.2.0/dbhome_1
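A minimal sketch of the user, group, and directory layout implied by this table, run as root on both Oracle RAC nodes, might look like the following (UIDs and GIDs are omitted for brevity; in a real build they should be identical on both nodes):

    groupadd oinstall
    groupadd dba
    groupadd oper
    groupadd asmadmin
    groupadd asmdba
    groupadd asmoper
    useradd -g oinstall -G asmadmin,asmdba,asmoper grid
    useradd -g oinstall -G dba,oper,asmdba oracle
    mkdir -p /u01/app/grid /u01/app/11.2.0/grid
    mkdir -p /u01/app/oracle
    chown -R grid:oinstall /u01
    chown oracle:oinstall /u01/app/oracle
    chmod -R 775 /u01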
Storage Components
Storage Component | File System | Volume Size | ASM Disk Group Name | ASM Redundancy | Openfiler Volume Name
OCR/Voting Disk | ASM | 2GB | +CRS | External | racdb-crs1
Database Files | ASM | 32GB | +RACDB_DATA | External | racdb-data1
Fast Recovery Area | ASM | 32GB | +FRA | External | racdb-fra1
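The +CRS disk group is created during the Grid Infrastructure installation; the remaining two disk groups can be created afterwards with ASMCA or, as a sketch, from SQL*Plus as the grid user with the ASM instance environment set. The ORCL:* disk names below assume the iSCSI volumes were stamped with ASMLib and are hypothetical.

    # Sketch only -- run as the grid user with ORACLE_SID=+ASM1 and the Grid home in the environment
    sqlplus / as sysasm <<EOF
    CREATE DISKGROUP racdb_data EXTERNAL REDUNDANCY DISK 'ORCL:DATAVOL1';
    CREATE DISKGROUP fra        EXTERNAL REDUNDANCY DISK 'ORCL:FRAVOL1';
    EOF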
7.
iSCSI Initiator
Essentially, an iSCSI initiator is a client device that connects to a service offered by a server (in this case, an iSCSI target) and initiates requests for that service. The iSCSI initiator software needs to be installed on each Oracle RAC node (racnode1 and racnode2).
An iSCSI initiator can be implemented in either software or hardware. Software iSCSI initiators are available for most major operating system platforms. For this article, we will use the free Linux Open-iSCSI software driver provided in the iscsi-initiator-utils RPM. iSCSI software initiators are typically used with a standard network interface card (NIC), in most cases a Gigabit Ethernet card. A hardware initiator is an iSCSI HBA (or TCP Offload Engine (TOE) card), which is essentially a dedicated Ethernet card whose SCSI ASIC offloads all of the work (TCP and SCSI commands) from the system CPU. iSCSI HBAs are available from a number of vendors, including Adaptec, Alacritech, Intel, and QLogic.
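As a sketch of what the software initiator setup looks like on each RAC node: the package and service names are those of the iscsi-initiator-utils RPM on Enterprise Linux 5, and the openfiler1-priv host name follows the private-network naming assumed elsewhere in this article.

    yum install -y iscsi-initiator-utils                      # install the Open-iSCSI initiator, if not already present
    service iscsid start                                      # start the iSCSI daemon
    chkconfig iscsid on
    chkconfig iscsi on
    iscsiadm -m discovery -t sendtargets -p openfiler1-priv   # discover targets exported by the Openfiler server
    iscsiadm -m node --login                                  # log in to the discovered target(s)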
8.
iSCSI Target
An iSCSI target is the "server" component of an iSCSI network. It is typically a storage device that contains the information you want and answers requests from (one or more) initiators. For this article, the node openfiler1 will be the iSCSI target.
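Openfiler 2.3 implements its targets with the iSCSI Enterprise Target (IET) software mentioned earlier. Once the volumes have been mapped through the web interface, a rough way to confirm the target service is exporting them on openfiler1 is shown below; this is a sketch that assumes the default IET service name and /proc interface.

    service iscsi-target status       # the iSCSI Enterprise Target service on Openfiler
    cat /proc/net/iet/volume          # LUNs currently exported by the target
    cat /proc/net/iet/session         # initiators currently logged in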
9. The hardware used to build the example Oracle RAC 11g environment consists of three Linux servers (two Oracle RAC nodes and one network storage server) and the components listed below.
10. 1 Ethernet switch
Used for the private interconnect between racnode1-priv and racnode2-priv, which will be deployed on the 192.168.2.0 network. This switch will also carry the Openfiler network storage traffic.
11. Oracle RAC Node 1 (racnode1): Dell PowerEdge T100
- Dual-Core Intel(R) Xeon(R) E3110, 3.0 GHz, 6MB cache, 1333MHz
- 4GB, DDR2, 800MHz
- 160GB 7.2K RPM SATA 3Gbps hard drive
- Integrated graphics (ATI ES1000)
- Integrated Gigabit Ethernet NIC (Broadcom(R) NetXtreme II(TM) 5722)
- 16x DVD drive
- No keyboard, monitor, or mouse (connected to a KVM switch)
$450

1 Ethernet LAN card
Used for the RAC interconnect to racnode2 and for Openfiler networked storage.
Each Linux server for Oracle RAC should contain two NIC adapters. The Dell PowerEdge T100 includes an embedded Broadcom(R) NetXtreme II(TM) 5722 Gigabit Ethernet NIC that will be used to connect to the public network. The second NIC adapter will be used for the private network (RAC interconnect and Openfiler networked storage). Select a NIC adapter that is compatible with the maximum data transfer speed of the network switch to be used for the private network. For this article, I used a Gigabit Ethernet switch (and a 1Gb Ethernet card) for the private network.
Gigabit Ethernet
- Intel(R) PRO/1000 PT Server Adapter (EXPI9400PT)
$90

Oracle RAC Node 2 (racnode2): Dell PowerEdge T100
- Dual-Core Intel(R) Xeon(R) E3110, 3.0 GHz, 6MB cache, 1333MHz
- 4GB, DDR2, 800MHz
- 160GB 7.2K RPM SATA 3Gbps hard drive
- Integrated graphics (ATI ES1000)
- Integrated Gigabit Ethernet NIC (Broadcom(R) NetXtreme II(TM) 5722)
- 16x DVD drive
- No keyboard, monitor, or mouse (connected to a KVM switch)
$450

1 Ethernet LAN card
Used for the RAC interconnect to racnode1 and for Openfiler networked storage.
Each Linux server for Oracle RAC should contain two NIC adapters. The Dell PowerEdge T100 includes an embedded Broadcom(R) NetXtreme II(TM) 5722 Gigabit Ethernet NIC that will be used to connect to the public network. The second NIC adapter will be used for the private network (RAC interconnect and Openfiler networked storage). Select a NIC adapter that is compatible with the maximum data transfer speed of the network switch to be used for the private network. For this article, I used a Gigabit Ethernet switch (and a 1Gb Ethernet card) for the private network.
Gigabit Ethernet
- Intel(R) PRO/1000 PT Server Adapter (EXPI9400PT)
$90

Network Storage Server (openfiler1): Dell PowerEdge 1800
- Dual 3.0GHz Xeon / 1MB cache / 800MHz FSB (SL7PE)
- 6GB ECC memory
- 500GB SATA internal hard drive
- 73GB 15K SCSI internal drive
- Integrated graphics
- Embedded Intel 10/100/1000 Gigabit Ethernet NIC
- 16x DVD drive
- No keyboard, monitor, or mouse (connected to a KVM switch)
Note: The operating system and the Openfiler application will be installed on the 500GB internal SATA disk. The second internal disk, the 73GB 15K SCSI drive, will be configured for database storage. The Openfiler server will be configured to use this second disk for iSCSI-based storage, which will be used in the Oracle RAC 11g configuration to store the shared files required by Oracle Clusterware as well as the cluster database files.
Note that any type of hard disk (internal or external) can be used for database storage as long as it is recognized by the network storage server (Openfiler) and has enough space. For example, I could have built an additional partition on the 500GB internal SATA disk for the iSCSI target, but decided to use the faster SCSI disk for this example.
$800

1 Ethernet LAN card
Used to connect to the networked storage over the private network.
The network storage server (the Openfiler server) should contain two NIC adapters. The Dell PowerEdge 1800 includes an integrated 10/100/1000 Ethernet adapter that will be used to connect to the public network. The second NIC adapter will be used for the private network (Openfiler networked storage). Select a NIC adapter that is compatible with the maximum data transfer speed of the network switch to be used for the private network. For this article, I used a Gigabit Ethernet switch (and a 1Gb Ethernet card) for the private network.
Gigabit Ethernet
- Intel(R) PRO/1000 MT Server Adapter (PWLA8490MT)
$125

Miscellaneous Components

1 Ethernet switch
Used for the private interconnect between racnode1-priv and racnode2-priv, which will be deployed on the 192.168.2.0 network. This switch will also carry the Openfiler network storage traffic. For this article, I used a Gigabit Ethernet switch (and a 1Gb Ethernet card) for the private network.
Gigabit Ethernet
- D-Link 8-port desktop switch (DGS-2208)
$50

6 network cables
- Category 6 patch cable (connect racnode1 to the public network)
- Category 6 patch cable (connect racnode2 to the public network)
- Category 6 patch cable (connect openfiler1 to the public network)
- Category 6 patch cable (connect racnode1 to the interconnect Ethernet switch)
- Category 6 patch cable (connect racnode2 to the interconnect Ethernet switch)
- Category 6 patch cable (connect openfiler1 to the interconnect Ethernet switch)
$10 each ($60 total)

Optional Component

KVM switch
In order to install the operating system and perform multiple configuration tasks, this guide requires console access to all nodes (servers). When managing a small number of servers, each server could be connected to its own monitor, keyboard, and mouse to access its console. However, this solution becomes harder to implement as the number of servers to be managed increases. A more practical solution is to configure a dedicated station, with a single monitor, keyboard, and mouse, that has direct access to the console of each server. This can be implemented with a keyboard, video, and mouse switch, better known as a KVM switch. A KVM switch is a hardware device that allows a user to control multiple computers from a single keyboard, video display, and mouse. Avocent offers a high-quality, low-cost, 4-port switch that includes four 6-foot cables:
- SwitchView (4SV1000BND1-001)
For detailed instructions and guidance on KVM switches and their use, see the article "Home and Enterprise KVM Switches".
$340

Total: $2,455