MSCS + Fail Safe Dual-Host Cluster HA Summary
I recently implemented an Oracle HA solution and summarize it here for your reference.
Hardware configuration:
Two Lenovo T630 servers (standard configuration; each server has two Intel 82550 NICs)
One SureFire 200R disk array cabinet with thirteen 18 GB hard disks, used as the cluster disks
Network requirements:
• Unique NetBIOS cluster name.
• Five unique static IP addresses: two for the private-network NICs, two for the public-network NICs, and one for the cluster itself.
• A domain user account for the Cluster service (all nodes must be members of the same domain).
• Each node needs two NICs: one connected to the public network and the other to the dedicated node-to-node cluster network.
Table 1. Network plan

                                 Node 1               Node 2
Server name                      T630R                T630L
Active Directory domain name     cluster.legend.com
Cluster name                     MyCluster
Public network IP address        192.0.35.1           192.0.35.2
Public network subnet mask       255.255.255.0        255.255.255.0
Private network IP address       10.1.1.1             10.1.1.2
Private network subnet mask      255.0.0.0            255.0.0.0
Cluster virtual IP address       192.0.35.100
Cluster virtual IP subnet mask   255.255.255.0
Cluster service account          cluster
With the above plan in place, you can start the system installation. The steps are:
• Install Windows 2000 Advanced Server on each node.
• Install the network.
• Install the disk.
• Install the Active Directory
After completing the preceding steps, perform the following steps:
• Install MSCS on each node.
Step 1: Install Windows 2000 Advanced Server (steps omitted here).
Step 2: Install the network
Each cluster node requires at least two NICs: one connecting to the public network and the other connecting to a dedicated network that contains only the cluster nodes.
The dedicated network adapter carries node-to-node communication, cluster status signals, and cluster management traffic. The public network adapter of each node connects the cluster to the public network where the clients reside.
Verify that all network connections are correct: dedicated network adapters connect only to other dedicated network adapters, and public network adapters connect to the public network. The connections are shown in Figure 1 below. Perform these steps on each cluster node before continuing with the shared disk installation.
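As a quick sanity check, both paths can also be verified from a command prompt. A minimal sketch, assuming the addresses from Table 1 (adjust to your own plan):

    :: On node 1 (T630R), confirm both adapters carry the planned addresses
    ipconfig /all

    :: Test the private (heartbeat) path to node 2
    ping 10.1.1.2

    :: Test the public path to node 2
    ping 192.0.35.2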
Step 3: Install the disk (steps omitted here).
Step 4: Install the Active Directory:
All nodes in the cluster must be members of the same domain and must be able to reach the domain controller and DNS server. The nodes can be configured as member servers or as domain controllers. If you decide to configure one node as a domain controller, configure all other nodes in the domain as domain controllers as well. In this setup, both nodes are configured as domain controllers.
Considerations:
If there is no DNS server in the domain and the first node acts as the DNS server, then when installing Active Directory on the second server, set the second server's NIC to use the first server's IP address as its DNS server before configuring the NIC's IP address. This way, the domain name can be resolved correctly while Active Directory is installed on the second node.
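Before starting the Active Directory installation on the second node, name resolution can be checked with nslookup. A small sketch, assuming the names and addresses from Table 1:

    :: On node 2, query node 1's DNS server directly
    nslookup cluster.legend.com 192.0.35.1

    :: Once Active Directory is up on node 1, its SRV records should resolve as well
    nslookup -type=SRV _ldap._tcp.cluster.legend.com 192.0.35.1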
After completing the preceding steps correctly, you can install MSCS:
Note: While the Cluster service is being installed on the first node, all other nodes must either be powered off or stopped before Windows 2000 starts. All shared storage devices should be powered on.
1. Click Start, click Settings, and then click Control Panel.
2. Double-click Add/Remove Programs.
3. Double-click Add/Remove Windows Components.
4. Select Cluster Service and click Next.
5. The Cluster service files are on the Windows 2000 Advanced Server or Windows 2000 Datacenter Server CD-ROM. Enter x:\i386 (where x is the drive letter of the CD-ROM). If Windows 2000 was installed from the network, enter the appropriate network path instead. (If the Windows 2000 splash screen is displayed, close it.) Click OK.
6. Click Next.
7. Click I Understand to accept the condition that the Cluster service is supported only on hardware listed on the Hardware Compatibility List (HCL).
8. Because this is the first node in the cluster, you must create the cluster itself. Select "The first node in the cluster" and click Next.
9. Enter the cluster name from Table 1 and click Next.
10. Enter the user name of the cluster service account created before installation. (In this example, the user name is "cluster", with no password.) Type the domain name and click Next.
In production, you should set a secure password for this user account.
At this point, the Cluster Service Configuration Wizard verifies the user account and password.
11. Click Next.
Configure the cluster disks
Note: By default, all SCSI disks that are not on the same bus as the system disk appear in the managed disks list. Therefore, if a node has multiple SCSI buses, disks that are not on the shared storage (such as internal SCSI drives) may also be listed. Such disks should be removed from the managed disks list.
1. In the Add or Remove Managed Disks dialog box, specify which disks on the shared SCSI bus will be used by the Cluster service. Add or remove disks as needed, and then click Next.
2. Click Next in the Configure Cluster Networks dialog box.
3. Make sure the network name and IP address correspond to the network interface of the public network.
4. Select the "Enable this network for cluster use" check box.
5. Select the "Client access only (public network)" option.
6. Click Next.
7. Configure the private network in the next dialog box. Make sure the network name and IP address correspond to the network interface of the dedicated network.
8. Select the "Enable this network for cluster use" check box.
9. Select the "Internal cluster communications only (private network)" option.
10. Click Next.
11. Make sure the first connection in the list is the dedicated cluster connection, and then click Next.
Important: when setting the connection order, the dedicated cluster connection must be placed first in the list.
12. Enter the unique IP address and subnet mask of the cluster from Table 1, and then click Next.
The cluster service configuration wizard automatically associates the cluster IP address with a public network or hybrid network. It uses the subnet mask to select the correct network.
13. Click Finish to complete the cluster configuration of the first node.
The Cluster Service Configuration Wizard copies the files required by the Cluster service and finishes the installation on the first node. After the files are copied, the Cluster service registry keys are created, the log file is created on the quorum resource, and the Cluster service is started on the first node.
A dialog box is displayed, indicating that the cluster service has been successfully started.
14. Click OK.
15. Close the Add/Remove Programs window.
Verify the cluster installation
Use Cluster Administrator to verify that the Cluster service was installed successfully on the first node.
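Besides Cluster Administrator, the cluster.exe command-line tool that is installed with the Cluster service can be used for a quick check. A sketch, assuming the cluster name from Table 1:

    :: "Cluster Group" should be listed as Online on the first node
    cluster MyCluster group

    :: The cluster IP address, cluster name, and quorum disk resources should be Online
    cluster MyCluster resource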
Configure the second node
Note: For this section, leave node 1 and all shared disks powered on, then power on the second node.
Installing the Cluster service on the second node takes less time than on the first node, because setup configures the Cluster service network settings of the second node based on the configuration of the first node.
Starting the Cluster service installation on the second node works exactly as on the first node. The first node must be running while the second node is installed.
The installation process is basically the same as for the first node, with the following differences:
1. In the Create or Join Cluster dialog box, select "The second or next node in the cluster" and click Next.
2. Enter the name of the cluster created earlier (MyCluster in this example) and click Next.
3. Do not select "Connect to cluster as". The Cluster Service Configuration Wizard automatically supplies the name of the user account chosen when the first node was installed. Always use the same account that was used for the first cluster node.
4. Enter the account password (if any) and click Next.
5. In the next dialog box, click Finish to complete the configuration.
6. The cluster service will be started. Click OK.
7. Close the Add/Remove Programs window.
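Once the second node has joined, both nodes should report as Up. A quick check with cluster.exe, again assuming the names from Table 1:

    :: Both T630R and T630L should be listed with the status Up
    cluster MyCluster node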
Configure cluster attributes
Right-click Cluster Group and click Properties. To test failover and failback behavior, in this experiment set the preferred owner to node 1 (T630R), set the failover threshold to 0, and set failback to occur immediately.
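To exercise these settings, the group can be moved between nodes, either with the Move Group command in Cluster Administrator or with cluster.exe. A sketch, assuming the node names from Table 1 (check cluster group /? for the exact switch form on your build):

    :: Move "Cluster Group" to node 2 and watch it come online there
    cluster MyCluster group "Cluster Group" /moveto:T630L

    :: Confirm the group's current owner and state
    cluster MyCluster group "Cluster Group" /status

With failback set to immediate and node 1 preferred, the group should return to T630R as soon as that node is available again.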
When the preceding steps are completed correctly, the Oracle HA setup is more than half done, and you can proceed with the Fail Safe installation. If MSCS is not installed correctly, the rest cannot be installed, because Oracle Fail Safe is an Oracle product built on top of Microsoft Cluster Server (MSCS) and requires a correctly working cluster.
Installation steps:
1. Install Oracle 9.0.1 on node 1; select Custom installation and choose to create a database during installation;
2. Restart Node 1;
3. Install Oracle 9.0.1 on node 2; select Custom installation and choose to create a database during installation;
4. Restart Node 2;
5. On node 1, create the database that will be added to the cluster. Do not configure Net8 for the new database, that is, do not configure the listener.ora and tnsnames.ora files. Note that the control files, redo log files, and data files must be created on a shared partition of the disk array cabinet;
6. Add the ORACLE_SID of the database created in step 5 to the registry of node 1 and node 2 (see the registry sketch after this list);
7. Install FailSafe 3.2 on node 1;
8. Restart Node 1;
9. Install FailSafe 3.2 on node 2;
10. Restart Node 2;
11. Modify the file C:\winnt\system32\drivers\etc\hosts on both machines as follows:
192.0.35.1 clunode1 (node 1)
192.0.35.2 clunode2 (node 2)
192.0.35.100 mycluster (the alias of the MSCS cluster, that is, the name of the virtual IP)
Note that the IP address used by the listener on the nodes should be the virtual IP address;
12. Open Oracle Fail Safe Manager:
Start -> Programs -> OraHome91 -> Oracle Fail Safe Manager
The Add Cluster to Tree dialog box appears. In this dialog box, enter the cluster name.
You are prompted for a user name and password with administrative privileges in the domain. After you enter them, the Verify Cluster dialog box appears and verifies the validity of the cluster resources;
13. Check that the OracleMSCSService service exists on each node and start it (see the sketch after this list);
14. Start Cluster Administrator (Start -> Programs -> Administrative Tools -> Cluster Administrator) on each node, check whether Oracle Services for MSCS exists in the Cluster Group, and check whether the Oracle resource types appear under Resource Types.
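For step 6, a sketch of the registry change, assuming Oracle 9.0.1's default first home key HOME0 and a hypothetical SID named OFADB (the reg add syntax shown is the later standard form; on Windows 2000, reg.exe comes with the Support Tools, or regedit can be used instead):

    :: Run on both node 1 and node 2; HOME0 and OFADB are assumptions, substitute your own home key and SID
    reg add HKLM\SOFTWARE\ORACLE\HOME0 /v ORACLE_SID /t REG_SZ /d OFADB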
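For steps 13 and 14, the service installed by Fail Safe can also be checked and started from the command line; a sketch, using the service name from step 13:

    :: List started services containing "Oracle" on this node
    net start | find /i "Oracle"

    :: Start the Fail Safe service if it is not already running
    net start OracleMSCSService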