MSCS + Oracle Fail Safe dual-node cluster HA Summary


Hardware configuration:
Two Lenovo T630 servers (standard configuration; each server has two Intel 82550 NICs)
One SureFire200R disk array cabinet with thirteen 18 GB hard disks, used as the shared cluster disk
Network requirements:
· A unique NetBIOS cluster name.
· Five unique static IP addresses: two for the private-network NICs, two for the public-network NICs, and one for the cluster itself.
· A domain user account for the Cluster service (all nodes must be members of the same domain).
· Two NICs in each node: one connected to the public network, the other connected to the dedicated cluster network between the nodes.
Table 1:
                                 Node 1            Node 2
Server name                      T630R             T630L
Active Directory domain name     cluster.legend.com
Cluster name                     Mycluster
Public network IP address        192.0.35.1        192.0.35.2
Public network subnet mask       255.255.255.0     255.255.255.0
Private network IP address       10.1.1.1          10.1.1.2
Private network subnet mask      255.0.0.0         255.0.0.0
Cluster virtual IP address       192.0.35.100
Cluster virtual IP subnet mask   255.255.255.0
Cluster service account          cluster
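As a quick sanity check of the addressing plan above, the following sketch (an illustration only, using Python's standard ipaddress module with the addresses from the table) verifies that both public addresses and the cluster virtual IP fall in the same public subnet, while both private addresses fall in the private subnet:

```python
import ipaddress

# Addressing plan taken from the table above.
public = {"T630R": "192.0.35.1", "T630L": "192.0.35.2"}
private = {"T630R": "10.1.1.1", "T630L": "10.1.1.2"}
cluster_vip = "192.0.35.100"

# Networks implied by the subnet masks in the table.
public_net = ipaddress.ip_network("192.0.35.0/255.255.255.0")
private_net = ipaddress.ip_network("10.0.0.0/255.0.0.0")

# Every public address, including the cluster virtual IP, must fall in
# the public subnet; both private addresses must fall in the private one.
for ip in list(public.values()) + [cluster_vip]:
    assert ipaddress.ip_address(ip) in public_net, ip
for ip in private.values():
    assert ipaddress.ip_address(ip) in private_net, ip
print("address plan is consistent")
```

This matters later: the Cluster Service Configuration Wizard uses the subnet mask to decide which network the virtual IP address belongs to, so the virtual IP must lie inside the public subnet.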
With the above plan in place, you can start the system installation. The steps are:
· Install Windows 2000 Advanced Server on each node.
· Configure the network.
· Configure the disks.
· Install Active Directory.
After completing the preceding steps, perform the following:
· Install MSCS on each node.
Step 1: Install Windows 2000 Advanced Server (details skipped).
Step 2: Configure the network:
Each cluster node requires at least two NICs: one connecting to the public network and the other connecting to a dedicated network that contains only the cluster nodes.
The dedicated network adapter carries node-to-node communication, cluster heartbeat signals, and cluster management traffic. The public network adapter of each node connects the cluster to the public network where the clients reside.
Verify that all network connections are correct: each dedicated network adapter connects only to the other dedicated network adapters, and each public network adapter connects to the public network. Perform these steps on each cluster node before continuing with the shared-disk installation.
Step 3: Install the disks (details skipped).
Step 4: Install Active Directory:
All nodes in the cluster must be members of the same domain and must be able to access a domain controller and a DNS server. They can be configured as member servers or as domain controllers. If you decide to configure one node as a domain controller, configure all the other nodes in the same domain as domain controllers as well. In this setup, both nodes are configured as domain controllers.
Note:
If no DNS server exists in the domain, use the first node as the DNS server when installing Active Directory. Before installing Active Directory on the second node, set the DNS server in the second node's NIC configuration to the IP address of the first node, so that the second node can resolve the domain name correctly during its own Active Directory installation.
After completing the preceding steps correctly, you can install MSCS:
Note: while installing the Cluster service on the first node, all other nodes must be powered off, or stopped before Windows 2000 starts. All shared storage devices should be powered on.
1. Click Start, point to Settings, and then click Control Panel.
2. Double-click Add/Remove Programs.
3. Click Add/Remove Windows Components.
4. Select Cluster Service and click Next.
5. The Cluster service files are located on the Windows 2000 Advanced Server or Windows 2000 Datacenter Server CD-ROM. Enter x:\i386 (where x is the drive letter of the CD-ROM drive). If Windows 2000 was installed from the network, enter the appropriate network path instead. (If the Windows 2000 splash screen is displayed, close it.) Click OK.
6. Click Next.
7. Click I Understand to accept the condition that the Cluster service is supported only on hardware listed on the Hardware Compatibility List.
8. Because this is the first node in the cluster, you must create the cluster itself. Select The first node in the cluster and click Next.
9. Enter the cluster name according to Table 1 and click Next.
10. Enter the user name of the Cluster service account created before installation (in this example, the user name is cluster, with no password). Type the domain name, and then click Next.
In production you should give this account a strong password.
At this point the Cluster Service Configuration Wizard verifies the user account and password.
11. Click Next.
Configure the cluster disks
Note: by default, all SCSI disks that are not on the same bus as the system disk appear in the managed-disks list. Therefore, if the node has multiple SCSI buses, the list may also include disks that should not be used as shared storage, such as internal SCSI drives. Remove such disks from the managed-disks list.
1. In the Add or Remove Managed Disks dialog box, specify which disks on the shared SCSI bus will be used by the Cluster service. Add or remove disks as needed, and then click Next.
2. Click Next in the Configuring Cluster Networks dialog box.
3. Make sure that the network name and IP address correspond to the network interface of the public network.
4. Select the Enable this network for cluster use check box.
5. Select the option Client access only (public network).
6. Click Next.
7. Configure the private network in the next dialog box. Make sure that the network name and IP address correspond to the network interface of the dedicated network.
8. Select the Enable this network for cluster use check box.
9. Select the option Internal cluster communications only (private network).
10. Click Next.
11. Make sure that the first connection in the list is the dedicated cluster connection, and then click Next.
Important: when setting the connection order, the dedicated cluster connection must be placed first in the list.
12. Enter the unique cluster IP address and subnet mask according to Table 1, and then click Next.
The Cluster Service Configuration Wizard automatically associates the cluster IP address with one of the public or mixed networks. It uses the subnet mask to select the correct network.
13. Click Finish to complete the cluster configuration on the first node.
The Cluster Service Configuration Wizard copies the files required by the Cluster service and completes the installation on the first node. After the files are copied, the Cluster service registry entries are created, the log file is created on the quorum resource, and the Cluster service is started on the first node.
A dialog box appears indicating that the Cluster service has started successfully.
14. Click OK.
15. Close the Add/Remove Programs window.
Verify the cluster installation
Use the Cluster Administrator snap-in to verify that the Cluster service was successfully installed on the first node.
Configure the second node
Note: for this section, keep node 1 and all shared disks powered on, then power on the second node.
Installing the Cluster service on the second node takes less time than on the first node, because the installation is based on the configuration of the first node; only the network settings of the second node need to be configured.
Starting the Cluster service installation on the second node follows exactly the same steps as on the first node, and the first node must be running throughout the installation of the second node.
The installation process is essentially the same as for the first node, except for the following differences:
1. In the Create or Join a Cluster dialog box, select The second or next node in the cluster and click Next.
2. Enter the name of the cluster created earlier (Mycluster in this example), and then click Next.
3. Leave the Connect to cluster as check box cleared. The Cluster Service Configuration Wizard automatically supplies the name of the user account selected during the installation of the first node. Always use the same account that was used for the first cluster node.
4. Enter the account password (if any) and click Next.
5. In the next dialog box, click Finish to complete the configuration.
6. The Cluster service starts. Click OK.
7. Close Add/Remove Programs.
Configure cluster properties
Right-click Cluster Group and click Properties. To test failover and failback, in this experiment the preferred owner was set to SRV1, the failover threshold to 0, and failback to Immediately.
When the above steps are completed correctly, the Oracle HA setup is more than half done, and you can proceed with the Oracle Fail Safe installation. If MSCS is not installed correctly, the following installation cannot proceed, because Oracle Fail Safe is an Oracle product built on Microsoft Cluster Server (MSCS) and requires it to be installed correctly.
Installation steps:
1. Install Oracle 9.0.1 on node 1, select Custom installation, and select create database during installation;
2. Restart Node 1;
3. Install Oracle 9.0.1 on node 2, select Custom installation, and select create database during installation;
4. Restart Node 2;
5. On node 1, create the database that will be added to the cluster. Do not configure Net8 for the new database, that is, do not configure the listener.ora and tnsnames.ora files. Note that the control files, redo log files, and data files must be created on the shared partition of the disk array;
6. Add the ORACLE_SID of the database created in step 5 to the registry on both node 1 and node 2;
7. Install FailSafe 3.2 on node 1;
8. Restart Node 1;
9. Install FailSafe 3.2 on node 2;
10. Restart Node 2;
11. Modify the file C:\winnt\system32\drivers\etc\hosts on both machines as follows:
192.0.35.1      clunode1     (node 1)
192.0.35.2      clunode2     (node 2)
192.0.35.100    mycluster    (alias of the MSCS cluster virtual IP address)
Note that the IP address used by the listener on the nodes should be the virtual IP address;
12. Open Oracle Fail Safe Manager:
Start > Programs > OraHome91 > Oracle Fail Safe Manager
The Add Cluster to Tree dialog box appears; enter the cluster name in it.
You are then prompted for a user name and password with administrative permissions in the domain. After you enter them, the Verify Cluster dialog box appears and verifies that the cluster resources are working;
13. Check that the OracleMSCSService service exists on each node and start it;
14. Start Cluster Administrator on each node (Programs > Administrative Tools > Cluster Administrator). Check that Oracle Services for MSCS appears in the Cluster Group, and that Oracle Database and Oracle TNS Listener appear under Resource Types;
15. Select Create on the Groups menu of Oracle Fail Safe Manager. The Create Group Wizard lets you set the failover and failback policies, and then automatically opens the Add Resource to Group Wizard to add the virtual address to the group (select Resources, then Add to Group);
16. On the Troubleshooting menu, select Verify Standalone Database to verify the Oracle database and Oracle Net configuration. This command confirms that Oracle Fail Safe can access the database and that the standalone database is located on the shared partition;
17. Select Add to Group on the Resources menu and choose Oracle Database to open the Add Resource to Group Wizard, which configures the single-instance Oracle database as an MSCS-based high-availability database server;
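Step 5 above requires every file of the clustered database to live on the shared disk. As an illustration only (the R: drive letter comes from the group contents listed below; the directory layout and file names are assumptions), an init.ora excerpt for such a database might look like:

```
# init.ora excerpt (illustrative): all control files, and likewise the
# redo logs and data files created alongside them, are placed on the
# shared partition R: so either node can open the database after failover.
db_name = LEGEND
control_files = ("R:\oradata\LEGEND\control01.ctl",
                 "R:\oradata\LEGEND\control02.ctl",
                 "R:\oradata\LEGEND\control03.ctl")
```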
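The registry change in step 6 can be made by importing a .reg file on both nodes. A minimal sketch follows; the Oracle home key name (HOME0) is an assumption that depends on the actual installation, and LEGEND is the instance name used in this setup:

```
Windows Registry Editor Version 5.00

; Registers the SID of the shared database under the Oracle home key
; so that either node can start the instance (key name HOME0 assumed).
[HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE\HOME0]
"ORACLE_SID"="LEGEND"
```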
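The note in step 11, that the listener must use the virtual IP address, translates into Oracle Net configuration roughly like the following sketch (the port 1521 and the ORACLE_HOME path are assumptions; mycluster is the virtual-IP alias from the hosts file):

```
# listener.ora (sketch): bind the listener to the cluster alias, which
# resolves to the virtual IP 192.0.35.100, not to a physical node.
LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = mycluster)(PORT = 1521)))

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = LEGEND)
      (ORACLE_HOME = D:\oracle\ora90)))

# tnsnames.ora (sketch): clients connect through the same alias, so a
# failover to the other node is transparent to them.
LEGEND =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = mycluster)(PORT = 1521))
    (CONNECT_DATA = (SID = LEGEND)))
```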
Other notes:
After Oracle Fail Safe is installed, two new resource types appear under Cluster Configuration > Resource Types in Cluster Administrator on Windows 2000: Oracle Database and Oracle TNS Listener, both of which use the resource DLL FsResOdbs.dll.
After you create the group in Fail Safe and add the database to the cluster, the group also appears in Cluster Administrator. The group contains the following resources: an IP Address, the Network Name scsi817, Disk R:, OracleOraHome81TNSListenerFslscsi817, and LEGEND (the database instance name).
The hosts file (C:\winnt\system32\drivers\etc\hosts) now contains:
192.0.35.1      t630r
192.0.35.2      t630l
192.0.35.100    mycluster    (cluster name)
192.168.34.71   scsi817      (IP address and network name used by the Oracle cluster group)
At this point the Oracle HA setup is essentially complete; the next step is to add the application as a service in Cluster Administrator.
This makes application deployment very convenient and gives the system high availability. The solution is inexpensive and effective, but one drawback is that it cannot perform load balancing.
