Summary
This step-by-step guide describes how to install the Cluster service on servers running the Windows 2000 Advanced Server and Windows 2000 Datacenter Server operating systems. It describes how to install the Cluster service on cluster nodes. It does not describe how to install cluster applications, but walks you through a typical two-node cluster installation.
Introduction
A server cluster is a group of independent servers that run the Cluster service and work together as a single system. Clustering multiple servers running Windows 2000 Advanced Server or Windows 2000 Datacenter Server greatly improves the availability, scalability, and manageability of resources and applications.
The purpose of a server cluster is to preserve client access to applications and resources during failures and planned outages. If one server in the cluster is unavailable because of a failure or maintenance, its resources and applications move to another available cluster node.
For clustered systems, the term "high availability" is used rather than "fault tolerance", because fault-tolerance technology provides a higher level of recovery and resilience. Fault-tolerant servers typically use a high degree of hardware redundancy plus specialized software to recover almost instantly from any single hardware or software failure. These solutions cost significantly more than a clustering solution, because organizations must pay for redundant hardware that sits idle and is used only when a failure occurs. Fault-tolerant servers are used for applications that involve high-value, high-rate transactions, such as ticketing systems, ATMs, or stock exchanges.
Although the Cluster service cannot guarantee uninterrupted operation, it provides a level of availability sufficient for most mission-critical applications. The Cluster service can monitor applications and resources, automatically recognize most failure conditions, and recover from them. This provides greater flexibility in managing the workload within a cluster and improves the overall availability of the system.
The Cluster service has the following advantages:
High availability. With the Cluster service, ownership of resources such as disk drives and IP addresses is automatically transferred from a failed server to a surviving server. When a system or application in the cluster fails, the cluster software restarts the failed application on a surviving server, or disperses the work from the failed node to the remaining nodes. As a result, users experience only a momentary pause in service.
Failback. When a failed server comes back online, the Cluster service automatically rebalances the workload in the cluster.
Manageability. You can use Cluster Administrator to manage a cluster as a single system and to manage applications as if they were running on a single server. You can move applications between the different servers within a cluster by dragging and dropping cluster objects, and you can move data in the same way. These operations can be used to manually balance server workloads and to unload servers for planned maintenance. You can also monitor the status of the cluster, all of its nodes, and its resources from anywhere on the network.
Scalability. The Cluster service can grow to meet rising demands. When the overall load of a cluster-aware application exceeds the capabilities of the cluster, additional nodes can be added.
This white paper provides instructions for installing the Cluster service on servers running Windows 2000 Advanced Server and Windows 2000 Datacenter Server. It describes how to install the Cluster service on cluster nodes. It does not describe how to install cluster applications, but walks you through the installation of a typical two-node cluster.
Checklist for Cluster Service Installation
This checklist helps you prepare for installation. Step-by-step instructions follow the checklist.
Software requirements
Install Microsoft Windows 2000 Advanced Server or Windows 2000 Datacenter Server on all computers in the cluster.
A name resolution method, such as Domain Name System (DNS), Windows Internet Name Service (WINS), or HOSTS files.
Terminal Services is recommended for remote cluster administration.
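If HOSTS files are part of your name resolution strategy, each node needs entries for every cluster address. A minimal sketch follows; the node names and addresses are hypothetical placeholders, not values prescribed by this guide:

```
# %SystemRoot%\system32\drivers\etc\hosts on each node (example values)
172.16.12.1     node1           # node 1, public network
172.16.12.2     node2           # node 2, public network
172.16.12.10    mycluster       # cluster IP address
10.1.1.1        node1-priv      # node 1, private cluster network
10.1.1.2        node2-priv      # node 2, private cluster network
```

With DNS or WINS, the same names would instead be registered centrally, which avoids keeping per-node files in sync.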
Hardware requirements
The hardware for a Cluster service node must meet the hardware requirements for Windows 2000 Advanced Server or Windows 2000 Datacenter Server. These requirements can be found on the product compatibility search page.
The cluster hardware must be on the Cluster service Hardware Compatibility List (HCL). To find the latest version of the cluster service HCL, go to the Windows Hardware Compatibility List and search on "cluster".
The following configuration applies to two HCL-approved computers:
A boot disk with Windows 2000 Advanced Server or Windows 2000 Datacenter Server installed. The boot disk must not be on the shared storage bus described below.
A separate PCI storage host adapter (SCSI or Fibre Channel) for the shared disks, in addition to the boot disk adapter.
Two PCI network adapters installed in each machine in the cluster.
An external disk storage unit that connects to both computers and is on the HCL. It will be used as the clustered disk. A redundant array of independent disks (RAID) is recommended.
Storage cables to connect the shared storage device to all computers. Refer to the manufacturer's instructions when configuring storage devices. See the Appendix for more information.
All hardware should be identical, slot for slot and card for card. This simplifies configuration and eliminates potential compatibility problems.
Network requirements
A unique NetBIOS cluster name.
Five unique static IP addresses: two for the network adapters on the private network, two for the network adapters on the public network, and one for the cluster itself.
A domain user account for the Cluster service (all nodes must be members of the same domain).
Each node should have two network adapters: one for connection to the public network and the other for the node-to-node private cluster network. Using a single adapter for both connections is not a supported configuration; a separate private network adapter is required for HCL certification.
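The addressing requirements above can be sanity-checked before installation. The following sketch validates a planned addressing scheme; the subnets, names, and addresses are hypothetical examples, not values from this guide:

```python
# Sketch: sanity-check a planned two-node cluster addressing scheme.
# All names, subnets, and addresses below are hypothetical examples.
import ipaddress

plan = {
    "node1-public":  "172.16.12.1",
    "node2-public":  "172.16.12.2",
    "node1-private": "10.1.1.1",
    "node2-private": "10.1.1.2",
    "cluster":       "172.16.12.10",   # address for the cluster itself
}

def check_plan(plan):
    addrs = [ipaddress.ip_address(a) for a in plan.values()]
    # Five unique static IP addresses are required.
    assert len(plan) == 5 and len(set(addrs)) == 5, "addresses must be unique"
    # The private (node-to-node) network must be a separate subnet
    # from the public network.
    public = ipaddress.ip_network("172.16.12.0/24")
    private = ipaddress.ip_network("10.1.1.0/24")
    for name, a in plan.items():
        net = private if name.endswith("private") else public
        assert ipaddress.ip_address(a) in net, f"{name} not in expected subnet"
    return True

check_plan(plan)
```

Catching a duplicate or mis-subnetted address at the planning stage is much cheaper than discovering it after the Cluster service is configured.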
Shared disk requirements:
All shared disks, including the quorum disk, must be physically attached to the shared bus.
Verify that the disks attached to the shared bus can be seen from all nodes. This can be checked at the host adapter setup level. Refer to the manufacturer's documentation for adapter-specific instructions.
SCSI devices must be assigned unique SCSI IDs and properly terminated, in accordance with the manufacturer's instructions.
All shared disks must be configured as basic, not dynamic.
All partitions on the shared disks must be formatted as NTFS.
Although not required, fault-tolerant RAID is strongly recommended for all disks. The key concept here is a fault-tolerant RAID configuration, not a stripe set without parity.
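The difference between a fault-tolerant RAID configuration and a plain stripe set can be illustrated with a quick capacity and failure-tolerance comparison. This sketch uses standard textbook RAID definitions and is not specific to any vendor's implementation:

```python
# Illustrative comparison of common RAID levels for n disks of equal size.
# Standard RAID definitions; not specific to any vendor's implementation.
def raid_summary(level, n_disks, disk_gb):
    if level == 0:        # stripe set without parity: no fault tolerance
        return {"usable_gb": n_disks * disk_gb, "disk_failures_tolerated": 0}
    if level == 1:        # mirroring (two disks)
        return {"usable_gb": disk_gb, "disk_failures_tolerated": 1}
    if level == 5:        # striping with parity (three or more disks)
        return {"usable_gb": (n_disks - 1) * disk_gb,
                "disk_failures_tolerated": 1}
    raise ValueError("unsupported RAID level in this sketch")

# A RAID-0 stripe set maximizes capacity but loses all data on one failure;
# RAID-1 and RAID-5 trade some capacity for surviving a single disk failure.
print(raid_summary(0, 3, 100))  # {'usable_gb': 300, 'disk_failures_tolerated': 0}
print(raid_summary(5, 3, 100))  # {'usable_gb': 200, 'disk_failures_tolerated': 1}
```

A stripe set without parity (RAID-0) gives the most usable space, which is exactly why it is tempting and exactly why it is unsuitable for cluster disks: a single disk failure takes down the shared storage for the whole cluster.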