Create a cluster under Red Hat

Environment description:
  
1. Hardware:
Note: The standard configuration calls for at least four NICs and related hardware. However, due to hardware constraints, some modifications have been made to the hardware and settings below.

Servers: PE4300 and PE4600
NICs: two
RAID: two PERC2/SC cards
Storage: PV220S and SCSI cables
  
2. Software:
OS: Red Hat Advanced Server 2.1
Kernel: 2.4.9-e.3smp
  
Address allocation:
IP: node1: 10.0.0.1/8
node2: 10.0.0.2/8
Cluster alias IP: 10.0.0.3. Unlike the cluster IP address in Microsoft Cluster Server, this address is used only for management.
  
Basic steps:
  
   I. Install the OS
Note: To ensure a smooth installation, we recommend removing the RAID card first; otherwise the system cannot boot normally after the files have been copied. When an external storage enclosure is attached through an add-in card (RAID/SCSI), the card is detected ahead of the local hard disk, which changes the local device names.
  
Follow the usual steps to install the OS. After the system is installed, put the RAID or SCSI card back into the host. The system will automatically detect the new device on the next restart. However, to ensure that the device comes up at every system boot, you still need to do the following two steps:
  
A. Edit the /etc/modules.conf file and add the following lines:

alias scsi_hostadapter megaraid
options scsi_mod max_scsi_luns=255
(The options line can be added in advance; the system requires it because the shared storage enclosure presents multiple LUNs.)
  
B. Execute:

mkinitrd initrd-2.4.9-e.3smp.img 2.4.9-e.3smp    (the image name is determined by the kernel version)
  
Example:
/etc/modules.conf:

alias scsi_hostadapter megaraid
options scsi_mod max_scsi_luns=255
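
The new initrd only takes effect if the boot loader points at it, so check the boot loader entry after running mkinitrd. A minimal sketch of the matching grub.conf stanza, assuming GRUB is the boot loader; the (hd0,0) location and root=/dev/sda2 device are illustrative assumptions, not taken from this setup:

Example:
/boot/grub/grub.conf:

title Red Hat Linux Advanced Server (2.4.9-e.3smp)
        root (hd0,0)
        kernel /vmlinuz-2.4.9-e.3smp ro root=/dev/sda2
        initrd /initrd-2.4.9-e.3smp.img

If LILO is used instead, add an equivalent initrd= line to /etc/lilo.conf and re-run /sbin/lilo.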
  
  
   II. Physical connection
This step is the same as under W2K: first create the LUN with one node, and then have the other node read the prepared LUN from the disks. At the same time, ensure that the Cluster function of the two RAID cards is enabled and that their SCSI IDs do not conflict, and switch the PV220S to Cluster mode.
  
   III. Create and prepare the partitions
1. Create the partitions
Note: We recommend powering on only one node at this point. After the system starts, it can recognize the newly created LUN on the enclosure. Device names are usually assigned in sequence after the local hard disk; for example, if the local device is /dev/sda, the new device is /dev/sdb.

Partitioning principles and requirements: A quorum partition must be at least 10 MB, must be a raw device, and must not carry a file system.
The quorum partitions may only be used for Cluster status and configuration information.
Quorum requires two partitions: a primary and a shadow.
Cluster application service partitions: one partition is required per cluster service. For example, to create three cluster application services (SQL, NFS, and Samba), you must create a partition for each of the three services.
  
For example, run fdisk /dev/sdb and create the new partitions:

/dev/sdb1 -> quorum primary partition
/dev/sdb2 -> quorum shadow partition
/dev/sdb3 -> SQL partition
/dev/sdb5 -> NFS partition
/dev/sdb6 -> Samba partition
(A PC partition table holds only four primary entries, so create /dev/sdb4 as an extended partition and place the NFS and Samba partitions inside it as logical partitions.)
  
Note: After creating the partitions, you must restart the host; we recommend restarting all of the devices so that every node re-reads the new partition table.
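
For reference, the fdisk dialogue for the quorum primary partition looks roughly like the sketch below (the +10M size is illustrative; repeat the n step for the remaining partitions, choosing "e" for the extended partition, then write the table with w):

Example:
# fdisk /dev/sdb
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (...): <Enter>
Last cylinder or +size or +sizeM: +10M
Command (m for help): w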
  
2. Create the file systems (format the partitions)

Note: The quorum partitions must be raw devices, so do not format them. However, the other partitions must be formatted, and the block size should be raised to 4096 from the default of 1024.

Example:
mkfs -t ext2 -j -b 4096 /dev/sdbX    ("X" indicates the partition number)
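
With the layout from step 1, the three service partitions can be formatted in one pass; a small sketch, assuming a bash shell and the partition numbers used above:

Example:
# for p in 3 5 6; do
>     mkfs -t ext2 -j -b 4096 /dev/sdb$p
> done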
  
3. Create the quorum partitions for the Cluster
Note: Run cat /proc/devices to check whether the system supports raw devices. If you see the following output, the system supports them:
  
162 raw
  
Edit the /etc/sysconfig/rawdevices file on both servers to bind the partitions to raw devices.
  
Example: /etc/sysconfig/rawdevices
# format:  <rawdev> <major> <minor>
#          <rawdev> <blockdev>
# example: /dev/raw/raw1 /dev/sda1
#          /dev/raw/raw2 8 5
/dev/raw/raw1 /dev/sdb1
/dev/raw/raw2 /dev/sdb2
  
Then restart the service: service rawdevices restart
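
To confirm that the bindings took effect, and to make them persist across reboots, the standard raw query and chkconfig commands can be used; the major/minor numbers shown are what /dev/sdb1 and /dev/sdb2 would normally map to:

Example:
# raw -qa
/dev/raw/raw1:  bound to major 8, minor 17
/dev/raw/raw2:  bound to major 8, minor 18
# chkconfig rawdevices on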
  
4. Check and verify the quorum partitions

On both nodes, run cludiskutil -p and make sure that both nodes produce the following output:
  
----- Shared State Header ------
Magic # = 0x39119fcd
Version = 1
Updated on Thu Sep 14 05:43:18 2000
Updated by node 0
  
   IV. Create the Cluster service
Note: Ensure that all NICs work properly and that the basic network settings are in place.
  
1. Edit the /etc/hosts file

127.0.0.1 localhost.localdomain localhost
10.0.0.1 node1.test.com node1
10.0.0.2 node2.test.com node2
10.0.0.3 Clusteralias.test.com Clusteralias
  
2. Run /sbin/cluconfig

Note: This automatically generates the cluster configuration file /etc/cluster.conf. This step needs to be run on only one of the nodes.
  
You will see the following output:
  
Enter cluster name [Cluster]: Cluster
Enter IP address for cluster alias [x.x]: 10.0.0.3
--------------------------------
Information for Cluster Member 0
--------------------------------
Enter name of cluster member [storage0]: node1
Looking for host node1 (may take a few seconds)...
Enter number of heartbeat channels (minimum = 1) [1]: 1
Information about Channel 0
Channel type: net or serial [net]:
Enter hostname of the cluster member on heartbeat channel 0 \
[node1]: node1
Looking for host node1 (may take a few seconds)...
Information about Quorum Partitions
Enter Primary Quorum Partition [/dev/raw/raw1]: /dev/raw/raw1
Enter Shadow Quorum Partition [/dev/raw/raw2]: /dev/raw/raw2
Information About the Power Switch That Power Cycles Member 'node1'
Choose one of the following power switches:
  o NONE
  o RPS10
  o BAYTECH
  o APCSERIAL
  o APCMASTER
  o WTI_NPS
Power switch [NONE]: NONE

--------------------------------
Information for Cluster Member 1
--------------------------------
Enter name of cluster member [node2]: node2
Looking for host node2 (may take a few seconds)...
Information about Channel 0
Enter hostname of the cluster member on heartbeat channel 0 \
[node2]: node2
Looking for host node2 (may take a few seconds)...
Information about Quorum Partitions
Enter Primary Quorum Partition [/dev/raw/raw1]: /dev/raw/raw1
Enter Shadow Quorum Partition [/dev/raw/raw2]: /dev/raw/raw2
Information About the Power Switch That Power Cycles Member 'node2'
Choose one of the following power switches:
  o NONE
  o RPS10
  o BAYTECH
  o APCSERIAL
  o APCMASTER
  o WTI_NPS
Power switch [NONE]: NONE

Cluster name: Cluster
Cluster alias IP address: 10.0.0.3
Cluster alias netmask: 255.0.0.0
--------------------
Member 0 Information
--------------------
Name: node1
Primary quorum partition: /dev/raw/raw1
Shadow quorum partition: /dev/raw/raw2
Heartbeat channels: 1
Channel type: net, Name: node1
--------------------
Member 1 Information
--------------------
Name: node2
Primary quorum partition: /dev/raw/raw1
Shadow quorum partition: /dev/raw/raw2
Heartbeat channels: 1
Channel type: net, Name: node2
Save the cluster member information? yes/no [yes]:
Writing to configuration file... done
Configuration information has been saved to /etc/cluster.conf.
----------------------------
Setting up Quorum Partitions
----------------------------
Running cludiskutil -I to initialize the quorum partitions: done
Saving configuration information to quorum partitions: done
Do you wish to allow remote monitoring of the cluster? yes/no \
[yes]: yes
----------------------------------------------------------------
Configuration on this member is complete.
To configure the next member, invoke the following command on that system:
# /sbin/cluconfig --init=/dev/raw/raw1
See the manual to complete the cluster installation
  
3. Prepare the second node

Run cluconfig --init=/dev/raw/raw1
  
4. Start the Cluster service

Run service cluster start on both nodes.
You will see daemons such as the following:
• cluquorumd - quorum daemon
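
Alongside cluquorumd, the clumanager package on this release also starts companion daemons, including cluhbd (the heartbeat daemon) and clusvcmgrd (the cluster service manager daemon). To verify the cluster and begin defining the SQL, NFS, and Samba services, the package's own tools can be used; a short sketch, assuming a standard clumanager installation:

Example:
# clustat                  <- show cluster members, quorum state, and services
# chkconfig cluster on     <- start the cluster service automatically at boot
# cluadmin                 <- interactive tool for adding and managing services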