How to build a two-node IBM GPFS cluster on IBM AIX

Overview

The purpose of this article is to provide a step-by-step guide to installing and configuring a simple two-node GPFS cluster on AIX. The following figure provides a visual representation of the cluster configuration.

Figure 1. Visual representation of cluster configuration

GPFS

GPFS provides a true "shared file system" capability with excellent performance and scalability. It allows a group of computers concurrent access to a common set of file data over a common storage area network (SAN) infrastructure, a network, or a mix of connection types. GPFS provides storage management, information lifecycle management tools, and centralized administration, and allows shared access to file systems from remote GPFS clusters, providing a global namespace.

GPFS provides data tiering, replication, and many other advanced features. Depending on your needs, the configuration can be simple or complex.

Prepare the AIX environment for GPFS

We assume that you have purchased the necessary licenses and software for GPFS. If you have the GPFS software media available, you can copy the GPFS filesets to each AIX node that needs to run GPFS.

In this article, each partition runs AIX version 7.1, Technology Level 2, Service Pack 1:

# oslevel -s
7100-02-01-1245

Each AIX system is configured with seven SAN disks. One disk is used for the AIX operating system (rootvg) and the remaining six disks are for GPFS.

# lspv
hdisk0          00c334b6af00e77b          rootvg          active
hdisk1          none                      None
hdisk2          none                      None
hdisk3          none                      None
hdisk4          none                      None
hdisk5          none                      None
hdisk6          none                      None

The SAN disks that will be used by GPFS are assigned to both nodes (that is, they are shared between the two partitions). The two AIX partitions are configured with virtual Fibre Channel adapters and access their shared storage over the SAN, as shown in the following illustration.

Figure 2. Deployment diagram

Use the chdev command to change the queue_depth, algorithm, and reserve_policy attributes of each GPFS hdisk, as shown below.
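
As a minimal sketch, assuming the disks use the default AIX MPIO path control module and the values verified with lsattr below (algorithm set to round_robin and reserve_policy set to no_reserve), and taking a queue_depth of 32 purely as an illustrative value (replace it with the value recommended for your storage subsystem), the changes might look like this:

for DISK in hdisk1 hdisk2 hdisk3 hdisk4 hdisk5 hdisk6
do
    # Set the path selection algorithm, queue depth and SCSI reserve policy on each GPFS disk
    chdev -l $DISK -a algorithm=round_robin -a queue_depth=32 -a reserve_policy=no_reserve
done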

The lsattr command can be used to verify that each property is set to the correct value:

# lsattr -El hdisk6 -a queue_depth -a algorithm -a reserve_policy
algorithm       round_robin   Algorithm        True
queue_depth                   Queue DEPTH      True
reserve_policy  no_reserve    Reserve Policy   True

The next step is to configure Secure Shell (SSH) so that the two nodes can communicate with each other. When building a GPFS cluster, you must ensure that SSH is configured on the cluster nodes so that password authentication is no longer required. This is done by configuring Rivest-Shamir-Adleman (RSA) key pairs for the root user's SSH configuration. This configuration is required in both directions for all nodes in the GPFS cluster.

The GPFS mm commands rely on this passwordless authentication to work. If the keys are not configured correctly, these commands will prompt for the root password each time, and GPFS cluster operations may fail. A good way to test the configuration is to make sure that SSH commands run without being blocked by a root password prompt.

You can refer to a step-by-step guide to configuring SSH keys on AIX for more detail.
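
As a minimal sketch, assuming OpenSSH is already installed on both nodes and the root RSA keys are generated without a passphrase (the first connection to each node will still prompt for the root password and host key confirmation), the key exchange might look like this:

(generate an RSA key pair for the root user on each node, accepting the defaults)
aixlpar1# ssh-keygen -t rsa
aixlpar2# ssh-keygen -t rsa

(append each node's public key to the other node's authorized_keys file)
aixlpar1# cat ~/.ssh/id_rsa.pub | ssh aixlpar2a "cat >> ~/.ssh/authorized_keys"
aixlpar2# cat ~/.ssh/id_rsa.pub | ssh aixlpar1a "cat >> ~/.ssh/authorized_keys"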

You can use the following commands on each node to confirm that the nodes can communicate with each other over SSH without being prompted for a password:

aixlpar1# ssh aixlpar1a date
aixlpar1# ssh aixlpar2a date

aixlpar2# ssh aixlpar2a date
aixlpar2# ssh aixlpar1a date

If SSH is working, configure the WCOLL (working collective) environment variable for the root user. First, create a text file that lists all of the nodes, one node name per line:

# vi /usr/local/etc/gpfs-nodes.list
aixlpar1a
aixlpar2a

Copy the node file to all nodes in the cluster.

Add the following entry to the root user's .kshrc file. This allows the root user to use the dsh or mmdsh commands to execute commands on all nodes in the GPFS cluster.

export WCOLL=/usr/local/etc/gpfs-nodes.list
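
For example, assuming dsh is available on the nodes (or using mmdsh, which is shipped with GPFS, once the filesets are installed), a single command can then be run against every node listed in the WCOLL file:

# dsh date
# mmdsh date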

The root user's PATH should be modified to ensure that all of the GPFS mm commands are available to the system administrator. Add the following entry to the root user's .kshrc file:

export PATH=$PATH:/usr/sbin/acct:/usr/lpp/mmfs/bin

The /etc/hosts file should be consistent across all nodes in the GPFS cluster. The IP addresses of every node must be added to /etc/hosts on each cluster node. This is recommended even when Domain Name System (DNS) is configured on each node. For example:

# gpfs_cluster1 - cluster-test

## GPFS admin network - en0
10.1.5.110 aixlpar1a aixlpar1
10.1.5.120 aixlpar2a aixlpar2

## GPFS daemon - private network - en1
10.1.7.110 aixlpar1p
10.1.7.120 aixlpar2p

Install GPFS on AIX

The AIX environment is now configured, and the next step is to install the GPFS software on each node. This is a very simple process.
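
As a sketch, assuming the GPFS filesets have been copied to a directory such as /tmp/gpfs on each node (the directory name here is just an example), the base filesets could be installed with installp and then verified with lslpp:

# cd /tmp/gpfs
# installp -agXYd . gpfs
# lslpp -l gpfs.*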
