GPFS is a cluster file system that can be distributed across all disks on all hosts, with striped reads and writes for high performance, a token (lock) management mechanism for good concurrency, and configurable failure groups for high availability. The following is the deployment process of a GPFS cluster.
1. Environment preparation:
yum install -y compat-libstdc++-33 rpm-build kernel-headers kernel-devel imake gcc-c++ libstdc++ redhat-lsb
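It helps to confirm that kernel-devel and kernel-headers match the running kernel, since the GPFS portability layer is compiled against them later. A minimal check (optional sketch):
uname -r                              # running kernel, e.g. 2.6.18-308.el5
rpm -q kernel-devel kernel-headers    # should report the same version as the running kernel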
2. Install GPFS:
Install on every server:
rpm -ivh gpfs.base-3.4.0-0.x86_64.rpm
rpm -ivh gpfs.docs-3.4.0-0.noarch.rpm
rpm -ivh gpfs.gpl-3.4.0-0.noarch.rpm
rpm -ivh gpfs.msg.en_US-3.4.0-0.noarch.rpm
[root@Web02_a base]# rpm -qa | grep gpfs
gpfs.msg.en_US-3.4.0-0
gpfs.gpl-3.4.0-0
gpfs.base-3.4.0-0
gpfs.docs-3.4.0-0
3. GPFS upgrade
Upgrade on every server:
rpm -Uvh gpfs.base-3.4.0-21.x86_64.update.rpm
rpm -Uvh gpfs.docs-3.4.0-21.noarch.rpm
rpm -Uvh gpfs.gpl-3.4.0-21.noarch.rpm
rpm -Uvh gpfs.msg.en_US-3.4.0-21.noarch.rpm
[root@Web02_a update]# rpm -qa | grep gpfs
gpfs.gpl-3.4.0-21
gpfs.msg.en_US-3.4.0-21
gpfs.base-3.4.0-21
gpfs.docs-3.4.0-21
4. Compile GPFS source code
Compile on every server:
[root@Web02_a update]# cd /usr/lpp/mmfs/src/
[root@Web02_a src]# make LINUX_DISTRIBUTION=REDHAT_AS_LINUX Autoconfig
[root@Web02_a src]# make World
[root@Web02_a src]# make InstallImages
[root@Web02_a src]# make rpm    # build the portability-layer rpm; the path of the generated package is printed when it finishes
[root@Web02_a src]# rpm -ivh /usr/src/redhat/RPMS/x86_64/gpfs.gplbin-2.6.18-308.el5-3.4.0-21.x86_64.rpm
[root@Web02_a src]# rpm -qa | grep gpfs
gpfs.gpl-3.4.0-21
gpfs.msg.en_US-3.4.0-21
gpfs.gplbin-2.6.18-308.el5-3.4.0-21
gpfs.base-3.4.0-21
gpfs.docs-3.4.0-21
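As a quick sanity check (a sketch), you can list what the portability-layer package installed and confirm it targets the running kernel; the package name below is the one shown by rpm -qa above:
rpm -ql gpfs.gplbin-2.6.18-308.el5-3.4.0-21 | head    # files installed by the portability-layer package
uname -r                                              # should match the 2.6.18-308.el5 part of the package name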
5. Configure host time synchronization
If the time between servers is not synchronized, GPFS cluster deployment fails.
[root@Web02_a src]# crontab -l
# time sync by yangrong at 2014-1-24
*/10 * * * * /usr/sbin/ntpdate pool.ntp.org >/dev/null 2>&1
[root@Nagios update]# crontab -l
# time sync by yangrong at 2014-1-24
*/10 * * * * /usr/sbin/ntpdate pool.ntp.org >/dev/null 2>&1
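Before continuing, you can check each server's offset against the NTP pool without changing the clock (a sketch; run on both nodes):
/usr/sbin/ntpdate -q pool.ntp.org    # query only, prints the offset without stepping the clock
date                                 # compare local time and time zone on both nodes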
6. Configure passwordless SSH login
Note: passwordless rsh can be configured instead; by default GPFS uses rsh to log in to remote hosts.
[root@Web02_a src]# cd /root/.ssh/
[root@Web02_a .ssh]# ssh-keygen -t rsa
[root@Web02_a .ssh]# cp id_rsa.pub authorized_keys
[root@Web02_a .ssh]# ssh Web02_a    # log in to the local host to test
[root@Web02_a .ssh]# cat /etc/hosts
10.0.0.243 Nagios
10.0.0.236 Web02_a
[root@Web02_a .ssh]# scp -r /root/.ssh root@Nagios:/root    # copy the keys to the other host
[root@Web02_a .ssh]# ssh Nagios
Last login: Fri Jan 24 13:59:19 2014 from 192.168.2.53
[root@Nagios ~]# exit
[root@Nagios src]# ssh Web02_a
Warning: Permanently added the RSA host key for IP address '10.0.0.236' to the list of known hosts.
Last login: Fri Jan 24 15:03:44 2014 from localhost.localdomain
[root@Web02_a ~]# exit
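An equivalent alternative to copying the whole /root/.ssh directory is ssh-copy-id, which only appends the public key on the remote host (a sketch, assuming the openssh-clients package provides ssh-copy-id):
[root@Web02_a .ssh]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@Nagios
[root@Web02_a .ssh]# ssh Nagios hostname    # should print "Nagios" without prompting for a password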
7. Configure the GPFS environment variable
[root@Web02_a .ssh]# echo 'export PATH=$PATH:/usr/lpp/mmfs/bin' >> /etc/profile
[root@Web02_a .ssh]# source /etc/profile
[root@Web02_a .ssh]# mmfs<Tab><Tab>
mmfsadm          mmfsd            mmfsfuncs.Linux
mmfsck           mmfsenv          mmfsmnthelp
mmfsctl          mmfsfuncs        mmfsmount
# The following operations only need to be performed on one server; because mutual trust is configured, the configuration is automatically propagated to the other servers.
8. Create a cluster
[root@Web02_a .ssh]# cat /tmp/gpfsfile
Web02_a:quorum-manager
Nagios:quorum-manager
[root@Web02_a .ssh]# mmcrcluster -N /tmp/gpfsfile -p Web02_a -s Nagios -r /usr/bin/ssh -R /usr/bin/scp
# By default GPFS uses rsh for remote execution and rcp for file copying; the -r and -R options change these to ssh and scp.
# Query command: mmlscluster
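To verify the cluster definition, and to switch an already-created cluster from rsh/rcp to ssh/scp after the fact, something like the following should work (a sketch):
mmlscluster                                    # shows the cluster name, nodes, and the remote shell/copy commands in use
mmchcluster -r /usr/bin/ssh -R /usr/bin/scp    # change the remote shell and remote file copy commands on an existing cluster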
9. License Configuration
[root@Web02_a ~]# mmchlicense server --accept -N Web02_a,Nagios
Note: the server license is for NSD server and quorum nodes, while the client license is for the other nodes. Client nodes can only mount the file system and cannot change the configuration. The command is as follows:
mmchlicense client --accept -N host_a,host_b
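To review how the licenses have been assigned, mmlslicense can be used (a sketch):
mmlslicense -L    # lists the license designation (server or client) of each node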
10. Configure the nsd Disk
In this deployment the GPFS cluster is built on multiple disk partitions.
Current partitions:
[root@Web02_a ~]# fdisk -l /dev/sdb
Disk /dev/sdb: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1          13      104391   83  Linux
/dev/sdb2              14          26      104422+  83  Linux
/dev/sdb3              27          39      104422+  83  Linux
/dev/sdb4              40         130      730957+   5  Extended
/dev/sdb5              40          52      104391   83  Linux
/dev/sdb6              53          65      104391   83  Linux
/dev/sdb7              66          78      104391   83  Linux
[root@Nagios ~]# fdisk -l /dev/sdb
Disk /dev/sdb: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1          13      104391   83  Linux
/dev/sdb2              14          26      104422+  83  Linux
/dev/sdb3              27          39      104422+  83  Linux
/dev/sdb4              40         130      730957+   5  Extended
/dev/sdb5              40          52      104391   83  Linux
/dev/sdb6              53          65      104391   83  Linux
/dev/sdb7              66          78      104391   83  Linux
Edit the NSD configuration file:
[root@Web02_a ~]# cat /tmp/nsdfile
/dev/sdb1:Web02_a:dataAndMetadata:01:
/dev/sdb2:Web02_a:dataAndMetadata:01:
/dev/sdb3:Web02_a:dataAndMetadata:01:
/dev/sdb5:Web02_a:dataAndMetadata:01:
/dev/sdb1:Nagios:dataAndMetadata:02:
/dev/sdb2:Nagios:dataAndMetadata:02:
/dev/sdb3:Nagios:dataAndMetadata:02:
# Note: failure group 01 and failure group 02 do not contain the same number of disks here; unequal groups are fine. With data and metadata replicated across them, the two failure groups are equivalent to RAID 1.
[root@Web02_a ~]# mmcrnsd -F /tmp/nsdfile -v no
# Create the NSDs; mmcrnsd rewrites the NSD file
[root@Web02_a ~]# cat /tmp/nsdfile
# /dev/sdb1:Web02_a:dataAndMetadata:01:
gpfs1nsd:dataAndMetadata:01:system
# /dev/sdb2:Web02_a:dataAndMetadata:01:
gpfs2nsd:dataAndMetadata:01:system
# /dev/sdb3:Web02_a:dataAndMetadata:01:
gpfs3nsd:dataAndMetadata:01:system
# /dev/sdb5:Web02_a:dataAndMetadata:01:
gpfs4nsd:dataAndMetadata:01:system
# /dev/sdb1:Nagios:dataAndMetadata:02:
gpfs5nsd:dataAndMetadata:02:system
# /dev/sdb2:Nagios:dataAndMetadata:02:
gpfs6nsd:dataAndMetadata:02:system
# /dev/sdb3:Nagios:dataAndMetadata:02:
gpfs7nsd:dataAndMetadata:02:system
# /dev/sdb5:Nagios:dataAndMetadata:02:
gpfs8nsd:dataAndMetadata:02:system
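After mmcrnsd, the NSDs and their server assignments can be checked with mmlsnsd (a sketch):
mmlsnsd       # lists each NSD, the file system it belongs to (free disk at this point), and its NSD server
mmlsnsd -m    # maps each NSD back to the local device it was created from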
11. Configure tiebreaker (arbitration) disks
# Role of tiebreaker disks: if half of the disks defined as tiebreaker disks become unavailable, the cluster becomes unavailable.
Likewise, when the number of available disks is less than or equal to half of the total number of disks, the entire file system becomes unavailable.
[root@Web02_a ~]# mmchconfig tiebreakerDisks="gpfs1nsd;gpfs2nsd;gpfs3nsd"
Verifying GPFS is stopped on all nodes ...
mmchconfig: Command successfully completed
mmchconfig: Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.
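As the output above notes, changing tiebreakerDisks requires GPFS to be stopped; a sketch of stopping and starting the daemons around the change and confirming the setting:
mmshutdown -a                      # stop GPFS on all nodes before changing tiebreakerDisks (if it is running)
mmlsconfig | grep -i tiebreaker    # afterwards, confirm the tiebreakerDisks value was applied
mmstartup -a                       # start GPFS on all nodes again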
[root@Web02_a tmp]# mmgetstate -a
 Node number  Node name   GPFS state
------------------------------------------
      1       Web02_a     active
      2       Nagios      active
If the mmgetstate -a status is down, make sure the firewall is disabled, the two servers keep their time synchronized (and use the same time zone), and the hostnames are not mapped to 127.0.0.1 in /etc/hosts.
GPFS error log path: /var/adm/ras/mmfs.log.latest
# To change a node's daemon IP address: mmchnode --daemon-interface=10.0.0.236 -N Web02_a
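When diagnosing a down node, mmgetstate with -L gives more context than the plain state (a sketch):
mmgetstate -a -L    # also shows quorum node counts, how many nodes are reachable, and the node designations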
12. Create a GPFS File System
[root@Web02_a tmp]# mmcrfs vol_data -F /tmp/nsdfile -B 256K -m 2 -r 2 -j cluster -T /vol_data -v no
The following disks of vol_data will be formatted on node Web02_a:
    gpfs1nsd: size 104391 KB
    gpfs2nsd: size 104422 KB
    gpfs3nsd: size 104422 KB
    gpfs4nsd: size 104391 KB
    gpfs9nsd: size 104391 KB
    gpfs10nsd: size 104422 KB
    gpfs11nsd: size 104422 KB
    gpfs12nsd: size 104391 KB
Formatting file system ...
Disks up to size 6.4 GB can be added to storage pool 'system'.
Creating Inode File
Creating Allocation Maps
Creating Log Files
Clearing Inode Allocation Map
Clearing Block Allocation Map
Formatting Allocation Map for storage pool 'system'
Completed creation of file system /dev/vol_data.
mmcrfs: Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.
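Before mounting, the file system attributes (for example the 256K block size and the -m 2/-r 2 replication chosen above) and the state of its disks can be reviewed (a sketch):
mmlsfs vol_data      # shows block size, default and maximum replication, and other attributes
mmlsdisk vol_data    # shows each disk's failure group, availability, and status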
13. Mount the file system:
[root@Web02_a ras]# mmmount /vol_data -a
Fri Jan 24 20:04:25 CST 2014: mmmount: Mounting file systems ...
[root@Web02_a ras]# df -hT
Filesystem     Type   Size  Used Avail Use% Mounted on
/dev/sda3      ext3    19G   11G  7.0G  60% /
/dev/sda1      ext3   190M   12M  169M   7% /boot
tmpfs          tmpfs  123M     0  123M   0% /dev/shm
/dev/vol_data  gpfs   814M  333M  481M  41% /vol_data
[root@Nagios ras]# df -hT
Filesystem     Type   Size  Used Avail Use% Mounted on
/dev/sda3      ext3   6.6G  3.5G  2.8G  56% /
/dev/sda1      ext3   190M   12M  169M   7% /boot
tmpfs          tmpfs  249M     0  249M   0% /dev/shm
/dev/vol_data  gpfs   814M  333M  481M  41% /vol_data
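df only shows the mounted capacity; mmdf reports usage from the GPFS side, broken down per NSD and failure group (a sketch):
mmdf vol_data    # per-disk and per-failure-group free space for the file system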
Installation is complete.
14. Automatic Start
mmchconfig autoload=yes
Or add /usr/lpp/mmfs/bin/mmstartup -a to /etc/rc.local.
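To confirm the autoload setting took effect (a sketch):
mmlsconfig | grep -i autoload    # should show autoload yes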
15. Reliability Test
Take the Nagios server down and check whether the data can still be read normally.
[root@Web02_a ras]# cd /vol_data/
[root@Web02_a vol_data]# cp /etc/hosts .
[root@Web02_a vol_data]# ll
total 0
-rw-r--r-- 1 root root 375 Jan 26 09:25 hosts
[root@Web02_a vol_data]# cat hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1   localhost.localdomain localhost bogon
::1         localhost6.localdomain6 localhost6
10.0.0.236  Web02_a
10.0.0.243  Nagios
[root@Web02_a vol_data]# ssh Nagios
Last login: Sun Jan 26 09:08:28 2014 from Web02_a
[root@Nagios ~]# /etc/init.d/network stop    # take down the Nagios server's NIC
Shutting down interface eth0:
[root@Web02_a vol_data]# mmgetstate -a    # confirm that one node is now down
 Node number  Node name   GPFS state
------------------------------------------
      1       Web02_a     active
      2       Nagios      unknown
[root@Web02_a vol_data]# cat /vol_data/hosts    # the data can still be read normally, so the cluster remains highly available
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1   localhost.localdomain localhost bogon
::1         localhost6.localdomain6 localhost6
10.0.0.236  Web02_a
10.0.0.243  Nagios
Test OK.
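To bring the downed node back into the cluster after the test, a sketch of the recovery steps (assuming the NIC was stopped as above):
[root@Nagios ~]# /etc/init.d/network start    # bring the NIC back up (run from the console)
[root@Web02_a ~]# mmgetstate -a               # wait until Nagios is no longer shown as unknown
[root@Web02_a ~]# mmstartup -a                # start GPFS on any node that is still down
[root@Web02_a ~]# mmmount /vol_data -a        # remount the file system cluster-wide if needed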
The maintenance and command description of GPFS will be provided in the next article.