RHCS + GFS (Red Hat HA + GFS)


1. Environment Introduction

HA management node (luci): 192.168.122.1

HA nodes (ricci): 192.168.122.34, 192.168.122.33

An iSCSI disk shared from the host at 192.168.122.82 is used to create a GFS2 file system on the HA nodes.
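The RHCS stack itself (luci on the management host, ricci on each node) is assumed to be installed and running before the steps below. For completeness, a minimal sketch of bringing it up on RHEL 6 with the stock package and service names (the hostname manager is hypothetical):

[root@manager ~]# yum -y install luci
[root@manager ~]# service luci start    # web UI at https://manager:8084

[root@node34 ~]# yum -y install ricci    # run on both nodes
[root@node34 ~]# passwd ricci            # luci authenticates to each node with the ricci password
[root@node34 ~]# service ricci start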

 

2. iSCSI disk sharing

Server:

The share is backed by LVM to make future expansion easier.
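Because the export is a logical volume, it can be grown later without repartitioning. Only as a rough sketch (each intermediate layer — the exported target size, the partition seen by the nodes, and the clustered LV — has to be resized too, and the initiators must re-login to see the new size; /gfsdata is a hypothetical mount point):

[root@server82 ~]# lvextend -L +500M /dev/ha/hadamo    # grow the backing LV on the storage server
[root@node34 ~]# gfs2_grow /gfsdata                    # finally, grow the mounted GFS2 file system on one node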

[root@server82 ~]# fdisk -cu /dev/sda

 

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Selected partition 4
First sector (11495424-16777215, default 11495424):
Last sector, +sectors or +size{K,M,G} (11495424-16777215, default 16777215): +2G

Command (m for help): t
Partition number (1-4): 4
Hex code (type L to list codes): 8e
Changed system type of partition 4 to 8e (Linux LVM)

Command (m for help): w

[root@server82 ~]# pvcreate /dev/sda4

[root@server82 ~]# vgcreate ha /dev/sda4

[root@server82 ~]# lvcreate -L 1900M -n hadamo ha
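Before exporting, it is worth confirming the stack looks right; pvs, vgs, and lvs are the standard LVM reporting commands:

[root@server82 ~]# pvs    # /dev/sda4 should be listed under VG "ha"
[root@server82 ~]# vgs    # VG "ha", roughly 2 GB
[root@server82 ~]# lvs    # LV "hadamo", about 1.86g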

[root@server82 ~]# yum -y install scsi-target-utils.x86_64

 

[root@server82 ~]# vim /etc/tgt/targets.conf

<target iqn.2013-09.com.example:server.target1>
    backing-store /dev/ha/hadamo
    initiator-address 192.168.122.33
    initiator-address 192.168.122.34
</target>

[root@server82 ~]# /etc/init.d/tgtd start

[root@server82 ~]# tgtadm --lld iscsi --op show --mode target    # check whether the share was exported successfully

Target 1: iqn.2013-09.com.example:server.target1
............
        Backing store path: /dev/ha/hadamo
        Backing store flags:
    Account information:
    ACL information:
        192.168.122.33
        192.168.122.34
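So that the export survives a reboot of the storage server, enable tgtd at boot (stock init script):

[root@server82 ~]# chkconfig tgtd on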

 

Importing the shared disk on the HA nodes:

Perform the following operations on both nodes:

[root@node34 ~]# yum -y install iscsi-initiator-utils.x86_64

[root@node34 ~]# iscsiadm -m discovery -t st -p 192.168.122.82
Starting iscsid:                                           [  OK  ]
192.168.122.82:3260,1 iqn.2013-09.com.example:server.target1

[root@node34 ~]# iscsiadm -m node -T iqn.2013-09.com.example:server.target1 -l

[root@node34 ~]# fdisk -l
..............................
Disk /dev/sda: 1992 MB, 1992294400 bytes
62 heads, 62 sectors/track, 1012 cylinders
Units = cylinders of 3844 * 512 = 1968128 bytes
.................................

Repeat the same steps on node 33 (192.168.122.33).
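To make the imported disk reappear automatically after a node reboots, the session can be pinned to automatic startup; these are standard iscsiadm and chkconfig invocations:

[root@node34 ~]# iscsiadm -m node -T iqn.2013-09.com.example:server.target1 -p 192.168.122.82 --op update -n node.startup -v automatic
[root@node34 ~]# chkconfig iscsi on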

 

 

3. Create and configure a GFS File System

Perform the following operations on one node:

[root@node34 ~]# fdisk -cu /dev/sda
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1            2048     3891199     1944576   83  Linux

 

[root@node34 ~]# pvcreate /dev/sda1

[root@node34 ~]# vgcreate -c y havg /dev/sda1    # -c y marks the VG as clustered

[root@node34 ~]# vgdisplay
  ...
  Clustered             yes

[root@node34 ~]# lvmconf --enable-cluster    # switch lvm.conf to cluster-aware locking

[root@node34 ~]# /etc/init.d/clvmd restart
# With clvmd running, a VG created on one node is synchronized to the other node.
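clvmd depends on the cluster manager, so both services should run on both nodes and start at boot; a brief sketch with the stock RHEL 6 init scripts:

[root@node34 ~]# chkconfig cman on; chkconfig clvmd on
[root@node33 ~]# chkconfig cman on; chkconfig clvmd on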

[root@node34 ~]# lvcreate -L 1000M -n hadamo havg

 

If the following error occurs:

Error locking on node 192.168.122.33: Volume group for uuid not found: ...
Failed to activate new LV.

the VG has not been synchronized to the other node. On that node, log out of the target and log back in:

[root@node33 ~]# iscsiadm -m node -T iqn.2013-09.com.example:server.target1 -p 192.168.122.82:3260 --logout
[root@node33 ~]# iscsiadm -m node -T iqn.2013-09.com.example:server.target1 -l

Re-importing the disk brings the two nodes back in sync; then create the LV again.

 

After creation:

[root@node34 ~]# lvs
  LV     VG   Attr     LSize    Pool Origin Data%  Move Log Copy%  Convert
  hadamo havg -wi-a--- 1000.00m

[root@node33 ~]# lvs
  LV     VG   Attr     LSize    Pool Origin Data%  Move Log Copy%  Convert
  hadamo havg -wi-a--- 1000.00m

 

Create a GFS file system:

mkfs.gfs2 -p lock_dlm -t wangzi_1:gfs2 -j 3 /dev/havg/hadamo    # format as gfs2

-p lock_dlm selects the cluster-wide DLM locking protocol. Without it, the partition behaves like an ordinary local file system (e.g. ext3) when mounted on both systems at once: the two nodes cannot synchronize their view of the data.

-t wangzi_1:gfs2 sets the DLM lock table name, in the form clustername:fsname. wangzi_1 must match the cluster name; gfs2 here is simply a label for this GFS partition.

-j 3 sets the number of journals, which is the maximum number of nodes that can mount the partition simultaneously. It can be adjusted dynamically while the file system is in use (see the sketch after this list); the usual value is the number of nodes + 1.
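A sketch of raising the journal count later: gfs2_jadd operates on a mounted GFS2 file system (/gfsdata is a hypothetical mount point):

[root@node34 ~]# gfs2_jadd -j 1 /gfsdata    # add one journal, allowing one more node to mount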

 

 

[root@node34 cluster]# mkfs.gfs2 -p lock_dlm -t wangzi_1:gfs2 -j 3 /dev/havg/hadamo
This will destroy any data on /dev/havg/hadamo.
It appears to contain: symbolic link to `../dm-2'

Are you sure you want to proceed? [y/n] y

Device:                    /dev/havg/hadamo
Blocksize:                 4096
Device Size                0.98 GB (256000 blocks)
Filesystem Size:           0.98 GB (255997 blocks)
Journals:                  3
Resource Groups:           4
Locking Protocol:          "lock_dlm"
Lock Table:                "wangzi_1:gfs2"
UUID:                      c62e7ef7-0179-8f8c-d6db-d15278ce4fc8
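With the file system created, it can be mounted on all nodes at the same time (the cluster stack must be running, since lock_dlm needs it). A quick cross-node check, as a sketch (/gfsdata is a hypothetical mount point):

[root@node34 ~]# mkdir -p /gfsdata
[root@node34 ~]# mount -t gfs2 /dev/havg/hadamo /gfsdata
[root@node34 ~]# echo hello > /gfsdata/test

[root@node33 ~]# mkdir -p /gfsdata
[root@node33 ~]# mount -t gfs2 /dev/havg/hadamo /gfsdata
[root@node33 ~]# cat /gfsdata/test    # the file written on node34 is visible here
hello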

