High Availability is combined with gfs2 to implement cluster file systems and cluster logical volumes.

Source: Internet
Author: User

In what scenarios is a cluster file system applicable? To summarize in one sentence: when multiple nodes need to read and write the same file system concurrently, a cluster file system is required, because it propagates the lock information held on the file system to every node.


Experiment 1: Create a gfs2 file system on the disk shared via iSCSI, so that multiple nodes can mount the same file system with data kept synchronized.

Lab platform: RHEL 6

Environment topology:


Ansible Configuration

iSCSI Server Configuration


Use the control node to install the required packages on all three nodes.

  • ansible all -m shell -a 'yum install cman rgmanager -y'

  • ansible all -m shell -a 'yum install gfs2-utils -y'

  • ansible all -m shell -a 'yum install iscsi-initiator-utils -y'
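The `all` pattern in these commands assumes an Ansible inventory on the control node listing the three cluster nodes. A minimal sketch (the inventory path is the Ansible default; ungrouped hosts fall into the built-in `all` group):

```
# /etc/ansible/hosts -- hypothetical inventory for this lab
admin1.tuchao.com
admin2.tuchao.com
admin3.tuchao.com
```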

Set Time Synchronization for each node

  • ansible all -m shell -a 'ntpdate asia.pool.ntp.org'

  • ansible all -m shell -a 'date'


Configure the nodes to log on to the iSCSI server

  • ansible all -m shell -a 'iscsiadm -m discovery -t sendtargets -p 192.168.18.205'

  • ansible all -m shell -a 'iscsiadm -m node -T iqn.2014-07.com.tuchao:share1 -p 192.168.18.205:3260 -l'

After a successful login, you can use fdisk -l to see the disk (sdb) shared via iSCSI.

Go to admin1.tuchao.com to configure high availability.

Create a cluster

  • ccs_tool create gcluster

After the command is executed successfully, a cluster.conf file is generated in the /etc/cluster directory.

Add Node

  • ccs_tool addnode -n 1 admin1.tuchao.com

  • ccs_tool addnode -n 2 admin2.tuchao.com

  • ccs_tool addnode -n 3 admin3.tuchao.com
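After these addnode commands, /etc/cluster/cluster.conf should hold one clusternode entry per node. A simplified sketch of the resulting file (the config_version value and any additional elements such as fencing will vary):

```
<?xml version="1.0"?>
<cluster name="gcluster" config_version="4">
  <clusternodes>
    <clusternode name="admin1.tuchao.com" nodeid="1"/>
    <clusternode name="admin2.tuchao.com" nodeid="2"/>
    <clusternode name="admin3.tuchao.com" nodeid="3"/>
  </clusternodes>
</cluster>
```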

Synchronize the configuration file to each node

  • scp cluster.conf admin2:/etc/cluster/

  • scp cluster.conf admin3:/etc/cluster/

Start cman and rgmanager on each node, and enable them at boot.

  • ansible all -m shell -a 'service cman start'

  • ansible all -m shell -a 'service rgmanager start'

  • ansible all -m shell -a 'chkconfig cman on'

  • ansible all -m shell -a 'chkconfig rgmanager on'

Create a partition on the shared iSCSI disk from one node, then reboot each node so the new partition table is recognized.


Format /dev/sdb1 as a gfs2 file system

Command help:

mkfs.gfs2

-J // specify the journal size

-j // specify the number of journals

-p // specify the lock protocol

-t // specify the name of the lock table

  • mkfs.gfs2 -j 3 -p lock_dlm -t gcluster:sdb1 /dev/sdb1


Modify fstab to add the mount entry, synchronize the file to each node, and enable automatic mounting of gfs2.

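The fstab entry being synchronized would look something like the line below (the mount point /mydata and the options are assumptions; the device and file system type follow the experiment):

```
/dev/sdb1    /mydata    gfs2    defaults    0 0
```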

  • scp /etc/fstab admin2:/etc/

  • scp /etc/fstab admin3:/etc/

  • ansible all -m shell -a 'service gfs2 start'

  • ansible all -m shell -a 'chkconfig gfs2 on'


This completes the cluster file system. You can test the results on your own.

gfs2_grow // re-checks the device boundary; used together with cluster logical volume extension

gfs2_jadd // add journals

gfs2_tool freeze // freeze the device

gfs2_tool journals // display the journals


Experiment 2: Create and expand cluster logical volumes.

First, install the required package.

  • ansible all -m shell -a 'yum install lvm2-cluster -y'

Change the logical volume locking type on the three nodes to the cluster type, and verify it in the /etc/lvm/lvm.conf file.

  • ansible all -m shell -a 'lvmconf --enable-cluster'

  • ansible all -m shell -a "grep -i '^[[:space:]]*locking_type' /etc/lvm/lvm.conf"
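If lvmconf --enable-cluster took effect, the grep on every node should report the clustered locking type (in lvm.conf, locking type 3 means built-in clustered locking, used with clvmd):

```
locking_type = 3
```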


Start the service

  • ansible all -m shell -a 'service clvmd start'

Create the PV, VG, and LV, then format and mount. (The lvcreate step, sized here at 5 GB to match the extension step below, is implied by the later commands.)

  • pvcreate /dev/sdb2

  • vgcreate cvg /dev/sdb2

  • lvcreate -L 5G -n cllv cvg

  • mkfs.gfs2 -j 2 -p lock_dlm -t gcluster:cllv /dev/cvg/cllv

Because we specified only two journals, only two nodes can mount the file system at a time, so mounting from the third node reports an error.


If you add a journal and mount again, no error is reported.

  • gfs2_jadd -j 1 /dev/cvg/cllv

Expand the cluster logical volume


The volume was created with only 5 GB of space; now add another 5 GB to it.

  • lvextend -L +5G /dev/cvg/cllv

  • gfs2_grow /dev/cvg/cllv



Lab completed

This article is from the blog of "bad guys". For more information, contact the author!
