RHCS: using the ccs_tool command to create an HA cluster and a GFS2 cluster file system


Prepare the Environment

node1: 192.168.139.2
node2: 192.168.139.4
node4: 192.168.139.8
node5: 192.168.139.9


node1 acts as the iSCSI target.

node2, node4, and node5 act as the initiators.


node2, node4, and node5 have cman + rgmanager installed and are configured as a three-node RHCS high-availability cluster. Because GFS2 is a cluster file system, a failed node must be fenced with the help of the HA cluster, which also provides the message layer used to exchange node state information.

Because the target that is discovered and logged in to will be formatted as a cluster file system, gfs2-utils must be installed on node2, node4, and node5.
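A minimal sketch of that install step, using yum as elsewhere in this article (run it on each of node2, node4, and node5):

[root@node2 ~]# yum -y install gfs2-utils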


First, stop the cluster service that was created earlier with luci/ricci (a cluster service from a previous experiment, unrelated to this one).


[root@node2 mnt]# clusvcadm -d Web_service

Local machine disabling service:Web_service...

[root@node2 mnt]# clustat

Cluster Status for zxl @ Wed Dec 21 17:55:46 2016

Member Status: Quorate


 Member Name                        ID   Status
 ------ ----                        ---- ------
 node2.zxl.com                         1 Online, Local, rgmanager
 node4.zxl.com                         2 Online, rgmanager

 Service Name              Owner (Last)              State
 ------- ----              ----- ------              -----
 service:Web_service       (node2.zxl.com)           disabled

[root@node2 mnt]# service rgmanager stop
[root@node2 mnt]# service cman stop

[root@node4 mnt]# service rgmanager stop
[root@node4 mnt]# service cman stop

[root@node2 mnt]# rm -rf /etc/cluster/cluster.conf
[root@node4 mnt]# rm -rf /etc/cluster/cluster.conf

Every time the configuration file changes, a backup copy is kept; erase those as well.

[root@node2 mnt]# ls /etc/cluster/
cluster.conf.bak  cman-notify.d
[root@node2 mnt]# rm -f /etc/cluster/*



If cman and rgmanager are not installed, execute the following command on each node:

[root@node2 mnt]# yum -y install cman rgmanager


Create a cluster with the ccs_tool command; the cluster is named mycluster.

[root@node2 mnt]# ccs_tool create mycluster
[root@node2 mnt]# cat /etc/cluster/cluster.conf

<?xml version="1.0"?>
<cluster name="mycluster" config_version="1">

  <clusternodes>
  </clusternodes>

  <fencedevices>
  </fencedevices>

  <rm>
    <failoverdomains/>
    <resources/>
  </rm>
</cluster>


Add a fence device (an RHCS cluster must have one).

[root@node2 mnt]# ccs_tool addfence meatware fence_manual
[root@node2 mnt]# ccs_tool lsfence
Name             Agent
meatware         fence_manual


-v    specifies the number of votes the node holds
-n    specifies the node identifier
-f    specifies the fence device name


Add three nodes; an RHCS cluster must have at least three nodes.

[root@node2 mnt]# ccs_tool addnode -v 1 -n 1 -f meatware node2.zxl.com
[root@node2 mnt]# ccs_tool addnode -v 1 -n 2 -f meatware node4.zxl.com
[root@node2 mnt]# ccs_tool addnode -v 1 -n 3 -f meatware node5.zxl.com
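After these commands, cluster.conf should look roughly like the sketch below (an assumption based on what ccs_tool generates; the fence method name and attribute order may differ on your system). Note that config_version has grown to 5, one increment per ccs_tool change, which matches the lsnode output that follows.

<?xml version="1.0"?>
<cluster name="mycluster" config_version="5">

  <clusternodes>
    <clusternode name="node2.zxl.com" votes="1" nodeid="1">
      <fence>
        <method name="single">
          <device name="meatware"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="node4.zxl.com" votes="1" nodeid="2">
      <fence>
        <method name="single">
          <device name="meatware"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="node5.zxl.com" votes="1" nodeid="3">
      <fence>
        <method name="single">
          <device name="meatware"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>

  <fencedevices>
    <fencedevice name="meatware" agent="fence_manual"/>
  </fencedevices>

  <rm>
    <failoverdomains/>
    <resources/>
  </rm>
</cluster>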

View the cluster nodes:

[root@node2 mnt]# ccs_tool lsnode

Cluster name: mycluster, config_version: 5

Nodename                        Votes Nodeid Fencetype
node2.zxl.com                      1    1    meatware
node4.zxl.com                      1    2    meatware
node5.zxl.com                      1    3    meatware

Copy the configuration file to the other nodes; once the cluster is running, the RHCS ccsd process keeps it synchronized automatically.

[root@node2 mnt]# scp /etc/cluster/cluster.conf node4:/etc/cluster/
[root@node2 mnt]# scp /etc/cluster/cluster.conf node5:/etc/cluster/

Start cman and rgmanager on each node:

[root@node2 mnt]# service cman start
[root@node2 mnt]# service rgmanager start

[root@node4 mnt]# service cman start
[root@node4 mnt]# service rgmanager start

[root@node5 mnt]# service cman start
[root@node5 mnt]# service rgmanager start

[root@node2 mnt]# clustat

Cluster Status for mycluster @ Wed Dec 21 18:40:26 2016

Member Status: Quorate

 Member Name                        ID   Status
 ------ ----                        ---- ------
 node2.zxl.com                         1 Online, Local
 node4.zxl.com                         2 Online
 node5.zxl.com                         3 Online


[root@node2 mnt]# rpm -ql gfs2-utils
/etc/rc.d/init.d/gfs2
/sbin/fsck.gfs2
/sbin/mkfs.gfs2       \\ formats a device, creating a GFS2 file system
/sbin/mount.gfs2      \\ mounts a GFS2 file system
/usr/sbin/gfs2_convert


Usage of the mkfs.gfs2 command:

-j    specifies the number of journals, which determines how many nodes can mount the file system; on a clustered file system every node must have its own journal
-J    specifies the journal size; the default is 128 MB
-p {lock_dlm|lock_nolock}    distributed lock management | no locking
-t <name>    specifies the lock table name
-D    displays detailed debugging information

Note: a cluster can hold more than one file system. For example, if a cluster shares two disks, one disk can carry GFS2 and the other OCFS2; each file system takes its locks from its own lock table, and lock tables must be unique, so every lock table needs a name.

Format of the lock table name:

cluster_name:lock_table_name

For example: mycluster:lock_sda
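For contrast, a hypothetical local-only GFS2 file system (no cluster, hence no DLM and no lock table) could be created with lock_nolock and a single journal; the device name here is only illustrative:

[root@node2 ~]# mkfs.gfs2 -p lock_nolock -j 1 /dev/sdb1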


Log in to the target and format it as a GFS2 file system.
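If the target has not yet been discovered on this node, a discovery step would come first; a sketch using the portal address from the environment above:

[root@node2 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.139.2

With the target discovered, log in and format: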

[root@node2 mnt]# iscsiadm -m node -T iqn.2016-12.com.zxl:store1.disk1 -p 192.168.139.2 -l

[root@node2 mnt]# mkfs.gfs2 -j 2 -p lock_dlm -t mycluster:lock_sde1 /dev/sde1
Are you sure you want to proceed? [y/n] y


Device:                    /dev/sde1
Blocksize:                 4096
Device Size                3.00 GB (787330 blocks)
Filesystem Size:           3.00 GB (787328 blocks)
Journals:                  2
Resource Groups:           13
Locking Protocol:          "lock_dlm"
Lock Table:                "mycluster:lock_sde1"
UUID:                      9ebdc83b-9a61-9a4a-3ba7-9c80e59a0a2d
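As an aside, the superblock settings (lock protocol and lock table) can be inspected after formatting with gfs2_tool, which gfs2-utils provides and which this article uses again later; a sketch:

[root@node2 ~]# gfs2_tool sb /dev/sde1 all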

Formatting is complete; test the mount.

[root@node2 mnt]# mount -t gfs2 /dev/sde1 /mnt
[root@node2 mnt]# cd /mnt
[root@node2 mnt]# ll
total 0
[root@node2 mnt]# cp /etc/issue .
[root@node2 mnt]# ll
total 8
-rw-r--r--. 1 root root 19:06 issue


OK; now switch to node4.

[root@node4 ~]# iscsiadm -m node -T iqn.2016-12.com.zxl:store1.disk1 -p 192.168.139.2 -l

node4 does not need to format the device again; it can mount it directly.

[root@node4 ~]# mount -t gfs2 /dev/sdc1 /mnt
[root@node4 ~]# cd /mnt
[root@node4 mnt]# ll        \\ the issue file copied earlier is visible
total 8
-rw-r--r--. 1 root root 19:06 issue

node4 creates a file a.txt, and the other nodes see it immediately; this is the benefit of the GFS2 clustered file system.

[root@node4 mnt]# touch a.txt
[root@node4 mnt]# ll
total 16
-rw-r--r--. 1 root root 0 Dec 19:10 a.txt
-rw-r--r--. 1 root root 19:06 issue


Now add the remaining node, node5.

[root@node5 ~]# iscsiadm -m node -T iqn.2016-12.com.zxl:store1.disk1 -p 192.168.139.2 -l

The mount fails, because only two journals were created when the file system was formatted, and the journal count determines how many nodes can mount it.

[root@node5 ~]# mount -t gfs2 /dev/sdc1 /mnt
Too many nodes mounting filesystem, no free journals

Add a journal.

[root@node2 mnt]# gfs2_jadd -j 1 /dev/sde1        \\ -j 1 adds one journal
Filesystem: /mnt
Old Journals 2
New Journals 3

[root@node2 mnt]# gfs2_tool journals /dev/sde1    \\ lists the journals; each defaults to 128 MB
journal2 - 128MB
journal1 - 128MB
journal0 - 128MB
3 journal(s) found.

[root@node5 ~]# mount -t gfs2 /dev/sdc1 /mnt      \\ node5 now mounts successfully
[root@node5 ~]# cd /mnt
[root@node5 mnt]# touch b.txt
[root@node5 mnt]# ll
total 24
-rw-r--r--. 1 root root 0 Dec 19:10 a.txt
-rw-r--r--. 1 root root 0 Dec 19:18 b.txt
-rw-r--r--. 1 root root 19:06 issue


A GFS2 cluster file system generally supports at most 16 nodes; beyond that, performance drops off sharply.


This article is from the "11097124" blog; please be sure to keep this source: http://11107124.blog.51cto.com/11097124/1884864

