RHCS: using the ccs_tool command to create an HA cluster and a GFS2 cluster file system

Prepare the Environment

node1:192.168.139.2

node2:192.168.139.4

node4:192.168.139.8

node5:192.168.139.9


node1 acts as the iSCSI target.

node2, node4, and node5 act as iSCSI initiators (a brief discovery sketch follows).
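Setting up the iSCSI target on node1 is not covered in this article; it is assumed to have been done beforehand. As a minimal sketch, discovering the exported target from each initiator node would look roughly like this (192.168.139.2 is node1, the target):

[root@node2 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.139.2
[root@node4 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.139.2
[root@node5 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.139.2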


node2, node4, and node5 have cman+rgmanager installed and are configured as a three-node RHCS high-availability cluster. GFS2 is a cluster file system, so a failed node must be fenced with the help of the HA cluster, and node information is exchanged through the cluster's message layer.

Because the target that will be discovered and logged in to is to be formatted with a cluster file system, gfs2-utils must be installed on node2, node4, and node5, for example as shown below.
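A minimal sketch, assuming the standard RHEL package name (the same one queried with rpm -ql later in this article):

[root@node2 ~]# yum -y install gfs2-utils
[root@node4 ~]# yum -y install gfs2-utils
[root@node5 ~]# yum -y install gfs2-utils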


Stop the cluster service that was originally created with luci/ricci (a cluster service left over from a previous experiment, unrelated to this one).


[root@node2 mnt]# clusvcadm -d web_service

Local machine disabling service:web_service...

[root@node2 mnt]# clustat

Cluster Status for zxl @ Wed Dec 21 17:55:46 2016
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 node2.zxl.com                               1 Online, Local, rgmanager
 node4.zxl.com                               2 Online, rgmanager

 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:web_service            (node2.zxl.com)                disabled

[root@node2 mnt]# service rgmanager stop
[root@node2 mnt]# service cman stop
[root@node4 mnt]# service rgmanager stop
[root@node4 mnt]# service cman stop
[root@node4 mnt]# rm -rf /etc/cluster/cluster.conf

Every time the configuration file changes, a backup copy is left behind, so clear those out as well.

[root@node2 mnt]# ls /etc/cluster/
cluster.conf.bak  cman-notify.d

[root@node2 mnt]# rm -f /etc/cluster/*



If cman and rgmanager are not installed, install them with the following command:

[root@node2 mnt]# yum -y install cman rgmanager


Create a cluster with the ccs_tool command; the cluster name is mycluster.

[root@node2 mnt]# ccs_tool create mycluster
[root@node2 mnt]# cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster name="mycluster" config_version="1">

  <clusternodes>
  </clusternodes>

  <fencedevices>
  </fencedevices>

  <rm>
    <failoverdomains/>
    <resources/>
  </rm>

</cluster>


Add a fence device (an RHCS cluster must have one).

[root@node2 mnt]# ccs_tool addfence meatware fence_manual
[root@node2 mnt]# ccs_tool lsfence
Name             Agent
meatware         fence_manual


ccs_tool addnode options:

-v Specifies the number of votes owned by the node

-n Specifies the node identifier

-f Specifies the fence device name


Add three nodes; an RHCS cluster must have at least three nodes.

[root@node2 mnt]# ccs_tool addnode -v 1 -n 1 -f meatware node2.zxl.com
[root@node2 mnt]# ccs_tool addnode -v 1 -n 2 -f meatware node4.zxl.com
[root@node2 mnt]# ccs_tool addnode -v 1 -n 3 -f meatware node5.zxl.com

View the cluster nodes

[root@node2 mnt]# ccs_tool lsnode

Cluster name: mycluster, config_version: 5

Nodename                        Votes Nodeid Fencetype
node2.zxl.com                      1    1    meatware
node4.zxl.com                      1    2    meatware
node5.zxl.com                      1    3    meatware

Copy the configuration file to the other nodes; the RHCS cluster will keep it synchronized automatically via the ccsd process (see the aside below).

[root@node2 mnt]# scp /etc/cluster/cluster.conf node4:/etc/cluster/
[root@node2 mnt]# scp /etc/cluster/cluster.conf node5:/etc/cluster/
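As an aside (this walkthrough simply starts the cluster after the copy, so the scp above is enough): on a cluster that is already running, a common way to propagate an edited cluster.conf is to raise config_version in the file and then ask cman to activate that version, roughly as follows.

[root@node2 mnt]# vim /etc/cluster/cluster.conf    \\ raise config_version, e.g. from 5 to 6
[root@node2 mnt]# cman_tool version -r 6           \\ tell the running cluster to load config version 6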

Start cman and rgmanager on each node.

[root@node2 mnt]# service cman start
[root@node2 mnt]# service rgmanager start
[root@node4 mnt]# service cman start
[root@node4 mnt]# service rgmanager start
[root@node5 mnt]# service cman start
[root@node5 mnt]# service rgmanager start

[root@node2 mnt]# clustat

Cluster Status for mycluster @ Wed Dec 21 18:40:26 2016
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 node2.zxl.com                               1 Online, Local
 node4.zxl.com                               2 Online
 node5.zxl.com                               3 Online


[root@node2 mnt]# rpm -ql gfs2-utils
/etc/rc.d/init.d/gfs2
/sbin/fsck.gfs2
/sbin/mkfs.gfs2      \\ formats and creates a GFS2 file system
/sbin/mount.gfs2     \\ mounts a GFS2 file system
/usr/sbin/gfs2_convert


Usage of the mkfs.gfs2 command:

-j Specifies the number of journals, which is also the number of nodes that can mount the file system, because on a cluster file system every node must have its own journal

-J Specifies the journal size; the default is 128MB

-p {lock_dlm|lock_nolock} Distributed lock manager | no locking

-t <name> Specifies the lock table name

Note: A cluster can hold more than one file system; for example, a cluster may share two disks, one formatted as GFS2 and the other as OCFS2. Different file systems lock independently, so each must use its own lock table, and each lock table therefore needs a unique name (see the illustrative example after this list).

The lock table name has the format

cluster_name:lock_table_name

for example: mycluster:lock_sda

-D Displays detailed debugging information
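Purely as an illustration (the device names and lock table names below are hypothetical, not part of this setup): two shared disks formatted as GFS2 in the same cluster would each get their own lock table name.

[root@node2 mnt]# mkfs.gfs2 -j 3 -p lock_dlm -t mycluster:lock_data1 /dev/sdX1    \\ hypothetical first shared disk
[root@node2 mnt]# mkfs.gfs2 -j 3 -p lock_dlm -t mycluster:lock_data2 /dev/sdY1    \\ hypothetical second shared disk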


Log in to the target and format it as a GFS2 file system.

[root@node2 mnt]# iscsiadm -m node -T iqn.2016-12.com.zxl:store1.disk1 -p 192.168.139.2 -l
[root@node2 mnt]# mkfs.gfs2 -j 2 -p lock_dlm -t mycluster:lock_sde1 /dev/sde1
Are you sure you want to proceed? [y/n] y


Device:                    /dev/sde1
Blocksize:                 4096
Device Size                3.00 GB (787330 blocks)
Filesystem Size:           3.00 GB (787328 blocks)
Journals:                  2
Resource Groups:           13
Locking Protocol:          "lock_dlm"
Lock Table:                "mycluster:lock_sde1"
UUID:                      9ebdc83b-9a61-9a4a-3ba7-9c80e59a0a2d

Formatting is complete; test the mount.

[root@node2 mnt]# mount -t gfs2 /dev/sde1 /mnt
[root@node2 mnt]# cd /mnt
[root@node2 mnt]# ll
total 0
[root@node2 mnt]# cp /etc/issue ./
[root@node2 mnt]# ll
total 8
-rw-r--r--. 1 root root 19:06 issue


OK, now switch to node4.

[root@node4 ~]# iscsiadm -m node -T iqn.2016-12.com.zxl:store1.disk1 -p 192.168.139.2 -l

node4 does not need to format the device again; it can mount it directly.

[root@node4 ~]# mount -t gfs2 /dev/sdc1 /mnt
[root@node4 ~]# cd /mnt
[root@node4 mnt]# ll    \\ the file copied on node2 is visible
total 8
-rw-r--r--. 1 root root 19:06 issue

node4 now creates a file a.txt, and the other nodes see it immediately; this is the benefit of the GFS2 cluster file system.

[root@node4 mnt]# touch a.txt

[root@node2 mnt]# ll
total 16
-rw-r--r--. 1 root root 0 Dec 19:10 a.txt
-rw-r--r--. 1 root root 19:06 issue


Now the remaining node, node5.

[root@node5 ~]# iscsiadm -m node -T iqn.2016-12.com.zxl:store1.disk1 -p 192.168.139.2 -l

The mount fails, because only two journals were created when formatting; the number of journals determines how many nodes can mount the file system.

[root@node5 ~]# mount -t gfs2 /dev/sdc1 /mnt
Too many nodes mounting filesystem, no free journals

Add a journal.

[root@node2 mnt]# gfs2_jadd -j 1 /dev/sde1    \\ -j 1 adds one journal
Filesystem:            /mnt
Old Journals           2
New Journals           3

[root@node2 mnt]# gfs2_tool journals /dev/sde1    \\ shows how many journals there are; each defaults to 128MB
journal2 - 128MB
journal1 - 128MB
journal0 - 128MB
3 journal(s) found.

[root@node5 ~]# mount -t gfs2 /dev/sdc1 /mnt    \\ node5 now mounts successfully
[root@node5 ~]# cd /mnt
[root@node5 mnt]# touch b.txt

[root@node4 mnt]# ll
total 24
-rw-r--r--. 1 root root 0 Dec 19:10 a.txt
-rw-r--r--. 1 root root 0 Dec 19:18 b.txt
-rw-r--r--. 1 root root 19:06 issue


A GFS2 cluster file system generally should not have more than 16 nodes; beyond that, performance drops off sharply.
