Linux Operations Phase 5 (ix): iSCSI & CLVM & GFS2
GFS2 (Global File System 2): a cluster file system (CFS). It uses the HA stack's messaging layer so that each node announces the locks it holds to the other nodes.
CLVM (Clustered Logical Volume Manager): turns the shared storage into logical volumes and uses the HA stack's heartbeat/communication mechanism (which also handles split-brain) so the nodes stay in sync. Each node must run the clvmd service (start cman and rgmanager before starting it) so that the nodes can communicate with one another.
Prepare four nodes: node{1,2,3} use the shared storage; node4 provides the shared storage and also serves as a jump host.
On node{1,2,3}: prepare the yum repository, time synchronization, node names, and /etc/hosts; node4 must have passwordless SSH trust with node{1,2,3}.
(1) Prepare the shared storage
Node4-side:
[root@node4 ~]# vim /etc/tgt/targets.conf
default-driver iscsi
<target iqn.2015-07.com.magedu:teststore.disk1>
    <backing-store /dev/sdb>
        vendor_id magedu
        lun 1
    </backing-store>
    incominguser iscsi iscsi
    initiator-address 192.168.41.131
    initiator-address 192.168.41.132
    initiator-address 192.168.41.133
</target>
[root@node4 ~]# service tgtd restart
[root@node4 ~]# netstat -tnlp    (3260/tcp, tgtd)
[root@node4 ~]# tgtadm --lld iscsi --mode target --op show
......
LUN: 1
......
Account information:
    iscsi
ACL information:
    192.168.41.131
    192.168.41.132
    192.168.41.133
[root@node4 ~]# alias ha='for i in {1..3}; do ssh node$i '
[root@node4 ~]# ha 'rm -rf /var/lib/iscsi/send_targets/*'; done
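The `ha` alias works by pasting the quoted command into an unfinished `for` loop, which is why every call has to end with `; done`. A shell function avoids that quirk; the sketch below uses `echo` as a stand-in runner so it runs without a cluster (set RUN=ssh on the real nodes; the RUN variable and the function version are this note's assumptions, not part of the original):

```shell
# ha(): run a command against node1..node3. Function alternative to the
# alias above -- no trailing "; done" needed at the call site.
# RUN defaults to "echo" so the sketch is safe to run anywhere;
# on the real jump host use RUN=ssh instead.
RUN=${RUN:-echo}
ha() {
    local i
    for i in 1 2 3; do
        $RUN "node$i" "$@"
    done
}

ha uptime    # with RUN=echo prints: node1 uptime / node2 uptime / node3 uptime
```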
Node{1,2,3}-side:
[root@node{1,2,3} ~]# vim /etc/iscsi/iscsid.conf
node.session.auth.authmethod = CHAP
node.session.auth.username = iscsi
node.session.auth.password = iscsi
Node4-side:
[root@node4 ~]# ha 'service iscsi restart'; done
[root@node4 ~]# ha 'iscsiadm -m discovery -t st -p 192.168.41.134'; done
[root@node4 ~]# ha 'iscsiadm -m node -T iqn.2015-07.com.magedu:teststore.disk1 -p 192.168.41.134 -l'; done
[root@node1 ~]# fdisk -l    (verify on node{1,2,3} that the LUN appeared)
Disk /dev/sdb: 10.7 GB, 10737418240 bytes
(2) Install cman, rgmanager, gfs2-utils, and lvm2-cluster
Node4-side:
[root@node4 ~]# for i in {1..3}; do scp /root/{cman*,rgmanager*,gfs2-utils*,lvm2-cluster*} node$i:/root/; ssh node$i 'yum -y --nogpgcheck localinstall /root/*.rpm'; done
Node1-side:
[root@node1 ~]# ccs_tool create tcluster
[root@node1 ~]# ccs_tool addfence meatware fence_manual
[root@node1 ~]# ccs_tool addnode -v 1 -n 1 -f meatware node1.magedu.com
[root@node1 ~]# ccs_tool addnode -v 1 -n 2 -f meatware node2.magedu.com
[root@node1 ~]# ccs_tool addnode -v 1 -n 3 -f meatware node3.magedu.com
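The `-v 1` flag gives each node one vote; cman considers the cluster quorate while more than half the expected votes are online (the floor(votes/2)+1 majority rule is standard cman behavior, stated here as background rather than taken from the output above). The arithmetic for this three-node cluster:

```shell
# Quorum for a cluster where every node carries one vote:
# a strict majority of the expected votes must be online.
expected_votes=3                      # node1 + node2 + node3, one vote each
quorum=$(( expected_votes / 2 + 1 ))
echo "$quorum"    # -> 2, so the cluster survives the loss of one node
```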
[root@node1 ~]# service cman start    (the first start initializes the cluster; it is best to use system-config-cluster first to change the multicast address, rather than keep the same default multicast address as other clusters, or you will receive other clusters' synchronization messages and cman will not start properly; alternatively, copy node1's configuration file /etc/cluster/cluster.conf to the other nodes before starting them)
[root@node1 ~]# clustat
node1.magedu.com    1    Online, Local
node2.magedu.com    2    Online
node3.magedu.com    3    Online
Node2-side:
[root@node2 ~]# service cman start
Node3-side:
[root@node3 ~]# service cman start
(3) CLVM configuration:
Node1-side:
[root@node1 ~]# rpm -ql lvm2-cluster
/etc/rc.d/init.d/clvmd
/usr/sbin/clvmd
/usr/sbin/lvmconf
[root@node1 ~]# vim /etc/lvm/lvm.conf    (every node must change this file)
locking_type = 3    (change this from 1 to 3: type 1, the default, is local file-based locking; type 3 uses the built-in clustered locking)
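The same one-line change can be scripted instead of edited by hand on every node (the `lvmconf` helper listed by `rpm -ql lvm2-cluster` above also exists for this: `lvmconf --enable-cluster` sets locking_type to 3). A sed sketch against a throwaway copy of lvm.conf, so it is safe to run anywhere; on a real node the target would be /etc/lvm/lvm.conf:

```shell
# Flip locking_type from 1 (local, file-based) to 3 (clustered),
# the same edit the text above describes doing in vim.
conf=$(mktemp)
printf 'global {\n    locking_type = 1\n}\n' > "$conf"   # minimal stand-in file
sed -i 's/locking_type = 1/locking_type = 3/' "$conf"
result=$(grep -o 'locking_type = 3' "$conf")
echo "$result"    # -> locking_type = 3
rm -f "$conf"
```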
Node4-side:
[root@node4 ~]# ha 'service clvmd start'; done
Node1-side:
[root@node1 ~]# pvcreate /dev/sdb
Writing physical volume data to disk "/dev/sdb"
Physical volume "/dev/sdb" successfully created
[root@node1 ~]# pvs    (also visible on the other nodes)
PV         VG   Fmt  Attr PSize  PFree
/dev/sdb        lvm2 a--  10.00G 10.00G
[root@node1 ~]# vgcreate clustervg /dev/sdb
Clustered volume group "clustervg" successfully created
[root@node1 ~]# vgs
VG        #PV #LV #SN Attr   VSize  VFree
clustervg   1   0   0 wz--nc 10.00G 10.00G
[root@node1 ~]# lvcreate -L 5G -n clusterlv clustervg
Logical volume "clusterlv" created
[root@node1 ~]# lvs
LV        VG        Attr   LSize Origin Snap% Move Log Copy% Convert
clusterlv clustervg -wi-a- 5.00G
(4) GFS2 configuration:
Node1-side:
[root@node1 ~]# rpm -ql gfs2-utils
/etc/rc.d/init.d/gfs2
/sbin/fsck.gfs2
/sbin/gfs2_convert
/sbin/gfs2_edit
/sbin/gfs2_fsck
/sbin/gfs2_grow
/sbin/gfs2_jadd
/sbin/gfs2_quota
/sbin/gfs2_tool
/sbin/mkfs.gfs2
/sbin/mount.gfs2
/sbin/umount.gfs2
[root@node1 ~]# mkfs.gfs2 -h
# mkfs.gfs2 OPTIONS DEVICE
Options:
-b #    (block size; defaults to 4096 bytes)
-D      (enable debugging output)
-j #    (number of journals for mkfs.gfs2 to create; make one per node that will mount the filesystem; defaults to 1)
-J #    (size of each journal in megabytes; defaults to 128 MB)
-p name (the name of the locking protocol to use; there are two: lock_dlm is the usual choice, while lock_nolock is for a single node using it as a standalone filesystem, in which case a cluster filesystem is unnecessary)
-t name (the lock table name appropriate to the lock module in use, in the format clustername:locktablename; clustername is the name of the cluster the current node belongs to, and locktablename must be unique within that cluster, since one cluster can host several clustered filesystems; the lock table name identifies which filesystem each node's locks belong to)
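The `-t` argument packs the two names into one colon-separated string; splitting it with shell parameter expansion shows the two parts mkfs.gfs2 records (the variable names here are illustrative, not anything gfs2 defines):

```shell
# Split clustername:locktablename the same way the -t value is interpreted.
locktable="tcluster:lktb1"
cluster=${locktable%%:*}    # part before the colon: must match the cluster name
table=${locktable#*:}       # part after the colon: unique within that cluster
echo "$cluster $table"      # -> tcluster lktb1
```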
[root@node1 ~]# mkfs.gfs2 -j 3 -p lock_dlm -t tcluster:lktb1 /dev/clustervg/clusterlv    (formatting a cluster filesystem is slow)
This will destroy any data on /dev/clustervg/clusterlv.
Are you sure you want to proceed? [y/n] y
Device:              /dev/clustervg/clusterlv
Blocksize:           4096
Device Size          5.00 GB (1310720 blocks)
Filesystem Size:     5.00 GB (1310718 blocks)
Journals:            3
Resource Groups:     20
Locking Protocol:    "lock_dlm"
Lock Table:          "tcluster:lktb1"
UUID:                d8b10b8f-7ee2-a818-e392-0df218411f2c
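The block count in the mkfs.gfs2 output is easy to sanity-check: with 4 KiB blocks, the 5G logical volume holds 5·1024³/4096 blocks.

```shell
# Check the "Device Size 5.00 GB (1310720 blocks)" line above.
blocksize=4096
lv_bytes=$(( 5 * 1024 * 1024 * 1024 ))    # the 5G logical volume
blocks=$(( lv_bytes / blocksize ))
echo "$blocks"    # -> 1310720, matching the mkfs.gfs2 output
```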
[root@node1 ~]# mkdir /mydata
[root@node1 ~]# mount -t gfs2 /dev/clustervg/clusterlv /mydata
Node2-side:
[root@node2 ~]# mkdir /mydata
[root@node2 ~]# mount -t gfs2 /dev/clustervg/clusterlv /mydata
[root@node2 ~]# ls /mydata
[root@node2 ~]# touch /mydata/b.txt
[root@node2 ~]# ls /mydata
b.txt
Node3-side:
[root@node3 ~]# mkdir /mydata
[root@node3 ~]# mount -t gfs2 /dev/clustervg/clusterlv /mydata
[root@node3 ~]# touch /mydata/c.txt
[root@node3 ~]# ls /mydata
b.txt  c.txt
Node1-side:
[root@node1 ~]# ls /mydata
b.txt  c.txt
Note: every operation a node performs on the CFS is synchronized to disk immediately and announced to the other nodes, so heavy use can severely affect system performance.
(5) Debugging and tuning:
[root@node1 ~]# gfs2_tool -h    (interface to GFS2 ioctl/sysfs calls)
# gfs2_tool df|journals|gettune|freeze|unfreeze|getargs MOUNT_POINT
# gfs2_tool list
[root@node1 ~]# gfs2_tool list    (list the currently mounted GFS2 filesystems)
253:2 tcluster:lktb1
[root@node1 ~]# gfs2_tool journals /mydata    (print out information on the journals in a mounted filesystem)
journal2 - 128MB
journal1 - 128MB
journal0 - 128MB
3 journal(s) found.
[root@node1 ~]# gfs2_tool df /mydata
/mydata:
  SB lock proto = "lock_dlm"
  SB lock table = "tcluster:lktb1"
  SB ondisk format = 1801
  SB multihost format = 1900
  Block size = 4096
  Journals = 3
  Resource Groups = 20
  Mounted lock proto = "lock_dlm"
  Mounted lock table = "tcluster:lktb1"
  Mounted host data = "jid=0:id=196609:first=1"
  Journal number = 0
  Lock module flags = 0
  Local flocks = FALSE
  Local caching = FALSE

  Type      Total Blocks   Used Blocks   Free Blocks   use%
  ------------------------------------------------------------------------
  data      1310564        99293         1211271       8%
  inodes    1211294        23            1211271       0%
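The use% column is consistent with used/total rounded to the nearest percent (plain integer truncation would give 7% for the data row, and the output shows 8%). Reproducing it in shell integer arithmetic:

```shell
# Reproduce the data row's use% (99293 of 1310564 blocks used).
used=99293
total=1310564
pct=$(( (used * 100 + total / 2) / total ))    # rounded, not truncated
echo "$pct"    # -> 8, matching the use% column
```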
[root@node1 ~]# gfs2_tool freeze /mydata    (freeze (quiesce) a GFS2 cluster; any node's operations on the filesystem will hang until unfreeze)
[root@node1 ~]# gfs2_tool getargs /mydata
statfs_percent 0
data 2
suiddir 0
quota 0
posix_acl 0
upgrade 0
debug 0
localflocks 0
localcaching 0
ignore_local_fs 0
spectator 0
hostdata jid=0:id=196609:first=1
locktable
lockproto
[root@node1 ~]# gfs2_tool gettune /mydata    (print out the current values of the tuning parameters in a running filesystem; to adjust one, use settune with the mount point followed by a name=value pair, e.g. #gfs2_tool settune /mydata new_files_directio=1)
new_files_directio = 0
new_files_jdata = 0
quota_scale = 1.0000   (1, 1)
logd_secs = 1
recoverd_secs = 60
statfs_quantum = 30
stall_secs = 600
quota_cache_secs = 300
quota_simul_sync = 64
statfs_slow = 0
complain_secs = 10
max_readahead = 262144
quota_quantum = 60
quota_warn_period = 10
jindex_refresh_secs = 60
log_flush_secs = 60
incore_log_blocks = 1024
[root@node1 ~]# gfs2_jadd -j 1 /dev/clustervg/clusterlv    (add journals; 1 is the number to add, not the new total; if the number of cluster nodes grows, journals can be added with gfs2_jadd)
[root@node1 ~]# lvextend -L 8G /dev/clustervg/clusterlv    (extend the logical volume size, which can be understood as extending the physical boundary)
Extending logical volume clusterlv to 8.00 GB
Logical volume clusterlv successfully resized
[root@node1 ~]# gfs2_grow /dev/clustervg/clusterlv    (expand the GFS2 filesystem, which can be understood as extending the logical boundary; do not skip this step, it is essential)
FS: Mount Point: /mydata
FS: Device:      /dev/mapper/clustervg-clusterlv
FS: Size:        1310718 (0x13fffe)
FS: RG size:     65533 (0xfffd)
DEV: Size:       2097152 (0x200000)
The file system grew by 3072MB.
Error fallocating extra space: File too large
gfs2_grow complete.
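The "grew by 3072MB" figure follows directly from the block counts gfs2_grow prints: the device went from 1310720 blocks (the old 5G volume) to 2097152 blocks (the 8G volume), all 4096-byte blocks.

```shell
# Derive the growth reported by gfs2_grow from its own figures.
old_blocks=1310720    # device size of the 5G volume before lvextend
new_blocks=2097152    # DEV: Size after lvextend to 8G
grew_mb=$(( (new_blocks - old_blocks) * 4096 / 1024 / 1024 ))
echo "$grew_mb"    # -> 3072, matching "The file system grew by 3072MB."
```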
[root@node1 ~]# lvresize -L -3G /dev/clustervg/clusterlv    (decrease the logical volume size)
[root@node1 ~]# gfs2_grow /dev/clustervg/clusterlv
[root@node1 ~]# lvs
LV        VG        Attr   LSize Origin Snap% Move Log Copy% Convert
clusterlv clustervg -wi-ao 5.00G
The above are study notes from the Magedu Linux operations course.
This article is from the blog "Difficult Learning Notes on Linux Operations"; reprinting is declined.