Configure the GFS File System on CentOS 6.8 under VMware

1. GFS Introduction

The name GFS refers to two different file systems:

1. Google File System: GFS is a scalable distributed file system implemented by Google for large-scale, distributed applications that access large amounts of data. It runs on inexpensive commodity hardware, yet provides fault tolerance and delivers high overall performance to a large number of users. For more information, visit: http://baike.baidu.com/item/GFS/1813072

2. Redhat GFS (Global File System)

GFS (Global File System) presents itself as a local file system. Multiple Linux machines share a storage device over the network, and each machine can treat the shared network disk as a local disk; if one machine writes a file, the machines that access the file afterwards read the written result. You can deploy different solutions depending on performance, scalability, or cost considerations.

The main GFS components are cluster volume management, lock management, cluster management, fencing and recovery, and cluster configuration management.

This article mainly introduces the Redhat GFS system.

Red Hat Cluster Suite with GFS:

RHCS (Red Hat Cluster Suite) is a cluster tool set that provides high performance, high reliability, load balancing, and high availability. A cluster usually consists of two or more computers (called "nodes" or "members") that execute a task together.

Main RHCS components:

  • Cluster Architecture:

Provides the basic functions that allow nodes to work together as a cluster: configuration file management, membership management, lock management, and fencing devices.

  • High Availability Management:

Provides the node failover service. When a node fails, the service is transferred to another node.

  • Cluster Management tools:

Configuration and management tools used to set up and administer the Red Hat Cluster.

  • Linux Virtual Server (LVS)

LVS provides IP-based load balancing and can evenly distribute client requests across the cluster nodes.

Other Red Hat Cluster components:

  • Cluster Logical Volume Manager (CLVM)

Provides logical volume management for cluster storage.

  • Cluster Manager:

CMAN is a distributed cluster manager that runs on each cluster node. CMAN establishes quorum by monitoring the cluster nodes: when more than half of the nodes are active, quorum is reached and the cluster remains available; when only half or fewer of the nodes are active, quorum is lost and the whole cluster becomes unavailable. CMAN also determines the membership of each node by monitoring the nodes in the cluster, and when membership changes, CMAN notifies the other components in the architecture so they can adjust accordingly.

DLM Lock Management:

The Distributed Lock Manager runs on all cluster nodes. Lock management is a common infrastructure that provides a mechanism for the cluster to share cluster resources: GFS uses the lock manager to synchronize access to file system metadata, and CLVM uses the lock manager to synchronize updates to LVM volumes and volume groups.

Data integrity assurance:

RHCS uses fence devices to cut off a failed node's I/O to the shared storage in order to ensure data integrity. After CMAN determines that a node has failed, it announces the failure to the cluster (via multicast), and the fenced process isolates the failed node so that it cannot corrupt the shared data.

Red Hat Cluster configuration system:

The cluster configuration file (/etc/cluster/cluster.conf) is an XML file that describes the following cluster characteristics.

Cluster name: specifies the cluster name, the cluster configuration file version, and the basic fence timing properties used when a node joins the cluster or is fenced from it.

Cluster members: list each node in the cluster, specifying the node name, node ID, number of quorum votes, and fencing method.

Fence devices: define the fence devices used by the cluster.

Managed resources: define the resources required to create cluster services. Managed resources include failover domains, resources, and services.

2. GFS Construction

The test environment is as follows:

  • 192.168.10.11 test01
  • 192.168.10.12 test02
  • OS: CentOS6.8 64-bit
  • VMware shared Disk

Unless otherwise specified, the following operations must be performed on all nodes.

2.1 configure the network

Edit the hosts file so that the two nodes can be accessed through the Host Name:

# more /etc/hosts
127.0.0.1   localhost localhost.localdomain
::1         localhost localhost.localdomain
192.168.10.11 test01
192.168.10.12 test02
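
To confirm that name resolution works, a quick check from each node using the host names defined above:

# ping -c 1 test01
# ping -c 1 test02
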
    2.2 install software packages related to the Cluster File System

    Install the software package using yum:

    # yum install cman openais gfs* kmod-gfs lvm2* rgmanager system-config-cluster scsi-target-utils cluster-snmp

    The preceding packages have many dependencies that will be installed or updated. To avoid unexpected situations, we recommend rebooting the server after the installation completes.

    2.3 configure iptables

    Allow communication between test01 and test02

    Add the following to the test01 configuration file /etc/sysconfig/iptables:

    -A INPUT -s 192.168.10.12 -j ACCEPT

    Add the following to the test02 configuration file /etc/sysconfig/iptables:

    -A INPUT -s 192.168.10.11 -j ACCEPT
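
    After editing the file on each node, reload the firewall rules so the new rule takes effect (make sure the ACCEPT rule is placed before any blanket REJECT rule in the file):

    # service iptables restart
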
    2.4 modify configurations

    Modify /etc/sysconfig/selinux and set SELINUX=disabled.
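
    A minimal sketch of the change (sed's --follow-symlinks flag keeps the /etc/sysconfig/selinux symlink intact; setenforce 0 switches to permissive mode immediately, and a reboot is required for SELinux to be fully disabled):

    # sed -i --follow-symlinks 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux
    # setenforce 0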

    Modify the LVM configuration:

    # vi /etc/lvm/lvm.conf

    Change locking_type = 1 to locking_type = 3 to enable clustered locking (simultaneous read/write across nodes).

    Set fallback_to_local_locking = 0 so that LVM does not fall back to local locking, which helps avoid split brain.
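
    The same edits can also be made non-interactively; a sketch assuming the stock /etc/lvm/lvm.conf layout (the lvmconf --enable-cluster helper, where available, also sets locking_type = 3):

    # sed -i 's/^\( *\)locking_type = 1/\1locking_type = 3/' /etc/lvm/lvm.conf
    # sed -i 's/^\( *\)fallback_to_local_locking = 1/\1fallback_to_local_locking = 0/' /etc/lvm/lvm.conf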

    2.5 generate a cluster configuration file
    [root@test02 ~]# ccs_tool create GFSmail
    [root@test02 ~]# ccs_tool addfence meatware fence_manual
    [root@test02 ~]# ccs_tool lsfence
    Name             Agent
    meatware         fence_manual
    [root@test02 ~]# ccs_tool addnode -n 11 -f meatware test01
    [root@test02 ~]# ccs_tool addnode -n 12 -f meatware test02
    [root@test02 ~]# ccs_tool lsnode
    Cluster name: GFSmail, config_version: 4
    Nodename                        Votes Nodeid Fencetype
    test01                      1   11    meatware
    test02                      1   12    meatware
    [root@test02 ~]#
    [root@test02 ~]# rsync -avz /etc/cluster/cluster.conf root@test01
    sending incremental file list
    cluster.conf
    sent 307 bytes  received 31 bytes  676.00 bytes/sec
    total size is 557  speedup is 1.65
    [root@test02 ~]#

    [root@test02 data]# cat /etc/cluster/cluster.conf
    <?xml version="1.0"?>
    <cluster name="GFSmail" config_version="4">
      <clusternodes>
        <clusternode name="test01" votes="1" nodeid="11">
          <fence>
            <method name="single">
              <device name="meatware"/>
            </method>
          </fence>
        </clusternode>
        <clusternode name="test02" votes="1" nodeid="12">
          <fence>
            <method name="single">
              <device name="meatware"/>
            </method>
          </fence>
        </clusternode>
      </clusternodes>
      <fencedevices>
        <fencedevice name="meatware" agent="fence_manual"/>
      </fencedevices>
      <rm>
        <failoverdomains/>
        <resources/>
      </rm>
    </cluster>

    Then you can run the ccs_config_validate command to check whether the configuration file is valid.

    2.6 create Cluster Storage

    Manually start the cman, clvmd, and rgmanager services, which manage the cluster and monitor the status of the storage devices in the cluster volumes:

    # service cman start
    # service clvmd start
    # service rgmanager start
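
    Once cman is running on both nodes, membership and quorum can be verified with cman_tool (a quick sanity check; the exact output depends on your environment):

    # cman_tool status
    # cman_tool nodes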

    Perform the following operations on node 1:

    Create the physical volume, volume group, and logical volume:

    [root@test01 ~]# pvcreate /dev/sdb
      Physical volume "/dev/sdb" successfully created
    [root@test01 ~]# vgcreate mailcluster /dev/sdb
      Clustered volume group "mailcluster" successfully created
    [root@test01 ~]# pvs
      PV         VG          Fmt  Attr PSize   PFree
      /dev/sda2  vg_mail     lvm2 a--u 199.41g    0
      /dev/sdb   mailcluster lvm2 a--u   4.00t 4.00t
    [root@test01 ~]# lvcreate -n maildata -l 100%FREE mailcluster
      Logical volume "maildata" created.
    [root@test01 ~]# lvs
      LV       VG          Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
      maildata mailcluster -wi-a-----   4.00t
      home     vg_mail     -wi-ao----  80.00g
      root     vg_mail     -wi-ao---- 115.41g
      swap     vg_mail     -wi-ao----   4.00g
    [root@test01 ~]#
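
    Before creating the file system, it is worth confirming that the volume group was created as a clustered VG (the sixth character of the Attr column should be "c"); a quick check:

    # vgs -o vg_name,vg_attr mailcluster
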
    2.7 create a gfs2 File System on the new logical volume:
    [root@test01 ~]# mkfs.gfs2 -j 2 -p lock_dlm -t GFSmail:maildata /dev/mapper/mailcluster-maildata
    This will destroy any data on /dev/mapper/mailcluster-maildata.
    It appears to contain: symbolic link to `../dm-3'
    Are you sure you want to proceed? [y/n] y
    Device:                    /dev/mapper/mailcluster-maildata
    Blocksize:                 4096
    Device Size                4096.00 GB (1073740800 blocks)
    Filesystem Size:           4096.00 GB (1073740798 blocks)
    Journals:                  2
    Resource Groups:           8192
    Locking Protocol:          "lock_dlm"
    Lock Table:                "GFSmail:maildata"
    UUID:                      50e12acf-6fb0-6881-3064-856c383b51dd
    [root@test01 ~]#

    For the mkfs.gfs2 command, the parameters we use have the following functions:

    -p: specifies the GFS locking protocol; lock_dlm is generally used.

    -j: specifies the number of journals (one per node that mounts the file system). In general, leave some headroom; otherwise journals must be added later:

    View journals: # gfs2_tool journals /home/coremail/var

    Add a journal: # gfs2_jadd -j 1 /home/coremail/var

    -t: specifies the lock table name, in the format ClusterName:FS_Name.

    ClusterName: must be the same as the cluster name specified earlier in cluster.conf (above: GFSmail).

    FS_Name: a unique name for this file system on the shared block device (above: maildata).

    The last argument is the full path of the logical volume.

    2.8 GFS mounting

    Create directory:

    [root@test01 ~]# mkdir /home/coremail/var

    Add the new logical volume to the /etc/fstab file so that it is mounted automatically at startup:

    [root@test01 ~]# echo "/dev/mapper/mailcluster-maildata  /home/coremail/var            gfs2    defaults,noatime,nodiratime,noquota        0 0" >> /etc/fstab

    Start gfs2:

    [root@test01 ~]# /etc/init.d/gfs2 start
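
    A quick way to verify the mount (the output will vary with your environment):

    [root@test01 ~]# mount | grep gfs2
    [root@test01 ~]# df -h /home/coremail/var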

    Node 2:

    Run the pvs and lvs commands to check whether the physical volume and logical volume created on node 1 are displayed normally (if not, try lvscan first). If they still do not appear, the shared storage is not being used or the configuration is abnormal, and you need to troubleshoot before continuing. Once the problem is resolved, run the following commands.
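
    If the volumes do not show up on node 2 right away, rescanning with the standard LVM tools often helps; a minimal sketch (clvmd must already be running):

    [root@test02 ~]# partprobe
    [root@test02 ~]# pvscan
    [root@test02 ~]# vgscan
    [root@test02 ~]# lvscan
    [root@test02 ~]# service clvmd restart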

    [root@test02 ~]# mkdir /home/coremail/var
    [root@test02 ~]# echo "/dev/mapper/mailcluster-maildata  /home/coremail/var            gfs2    defaults,noatime,nodiratime,noquota        0 0" >> /etc/fstab
    [root@test02 ~]# /etc/init.d/gfs2 start

    Run # clustat to query the status of each member node.

    [root@test02 ~]# clustat
    Cluster Status for GFSmail @ Thu Nov  3 23:17:24 2016
    Member Status: Quorate

     Member Name                                       ID   Status
     ------ ----                                       ---- ------
     test01                                         11 Online
     test02                                         12 Online, Local

    [root@test02 ~]#
    2.9 on all nodes, configure the services to start automatically so that you do not have to worry about a server restart:
    # chkconfig --add cman
    # chkconfig --add clvmd
    # chkconfig --add gfs2
    # chkconfig --add rgmanager
    # chkconfig --level 345 cman on
    # chkconfig --level 345 clvmd on
    # chkconfig --level 345 gfs2 on
    # chkconfig --level 345 rgmanager on
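
    To confirm that the services are registered for the desired runlevels, a quick check:

    # chkconfig --list | egrep 'cman|clvmd|gfs2|rgmanager'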

    References: https://access.redhat.com/documentation/zh-CN/Red_Hat_Enterprise_Linux/6/html/Global_File_System_2/ch-overview-GFS2.html
