Configuring a GFS File System on CentOS 6.8 under VMware


1. GFS Introduction

Briefly, two different file systems go by the name GFS:

1. Google File System: GFS is a scalable distributed file system implemented by Google for large-scale distributed applications with heavy data-access workloads. It runs on inexpensive commodity hardware yet provides fault tolerance, and it delivers high aggregate performance to a large number of clients. For more information, see: http://baike.baidu.com/item/GFS/1813072

2. Red Hat's GFS (Global File System)

GFS (Global File System) presents itself as a local file system. Multiple Linux machines share storage devices over a network; each machine can treat the networked shared disk as if it were a local disk, and when one machine writes to a file, machines that access the file afterwards read the result of that write. Deployments can be tuned for performance, scalability, or economy depending on the scenario.
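As a hypothetical illustration of this write-then-read coherence (assuming a GFS2 file system is already mounted at /home/coremail/var on both nodes, as configured later in this article):

[root@test01 ~]# echo "hello from test01" > /home/coremail/var/demo.txt
[root@test02 ~]# cat /home/coremail/var/demo.txt   # the other node sees the completed write
hello from test01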

GFS's main components are: cluster volume management, lock management, cluster management, fencing and recovery, and cluster configuration management.

This article mainly introduces Red Hat's GFS.

Red Hat Cluster Suite with GFS:

RHCS (Red Hat Cluster Suite) is a cluster toolset that provides high performance, high reliability, load balancing, and high availability. A cluster typically consists of two or more computers (called "nodes" or "members") that perform tasks together.

RHCS main components:

    • Cluster infrastructure:

Provides the basic functions for nodes to work together as a cluster: configuration file management, membership management, lock management, and fencing.

    • High-availability management:

Provides node failover: when a node fails, its services are transferred to another node.

    • Cluster management tools:

Configuration and management tools for setting up and administering a Red Hat cluster.

    • Linux Virtual Server (LVS)

LVS provides IP-based load balancing; client requests can be distributed evenly across the cluster nodes through LVS.

Other Red Hat Cluster components:

    • Cluster Logical Volume Manager (CLVM)

Provides logical volume management for cluster storage.

    • Cluster Manager (cman):

cman is a distributed cluster manager that runs on every cluster node. By monitoring the nodes, cman establishes quorum: when more than half of the nodes in the cluster are active, the cluster has quorum and remains available; when only half or fewer of the nodes are active, quorum is lost and the entire cluster becomes unavailable. cman also determines the membership of each node, and when cluster membership changes, it notifies the other components of the architecture so they can adjust accordingly.

    • DLM lock management:

The Distributed Lock Manager (DLM) runs on all cluster nodes. Lock management is a common infrastructure that provides a mechanism for cluster nodes to share cluster resources: GFS uses the lock manager to synchronize access to file system metadata, and CLVM uses it to synchronize updates to LVM volumes and volume groups.

    • Data integrity assurance:

RHCS uses fence devices to cut off I/O between a failed node and the shared storage, ensuring data integrity. When cman determines that a node has failed, it announces the failed node to the cluster (via multicast), and the fenced daemon isolates it so that the failed node cannot corrupt the shared data. (A manual-fencing sketch follows this list.)
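For example, with the manual fencing agent configured later in this article (the meatware/fence_manual device), a failed node is not fenced automatically: an administrator must power-cycle it and then acknowledge the fence by hand. A minimal sketch, assuming test02 has failed (check fence_ack_manual(8) for the exact syntax on your release):

[root@test01 ~]# fence_ack_manual test02   # acknowledge after manually power-cycling test02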

Red Hat cluster configuration system:

Cluster configuration file: /etc/cluster/cluster.conf is an XML file that describes the following cluster characteristics.

Cluster name: lists the cluster name, the cluster configuration file version, and a basic fence timeout used when a new node joins or is fenced from the cluster.

Cluster nodes: lists each node in the cluster, specifying the node name, node ID, number of quorum votes, and fencing method.

Fence devices: defines the fence devices.

Managed resources: defines the resources required to create cluster services. Managed resources include failover domains, resources, and services.

2. GFS Setup

The environment is set up as follows, with two nodes operating on a shared file system:

    • 192.168.10.11 test01
    • 192.168.10.12 test02
    • os:centos6.8 64-bit
    • VMware shared disk

Unless otherwise noted, the following operations must be performed on all nodes.

2.1 Configuring the Network

Edit the hosts file so that each node can reach the other by host name:

# more /etc/hosts
127.0.0.1      localhost localhost.localdomain
::1            localhost localhost.localdomain
192.168.10.11  test01
192.168.10.12  test02
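A quick sanity check that name resolution works in both directions:

[root@test01 ~]# ping -c 1 test02
[root@test02 ~]# ping -c 1 test01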
2.2 Installing cluster file system-related packages

Install the packages using yum:

# yum install cman openais gfs* kmod-gfs lvm2* rgmanager system-config-cluster scsi-target-utils cluster-snmp

The installation pulls in or updates a large number of dependent packages, so it is recommended to reboot the machines after installation to avoid unexpected problems.

2.3 Configuring Iptables

Allow test01 and test02 to communicate with each other.

On test01, add to the configuration file /etc/sysconfig/iptables:

-A INPUT -s 192.168.10.12 -j ACCEPT

On test02, add to the configuration file /etc/sysconfig/iptables:

-A INPUT -s 192.168.10.11 -j ACCEPT
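Then reload the firewall and confirm the rule is active; for example, on test01 (illustrative):

# service iptables restart
# iptables -L INPUT -n | grep 192.168.10.12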
2.4 Modifying the relevant configuration

In /etc/sysconfig/selinux, set SELINUX=disabled.
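The file change takes effect at the next boot; to also turn SELinux off immediately (a common companion step, not in the original text):

# setenforce 0
# getenforce
Permissive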

To modify the LVM logical volume configuration:

# vi /etc/lvm/lvm.conf

Change locking_type = 1 to locking_type = 3 to enable built-in clustered locking, so that multiple nodes can read and write simultaneously.

Set fallback_to_local_locking = 0 to prohibit falling back to local locking and avoid split-brain.
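Equivalently, a minimal sketch of the same two edits using sed (this assumes the stock lvm.conf layout; verify the result afterwards):

# sed -i 's/locking_type = 1/locking_type = 3/' /etc/lvm/lvm.conf
# sed -i 's/fallback_to_local_locking = 1/fallback_to_local_locking = 0/' /etc/lvm/lvm.conf
# grep -E 'locking_type|fallback_to_local_locking' /etc/lvm/lvm.conf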

2.5 Generating a cluster configuration file
[root@test01 ~]# ccs_tool create gfsmail
[root@test01 ~]# ccs_tool addfence meatware fence_manual
[root@test01 ~]# ccs_tool lsfence
Name             Agent
meatware         fence_manual
[root@test01 ~]# ccs_tool addnode -n 1 -f meatware test01
[root@test01 ~]# ccs_tool addnode -n 2 -f meatware test02
[root@test01 ~]# ccs_tool lsnode
Cluster name: gfsmail, config_version: 4

Nodename                        Votes Nodeid Fencetype
test01                             1    1    meatware
test02                             1    2    meatware
[root@test01 ~]# rsync -avz /etc/cluster/cluster.conf root@192.168.10.12:/etc/cluster/
sending incremental file list
cluster.conf

sent 307 bytes  received 31 bytes  676.00 bytes/sec
total size is 557  speedup is 1.65
[root@test01 ~]#
[root@test01 ~]# cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster name="gfsmail" config_version="4">

  <clusternodes>
    <clusternode name="test01" votes="1" nodeid="1">
      <fence>
        <method name="single">
          <device name="meatware"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="test02" votes="1" nodeid="2">
      <fence>
        <method name="single">
          <device name="meatware"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>

  <fencedevices>
    <fencedevice name="meatware" agent="fence_manual"/>
  </fencedevices>

  <rm>
    <failoverdomains/>
    <resources/>
  </rm>
</cluster>

You can then run the ccs_config_validate command to check that the configuration file is valid.
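On a valid configuration the command prints a single confirmation line (illustrative):

[root@test01 ~]# ccs_config_validate
Configuration validates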

2.6 Creating a clustered storage

Manually start the cman, clvmd, and rgmanager services so the cluster can manage and monitor the clustered storage devices:

# service cman start
# service clvmd start
# service rgmanager start
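Once cman is running, quorum can be inspected with cman_tool; for this two-node cluster the relevant lines should resemble the following (illustrative excerpt):

# cman_tool status | grep -E 'Nodes|votes|Quorum'
Nodes: 2
Expected votes: 2
Total votes: 2
Node votes: 1
Quorum: 2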

The following operations are performed on node 1 only:

Create the physical volume, volume group, and logical volume:

[root@test01 ~]# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created
[root@test01 ~]# vgcreate mailcluster /dev/sdb
  Clustered volume group "mailcluster" successfully created
[root@test01 ~]# pvs
  PV         VG          Fmt  Attr PSize   PFree
  /dev/sda2  vg_mail     lvm2 a--u 199.41g     0
  /dev/sdb   mailcluster lvm2 a--u   4.00t 4.00t
[root@test01 ~]# lvcreate -n maildata -l 100%FREE mailcluster
  Logical volume "maildata" created.
[root@test01 ~]# lvs
  LV       VG          Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  maildata mailcluster -wi-a-----   4.00t
  home     vg_mail     -wi-ao----     ...
  root     vg_mail     -wi-ao----     ...
  swap     vg_mail     -wi-ao----   4.00g
[root@test01 ~]#
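It is worth confirming that the new volume group actually carries the clustered attribute (the trailing "c" in the attribute string); a quick check (illustrative):

[root@test01 ~]# vgs -o vg_name,vg_attr mailcluster
  VG          Attr
  mailcluster wz--nc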
2.7 Create the GFS2 file system on the new Logical Volume:
[root@test01 ~]# mkfs.gfs2 -j 2 -p lock_dlm -t gfsmail:maildata /dev/mapper/mailcluster-maildata
This will destroy any data on /dev/mapper/mailcluster-maildata.
It appears to contain: symbolic link to `./dm-3'

Are you sure you want to proceed? [y/n] y

Device:                    /dev/mapper/mailcluster-maildata
Blocksize:                 4096
Device Size                4096.00 GB (1073740800 blocks)
Filesystem Size:           4096.00 GB (1073740798 blocks)
Journals:                  2
Resource Groups:           8192
Locking Protocol:          "lock_dlm"
Lock Table:                "gfsmail:maildata"
UUID:                      50e12acf-6fb0-6881-3064-856c383b51dd
[root@test01 ~]#

For the mkfs.gfs2 command, the parameters used here are as follows:

-p: specifies the GFS locking protocol; lock_dlm is generally chosen;

-j: specifies the number of journals (which determines how many nodes can mount the file system); in general leave some spare, otherwise journals will have to be added later;

View journals: # gfs2_tool journals /home/coremail/var

Add journals: # gfs2_jadd -j 1 /home/coremail/var   # adds one journal

-t: the lock table name, in the format clustername:fsname;

clustername must match the cluster name specified earlier in cluster.conf (above: gfsmail);

fsname is a unique name that identifies this file system within the cluster (above: maildata);

The last argument is the full path of the logical volume.

2.8 GFS Mounting

To create a directory:

mkdir /home/coremail/var

Add the logical volume just created to the /etc/fstab file so that it is mounted automatically:

# echo "/dev/mapper/mailcluster-maildata  /home/coremail/var            gfs2    defaults,noatime,nodiratime,noquota        0 0" >> /etc/fstab

To start the GFS2 service:

[root@test01 ~]# /etc/init.d/gfs2 start
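After the service starts, verify that the volume is really mounted as gfs2 (illustrative):

[root@test01 ~]# mount | grep gfs2
/dev/mapper/mailcluster-maildata on /home/coremail/var type gfs2 (rw,noatime,nodiratime,noquota)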

Execute on node 2:

Before proceeding, run the pvs and lvs commands to check whether node 2 can see the physical and logical volumes created on node 1. If they are not visible (try lvscan first), the shared storage is not being used or the configuration is faulty; troubleshoot and resolve the problem before executing the commands below.
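For instance, a quick visibility check on node 2 might look like this (illustrative; the vgchange line is needed only if the volume group is visible but not active):

[root@test02 ~]# lvscan
  ACTIVE            '/dev/mailcluster/maildata' [4.00 TiB] inherit
[root@test02 ~]# vgchange -a y mailcluster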

Then create the mount point, add the fstab entry, and start GFS2:

# mkdir -p /home/coremail/var
# echo "/dev/mapper/mailcluster-maildata  /home/coremail/var            gfs2    defaults,noatime,nodiratime,noquota        0 0" >> /etc/fstab
# /etc/init.d/gfs2 start

Execute clustat to query the state of each member node.

[root@test01 ~]# clustat
Cluster Status for gfsmail @ Thu Nov  3 ...
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 test01                                     1 Online, Local
 test02                                     2 Online

[root@test01 ~]#
2.9 On all nodes, configure the services to start automatically, so that server restarts are not a worry:
# chkconfig --level 345 cman on
# chkconfig --level 345 clvmd on
# chkconfig --level 345 gfs2 on
# chkconfig --level 345 rgmanager on
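To confirm the runlevel settings afterwards (illustrative):

# chkconfig --list | grep -E 'cman|clvmd|gfs2|rgmanager'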

Reference: https://access.redhat.com/documentation/zh-CN/Red_Hat_Enterprise_Linux/6/html/Global_File_System_2/ch-overview-gfs2.html

This article is from the "YGQYGQ2" blog; please keep this source: http://ygqygq2.blog.51cto.com/1009869/1871300
