RedHat 6.5: building GlusterFS, the whole process

Source: Internet
Author: User

First, preface

While learning KVM, my boss asked me to store the data disk and the system disk separately, and suggested using GlusterFS for the data storage. After digging through a lot of material, here is the procedure I ended up with.

Second, install the deployment.

Searching around, I found that on Linux you can simply download a repo file from the GlusterFS website and then install with yum. That is what I tried first, and I ran into all kinds of errors and missing dependencies, which was annoying. Still, I was determined to install from a yum repository, because a yum installation saves a lot of work: startup scripts, environment variables, and so on.

Installation start:

Set up a 163 yum mirror:

[[email protected] vm-images]# cat /etc/yum.repos.d/99bill.repo

[base]

name=centos-yum

baseurl=http://mirrors.163.com/centos/6/os/x86_64/

gpgcheck=1

gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
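The same repo file can be written in one step with a heredoc. A sketch: DEST is a temp directory here so it runs without root; on a real host it would be /etc/yum.repos.d.

```shell
# Write the 163-mirror repo file shown above.
DEST=$(mktemp -d)                    # use /etc/yum.repos.d on a real host
cat > "$DEST/99bill.repo" <<'EOF'
[base]
name=centos-yum
baseurl=http://mirrors.163.com/centos/6/os/x86_64/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
EOF
cat "$DEST/99bill.repo"
```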

<------------------------------------------->

Install some dependencies or some useful software:

yum -y install libibverbs librdmacm xfsprogs nfs-utils rpcbind libaio liblvm2app lvm2-devel

cd /etc/yum.repos.d/

Get the GlusterFS repo file:

wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/RHEL/glusterfs-epel.repo

mv 99bill.repo 99bill.repo.bak

yum clean all

cd /home/

Install the EPEL repo:

wget http://mirrors.sohu.com/fedora-epel/6/x86_64/epel-release-6-8.noarch.rpm

rpm -ivh epel-release-6-8.noarch.rpm

Install the dependency packages:

wget ftp://195.220.108.108/linux/epel/6/x86_64/userspace-rcu-0.7.7-1.el6.x86_64.rpm

rpm -ivh userspace-rcu-0.7.7-1.el6.x86_64.rpm

wget ftp://rpmfind.net/linux/fedora/linux/releases/24/Everything/x86_64/os/Packages/p/pyxattr-0.5.3-7.fc24.x86_64.rpm

rpm -ivh pyxattr-0.5.3-7.fc24.x86_64.rpm --force --nodeps

wget ftp://ftp.pbone.net/mirror/ftp.pramberger.at/systems/linux/contrib/rhel6/archive/x86_64/python-argparse-1.3.0-1.el6.pp.noarch.rpm

rpm -ivh python-argparse-1.3.0-1.el6.pp.noarch.rpm

<-------------------------------------------------------->

Install the GlusterFS packages:

yum install -y --skip-broken glusterfs glusterfs-api glusterfs-cli glusterfs-client-xlators glusterfs-fuse glusterfs-libs glusterfs-server

Start it: /etc/init.d/glusterd restart (run chkconfig glusterd on if you also want it started at boot).

<---------------------------------------------------->

Third, using GlusterFS

I will use 4 machines as GlusterFS servers, because I want to build a striped replicated volume!

What a striped replicated volume means here: the 4 machines are grouped into two replica pairs, <AB> and <CD> (bricks pair up in the order given on the command line). Each file is cut into 2 stripes of (roughly) equal size; stripe 1 is written to pair AB and stripe 2 to pair CD. Within a pair the two bricks are mirrors, so A and B hold identical data, as do C and D.
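As a rough illustration of the striping half (plain coreutils, not gluster): a stripe count of 2 cuts a file into two equal pieces, one per replica pair.

```shell
# Illustration only (coreutils, not gluster): stripe 2 cuts a 4 MB file
# into two 2 MB pieces, the way one stripe lands on each replica pair.
dd if=/dev/zero of=demo.img bs=1M count=4 2>/dev/null
split -n 2 demo.img stripe_          # produces stripe_aa and stripe_ab
ls -l stripe_aa stripe_ab
```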

Specific Use steps:

[[email protected] vm-images]# cat /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4

::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.55.231 datastorage231

192.168.55.232 datastorage232

192.168.55.233 datastorage233

192.168.55.234 datastorage234

Add these hostname-to-IP mappings to /etc/hosts on each machine.
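Since the entries follow a pattern, they can also be generated with a loop. A sketch: hosts.snippet is a scratch file to review before appending to /etc/hosts on every node.

```shell
# Generate the four datastorage entries from the table above.
for i in 231 232 233 234; do
  echo "192.168.55.$i datastorage$i"
done > hosts.snippet
cat hosts.snippet   # review, then append to /etc/hosts on each node
```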


To add a server to a storage pool:

I am running these commands from datastorage231.

gluster peer probe datastorage231 ==> this prompts that the local host does not need to be added to the pool.

gluster peer probe datastorage232

gluster peer probe datastorage233

gluster peer probe datastorage234

Command to create the striped replicated volume:

[[email protected] opt]# gluster volume create vm-images stripe 2 replica 2 transport tcp 192.168.55.231:/gfs_data/vm-images 192.168.55.232:/gfs_data/vm-images 192.168.55.233:/gfs_data/vm-images 192.168.55.234:/gfs_data/vm-images

volume create: vm-images: success: please start the volume to access data ==> prompts success

[[email protected] opt]# gluster volume info ==> view the created volume's information

Volume Name: vm-images

Type: Striped-Replicate

Volume ID: e1dcf250-a1d4-47e8-8f43-328c14f2508c

Status: Created

Number of Bricks: 1 x 2 x 2 = 4

Transport-type: tcp

Bricks:

Brick1: 192.168.55.231:/gfs_data/vm-images

Brick2: 192.168.55.232:/gfs_data/vm-images

Brick3: 192.168.55.233:/gfs_data/vm-images

Brick4: 192.168.55.234:/gfs_data/vm-images

Options Reconfigured:

performance.readdir-ahead: on

[[email protected] opt]# gluster volume start vm-images ==> start the volume

volume start: vm-images: success

[[email protected] opt]# gluster volume status all

Status of Volume:vm-images

Gluster process TCP Port RDMA Port Online Pid

------------------------------------------------------------------------------

Brick 192.168.55.231:/gfs_data/vm-images 49152 0 Y 2533

Brick 192.168.55.232:/gfs_data/vm-images 49152 0 Y 3019

Brick 192.168.55.233:/gfs_data/vm-images 49152 0 Y 2987

Brick 192.168.55.234:/gfs_data/vm-images 49152 0 Y 2668

NFS Server on localhost 2049 0 Y 2555

Self-heal Daemon on localhost n/a n/a Y 2560

NFS Server on datastorage233 2049 0 Y 3009

Self-heal Daemon on datastorage233 N/a N/a Y 3015

NFS Server on datastorage234 2049 0 Y 2690

Self-heal Daemon on datastorage234 N/a N/a Y 2695

NFS Server on datastorage232 2049 0 Y 3041

Self-heal Daemon on datastorage232 N/a N/a Y 3046


Task Status of Volume vm-images

------------------------------------------------------------------------------

There are no active volume tasks

The basic server installation is now complete.

tip: when a node joins the pool, a glusterd.info file containing the node's unique UUID is created automatically:

/var/lib/glusterd/glusterd.info

When a node's status is incorrect, you can delete everything in the /var/lib/glusterd/ directory except glusterd.info and then restart the glusterd service.
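A dry-run sketch of that reset (a temp directory stands in for /var/lib/glusterd so it can be tried safely; on a real node you would point GDIR at /var/lib/glusterd and restart glusterd afterwards):

```shell
# Keep glusterd.info, remove everything else in the state directory.
GDIR=$(mktemp -d)                    # stand-in for /var/lib/glusterd
touch "$GDIR/glusterd.info"
mkdir -p "$GDIR/peers" "$GDIR/vols"  # fake state to clear out
find "$GDIR" -mindepth 1 -maxdepth 1 ! -name glusterd.info -exec rm -rf {} +
ls "$GDIR"                           # only glusterd.info remains
```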


Fourth, client usage

modprobe fuse

lsmod | grep fuse

dmesg | grep -i fuse

yum -y install openssh-server wget fuse fuse-libs openib libibverbs

yum install -y glusterfs glusterfs-fuse

Mount the volume:

mount -t glusterfs 192.168.55.231:/vm-images /rhel6_gfs_data/

Mounting with a backup server:

You can specify the following options when using the mount -t glusterfs command. Note that you need to separate all options with commas.

backupvolfile-server=server-name

volfile-max-fetch-attempts=number-of-attempts

log-level=loglevel

log-file=logfile

transport=transport-type

direct-io-mode=[enable|disable]

use-readdirp=[yes|no]

mount -t glusterfs -o backupvolfile-server=volfile_server2,use-readdirp=no,log-level=WARNING,log-file=/var/log/gluster.log server1:/test-volume /mnt/glusterfs


mount -t glusterfs -o backupvolfile-server=192.168.55.233,use-readdirp=no,log-level=WARNING,log-file=/var/log/gluster.log 192.168.55.231:/vm-images /opt/gfs_temp
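For a mount that survives reboots, the same options can go into /etc/fstab. A sketch (same volume, mount point, and options as the command above; `_netdev` delays the mount until the network is up):

```
192.168.55.231:/vm-images  /opt/gfs_temp  glusterfs  defaults,_netdev,backupvolfile-server=192.168.55.233,log-level=WARNING,log-file=/var/log/gluster.log  0 0
```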




This article is from the "Crossing" blog, make sure to keep this source http://crossing.blog.51cto.com/8939692/1826365

