GlusterFS: Install and Mount Separately

Source: Internet
Author: User
Tags: posix, glusterfs, gluster

I used Red Hat Enterprise Linux 6.4.

Install GlusterFS directly with Yum

# wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo

Server-side installation:

# yum install -y glusterfs glusterfs-server glusterfs-fuse

Client-side installation:

# yum install -y glusterfs glusterfs-fuse

That's it, easy.


First, some background on the original architecture. We run an image server, so the storage holds pictures. Two machines, S1 and S2, share the same directory: both GlusterFS configurations export /var/test/, and multiple clients mount the S1 and S2 shares, for example to a local /test directory. When a client writes data to /test, it is written to both servers, so the two servers hold identical content and back each other up against a disk failure. Of course, data is also backed up automatically to a separate backup machine every day.
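For reference, on modern GlusterFS (3.x and later) the same replicated share can be built with the `gluster` CLI instead of hand-written volfiles. A sketch, assuming S1 and S2 resolve as the hypothetical hostnames s1 and s2:

```shell
# On s1: add s2 to the trusted storage pool.
gluster peer probe s2

# Create a 2-way replicated volume backed by /var/test on both servers.
gluster volume create test replica 2 s1:/var/test s2:/var/test
gluster volume start test

# On a client: mount the volume; writes replicate to both bricks.
mount -t glusterfs s1:/test /test
```

This replaces everything below for newer installs; the rest of this article covers the older volfile-based setup.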

Now a new project has been added that also needs shared storage. The GlusterFS servers are still S1 and S2, but a new directory must be created, say /newtest.


Straight to the configuration files.

Server:

# vim /etc/glusterfs/glusterd.vol


volume brick
  type storage/posix
  option directory /var/test/
end-volume

volume locker
  type features/posix-locks
  subvolumes brick
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  option listen-port 24000
  subvolumes locker
  option auth.addr.brick.allow *
  option auth.addr.locker.allow *
end-volume


volume brick1
  type storage/posix
  option directory /var/newtest/
end-volume

volume locker1
  type features/posix-locks
  subvolumes brick1
end-volume

volume server1
  type protocol/server
  option transport-type tcp/server
  option listen-port 24001
  subvolumes locker1
  option auth.addr.brick1.allow *
  option auth.addr.locker1.allow *
end-volume


Start the service:

# /etc/init.d/glusterd restart


Note: create the /var/test and /var/newtest directories on S1 and S2 first. After starting the service, check whether the two ports exported above are actually listening. The S1 and S2 configurations are exactly identical.
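One way to check both listeners (a sketch, run on S1 and S2; assumes the ports 24000/24001 from the volfiles above, and `netstat` as shipped on RHEL 6):

```shell
# List listening TCP sockets and look for the two GlusterFS server ports.
netstat -tln | grep -E ':(24000|24001) ' \
  && echo "listeners up" \
  || echo "a listener is missing"
```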


Client:


# vim /etc/glusterfs/photo.vol

volume client1
  type protocol/client
  option transport-type tcp/client
  option remote-host x.x.x.x                 # S1's IP
  option transport.socket.remote-port 24000
  option remote-subvolume locker
end-volume

volume client2
  type protocol/client
  option transport-type tcp/client
  option remote-host x.x.x.x                 # S2's IP
  option transport.socket.remote-port 24000
  option remote-subvolume locker
end-volume

volume bricks
  type cluster/replicate
  subvolumes client1 client2
end-volume


### Add io-cache feature
volume iocache
  type performance/io-cache
  option page-size 8MB
  option page-count 2
  subvolumes bricks
end-volume

### Add writeback feature
volume writeback
  type performance/write-behind
  option aggregate-size 8MB
  option window-size 8MB
  option flush-behind off
  subvolumes iocache
end-volume


Mount:

# glusterfs -f /etc/glusterfs/photo.vol -l /tmp/photo.log /test


Create files or directories inside /test, and the same data appears in the /var/test directory on both S1 and S2.
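A quick way to verify the replication (a sketch; assumes passwordless SSH to s1 and s2, hypothetical hostnames standing in for S1 and S2):

```shell
# Write a test file through the mount point...
echo "replication check" > /test/check.txt

# ...then confirm it landed in the backing directory on both servers.
ssh s1 cat /var/test/check.txt
ssh s2 cat /var/test/check.txt
```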


Now configure the new directory.


New client:


# vim /etc/glusterfs/photo1.vol


volume client1
  type protocol/client
  option transport-type tcp/client
  option remote-host x.x.x.x                 # S1's IP
  option transport.socket.remote-port 24001
  option remote-subvolume locker1
end-volume

volume client2
  type protocol/client
  option transport-type tcp/client
  option remote-host x.x.x.x                 # S2's IP
  option transport.socket.remote-port 24001
  option remote-subvolume locker1
end-volume


volume bricks
  type cluster/replicate
  subvolumes client1 client2
end-volume

### Add io-cache feature
volume iocache
  type performance/io-cache
  option page-size 8MB
  option page-count 2
  subvolumes bricks
end-volume

### Add writeback feature
volume writeback
  type performance/write-behind
  option aggregate-size 8MB
  option window-size 8MB
  option flush-behind off
  subvolumes iocache
end-volume




Mount:

# glusterfs -f /etc/glusterfs/photo1.vol -l /tmp/photo1.log /newtest


Create files or directories inside /newtest, and the same data appears in the /var/newtest directory on both S1 and S2.
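To make the two mounts survive a reboot, the volfiles can also be referenced from /etc/fstab (a sketch; the volfile path is used in the device field, and `_netdev` delays mounting until networking is up):

```
/etc/glusterfs/photo.vol   /test     glusterfs  defaults,_netdev  0 0
/etc/glusterfs/photo1.vol  /newtest  glusterfs  defaults,_netdev  0 0
```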


This article is from the "My Blog" blog; reprinting is declined.
