GlusterFS installation, configuration, and testing

Source: Internet
Author: User
Tags: glusterfs, gluster

Operating system: openSUSE 10.2

1. Download

1) Download GlusterFS

http://ftp.zresearch.com/pub/gluster/glusterfs/1.3/


http://europe.gluster.org/glusterfs/1.3/

2) Download the FUSE patch

http://ftp.zresearch.com/pub/gluster/glusterfs/fuse/


http://europe.gluster.org/glusterfs/fuse/

2. Installation

1) Install FUSE

First, install kernel-source:

# rpm -ivh kernel-source-2.6.18.2-34.i586.rpm

 

Then install FUSE:

# tar -xzf fuse-2.7.2glfs8.tar.gz
# cd fuse-2.7.2glfs8
# ./configure --prefix=/usr --enable-kernel-module
# make install
# ldconfig

 

2) Install GlusterFS

# tar -xzf glusterfs-1.3.8pre1.tar.gz
# cd glusterfs-1.3.8pre1
# ./configure --prefix=
# make install

 

With --prefix= (an empty prefix), the configuration files are installed under /etc; otherwise they are installed under /usr/local/etc/.
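This follows from autoconf's convention that sysconfdir defaults to ${prefix}/etc. A minimal sketch of the rule (just string expansion, not the build itself):

```shell
# autoconf derives sysconfdir from the prefix as ${prefix}/etc.
# An empty prefix therefore puts the volume files under /etc,
# while the default prefix /usr/local puts them under /usr/local/etc.
prefix=""
echo "sysconfdir: ${prefix}/etc"
prefix="/usr/local"
echo "sysconfdir: ${prefix}/etc"
```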

The installation is complete.

3. Configuration

This example uses two servers (172.16.0.191 and 172.16.0.192) and one client (172.16.0.193).

Server1 [172.16.0.191]

# cat /etc/glusterfs/glusterfs-server.vol
volume brick
  type storage/posix
  option directory /data/export    # Note: once exported, do not write to this directory directly
end-volume

volume brick-ns
  type storage/posix
  option directory /data/export-ns
end-volume

volume server
  type protocol/server
  subvolumes brick brick-ns
  option transport-type tcp/server    # for TCP/IP transport
  option auth.ip.brick.allow *
  option auth.ip.brick-ns.allow *
end-volume

# glusterfsd -f /etc/glusterfs/glusterfs-server.vol
# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2             5.0G  3.5G  1.2G  75% /
udev                  506M   64K  506M   1% /dev
/dev/sda3             996M   18M  929M   2% /home

 

Server2 [172.16.0.192] configuration and operations are the same as above.

Client1 [172.16.0.193] Configuration

# mkdir /mnt/glusterfs
# cat /etc/glusterfs/glusterfs-client.vol
volume client1-ns
  type protocol/client
  option transport-type tcp/client
  option remote-host 172.16.0.191
  option remote-subvolume brick-ns
end-volume

volume client1
  type protocol/client
  option transport-type tcp/client
  option remote-host 172.16.0.191
  option remote-subvolume brick
end-volume

volume client2
  type protocol/client
  option transport-type tcp/client
  option remote-host 172.16.0.192
  option remote-subvolume brick
end-volume

volume unify
  type cluster/unify
  subvolumes client1 client2
  option namespace client1-ns
  option scheduler rr
end-volume

# glusterfs -f /etc/glusterfs/glusterfs-client.vol /mnt/glusterfs
# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2             5.0G  4.2G  523M  90% /
udev                  247M   64K  247M   1% /dev
/dev/sda3             996M   18M  929M   2% /home
glusterfs             9.9G  8.0G  1.5G  85% /mnt/glusterfs
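The rr scheduler in the unify volume places each newly created file on the next subvolume in turn. A toy shell sketch of the idea (illustrative only, not GlusterFS code; the brick names are hypothetical):

```shell
# Toy round-robin placement: each new file goes to the next brick in turn.
bricks="server1 server2"
i=0
for f in a.txt b.txt c.txt; do
  n=$((i % 2 + 1))                       # 1-based index into the brick list
  brick=$(echo $bricks | cut -d' ' -f$n)
  echo "$f -> $brick"
  i=$((i + 1))
done
```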

 

View network status

# netstat -anp | grep glusterfs
tcp  0  0  172.16.0.193:1016  172.16.0.191:6996  ESTABLISHED  13900/[glusterfs]
tcp  0  0  172.16.0.193:1017  172.16.0.191:6996  ESTABLISHED  13900/[glusterfs]
tcp  0  0  172.16.0.193:1015  172.16.0.192:6996  ESTABLISHED  13900/[glusterfs]

 

4. Test

# In client 1

# cp /root/test.txt /mnt/glusterfs/.
# cp /root/test-1.txt /mnt/glusterfs/.
# cp /root/test-2.txt /mnt/glusterfs/.

 

# In client 1

# ll /mnt/glusterfs/
total 12
-rw-r--r-- 1 root 20 Feb 28 test-1.txt
-rw-r--r-- 1 root 10 Feb 28 test-2.txt
-rw-r--r-- 1 root 20 Feb 28 test.txt

 

# In Server 1

# ll /data/export
total 8
-rw-r--r-- 1 root 20 Feb 28 test-1.txt
-rw-r--r-- 1 root 10 Feb 28 test-2.txt

 

# In Server 2

# ll /data/export/
total 4
-rw-r--r-- 1 root 20 Feb 28 23:42 test.txt

 

Installation, configuration, and testing completed successfully.

Continue with high-availability testing.

# In Server 1

# killall -9 glusterfsd

 

# In client 1

# ls /mnt/glusterfs/
ls: cannot access /mnt/glusterfs/: No such file or directory

 

This shows that the entire cluster file system becomes unavailable when a single storage server goes down. For details on high-availability GlusterFS configurations, refer to the following articles:
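The usual remedy in GlusterFS 1.3 is the cluster/afr (automatic file replication) translator, which mirrors files across bricks so that one server failure does not take the mount down. A hedged sketch of a client-side AFR volume, reusing the client1/client2 volumes defined above (any further options are omitted here):

```
volume afr
  type cluster/afr
  subvolumes client1 client2
end-volume
```

With an AFR volume in place of unify, each file written through the mount is stored on both servers rather than distributed between them.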

References:
Install and run glusterfs v1.3 in 10 mins

Performance Optimization:
Guide to optimizing glusterfs

Appendix:

Self-heal
Currently, AFR does not perform active self-heal; that is, it does not fix all
inconsistencies automatically. Instead, it fixes inconsistencies when a file is
opened. Hence, if you need to make sure all of your AFR'd copies are in sync,
the following command may help.

$ find /mnt/glusterfs -type f -exec head -n 1 {} \;

 

A faster healing approach could be:

$ find /mnt/glusterfs -type f -exec head -c 1 {} \; > /dev/null

 

Post address: http://anotherbug.blog.chinajavaworld.com/entry/4356/0/

 
