Examples of RH236 GlusterFS storage configurations on Linux

Host planning

The first four nodes are used for the various GlusterFS storage configurations (distributed, replicated, distributed + replicated, geo-replication, and so on), while the fifth host serves as the client and as the slave end of the geo-replication disaster-recovery simulation. Note that all of the configurations below use IP addresses. On a real network it is recommended to configure host names or domain names instead: if an IP changes, nothing needs to be modified in GlusterFS, only the DNS record that points to the new IP.

Node    Host Name                 IP Address
Node1   server2-a.example.com     172.25.2.10
Node2   server2-b.example.com     172.25.2.11
Node3   server2-c.example.com     172.25.2.12
Node4   server2-d.example.com     172.25.2.13
Node5   server2-e.example.com     172.25.2.14
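
As noted above, hostname-based configuration is preferable in production. A minimal sketch of the /etc/hosts entries (or the equivalent DNS A records) that would let you probe and address the peers by name instead of IP:

172.25.2.10   server2-a.example.com   server2-a
172.25.2.11   server2-b.example.com   server2-b
172.25.2.12   server2-c.example.com   server2-c
172.25.2.13   server2-d.example.com   server2-d
172.25.2.14   server2-e.example.com   server2-e

With this mapping in place on every node, gluster peer probe server2-b.example.com works the same as probing the IP, and a later IP change only requires updating the mapping.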

[root@server2-a ~]# gluster
gluster> peer probe 172.25.2.10
peer probe: success. Probe on localhost not needed
gluster> peer probe 172.25.2.11
peer probe: success.
gluster> peer probe 172.25.2.12
peer probe: success.
gluster> peer probe 172.25.2.13
peer probe: success.
gluster>
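
Before creating any volumes it is worth confirming that all four nodes joined the trusted pool; on any member:

[root@server2-a ~]# gluster peer status

Each of the three remote peers should be listed with State: Peer in Cluster (Connected).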

[root@server2-a ~]# mkdir -p /bricks/test
[root@server2-a ~]# mkdir -p /bricks/data
[root@server2-a ~]# vgs
  VG        #PV #LV #SN Attr   VSize  VFree
  vg_bricks   1   0   0 wz--n- 14.59g 14.59g
[root@server2-a ~]# lvcreate -L 13g -T vg_bricks/brickspool
  Logical volume "lvol0" created
  Logical volume "brickspool" created
[root@server2-a ~]# lvcreate -V 3g -T vg_bricks/brickspool -n brick_a1
  Logical volume "brick_a1" created
[root@server2-a ~]# lvcreate -V 3g -T vg_bricks/brickspool -n brick_a2
  Logical volume "brick_a2" created
[root@server2-a ~]# mkfs.xfs -i size=512 /dev/vg_bricks/brick_a1
[root@server2-a ~]# mkfs.xfs -i size=512 /dev/vg_bricks/brick_a2
[root@server2-a ~]# cat /etc/fstab | grep -v \#
UUID=0cad9910-91e8-4889-8764-fab83b8497b9 /        ext4    defaults        1 1
UUID=661c5335-6d03-4a7b-a473-a842b833f995 /boot    ext4    defaults        1 2
UUID=b37f001b-5ef3-4589-b813-c7c26b4ac2af swap     swap    defaults        0 0
tmpfs                   /dev/shm            tmpfs   defaults        0 0
devpts                  /dev/pts            devpts  gid=5,mode=620  0 0
sysfs                   /sys                sysfs   defaults        0 0
proc                    /proc               proc    defaults        0 0
/dev/vg_bricks/brick_a1 /bricks/test        xfs     defaults        0 0
/dev/vg_bricks/brick_a2 /bricks/data        xfs     defaults        0 0
[root@server2-a ~]# mount -a
[root@server2-a ~]# df -h
[root@server2-a ~]# mkdir -p /bricks/test/testvol_n1
[root@server2-a ~]# mkdir -p /bricks/data/datavol_n1
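
Before moving on, it is worth checking that the thin volumes and mounts came up as expected; the same check applies on each of the other brick servers:

[root@server2-a ~]# lvs vg_bricks
[root@server2-a ~]# df -h /bricks/test /bricks/data

lvs should show brickspool as a thin pool containing brick_a1 and brick_a2 as roughly 3g thin volumes, and df should show both XFS bricks mounted.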

[root@server2-b ~]# mkdir -p /bricks/test
[root@server2-b ~]# mkdir -p /bricks/data
[root@server2-b ~]# lvcreate -L 13g -T vg_bricks/brickspool
[root@server2-b ~]# lvcreate -V 3g -T vg_bricks/brickspool -n brick_b1
[root@server2-b ~]# lvcreate -V 3g -T vg_bricks/brickspool -n brick_b2
[root@server2-b ~]# mkfs.xfs -i size=512 /dev/vg_bricks/brick_b1
[root@server2-b ~]# mkfs.xfs -i size=512 /dev/vg_bricks/brick_b2
/dev/vg_bricks/brick_b1 /bricks/test xfs defaults 0 0
/dev/vg_bricks/brick_b2 /bricks/data xfs defaults 0 0
[root@server2-b ~]# mount -a
[root@server2-b ~]# df -h
[root@server2-b ~]# mkdir -p /bricks/test/testvol_n2
[root@server2-b ~]# mkdir -p /bricks/data/datavol_n2


[root@server2-c ~]# mkdir -p /bricks/safe
[root@server2-c ~]# mkdir -p /bricks/data
[root@server2-c ~]# lvcreate -L 13g -T vg_bricks/brickspool
[root@server2-c ~]# lvcreate -V 3g -T vg_bricks/brickspool -n brick_c1
[root@server2-c ~]# lvcreate -V 3g -T vg_bricks/brickspool -n brick_c2
[root@server2-c ~]# mkfs.xfs -i size=512 /dev/vg_bricks/brick_c1
[root@server2-c ~]# mkfs.xfs -i size=512 /dev/vg_bricks/brick_c2
/dev/vg_bricks/brick_c1 /bricks/safe xfs defaults 0 0
/dev/vg_bricks/brick_c2 /bricks/data xfs defaults 0 0
[root@server2-c ~]# mount -a
[root@server2-c ~]# df -h
[root@server2-c ~]# mkdir -p /bricks/safe/safevol_n3
[root@server2-c ~]# mkdir -p /bricks/data/datavol_n3
Bricks-rep

[root@server2-d ~]# mkdir -p /bricks/safe
[root@server2-d ~]# mkdir -p /bricks/data
[root@server2-d ~]# lvcreate -L 13g -T vg_bricks/brickspool
[root@server2-d ~]# lvcreate -V 3g -T vg_bricks/brickspool -n brick_d1
[root@server2-d ~]# lvcreate -V 3g -T vg_bricks/brickspool -n brick_d2
[root@server2-d ~]# mkfs.xfs -i size=512 /dev/vg_bricks/brick_d1
[root@server2-d ~]# mkfs.xfs -i size=512 /dev/vg_bricks/brick_d2
/dev/vg_bricks/brick_d1 /bricks/safe xfs defaults 0 0
/dev/vg_bricks/brick_d2 /bricks/data xfs defaults 0 0
[root@server2-d ~]# mount -a
[root@server2-d ~]# df -h
[root@server2-d ~]# mkdir -p /bricks/safe/safevol_n4
[root@server2-d ~]# mkdir -p /bricks/data/datavol_n4

[root@server2-a ~]# gluster
gluster> volume create testvol 172.25.2.10:/bricks/test/testvol_n1 172.25.2.11:/bricks/test/testvol_n2
volume create: testvol: success: please start the volume to access data
gluster> volume start testvol
volume start: testvol: success
gluster> volume set testvol auth.allow 172.25.2.*
gluster> volume list
gluster> volume info testvol
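
Since testvol was created without a replica count, it is a purely distributed volume: each file lands on exactly one of the two bricks. A quick way to see this from the client (the /mnt mount point here is just an arbitrary spot for the test):

[root@server2-e ~]# mount -t glusterfs 172.25.2.10:/testvol /mnt
[root@server2-e ~]# touch /mnt/file{1..10}
[root@server2-a ~]# ls /bricks/test/testvol_n1
[root@server2-b ~]# ls /bricks/test/testvol_n2

Roughly half of the ten files should show up in each brick directory, and every file in exactly one of them.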

[root@server2-a ~]# gluster
gluster> volume create safevol replica 2 172.25.2.12:/bricks/safe/safevol_n3 172.25.2.13:/bricks/safe/safevol_n4
gluster> volume start safevol
gluster> volume set safevol auth.allow 172.25.2.*
gluster> volume list
gluster> volume info safevol

[root@server2-a ~]# gluster
gluster> volume create datavol replica 2 172.25.2.10:/bricks/data/datavol_n1 172.25.2.11:/bricks/data/datavol_n2 172.25.2.12:/bricks/data/datavol_n3 172.25.2.13:/bricks/data/datavol_n4
gluster> volume start datavol
gluster> volume set datavol auth.allow 172.25.2.*
gluster> volume list
gluster> volume info datavol
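
With four bricks and replica 2, GlusterFS pairs the bricks in the order given: datavol_n1 and datavol_n2 form one mirrored pair, datavol_n3 and datavol_n4 the other, and files are distributed across the two pairs. The volume info output should confirm this, reporting the type as Distributed-Replicate with a 2 x 2 = 4 brick layout.
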
Mount-glusterfs

[root@server2-e ~]# cat /etc/fstab | grep -v \#
UUID=0cad9910-91e8-4889-8764-fab83b8497b9 /        ext4    defaults        1 1
UUID=661c5335-6d03-4a7b-a473-a842b833f995 /boot    ext4    defaults        1 2
UUID=b37f001b-5ef3-4589-b813-c7c26b4ac2af swap     swap    defaults        0 0
tmpfs                 /dev/shm  tmpfs      defaults        0 0
devpts                /dev/pts  devpts     gid=5,mode=620  0 0
sysfs                 /sys      sysfs      defaults        0 0
proc                  /proc     proc       defaults        0 0
172.25.2.10:/testvol  /test     glusterfs  _netdev,acl     0 0
172.25.2.10:/safevol  /safe     nfs        _netdev         0 0
172.25.2.10:/datavol  /data     glusterfs  _netdev         0 0
[root@server2-e ~]# mkdir /test /safe /data
[root@server2-e ~]# mount -a
[root@server2-e ~]# df -h

[root@server2-e ~]# mkdir -p /test/confidential
[root@server2-e ~]# groupadd admins
[root@server2-e ~]# chgrp admins /test/confidential/
[root@server2-e ~]# useradd suresh
[root@server2-e ~]# cd /test/
[root@server2-e test]# setfacl -m u:suresh:rwx /test/confidential/
[root@server2-e test]# setfacl -m d:u:suresh:rwx /test/confidential/
[root@server2-e test]# useradd anita
[root@server2-e test]# setfacl -m u:anita:rx /test/confidential/
[root@server2-e test]# setfacl -m d:u:anita:rx /test/confidential/
[root@server2-e test]# chmod o-rx /test/confidential/
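
To confirm the ACLs, run getfacl against the directory (note that /test was mounted with the acl option in fstab, which is what makes these ACLs effective over GlusterFS):

[root@server2-e test]# getfacl /test/confidential/

The access and default entries should include user:suresh:rwx and user:anita:r-x, with group ownership admins and no permissions for other.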
Glusterfs-quota

[root@server2-e ~]# mkdir -p /safe/mailspool
[root@server2-a ~]# gluster
gluster> volume quota safevol enable
gluster> volume quota safevol limit-usage /mailspool 192MB
gluster> volume quota safevol list
[root@server2-e ~]# chmod o+w /safe/mailspool
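
A simple way to confirm the quota is enforced is to try writing past the 192MB limit; the file name and size here are arbitrary:

[root@server2-e ~]# dd if=/dev/zero of=/safe/mailspool/fill bs=1M count=250

The write should be cut off with a "Disk quota exceeded" error once the limit is hit, and volume quota safevol list on the server should show usage at or near the 192MB limit.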
Geo-replication

[root@server2-e ~]# lvcreate -L 13g -T vg_bricks/brickspool
[root@server2-e ~]# lvcreate -V 8g -T vg_bricks/brickspool -n slavebrick1
[root@server2-e ~]# mkfs.xfs -i size=512 /dev/vg_bricks/slavebrick1
[root@server2-e ~]# mkdir -p /bricks/slavebrick1
[root@server2-e ~]# vim /etc/fstab
/dev/vg_bricks/slavebrick1 /bricks/slavebrick1 xfs defaults 0 0
[root@server2-e ~]# mount -a
[root@server2-e ~]# mkdir -p /bricks/slavebrick1/brick
[root@server2-e ~]# gluster volume create testrep 172.25.2.14:/bricks/slavebrick1/brick/
[root@server2-e ~]# gluster volume start testrep
[root@server2-e ~]# groupadd repgrp
[root@server2-e ~]# useradd georep -g repgrp
[root@server2-e ~]# passwd georep
[root@server2-e ~]# mkdir -p /var/mountbroker-root
[root@server2-e ~]# chmod 0711 /var/mountbroker-root/
[root@server2-e ~]# cat /etc/glusterfs/glusterd.vol
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option transport-type socket,rdma
    option transport.socket.keepalive-time 10
    option transport.socket.keepalive-interval 2
    option transport.socket.read-fail-log off
    option ping-timeout 0
    # option base-port 49152
    option rpc-auth-allow-insecure on
    option mountbroker-root /var/mountbroker-root/
    option mountbroker-geo-replication.georep testrep
    option geo-replication-log-group repgrp
end-volume
[root@server2-e ~]# /etc/init.d/glusterd restart
[root@server2-a ~]# ssh-keygen
[root@server2-a ~]# ssh-copy-id georep@172.25.2.14
[root@server2-a ~]# ssh georep@172.25.2.14
[root@server2-a ~]# gluster system:: execute gsec_create
[root@server2-a ~]# gluster volume geo-replication testvol georep@172.25.2.14::testrep create push-pem
[root@server2-e ~]# sh /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh georep testvol testrep
[root@server2-a ~]# gluster volume geo-replication testvol georep@172.25.2.14::testrep start
[root@server2-a ~]# gluster volume geo-replication testvol georep@172.25.2.14::testrep status
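
To verify that the session is actually replicating, create a file on the master volume and check that it shows up in the slave brick; with this layout, testvol is mounted at /test on server2-e:

[root@server2-e ~]# touch /test/georep-check
[root@server2-e ~]# ls /bricks/slavebrick1/brick/

After a short delay the status command should report the session as Active, and georep-check should appear in the slave brick directory.
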
Glusterfs-snapshot

[root@server2-a ~]# gluster snapshot create safe-snap safevol
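
Snapshots rely on the bricks sitting on thinly provisioned LVs, which is why every brick above was carved out of the brickspool thin pool. Once created, a snapshot can be listed, inspected, and rolled back; note that a restore requires the volume to be stopped first, and that some GlusterFS versions append a GMT timestamp to the snapshot name, so check the list output for the exact name:

[root@server2-a ~]# gluster snapshot list
[root@server2-a ~]# gluster snapshot info safe-snap
[root@server2-a ~]# gluster volume stop safevol
[root@server2-a ~]# gluster snapshot restore safe-snap
[root@server2-a ~]# gluster volume start safevol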
