Iptables rules
iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 24007:24047 -j ACCEPT
iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 111 -j ACCEPT
iptables -A INPUT -m state --state NEW -m udp -p udp --dport 111 -j ACCEPT
iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 38465:38467 -j ACCEPT
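To keep these rules across reboots, a minimal sketch, assuming a RHEL/CentOS-style system with the iptables init service installed:
service iptables save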
GlusterFS service: /etc/init.d/glusterd {start|stop|status|restart}
Storage pool:
To add a server to a storage pool:
gluster peer probe SERVER (uses port 24007)
To view the status of a storage pool:
gluster peer status
To remove a server from the storage pool:
gluster peer detach SERVER
Example of creating a volume named distribute with a single brick:
gluster volume create distribute 10.28.1.12:/glusterfs
Server volumes:
To create a volume:
gluster volume create replica-volume server3:/exp3 server4:/exp4
Volume types:
1) Distributed (no replicas, so a failed brick may mean data loss)
gluster volume create replica-volume server1:/exp1 server2:/exp2
gluster volume create replica-volume transport rdma server1:/exp1
gluster volume set replica-volume auth.allow 10.*
gluster volume info
2) Replicated
gluster volume create replica-volume replica 2 transport tcp server1:/exp1 server2:/exp2
3) Striped (for environments with highly concurrent access to large files)
gluster volume create replica-volume stripe 2 transport tcp server1:/exp1 server2:/exp2
4) Distributed replicated (can improve read performance)
gluster volume create replica-volume replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
5) Distributed striped
gluster volume create replica-volume stripe 4 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
The glusterd service must be running before a volume can be created, and the volume must be started after creation:
gluster volume start replica-volume
gluster volume stop replica-volume
gluster volume delete replica-volume
Mounting a volume on a client:
1) Manual mount: mount -t glusterfs 10.28.1.96:/replica-volume /mnt/replica
2) Automatic mount (mounted at system startup) via an /etc/fstab entry:
server1:/replica-volume /mnt/glusterfs glusterfs defaults,_netdev 0 0
To test a mounted volume:
mount
server1:/replica-volume on /mnt/glusterfs type fuse.glusterfs (rw,allow_other,default_permissions,max_read=131072)
Adjusting volume options:
gluster volume set replica-volume performance.cache-size 256MB
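One way to verify that the option was applied: reconfigured options are listed in the volume info output.
gluster volume info replica-volume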
Expanding a volume:
gluster peer probe server4
gluster volume add-brick replica-volume server4:/exp4
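Adding a brick does not move existing data by itself; a rebalance (covered below) spreads data onto the new brick. A minimal sketch:
gluster volume rebalance replica-volume start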
Shrinking a volume:
gluster volume remove-brick replica-volume server2:/exp2
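On newer GlusterFS releases remove-brick is a staged operation so data can be drained first; a sketch, assuming a version that supports staged remove-brick:
gluster volume remove-brick replica-volume server2:/exp2 start
gluster volume remove-brick replica-volume server2:/exp2 status
gluster volume remove-brick replica-volume server2:/exp2 commit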
Migrating a volume (requires a FUSE mount):
To start a migration:
gluster volume replace-brick replica-volume server3:/exp3 server5:/exp5 start
To pause a migration:
gluster volume replace-brick replica-volume server3:/exp3 server5:/exp5 pause
To abort a migration:
gluster volume replace-brick replica-volume server3:/exp3 server5:/exp5 abort
To check migration status:
gluster volume replace-brick replica-volume server3:/exp3 server5:/exp5 status
To commit a migration:
gluster volume replace-brick replica-volume server3:/exp3 server5:/exp5 commit
To view the results of the migration: gluster volume info replica-volume
To check the status of a rebalance:
gluster volume rebalance replica-volume status
To stop a rebalance:
gluster volume rebalance replica-volume stop
To move data onto a newly added brick, first fix the layout:
gluster volume rebalance replica-volume fix-layout start
Then run on a client to trigger the redistribution:
find <gluster-mount> -noleaf -print0 | xargs --null stat >/dev/null
Geo-replication (backup and disaster recovery)
GlusterFS geo-replication provides a continuous, asynchronous, incremental replication service (based on rsync) from one site to another, over a LAN, a WAN, or the Internet.
To start replication:
gluster volume geo-replication Volume1 example.com:/data/remote_dir start
To view the status of replication:
gluster volume geo-replication Volume1 example.com:/data/remote_dir status
To stop replication:
gluster volume geo-replication Volume1 example.com:/data/remote_dir stop
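Geo-replication transfers data over SSH, so the master needs passwordless access to the slave; a minimal sketch, assuming root access on example.com:
ssh-keygen -t rsa
ssh-copy-id root@example.com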
To run the replication command manually:
rsync -PavhS --xattrs --ignore-existing /data/remote_dir/ client:/mnt/gluster
Directory quota management (controls the amount of storage used by a directory or volume):
gluster volume quota replica-volume enable
gluster volume quota replica-volume disable
gluster volume set replica-volume features.quota-timeout 5
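Enabling quotas imposes no limit by itself; a sketch of setting and listing a limit, assuming a /data directory exists in the volume:
gluster volume quota replica-volume limit-usage /data 10GB
gluster volume quota replica-volume list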
To mount with POSIX ACLs enabled:
mount -o acl /dev/sda1 /export1
mount -t glusterfs -o acl 198.192.198.234:glustervolume /mnt/gluster
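With ACLs enabled, the standard ACL tools work on the mount; a sketch using a hypothetical user alice and file somefile:
setfacl -m user:alice:rw /mnt/gluster/somefile
getfacl /mnt/gluster/somefile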
To set the log directory:
gluster volume log filename replica-volume /var/log/replica-volume/
To verify the location of the log directory:
gluster volume log locate replica-volume
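Logs can also be rotated from the same CLI; a sketch, assuming a release that still ships the log rotate subcommand:
gluster volume log rotate replica-volume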
If the replica count is 2, bricks must be added in multiples of 2, otherwise an error is returned:
[root@node97 ~]# gluster volume add-brick replica-volume 10.28.1.96:/mnt/replica1
Incorrect number of bricks supplied 1 for type REPLICATE with count 2
[root@node97 ~]# gluster volume add-brick replica-volume 10.28.1.96:/mnt/replica1 10.28.1.96:/mnt/replica2
Add Brick successful
Similarly, for a striped volume with stripe count 2, bricks must be added in multiples of 2 (2*n), otherwise an error is returned:
[root@node97 glusterfs]# gluster volume add-brick replica-volume 10.28.1.97:/mnt/replica1 10.28.1.96:/mnt/replica3 10.28.1.96:/mnt/replica4
Incorrect number of bricks supplied 3 for type STRIPE with count 2
[root@node97 glusterfs]# mount /glusterfs/replica2 /mnt/replica2 -o loop
[root@node97 glusterfs]# gluster volume add-brick replica-volume 10.28.1.97:/mnt/replica1 10.28.1.96:/mnt/replica3 10.28.1.96:/mnt/replica4 10.28.1.97:/mnt/replica2
Add Brick successful
Once a brick has been added to a volume, it cannot be added to another volume.
Client responsibilities:
Data volume management;
I/O scheduling;
File placement (cluster/distribute);
Data caching;
Read-ahead;
Write-behind;
Monitoring
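Several of these client-side behaviors are implemented as performance translators and can be toggled per volume; a sketch, assuming the stock translator option names:
gluster volume set replica-volume performance.read-ahead off
gluster volume set replica-volume performance.write-behind on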
Hash algorithm approaches:
1) With plain hashing, adding a device changes the system's hash mapping space. So that each file can still be found on the machine its hash points to, the files in the cluster must be redistributed and moved to the correct storage servers.
2) With consistent hashing, only the hash mapping space of the new node and its adjacent nodes is modified, so only part of the data on the adjacent nodes needs to move to the new node; the impact is relatively small. However, this brings another problem: the overall load of the system becomes unbalanced.
3) With the directory as the basic unit, a file's parent directory records the subvolume mapping information in extended attributes, and the files under that directory are distributed across the storage servers the parent directory owns. Because the distribution information is stored in the directory beforehand, a new node does not affect the existing file layout; it only starts participating in the storage distribution for directories created afterwards.
GlusterFS takes this problem into account in its design: when a new file is created, it gives priority to the node with the lightest capacity load, and creates a link file on the hashed target node pointing to the node where the file is actually stored.
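The mapping described in 3) can be inspected directly on a brick, since DHT stores it in an extended attribute on each directory; a sketch, assuming a brick path of /exp1 (getfattr comes from the attr package):
getfattr -n trusted.glusterfs.dht -e hex /exp1/somedir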
Gluster also uses some privileged client ports, such as 1020, 1021, 1022, and 1023.
Once glusterd is started, port 24007 listens for other servers joining the storage pool.
When a server is added to the storage pool, a port is opened for the connection to that server.
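To see which ports the gluster daemons are actually listening on, one quick check (assuming net-tools is installed):
netstat -tlnp | grep gluster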