GlusterFS 3.2.4/3.2.5 supports five volume types: distribute volume, stripe volume, replica volume, distribute stripe volume, and distribute replica volume. These five types can meet the needs of different applications for high performance and high availability.
(1) Distribute volume: files are distributed across the brick servers by a hash algorithm. This volume type is the foundation and the defining feature of GlusterFS;
(2) Stripe volume: similar to RAID0, with number of stripes = number of brick servers. Files are split into data blocks and distributed to the brick servers in round-robin fashion. The concurrency granularity is the data block, so performance on large files is high;
(3) Replica volume: a mirror volume, similar to RAID1, with number of replicas = number of brick servers, so the file data on every brick server is identical, forming an N-way mirror with high availability;
(4) Distribute stripe volume: the number of brick servers is a multiple of the stripe count; it combines the characteristics of distribute and stripe volumes;
(5) Distribute replica volume: the number of brick servers is a multiple of the replica count; it combines the characteristics of distribute and replica volumes.
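As a sketch, the five volume types above can be created with the gluster CLI roughly as follows (the server names and brick paths here are hypothetical placeholders, not from the example later in this article):

```shell
# Distribute volume (the default type, no keyword needed):
gluster volume create dis-vol server1:/exp1 server2:/exp2

# Stripe volume (stripe count = brick count):
gluster volume create str-vol stripe 2 server1:/exp1 server2:/exp2

# Replica volume (replica count = brick count):
gluster volume create rep-vol replica 2 server1:/exp1 server2:/exp2

# Distribute stripe volume (brick count is a multiple of the stripe count):
gluster volume create dis-str-vol stripe 2 \
    server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4

# Distribute replica volume (brick count is a multiple of the replica count):
gluster volume create dis-rep-vol replica 2 \
    server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
```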
Stripe volumes and replica volumes are analogous to RAID0 and RAID1 respectively: the former achieves higher concurrency and performance, while the latter achieves higher availability. Achieving high performance and high availability at the same time, as RAID10/RAID01 does, would be ideal. GlusterFS uses a stackable translator architecture, so composite volumes can be built by stacking the distribute, stripe, and replica translators on top of one another; building a RAID10/RAID01 equivalent is therefore entirely possible. Unfortunately, the gluster management tool in GlusterFS 3.2.4/3.2.5 does not support creating stripe replica volumes or distribute stripe replica volumes. The recently released 3.2.5 administration manual claims they are supported, but the management tool does not actually implement this. The 3.3beta management tool accepts the commands, but in our tests the volumes were not created successfully.
In fact, we can modify the GlusterFS volume configuration files by hand to implement a distributed RAID10, that is, a distribute stripe replica volume. A stripe replica volume is simply the special case of a distributed RAID10 in which stripe count × replica count = number of brick servers. This method bypasses glusterd/mgmt management and does work, but the configuration is inconvenient to modify afterwards, so we recommend using it with caution. The procedure is as follows:
(1) Use the gluster tool to create a distribute stripe volume;
(2) Stop the glusterd service on all related brick servers: /etc/init.d/glusterd stop;
(3) Manually modify the volume configuration files on one brick server and synchronize them to all related brick servers;
(4) Restart the glusterd service on all related brick servers: /etc/init.d/glusterd start.
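The four steps above might look like this in practice (the host names, brick paths, and the use of scp for synchronization are illustrative assumptions; any file-copy mechanism works for step 3):

```shell
# 1. Create a distribute stripe volume (hypothetical bricks on two hosts):
gluster volume create raid10 stripe 2 \
    host1:/opt/b1 host1:/opt/b2 host2:/opt/b3 host2:/opt/b4

# 2. Stop glusterd on every related brick server:
/etc/init.d/glusterd stop

# 3. Edit the volume file on one brick server, then copy it to the others:
vi /etc/glusterd/vols/raid10/raid10-fuse.vol
scp /etc/glusterd/vols/raid10/raid10-fuse.vol \
    host2:/etc/glusterd/vols/raid10/

# 4. Restart glusterd on every related brick server:
/etc/init.d/glusterd start
```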
The following example places all bricks on a single server to simplify the demonstration.
(1) Create a distribute stripe volume
gluster volume create raid10 stripe 2 192.168.75.129:/opt/raid10-1 192.168.75.129:/opt/raid10-2 192.168.75.129:/opt/raid10-3 192.168.75.129:/opt/raid10-4 192.168.75.129:/opt/raid10-5 192.168.75.129:/opt/raid10-6 192.168.75.129:/opt/raid10-7 192.168.75.129:/opt/raid10-8
The resulting /etc/glusterd/vols/raid10/raid10-fuse.vol configuration file contains the following:
volume raid10-client-0
    type protocol/client
    option remote-host 192.168.75.129
    option remote-subvolume /opt/raid10-1
    option transport-type tcp
end-volume

volume raid10-client-1
    type protocol/client
    option remote-host 192.168.75.129
    option remote-subvolume /opt/raid10-2
    option transport-type tcp
end-volume

volume raid10-client-2
    type protocol/client
    option remote-host 192.168.75.129
    option remote-subvolume /opt/raid10-3
    option transport-type tcp
end-volume

volume raid10-client-3
    type protocol/client
    option remote-host 192.168.75.129
    option remote-subvolume /opt/raid10-4
    option transport-type tcp
end-volume

volume raid10-client-4
    type protocol/client
    option remote-host 192.168.75.129
    option remote-subvolume /opt/raid10-5
    option transport-type tcp
end-volume

volume raid10-client-5
    type protocol/client
    option remote-host 192.168.75.129
    option remote-subvolume /opt/raid10-6
    option transport-type tcp
end-volume

volume raid10-client-6
    type protocol/client
    option remote-host 192.168.75.129
    option remote-subvolume /opt/raid10-7
    option transport-type tcp
end-volume

volume raid10-client-7
    type protocol/client
    option remote-host 192.168.75.129
    option remote-subvolume /opt/raid10-8
    option transport-type tcp
end-volume

volume raid10-stripe-0
    type cluster/stripe
    subvolumes raid10-client-0 raid10-client-1
end-volume

volume raid10-stripe-1
    type cluster/stripe
    subvolumes raid10-client-2 raid10-client-3
end-volume

volume raid10-stripe-2
    type cluster/stripe
    subvolumes raid10-client-4 raid10-client-5
end-volume

volume raid10-stripe-3
    type cluster/stripe
    subvolumes raid10-client-6 raid10-client-7
end-volume

volume raid10-dht
    type cluster/distribute
    subvolumes raid10-stripe-0 raid10-stripe-1 raid10-stripe-2 raid10-stripe-3
end-volume

volume raid10-write-behind
    type performance/write-behind
    subvolumes raid10-dht
end-volume

volume raid10-read-ahead
    type performance/read-ahead
    subvolumes raid10-write-behind
end-volume

volume raid10-io-cache
    type performance/io-cache
    subvolumes raid10-read-ahead
end-volume

volume raid10-quick-read
    type performance/quick-read
    subvolumes raid10-io-cache
end-volume

volume raid10-stat-prefetch
    type performance/stat-prefetch
    subvolumes raid10-quick-read
end-volume

volume raid10
    type debug/io-stats
    option latency-measurement off
    option count-fop-hits off
    subvolumes raid10-stat-prefetch
end-volume
(2) service glusterd stop
(3) Edit /etc/glusterd/vols/raid10/raid10-fuse.vol. The modified content is as follows:
volume raid10-client-0
    type protocol/client
    option remote-host 192.168.75.129
    option remote-subvolume /opt/raid10-1
    option transport-type tcp
end-volume

volume raid10-client-1
    type protocol/client
    option remote-host 192.168.75.129
    option remote-subvolume /opt/raid10-2
    option transport-type tcp
end-volume

volume raid10-client-2
    type protocol/client
    option remote-host 192.168.75.129
    option remote-subvolume /opt/raid10-3
    option transport-type tcp
end-volume

volume raid10-client-3
    type protocol/client
    option remote-host 192.168.75.129
    option remote-subvolume /opt/raid10-4
    option transport-type tcp
end-volume

volume raid10-client-4
    type protocol/client
    option remote-host 192.168.75.129
    option remote-subvolume /opt/raid10-5
    option transport-type tcp
end-volume

volume raid10-client-5
    type protocol/client
    option remote-host 192.168.75.129
    option remote-subvolume /opt/raid10-6
    option transport-type tcp
end-volume

volume raid10-client-6
    type protocol/client
    option remote-host 192.168.75.129
    option remote-subvolume /opt/raid10-7
    option transport-type tcp
end-volume

volume raid10-client-7
    type protocol/client
    option remote-host 192.168.75.129
    option remote-subvolume /opt/raid10-8
    option transport-type tcp
end-volume

volume raid10-afr-0
    type cluster/replicate
    subvolumes raid10-client-0 raid10-client-1
end-volume

volume raid10-afr-1
    type cluster/replicate
    subvolumes raid10-client-2 raid10-client-3
end-volume

volume raid10-afr-2
    type cluster/replicate
    subvolumes raid10-client-4 raid10-client-5
end-volume

volume raid10-afr-3
    type cluster/replicate
    subvolumes raid10-client-6 raid10-client-7
end-volume

volume raid10-stripe-0
    type cluster/stripe
    subvolumes raid10-afr-0 raid10-afr-1
end-volume

volume raid10-stripe-1
    type cluster/stripe
    subvolumes raid10-afr-2 raid10-afr-3
end-volume

volume raid10-dht
    type cluster/distribute
    subvolumes raid10-stripe-0 raid10-stripe-1
end-volume

volume raid10-write-behind
    type performance/write-behind
    subvolumes raid10-dht
end-volume

volume raid10-read-ahead
    type performance/read-ahead
    subvolumes raid10-write-behind
end-volume

volume raid10-io-cache
    type performance/io-cache
    subvolumes raid10-read-ahead
end-volume

volume raid10-quick-read
    type performance/quick-read
    subvolumes raid10-io-cache
end-volume

volume raid10-stat-prefetch
    type performance/stat-prefetch
    subvolumes raid10-quick-read
end-volume

volume raid10
    type debug/io-stats
    option latency-measurement off
    option count-fop-hits off
    subvolumes raid10-stat-prefetch
end-volume
(4) service glusterd start; gluster volume start raid10
(5) gluster volume info
Volume Name: raid10
Type: Distributed-Stripe
Status: Started
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: 192.168.75.129:/opt/raid10-1
Brick2: 192.168.75.129:/opt/raid10-2
Brick3: 192.168.75.129:/opt/raid10-3
Brick4: 192.168.75.129:/opt/raid10-4
Brick5: 192.168.75.129:/opt/raid10-5
Brick6: 192.168.75.129:/opt/raid10-6
Brick7: 192.168.75.129:/opt/raid10-7
Brick8: 192.168.75.129:/opt/raid10-8
At this point, the GlusterFS distributed RAID10 volume has been created. Testing and application can begin!
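One possible smoke test, assuming the example volume above: mount it with the FUSE client, write a file, and check that the data lands on both bricks of a replica pair (the mount point and file name are hypothetical):

```shell
# Mount the volume via the GlusterFS FUSE client:
mkdir -p /mnt/raid10
mount -t glusterfs 192.168.75.129:/raid10 /mnt/raid10

# Write some data through the volume:
dd if=/dev/zero of=/mnt/raid10/testfile bs=1M count=8

# Because each stripe subvolume now sits on a replica pair, the stripe
# blocks of testfile should appear on both bricks of each pair, e.g.
# /opt/raid10-1 and /opt/raid10-2 should hold identical copies:
ls -l /opt/raid10-*/testfile
```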