Red Hat Storage management: managing trusted storage pools and bricks




Red Hat Storage Management 1



1. Management of the trusted storage pool



A storage pool is a collection of storage servers. When a server first starts the glusterd service, its trusted storage pool contains only itself. So how do we add other servers to the trusted storage pool? With the command # gluster peer probe [server]. The other server must also be running the glusterd service, and [server] can be an IP address or a server name, provided the name can be resolved.


[root@rhs0 ~]# gluster peer probe rhs1
peer probe: success.
[root@rhs0 ~]# gluster peer probe rhs2
peer probe: success.


View Storage Pool Status


[root@rhs0 ~]# gluster peer status
Number of Peers: 2

Hostname: rhs1
Uuid: 149365ef-448e-421f-8971-e81183240c49
State: Peer in Cluster (Connected)

Hostname: rhs2
Uuid: 7a131594-9b15-4e4d-8a50-86ec4126e89c
State: Peer in Cluster (Connected)


Alternatively, you can use the following command to list the servers in the storage pool:


[root@rhs0 ~]# gluster pool list
UUID                                   Hostname        State
149365ef-448e-421f-8971-e81183240c49    rhs1            Connected
7a131594-9b15-4e4d-8a50-86ec4126e89c    rhs2            Connected
db270220-90e7-4368-805e-7c7d39db37c8    localhost       Connected





Just as servers can be added to the storage pool, they can also be removed from it. The command is as follows:



# gluster peer detach [server]



Here [server] is the server name (or IP address), and as before the name must be resolvable.


[root@rhs0 ~]# gluster peer detach rhs2
peer detach: success
[root@rhs0 ~]# gluster pool list
UUID                                   Hostname        State
149365ef-448e-421f-8971-e81183240c49    rhs1            Connected
db270220-90e7-4368-805e-7c7d39db37c8    localhost       Connected





Extended note:



If you want to create volumes that use RDMA as the transport (RDMA, or RDMA over TCP), all servers in the storage pool must have RDMA devices. So what exactly is an RDMA device?



RDMA stands for Remote Direct Memory Access. It allows a computer to access the memory of another computer directly, without consuming processor time on either side for the transfer.



RDMA lets one computer transfer data directly into another computer's memory, moving data from one system into remote system memory without involving either operating system. By eliminating extra memory copies and context switches, it frees up bus bandwidth and CPU cycles, improving application performance while reducing bandwidth and processor overhead and significantly lowering latency.



RDMA is a network-card technology that enables one computer to place information directly into the memory of another computer. It reduces latency by minimizing processing overhead and bandwidth requirements, which it achieves by implementing a reliable transport protocol in the NIC hardware and by supporting zero-copy networking and kernel memory bypass. (Extended material from Baidu Wenku: http://wenku.baidu.com/link?url=vSklCvlJfBUTTDaYq7707SghZt7WB1z_VOZGAD0HDVNYTHEB16JX58KYMMTGXZEFZBLVBH7EBUZVZCMZMSHOJGXU7I2VVEVMMZM4BPF-TAC)
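
For context, and not something shown in the original post: in Gluster the transport is chosen when a volume is created, so a volume meant to use RDMA would be created along these lines (the volume name rdmavol and the brick paths here are hypothetical):

# gluster volume create rdmavol transport rdma rhs1:/bricks/brick0/brick rhs2:/bricks/brick0/brick

If the transport option is omitted, the default transport is tcp.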






2. Creation and management of bricks



A brick is the basic unit of Red Hat Storage: the physical space on a server that provides the storage service and actually holds the data. You can create many bricks on a single server, but bear in mind that they all share the risk of data loss if that server goes down.



The procedure for creating a brick is basically the same as creating a logical volume.



We created a logical partition /dev/vda5 when we built the Red Hat Storage learning environment (see the first blog post on environment building):


Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *        3         506       253952  83  Linux
Partition 1 does not end on cylinder boundary.
/dev/vda2          506        6602      3072000  83  Linux
Partition 2 does not end on cylinder boundary.
/dev/vda3         6602        7618       512000  82  Linux swap / Solaris
Partition 3 does not end on cylinder boundary.
/dev/vda4         7618       20805      6646744   5  Extended
/dev/vda5         7618       20805      6646712+ 8e  Linux LVM


Create the PV (physical volume):


[root@rhs0 ~]# pvcreate /dev/vda5    (there is an option here; more on it later)


Create the VG (volume group):


[root@rhs0 ~]# vgcreate vg_bricks /dev/vda5    (there is an option here too; more on it later)


Next, check the VG:


[root@rhs0 ~]# vgs
  VG        #PV #LV #SN Attr   VSize VFree
  vg_bricks   1   0   0 wz--n- 6.34g 6.34g


Now we can create the brick inside the VG.



Create a thin pool:


[root@rhs0 ~]# lvcreate -L 1G -T vg_bricks/thinlypool    (-T: thin pool; thinlypool is the name of the thin pool)
  Logical volume "lvol0" created
  Logical volume "thinlypool" created


Create a brick named brick0 in the thin pool:


[root@rhs0 ~]# lvcreate -V 1G -T vg_bricks/thinlypool -n brick0    (-n: the name)
  Logical volume "brick0" created


View LVs


[root@rhs0 ~]# lvs
  LV         VG        Attr       LSize Pool       Origin Data%  Move Log Cpy%Sync Convert
  brick0     vg_bricks Vwi-a-tz-- 1.00g thinlypool        0.00
  thinlypool vg_bricks twi-a-tz-- 1.00g                   0.00





Extended note:



Thinly-provisioned LVs were introduced as a technology preview in RHEL 6.3; in RHEL 6.5 and RHEL 7 thin provisioning is a fully supported LVM feature.



Working principle:



When a thin volume is created, a virtual logical volume size is pre-allocated, but physical space is only allocated to the volume when data is actually written. This lets us easily create multiple thin volumes whose total logical capacity exceeds the physical disk space, without having to "pay" up front for data that may only arrive in the future. When the data generated by applications really does need more capacity, we can flexibly grow the volumes online.



The greatest feature of thin provisioning is the ability to allocate storage resources dynamically, on demand; in other words, storage is managed through virtualization. For example, a user asks the storage administrator to allocate 10 TB. It may well be true that 10 TB will eventually be needed, but based on current usage 2 TB is enough. The administrator therefore prepares 2 TB of physical storage and presents a 10 TB virtual volume to the server. The server can start running on an existing physical disk pool that is only one fifth of the virtual volume's capacity. This "start small" approach makes storage capacity utilization much more efficient.



Disk space in a standard logical volume is taken from the volume group when the volume is created, but a thin volume only consumes space from its storage pool (the ThinPoolLV) when data is written. A ThinPoolLV must therefore be created before any thin logical volumes, and it consists of two parts: a large data LV that stores the data blocks, and a metadata LV. The metadata records which blocks in the pool belong to each thin volume. (Put simply, the metadata stores an index and the data LV stores the real data; every access goes through the index first rather than straight to the data, much like a linked list in C, so in theory the stored data can grow very large and be resized dynamically.) (Extended material from Baidu Wenku: http://wenku.baidu.com/link?url=Ieaktaij4iw8ee5nu16fuayk7onilxv203cdywpkhaqpv7tnf03h2nxkybsahszp7og0f8xevwxqofm1ddyc7tkbo80aaz9ko7aiiblrtu_)
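
To relate this to the pool created above, LVM can show the hidden data and metadata volumes of a thin pool with lvs -a (a quick check I am adding here, not part of the original walkthrough):

[root@rhs0 ~]# lvs -a vg_bricks

In the output, the hidden [thinlypool_tdata] and [thinlypool_tmeta] volumes are the data LV and the metadata LV described above.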






To illustrate thin pools, here is a quick demonstration.



Create a 1G thin pool; note that inside a thin pool you can create a brick larger than 1G:


[root@rhs0 ~]# lvcreate -L 1G -T vg_bricks/spool
  Logical volume "lvol0" created
  Logical volume "spool" created
[root@rhs0 ~]# lvcreate -V 2G -T vg_bricks/spool -n brick0
  Logical volume "brick0" created
[root@rhs0 ~]# lvs
  LV     VG        Attr       LSize Pool  Origin Data%  Move Log Cpy%Sync Convert
  brick0 vg_bricks Vwi-a-tz-- 2.00g spool        0.00
  spool  vg_bricks twi-a-tz-- 1.00g              0.00








Once the brick is created, it must be formatted and mounted locally.


[root@rhs0 ~]# mkfs.xfs -i size=512 /dev/vg_bricks/brick0
meta-data=/dev/vg_bricks/brick0  isize=512    agcount=8, agsize=32768 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=262144, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0


Note the -i size=512 option, which sets the inode size to 512 bytes; this is required so that the extended attributes GlusterFS uses fit in the inode.



Create a mount point:


[root@rhs0 ~]# mkdir /bricks/brick0 -p


Set it to mount automatically at boot:


[root@rhs0 ~]# echo "/dev/vg_bricks/brick0 /bricks/brick0 xfs defaults 0 0" >> /etc/fstab
[root@rhs0 ~]# mount -a
[root@rhs0 ~]# df -Th
Filesystem                    Type   Size  Used Avail Use% Mounted on
/dev/vda2                     ext4   2.9G  1.6G  1.2G  58% /
tmpfs                         tmpfs  499M     0  499M   0% /dev/shm
/dev/vda1                     ext4   241M   30M  199M  13% /boot
/dev/mapper/vg_bricks-brick0  xfs   1014M   33M  982M   4% /bricks/brick0


Our brick is ready.
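
As an optional check that is not part of the original steps, you can confirm the inode size on the mounted brick:

[root@rhs0 ~]# xfs_info /bricks/brick0 | grep isize

The meta-data line should report isize=512.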



Before Red Hat Storage Server 3, a volume could use the mount point /bricks/brick0 directly as the brick path. In the latest version you must create one or more subdirectories under the mount directory /bricks/brick0, and the volume uses those subdirectories instead. So here we create a subdirectory named brick under /bricks/brick0.


[root@rhs0 ~]# mkdir /bricks/brick0/brick
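
As a preview (volume creation is covered later, and the volume name testvol here is just an example), it is this subdirectory, not the mount point, that is passed as the brick path:

# gluster volume create testvol replica 2 rhs1:/bricks/brick0/brick rhs2:/bricks/brick0/brick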





To make this quicker to repeat, I wrote the brick-creation steps into a script for later use:


[root@rhs0 ~]# cat createbricks.sh
#!/bin/bash
lvcreate -L 1G -T vg_bricks/thinlypool_"$1"
lvcreate -V 1G -T vg_bricks/thinlypool_"$1" -n brick"$1"
mkfs.xfs -i size=512 /dev/vg_bricks/brick"$1" -f
mkdir /bricks/brick"$1" -p
echo "/dev/vg_bricks/brick$1 /bricks/brick$1 xfs defaults 0 0" >> /etc/fstab
mount -a
mkdir /bricks/brick"$1"/brick
[root@rhs0 ~]# chmod a+x createbricks.sh


Then distribute it to the other servers:


[root@rhs0 ~]# ./distributefiles.sh /root/createbricks.sh /root/createbricks.sh
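
The distributefiles.sh helper comes from the environment-building post and is not reproduced here; a minimal sketch of such a script, assuming passwordless SSH to rhs1 and rhs2, could be:

#!/bin/bash
# Hypothetical sketch: copy the local file ($1) to the remote path ($2) on each peer
for host in rhs1 rhs2; do
    scp "$1" "${host}:$2"
done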


On rhs0, the brick numbers are ...


On rhs1, the brick numbers are 11, 12, 13 ...


On rhs2, the brick numbers are 21, 22, 23 ...


...





The script usage is # ./createbricks.sh [number], for example:


[root@rhs0 ~]# ./createbricks.sh 1
  Logical volume "lvol0" created
  Logical volume "thinlypool_1" created
  Logical volume "brick1" created
meta-data=/dev/vg_bricks/brick1  isize=512    agcount=8, agsize=32768 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=262144, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@rhs0 ~]# df -Th
Filesystem                    Type   Size  Used Avail Use% Mounted on
/dev/vda2                     ext4   2.9G  1.6G  1.2G  58% /
tmpfs                         tmpfs  499M     0  499M   0% /dev/shm
/dev/vda1                     ext4   241M   30M  199M  13% /boot
/dev/mapper/vg_bricks-brick1  xfs   1014M   33M  982M   4% /bricks/brick1





Removing a brick is essentially the same as removing a logical volume.



Step 1: unmount the brick


[root@rhs0 bricks]# umount /bricks/brick1


Step 2: delete the corresponding entry from /etc/fstab
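
You can simply edit the file by hand; a non-interactive alternative (my own sketch, not from the original) is:

[root@rhs0 bricks]# sed -i '\|/bricks/brick1 |d' /etc/fstab

The trailing space in the pattern keeps it from also matching entries such as /bricks/brick10.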



Step 3: delete the mount point you created


[root@rhs0 bricks]# rm -rf /bricks/brick1


Step 4: delete the logical volume


[root@rhs0 bricks]# lvremove vg_bricks
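
Note that lvremove vg_bricks will prompt to remove every logical volume in the volume group. To remove only this brick and its thin pool, you can name them explicitly instead (a sketch, using the names the script above would have created):

[root@rhs0 bricks]# lvremove vg_bricks/brick1 vg_bricks/thinlypool_1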





This article is from the "0 basic Linux" blog; please be sure to keep this source: http://huangmh77.blog.51cto.com/10041435/1660789


