Note: Any operation on a disk can damage the data on it. Back up your data in advance. Remember! Remember! Remember!
0x00: Preface.
The total space of a VG (volume group) is limited by the size of its physical disks. This tutorial uses 3 physical disks of 100M, 200M, and 300M, so the VG's total space is 600M, and the combined space of all LVs in the VG cannot exceed that total.
During use, an LV will fill up sooner or later. A mount point such as /lvm can only mount one LV at a time, and some programs cannot be split across two directories, for example a web site with a single document root, or MySQL's data directory, so it pays to be able to grow and shrink LVs in place.
0x01: Expand the VG volume group, shrink the VG volume group.
1) Expand the VG volume group.
In the previous chapter we added 3 physical disks. The first one (/dev/sdb1) has already been added to the VG; this time we add the second one (/dev/sdc1) to the existing vgdata volume group.
First, partition the disk and create /dev/sdc1, the same way as before. See the screenshot below.
# fdisk /dev/sdc    // create the partition; the interactive steps are omitted here
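For reference, the interactive fdisk dialogue usually looks roughly like the sketch below. This is only an illustration of the omitted steps, not the author's exact session; prompts vary between fdisk versions, and setting the partition type to 8e is conventional for LVM rather than strictly required.
# fdisk /dev/sdc
   n                 // new partition
   p                 // primary
   1                 // partition number 1
   <Enter> <Enter>   // accept the default first and last sectors
   t                 // change the partition type
   8e                // 8e = Linux LVM
   w                 // write the partition table and exit
# partprobe /dev/sdc    // ask the kernel to re-read the partition table if it has not done so already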
# pvcreate /dev/sdc1           // turn the plain partition into a PV
# pvs                          // list the PVs and which VG they belong to
# vgextend vgdata /dev/sdc1    // add the new PV to the VG; vgdata is the VG name, /dev/sdc1 the new PV
In the screenshot you can see that the first pvs run shows only one PV in the VG; after the new PV is added, the second pvs run shows one more.
[Screenshot: lvm_20.jpg]
# vgdisplay    // view the VG details; the total VG space has grown from 100M to 300M
[Screenshot: lvm_21.jpg]
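If you just want the totals at a glance, vgs prints a one-line summary per volume group. A minimal check, assuming the VG is still named vgdata as in this tutorial:
# vgs vgdata    // VSize should now include the new PV; VFree is the space not yet given to any LV
# pvs           // the new /dev/sdc1 should appear with its VG column set to vgdata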
2) Reduce the VG volume group.
In practice we sometimes need to remove a PV (physical volume) from a VG because of disk failure or space reallocation. A PV that is in use almost certainly has data written to it, so that data must be migrated before the PV is removed.
First, use pvdisplay to see how many PEs on the PV hold data; here /dev/sdb1 is the physical disk to be removed.
Note that the destination must have more free space than the data in use on /dev/sdb1.
We have prepared /dev/sdd1 to hold the migrated data.
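Before the move it is worth comparing source and destination. A minimal check, assuming /dev/sdb1 is the PV being retired and /dev/sdd1 has already been prepared with pvcreate and added to the VG:
# pvdisplay /dev/sdb1    // "Allocated PE" shows how many extents hold data that must be moved
# pvdisplay /dev/sdd1    // "Free PE" here must be at least as large as sdb1's allocated extents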
[Screenshot: lvm_32.jpg]
You can see that sdb1 has 100M of space in total with 0M free, i.e. 100M in use.
# pvmove -i 1 /dev/sdb1 /dev/sdd1    // move sdb1's data onto sdd1; -i 1 reports migration progress every 1 second
# vgreduce vgdata /dev/sdb1          // remove /dev/sdb1 from the vgdata volume group
# pvremove /dev/sdb1                 // wipe the PV label so /dev/sdb1 is no longer a PV
[Screenshot: lvm_33.jpg]
Result: No data is lost.
[Screenshot: lvm_34.jpg]
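A simple way to convince yourself that nothing was lost is to record checksums before the migration and compare them afterwards. A hedged sketch; the paths /lvm and /root/before.md5 are just example names, not part of the original demo:
# md5sum /lvm/* > /root/before.md5    // before pvmove: record checksums of the files on the LV
# md5sum -c /root/before.md5          // after pvmove/vgreduce: every file should report OK
# pvs                                 // /dev/sdb1 should no longer appear under vgdata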
0x02: Extend the LV, reduce the LV logical volume.
1) The VG volume group now has new space, and the lvdata LV we created earlier (on /dev/sdb1) has been used up, so let's extend the LV.
# lvextend -L +50M /dev/vgdata/lvdata
or
# lvextend -L 150M /dev/vgdata/lvdata
With +50M the value is added on top of the current size; with 150M (no plus sign) the value is the new total size. (-L takes a size in megabytes/gigabytes; the lowercase -l takes a number of extents.)
After executing the command we find that the LV size becomes 150M.
[Screenshot: lvm_22.jpg]
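The lowercase -l form works in extents rather than megabytes and also accepts percentages, which is handy when you simply want all the remaining space. A minimal sketch, not part of the original demo:
# lvextend -l +100%FREE /dev/vgdata/lvdata    // grow the LV by all of the free space left in the VG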
After the LV is extended, the filesystem does not yet see the new space; run resize2fs to grow the filesystem so the system recognizes it.
# resize2fs /dev/vgdata/lvdata
[Screenshot: lvm_23.jpg]
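On reasonably recent LVM versions the two steps can be combined: lvextend with -r (--resizefs) grows the filesystem for you after extending the LV. A hedged alternative to the manual resize2fs step above:
# lvextend -r -L +50M /dev/vgdata/lvdata    // extend the LV and resize the filesystem in one step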
2) Reduce the LV (logical volume) space.
In practice LV space is often allocated badly, either too large or too small. If an LV was given too much space, shrinking it avoids waste and frees the space for other LVs to use.
We previously created an LV (logical volume) that is mounted at /lvm.
[Screenshot: lvm_24.jpg]
Before adjusting we must unmount the mount point, because an LV cannot be shrunk while it is mounted and in use. The unmount succeeds.
# umount /lvm
[Screenshot: lvm_25.jpg]
Check the filesystem for errors; this check must be done before shrinking.
# e2fsck -f /dev/mapper/vgdata-lvdata
[Screenshot: lvm_26.jpg]
From the earlier df -h output we know the total space is 287M, with 128M used and 146M free. For this demo we shrink the total space to 250M.
Note: before shrinking the LV, shrink the filesystem first; resize2fs comes before lvreduce, and the order must not be reversed.
# resize2fs /dev/mapper/vgdata-lvdata 250M    // shrink the filesystem to 250M
[Screenshot: lvm_28.jpg]
# lvreduce -L 250M /dev/mapper/vgdata-lvdata
or
# lvreduce -L -50M /dev/mapper/vgdata-lvdata    // -50M subtracts 50M from the current size
[Screenshot: lvm_3.jpg]
All right, the shrink is done; now mount it again.
# mount -a
or
# mount /dev/vgdata/lvdata /lvm
Because the default PE size is 4M, the 250M we asked for is not necessarily an exact multiple of the PE size, so the system recalculates the size as a whole number of PEs, and the size shown may be smaller than the number we typed.
[Screenshot: lvm_29.jpg]
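Concretely, with 4M extents, 250M / 4M = 62.5 is not a whole number of extents, so LVM has to settle on an extent boundary such as 248M (62 extents) or 252M (63 extents). A minimal check of the values your system actually used:
# vgdisplay vgdata | grep "PE Size"                  // the extent size, 4.00 MiB by default
# lvdisplay /dev/vgdata/lvdata | grep "Current LE"   // the number of extents now allocated to the LV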
Note: /dev/mapper/vgdata-lvdata and /dev/vgdata/lvdata are the same device; both point to the same LV.
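You can confirm this yourself: on most modern systems both names are simply symlinks to the same device-mapper node (a minimal check; the dm-N number will vary):
# ls -l /dev/vgdata/lvdata /dev/mapper/vgdata-lvdata    // both typically point to the same ../dm-N device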
Check how much free space the PVs have now; the output shows a little over 48M free.
# pvs
[Screenshot: lvm_31.jpg]
This article is from the "Enlightened Grocery Store" blog; please be sure to keep this source: http://wutou.blog.51cto.com/615096/1980891