(1) Reduce the device

[root@localhost ~]# umount /westos/
[root@localhost ~]# e2fsck -f /dev/vg0/lv0      # perform a forced filesystem check first
[root@localhost ~]# resize2fs /dev/vg0/lv0 100M
resize2fs 1.42.9 (28-Dec-2013)
Resizing the filesystem on /dev/vg0/lv0 to 102400 (1k) blocks.
The filesystem on /dev/vg0/lv0 is now 102400 blocks long.
[root@localhost ~]# lvreduce -L 100M /dev/vg0/lv0
  WARNING: Reducing active logical volume to 100.00 MiB
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce lv0? [y/n]: y
  Reducing logical volume lv0 to 100.00 MiB
  Logical volume lv0 successfully resized
[root@localhost ~]# lvs
  LV   VG   Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lv0  vg0  -wi-a----- 100.00m
[root@localhost ~]# mount /dev/vg0/lv0 /westos/
The output shows that lv0 has been reduced to 100 MB.
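The order of the steps above matters: the filesystem must be shrunk before the logical volume, or data past the new end of the LV is cut off. A minimal dry-run sketch that only prints the command sequence (it does not touch any device), reusing the /dev/vg0/lv0 device and /westos mount point from the example:

```shell
#!/bin/sh
# Dry-run sketch: print the commands for safely shrinking an ext4 LV.
# Nothing is executed against the device; names come from the example above.
shrink_cmds() {
    lv=$1 mnt=$2 size=$3
    # The filesystem is shrunk BEFORE the LV; reversing these two steps
    # would truncate the filesystem and destroy data.
    echo "umount $mnt"
    echo "e2fsck -f $lv"
    echo "resize2fs $lv $size"
    echo "lvreduce -L $size $lv"
    echo "mount $lv $mnt"
}

shrink_cmds /dev/vg0/lv0 /westos 100M
```

Printing the plan first makes it easy to review the order before running the commands for real (for example via `sh -x`).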
(2) Reduce the volume group

A volume group that holds no data can be reduced or removed directly. To reduce a volume group that holds data, however, you must first migrate the data off the physical volume being removed. Here we demonstrate reducing a volume group that holds data:
# As seen above, /dev/vdb2 is too small to hold the data on vdb1, so vdb3 was added
[root@localhost ~]# pvmove /dev/vdb1 /dev/vdb3
  /dev/vdb1: Moved: 7.9%
  /dev/vdb1: Moved: 65.8%
  /dev/vdb1: Moved: 100.0%
[root@localhost ~]# vgreduce vg0 /dev/vdb1      # detach vdb1 from vg0 first
  Removed "/dev/vdb1" from volume group "vg0"
[root@localhost ~]# pvremove /dev/vdb1
  Labels on physical volume "/dev/vdb1" successfully wiped.
The output shows that vdb1 has been removed successfully. Following the same steps, you can also remove vdb2.
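Before a pvmove it is worth checking (with `pvs`) that the destination physical volume has enough free space for the extents being evacuated. The three-step removal can be sketched as a dry run that only prints the commands, assuming the vg0/vdb1/vdb3 names from the transcript:

```shell
#!/bin/sh
# Dry-run sketch: print the steps for evacuating and removing a PV.
# VG and PV names are the ones used in the transcript above.
evacuate_pv() {
    vg=$1 src=$2 dst=$3
    echo "pvs --units m"          # verify free space on the destination first
    echo "pvmove $src $dst"       # migrate all extents off the source PV
    echo "vgreduce $vg $src"      # detach the source PV from the VG
    echo "pvremove $src"          # wipe the LVM label from the source PV
}

evacuate_pv vg0 /dev/vdb1 /dev/vdb3
```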
4. Use LVM snapshots

lvcreate -L 100M -n lv0backup -s /dev/vg0/lv0
This creates a 100 MB snapshot of the existing /dev/vg0/lv0 and names it lv0backup. No formatting or other preparation is needed. When you use the snapshot, you are reading the contents of lv0, much like a virtual machine snapshot. If something in the snapshot is deleted by mistake, unmount the snapshot first, remove the damaged snapshot, and take a new one.
[root@localhost ~]# cd /westos/        # at this point, lv0 is mounted on /westos
[root@localhost westos]# ls
[root@localhost westos]# touch file{1..5}
[root@localhost ~]# lvcreate -L 100M -n lv0backup -s /dev/vg0/lv0
  Logical volume "lv0backup" created
[root@localhost ~]# umount /westos/
[root@localhost ~]# mount /dev/vg0/lv0backup /westos
[root@localhost ~]# cd /westos/
[root@localhost westos]# ls
file1 file2 file3 file4 file5
The output shows that the snapshot lv0backup is now mounted.
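Mounting the snapshot read-only is one way to get at the old files. LVM can also roll the origin back to the snapshot state with `lvconvert --merge`, which consumes the snapshot; the merge starts once the origin is not in use. A dry-run sketch that only prints the commands, assuming the lv0/lv0backup names from the example:

```shell
#!/bin/sh
# Dry-run sketch: roll an origin LV back to its snapshot state.
# Uses lvconvert --merge; LV names match the example above.
restore_from_snapshot() {
    origin=$1 snap=$2 mnt=$3
    echo "umount $mnt"              # origin must be unused for the merge to start
    echo "lvconvert --merge $snap"  # merging consumes the snapshot
    echo "mount $origin $mnt"
}

restore_from_snapshot /dev/vg0/lv0 /dev/vg0/lv0backup /westos
```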
5. Delete LVM

Note that LVM components must be deleted in the reverse order of creation: logical volumes first, then the volume group, then the physical volumes.
[root@localhost ~]# umount /westos/
[root@localhost ~]# lvremove /dev/vg0/lv0
Do you really want to remove active logical volume lv0backup? [y/n]: y
  Logical volume "lv0backup" successfully removed
Do you really want to remove active logical volume lv0? [y/n]: y
  Logical volume "lv0" successfully removed
[root@localhost ~]# lvremove /dev/vg0/lv1
Do you really want to remove active logical volume lv1? [y/n]: y
  Logical volume "lv1" successfully removed
[root@localhost ~]# vgremove vg0
  Volume group "vg0" successfully removed
[root@localhost ~]# pvremove /dev/vdb3
  Labels on physical volume "/dev/vdb3" successfully wiped
The output shows that all LVM components have been deleted. Afterwards, delete the hard disk partitions as well:
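The partitions themselves can be removed with fdisk or parted. A dry-run sketch that only prints the commands, assuming partition 1 on the /dev/vdb disk used throughout this tutorial:

```shell
#!/bin/sh
# Dry-run sketch: print the commands to delete a partition and have the
# kernel re-read the partition table. Disk and partition number are the
# ones assumed from the tutorial.
delete_partition() {
    disk=$1 num=$2
    echo "parted -s $disk rm $num"   # remove the partition non-interactively
    echo "partprobe $disk"           # ask the kernel to re-read the table
}

delete_partition /dev/vdb 1
```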
The final result is:
[root@localhost ~]# cat /proc/partitions
major minor  #blocks  name
  253     0  10485760  vda
  253     1  10484142  vda1
  253    16  10485760  vdb
If the hard disk partitions are deleted without tearing down the LVM layers step by step first, the volume group still references the now-missing physical volume, and an error is reported when the partition table is resynchronized. Solution:

vgreduce vg0 --removemissing