Logical Volume Management in Linux (LVM)

I. Theoretical Principles

  • LVM (Logical Volume Manager) is a mechanism for managing disk partitions in Linux.
  • LVM is a logical layer built on top of disks and partitions that improves the flexibility of disk partition management.
1. LVM Glossary
  1. Physical media: the hard disk itself, the lowest-level storage unit in the storage system.
  2. Physical Volume (PV): a device with the same properties as a hard disk partition or a logical disk partition. It is the basic logical storage block of LVM; unlike the raw physical media, it also contains LVM-related management parameters.
  3. Volume Group (VG): an LVM volume group is analogous to a physical hard disk in a non-LVM system and is made up of physical volumes. One or more LVM "partitions" (logical volumes) can be created inside a volume group.
  4. Logical Volume (LV): analogous to a hard disk partition in a non-LVM system; a file system can be created on a logical volume.
  5. Physical Extent (PE): each physical volume is divided into basic units called PEs. Each PE has a unique number and is the smallest unit that LVM can address. The PE size is configurable; the default is 4 MB.
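The PE size matters in practice: LVM always allocates whole extents, so a requested LV size is rounded up to a multiple of the PE size. A minimal bash sketch of that rounding, assuming the default 4 MiB PE size (a real VG reports its PE size via vgdisplay):

```shell
# Round a requested LV size (MiB) up to whole physical extents, as LVM does.
# pe_size=4 is the default; this is an illustration, not LVM's actual code.
pe_size=4
for req in 100 160 250; do
  extents=$(( (req + pe_size - 1) / pe_size ))  # ceiling division
  rounded=$(( extents * pe_size ))
  echo "${req}M -> ${extents} PEs = ${rounded}M"
done
```

For 250 MiB this yields 63 extents (252 MiB), which matches the "Rounding size to boundary between physical extents: 252.00 MiB" message that lvextend prints later in this article.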
2. LVM Schematic Diagram

3. Summary of the Main Commands

1) Physical volumes:
   pvcreate  # create a physical volume
   pvremove  # wipe the LVM label from a physical volume
   pvmove    # move the data on a PV to another PV
   pvresize  # resize a physical volume (resize2fs resizes an ext filesystem, not a PV)
2) Volume groups:
   vgcreate  # create a volume group
   vgextend  # add physical volumes to a volume group
   vgreduce  # remove physical volumes from a volume group
3) Logical volumes:
   lvcreate  # create a logical volume
   lvextend  # extend a logical volume
   lvreduce  # shrink a logical volume; note that a logical volume cannot be shrunk online
II. Practice

To see the changes in real time, run a monitor in a separate terminal:

[root@localhost ~]# watch -n 1 "pvs; echo ====================; vgs; echo ====================; lvs; echo ====================; df -h /westos"
1. Creating an LVM

Before creating an LVM, first use fdisk to create a physical partition and set its type to Linux LVM (8e). Creating physical partitions was covered in detail in the previous article, so we start from creating the physical volume.

[root@localhost ~]# pvcreate /dev/vdb1            # create a physical volume on the new partition
WARNING: xfs signature detected on /dev/vdb1 at offset 0. Wipe it? [y/n] y
  Wiping xfs signature on /dev/vdb1.
  Physical volume "/dev/vdb1" successfully created
[root@localhost ~]# vgcreate vg0 /dev/vdb1        # create a volume group vg0
  Volume group "vg0" successfully created
[root@localhost ~]# lvcreate -L 100M -n lv0 vg0   # create a 100 MB logical volume lv0 in vg0
WARNING: xfs signature detected on /dev/vg0/lv0 at offset 0. Wipe it? [y/n] y
  Wiping xfs signature on /dev/vg0/lv0.
  Logical volume "lv0" created
[root@localhost ~]# mkfs.xfs /dev/vg0/lv0         # format the new logical volume
meta-data=/dev/vg0/lv0           isize=256    agcount=4, agsize=6400 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0
data     =                       bsize=4096   blocks=25600, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=853, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@localhost ~]# mount /dev/vg0/lv0 /westos    # mount the logical volume

Because the monitoring command is already running, you can see the result of the logical volume creation, for example:

2. Extending an LVM

As the schematic diagram shows, an LV is carved out of the volume group (VG), so extending an LV has two cases:

1. The VG has enough free capacity for the extension. In this case you can extend directly with the commands.
2. The VG does not have enough free capacity. In this case you must first create a new physical partition, use it to extend the VG, and then extend the LV.
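The decision between the two cases comes down to free extents in the VG (vgs shows them as VFree). A small illustrative check in bash, with the extent counts taken from the failed lvextend in this article (20 extents needed, only 9 free):

```shell
# Decide whether vg0 can satisfy an lvextend directly or must be extended
# first. The numbers mirror this article's example (growing lv0 by 80M with
# a 4M PE size needs 20 extents); they are not queried from a live system.
need=20   # extents required by the lvextend
free=9    # free extents currently in vg0
if [ "$free" -ge "$need" ]; then
  action="lvextend"
else
  action="vgextend"
fi
echo "needed=$need free=$free -> run $action first"
```

On a real system the two inputs would come from `vgs -o vg_free_count` and the requested size divided by the PE size.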

1) vg has sufficient capacity
[root@localhost ~]# lvextend -L 160M /dev/vg0/lv0   # extend lv0 to 160 MB
  Extending logical volume lv0 to 160.00 MiB
  Logical volume lv0 successfully resized
[root@localhost ~]# xfs_growfs /dev/vg0/lv0         # grow the xfs filesystem to match
meta-data=/dev/mapper/vg0-lv0    isize=256    agcount=4, agsize=6400 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0
data     =                       bsize=4096   blocks=25600, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=853, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 25600 to 40960

2) insufficient capacity in vg

From the previous step you can see that vg0 does not have enough capacity to extend lv0 to 240 MB:

[root@localhost ~]# lvextend -L 240M /dev/vg0/lv0
  Extending logical volume lv0 to 240.00 MiB
  Insufficient free space: 20 extents needed, but only 9 available

Therefore, create a new physical partition and add it to vg0 before continuing the extension.

[root@localhost ~]# fdisk /dev/vdb        # create a new partition, type 8e (Linux LVM)
...
# When the new partition is saved on exit, fdisk reports that the vdb device is
# busy, so run partprobe to reread the partition table and make the system
# recognize the new partition.
[root@localhost ~]# partprobe
[root@localhost ~]# vgextend vg0 /dev/vdb2
  Physical volume "/dev/vdb2" successfully created
  Volume group "vg0" successfully extended
[root@localhost ~]# lvextend -L 250M /dev/vg0/lv0
  Rounding size to boundary between physical extents: 252.00 MiB
  Extending logical volume lv0 to 252.00 MiB
  Logical volume lv0 successfully resized
[root@localhost ~]# xfs_growfs /dev/vg0/lv0
meta-data=/dev/mapper/vg0-lv0    isize=256    agcount=7, agsize=6400 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0
data     =                       bsize=4096   blocks=40960, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=853, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 40960 to 64512

# On an ext4 filesystem the equivalent steps would be:
#   mkfs.ext4 /dev/vg0/lv0
#   lvextend -L 400M /dev/vg0/lv0
#   resize2fs /dev/vg0/lv0

The monitoring results after running are as follows:

3. Shrinking an LVM

Note: a logical volume cannot be shrunk online; you must unmount it before shrinking. (xfs does not support shrinking at all, so this example uses an ext4 filesystem.)

(1) Shrinking the logical volume
[root@localhost ~]# umount /westos/
[root@localhost ~]# e2fsck -f /dev/vg0/lv0        # force a filesystem check first
[root@localhost ~]# resize2fs /dev/vg0/lv0 100M   # shrink the filesystem to 100 MB
resize2fs 1.42.9 (28-Dec-2013)
Resizing the filesystem on /dev/vg0/lv0 to 102400 (1k) blocks.
The filesystem on /dev/vg0/lv0 is now 102400 blocks long.
[root@localhost ~]# lvreduce -L 100M /dev/vg0/lv0  # then shrink the logical volume
  WARNING: Reducing active logical volume to 100.00 MiB
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce lv0? [y/n]: y
  Reducing logical volume lv0 to 100.00 MiB
  Logical volume lv0 successfully resized
[root@localhost ~]# lvs
  LV   VG   Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lv0  vg0  -wi-a----- 100.00m
[root@localhost ~]# mount /dev/vg0/lv0 /westos/

The monitoring results show that lv0 has been reduced to 100 MB.

(2) Shrinking the volume group

A physical volume that holds no data can be removed from the volume group directly. To remove a physical volume that holds data, however, you must first migrate its data to another physical volume. Here we demonstrate removing a physical volume that holds data:
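Before a pvmove, it is worth checking that the remaining PVs have enough free space to absorb the source PV's allocated extents; otherwise a new PV must be added first, which is exactly why vdb3 is added in this example. A hedged bash sketch of that check, with made-up sizes (the real numbers would come from pvs):

```shell
# Pre-check for pvmove: the destination's free space must cover the source's
# allocated space. The MiB values below are illustrative placeholders, not
# output from a real system.
src_alloc=252   # MiB allocated on the PV being evacuated (vdb1 here)
dst_free=100    # MiB free on the intended destination (vdb2 here)
if [ "$dst_free" -ge "$src_alloc" ]; then
  plan="pvmove directly"
else
  plan="add another PV first"
fi
echo "alloc=${src_alloc}M free=${dst_free}M -> $plan"
```

On a live system the two values could be read with `pvs -o pv_name,pv_used,pv_free`.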

# As the monitor shows, /dev/vdb2 is not large enough to hold the data on
# vdb1, so a third partition, vdb3, is added to vg0 first.
[root@localhost ~]# pvmove /dev/vdb1 /dev/vdb3    # migrate the data from vdb1 to vdb3
  /dev/vdb1: Moved: 7.9%
  /dev/vdb1: Moved: 65.8%
  /dev/vdb1: Moved: 100.0%
[root@localhost ~]# vgreduce vg0 /dev/vdb1        # first remove vdb1 from vg0
  Removed "/dev/vdb1" from volume group "vg0"
[root@localhost ~]# pvremove /dev/vdb1            # then wipe the LVM label from vdb1
  Labels on physical volume "/dev/vdb1" successfully wiped.

As the monitoring results show, vdb1 has been removed successfully. Following the same steps, vdb2 can be removed as well.

4. Using LVM Snapshots

Running lvcreate -L 100M -n lv0backup -s /dev/vg0/lv0 creates a 100 MB snapshot of the existing /dev/vg0/lv0 named lv0backup; no formatting or other preparation is needed. Reading the snapshot shows the contents of lv0, much like a VM snapshot. If something in the snapshot is deleted by mistake, you can unmount it, remove the damaged snapshot, and take a new one.

[root@localhost ~]# cd /westos/           # lv0 is currently mounted on /westos
[root@localhost westos]# ls
[root@localhost westos]# touch file{1..5}
[root@localhost ~]# lvcreate -L 100M -n lv0backup -s /dev/vg0/lv0
  Logical volume "lv0backup" created
[root@localhost ~]# umount /westos/
[root@localhost ~]# mount /dev/vg0/lv0backup /westos
[root@localhost ~]# cd /westos/
[root@localhost westos]# ls
file1 file2 file3 file4 file5

The monitoring results show that the snapshot lv0backup is mounted at this time.

5. Deleting an LVM

Note that the deletion order is the reverse of the creation order: remove the logical volumes first, then the volume group, and finally the physical volumes.

[root@localhost ~]# umount /westos/
[root@localhost ~]# lvremove /dev/vg0/lv0
Do you really want to remove active logical volume lv0backup? [y/n]: y
  Logical volume "lv0backup" successfully removed
Do you really want to remove active logical volume lv0? [y/n]: y
  Logical volume "lv0" successfully removed
[root@localhost ~]# lvremove /dev/vg0/lv1
Do you really want to remove active logical volume lv1? [y/n]: y
  Logical volume "lv1" successfully removed
[root@localhost ~]# vgremove vg0
  Volume group "vg0" successfully removed
[root@localhost ~]# pvremove /dev/vdb3
  Labels on physical volume "/dev/vdb3" successfully wiped

The monitoring results show that all the LVM components have been deleted. After that, delete the hard disk partitions:

The final result is:

[root@localhost ~]# cat /proc/partitions
major minor  #blocks  name
 253        0   10485760 vda
 253        1   10484142 vda1
 253       16   10485760 vdb

If the underlying hard disk partitions are deleted without cleaning up the LVM layers step by step, the volume group is left referring to missing physical volumes, and an error is reported when the partition table is synchronized. Solution: vgreduce vg0 --removemissing
