How to dynamically adjust LVM disk capacity


LVM (Logical Volume Manager) hides the underlying disk layout and makes it easy to adjust disk capacity dynamically.

1. Steps for creating a logical volume:

1) Use the fdisk tool to create Linux partitions on the disk.

2) Use the pvcreate command to convert a Linux partition into a physical volume (PV).

3) Use the vgcreate command to combine the physical volumes into a volume group (VG).

4) Use the lvcreate command to divide the volume group into one or more logical volumes (LV).

5) Format and mount the logical volume; its size can then be adjusted dynamically without affecting the data on the logical volume (LV). A condensed sketch of the whole sequence follows this list.
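The sections below walk through each step in detail; as an overview, here is a minimal end-to-end sketch, assuming a spare disk /dev/sdb with a single Linux partition /dev/sdb1 and an illustrative mount point /data (the walkthrough below uses slightly different names and sizes):

[root@RHEL5 ~]# fdisk /dev/sdb # 1) create the partition /dev/sdb1
[root@RHEL5 ~]# pvcreate /dev/sdb1 # 2) partition -> physical volume
[root@RHEL5 ~]# vgcreate vg01 /dev/sdb1 # 3) physical volume -> volume group
[root@RHEL5 ~]# lvcreate -L 2G -n data vg01 # 4) carve a 2 GB logical volume out of the group
[root@RHEL5 ~]# mkfs -t ext3 /dev/vg01/data # 5) format the logical volume
[root@RHEL5 ~]# mkdir /data
[root@RHEL5 ~]# mount /dev/vg01/data /data # mount it into the file system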

2. Procedure for creating and managing a physical volume (PV):

1) First view the Linux partitions and convert unused space into physical volumes (use fdisk to create ordinary partitions first).

[root@RHEL5 ~]# fdisk -l /dev/sdb # view the Linux partitions

Disk /dev/sdb: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         500     4016218+  83  Linux
/dev/sdb2             501        1000     4016250   83  Linux
/dev/sdb3            1001        1500     4016250   83  Linux
/dev/sdb4            1501        2610     8916075    5  Extended
/dev/sdb5            1501        2610     8916043+  83  Linux

Note: /dev/sdb is a new disk that contains no data and is not mounted.

2) Convert the Linux partitions into physical volumes.

[root@RHEL5 ~]# pvcreate /dev/sdb{1,2} # convert the partitions /dev/sdb1 and /dev/sdb2 into physical volumes

Physical volume "/dev/sdb1" successfully created
Physical volume "/dev/sdb2" successfully created

3) Use pvscan to view physical volume information.

[root@RHEL5 ~]# pvscan # view physical volume information; all physical volumes are listed

PV /dev/sda2   VG VolGroup00   lvm2 [39.88 GB / 0 free]
PV /dev/sdb1                   lvm2 [3.83 GB]
PV /dev/sdb2                   lvm2 [3.83 GB]
Total: 3 [47.54 GB] / in use: 1 [39.88 GB] / in no VG: 2 [7.66 GB]

4) Use pvdisplay to view the detailed parameters of each physical volume.

[root@RHEL5 ~]# pvdisplay # view detailed parameters of each physical volume

--- Physical volume ---
PV Name               /dev/sda2
VG Name               VolGroup00
PV Size               39.90 GB / not usable 20.79 MB
Allocatable           yes (but full)
PE Size (KByte)       32768
Total PE              1276
Free PE               0
Allocated PE          1276
PV UUID               aJlaad-NHPT-Cgg3-7yu4-a2RJ-kJJ1-qxSFgD

--- NEW Physical volume ---
PV Name               /dev/sdb1
VG Name
PV Size               3.83 GB
Allocatable           NO
PE Size (KByte)       0
Total PE              0
Free PE               0
Allocated PE          0
PV UUID               v2VajD-yS53-SiQA-yTzu-KOiD-RyT3-p0wTvt

--- NEW Physical volume ---
PV Name               /dev/sdb2
VG Name
PV Size               3.83 GB
Allocatable           NO
PE Size (KByte)       0
Total PE              0
Free PE               0
Allocated PE          0
PV UUID               iOoK3V-yuww-ZlLF-cRLq-v7hC-CL7c-0bQU1x

----------------------------------------------------------------------

You can delete a physical volume when it is not in use:

[root@RHEL5 /]# pvremove /dev/sdb2 # delete a physical volume

Labels on physical volume "/dev/sdb2" successfully wiped

----------------------------------------------------------------------

3. Procedure for creating and managing a volume group (VG):

1) Use vgcreate to combine physical volumes into a volume group.

[root@RHEL5 /]# vgcreate vg01 /dev/sdb{1,2} # combine the physical volumes /dev/sdb1 and /dev/sdb2 into a volume group named vg01

Volume group "vg01" successfully created

Note: Without extra options, the default physical extent (PE) size is 4 MB. If you use vgcreate -s 8M vg01 /dev/sdb{1,2}, the PE size becomes 8 MB.

2) Use vgdisplay to view the details of all volume groups.

[root@RHEL5 /]# vgdisplay # view details of all volume groups

--- Volume group ---
VG Name               vg01
System ID
Format                lvm2
Metadata Areas        2
Metadata Sequence No  1
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                0
Open LV               0
Max PV                0
Cur PV                2
Act PV                2
VG Size               7.66 GB
PE Size               4.00 MB
Total PE              1960
Alloc PE / Size       0 / 0
Free PE / Size        1960 / 7.66 GB
VG UUID               1g8QL0-0cGM-TJji-Q98P-LJ3f-PhDN-2ouSM3

--- Volume group ---
VG Name               VolGroup00
System ID
Format                lvm2
Metadata Areas        1
Metadata Sequence No  3
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                2
Open LV               2
Max PV                0
Cur PV                1
Act PV                1
VG Size               39.88 GB
PE Size               32.00 MB
Total PE              1276
Alloc PE / Size       1276 / 39.88 GB
Free PE / Size        0 / 0
VG UUID               AhhisY-vDrc-s4jx-XIsn-QmCp-wMiT-2v01YZ

Note: You can also use [root@RHEL5 /]# vgdisplay -v /dev/vg01 to view the details of a specific volume group.

3) Scan for volume group information.

[root@RHEL5 /]# vgscan # scan for volume groups

Reading all physical volumes. This may take a while...
Found volume group "vg01" using metadata type lvm2
Found volume group "VolGroup00" using metadata type lvm2

4) Use vgextend to extend the volume group by adding a physical volume to an existing volume group.

[root@RHEL5 /]# pvcreate /dev/sdb3 # create a new physical volume

Physical volume "/dev/sdb3" successfully created

[root@RHEL5 /]# vgextend vg01 /dev/sdb3 # add the new physical volume to the vg01 volume group

Volume group "vg01" successfully extended

-----------------------------------------------------------------------

Use vgremove to delete a volume group:

[root@RHEL5 /]# vgremove /dev/vg01

Volume group "vg01" successfully removed

-----------------------------------------------------------------------

4. Procedure for creating and managing a logical volume (LV):

1) Create a 6 GB logical volume named data from vg01.

[root@RHEL5 /]# lvcreate -L 6G -n data vg01 # carve a 6 GB logical volume named data out of the volume group vg01

Logical volume "data" created

2) Format the new logical volume.

[root@RHEL5 /]# mkfs -t ext3 /dev/vg01/data # format the logical volume with the ext3 file system

mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
786432 inodes, 1572864 blocks
78643 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1610612736
48 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 35 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.

Note: You can also format it with [root@RHEL5 /]# mkfs.ext3 /dev/vg01/data.

3) Use lvscan to view logical volume information.

[root@RHEL5 /]# lvscan # view logical volume information

ACTIVE            '/dev/vg01/data' [6.00 GB] inherit
ACTIVE            '/dev/VolGroup00/LogVol00' [38.88 GB] inherit
ACTIVE            '/dev/VolGroup00/LogVol01' [1.00 GB] inherit

4) Use lvdisplay to view the detailed parameters of each logical volume:

[root@RHEL5 /]# lvdisplay # view detailed logical volume parameters

--- Logical volume ---
LV Name                /dev/vg01/data
VG Name                vg01
LV UUID                QUmuTB-ofgI-9BbG-1DvN-gWzo-7Vqb-Twmf45
LV Write Access        read/write
LV Status              available
# open                 0
LV Size                6.00 GB
Current LE             1536
Segments               2
Allocation             inherit
Read ahead sectors     0
Block device           253:2

--- Logical volume ---
LV Name                /dev/VolGroup00/LogVol00
VG Name                VolGroup00
LV UUID                SrNP2L-bOWm-4clq-22Lh-Fg10-ydeg-7dNpdH
LV Write Access        read/write
LV Status              available
# open                 1
LV Size                38.88 GB
Current LE             1244
Segments               1
Allocation             inherit
Read ahead sectors     0
Block device           253:0

--- Logical volume ---
LV Name                /dev/VolGroup00/LogVol01
VG Name                VolGroup00
LV UUID                e7u6Wx-MXhq-Nc2o-lrF9-yea1-Hia5-Cv7d7e
LV Write Access        read/write
LV Status              available
# open                 1
LV Size                1.00 GB
Current LE             32
Segments               1
Allocation             inherit
Read ahead sectors     0
Block device           253:1

Note: You can also use [root@RHEL5 /]# lvdisplay -v /dev/vg01/data to view the detailed parameters of a single logical volume.

5) Use lvextend to increase the logical volume size; the volume can be extended online.

[root@RHEL5 /]# lvextend -L +1G /dev/vg01/data # extend the logical volume /dev/vg01/data in volume group vg01 by 1 GB, bringing it to 7 GB

Extending logical volume data to 7.00 GB
Logical volume data successfully resized

6) Use the resize2fs command to grow the file system to the new logical volume size; the change takes effect immediately.

[root@RHEL5 /]# resize2fs /dev/vg01/data # make the enlarged logical volume size take effect immediately

resize2fs 1.39 (29-May-2006)
Resizing the filesystem on /dev/vg01/data to 1835008 (4k) blocks.
The filesystem on /dev/vg01/data is now 1835008 blocks long.

7) Use lvreduce to reduce the logical volume size. The logical volume must be offline (that is, unmount the file system first).

[root@RHEL5 /]# lvreduce -L -1G /dev/vg01/data # reduce the capacity of the logical volume /dev/vg01/data by 1 GB

/dev/cdrom: open failed: Read-only file system
WARNING: Reducing active logical volume to 6.00 GB
This may destroy your data (filesystem etc.)
Do you really want to reduce data? [y/n]: y
Reducing logical volume data to 6.00 GB
Logical volume data successfully resized

[root@RHEL5 /]# resize2fs /dev/vg01/data # try to make the reduced size take effect; this fails because the file system was not shrunk first

resize2fs 1.39 (29-May-2006)
Resizing the filesystem on /dev/vg01/data to 1572864 (4k) blocks.
resize2fs: Can't read a block bitmap while trying to resize /dev/vg01/data

Note: To shrink a logical volume, you must first unmount the file system, and the target size must be greater than or equal to the space currently occupied by the files. Incorrect operation can cause data loss, so proceed with caution (see the sketch below).
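The safe order is to shrink the file system before shrinking the logical volume. A minimal sketch, assuming /dev/vg01/data is mounted on /quota and that a 5 GB target size (an illustrative figure, not from the original article) is large enough for the existing data:

[root@RHEL5 /]# umount /quota # take the file system offline
[root@RHEL5 /]# e2fsck -f /dev/vg01/data # force a consistency check before resizing
[root@RHEL5 /]# resize2fs /dev/vg01/data 5G # shrink the ext3 file system to 5 GB first
[root@RHEL5 /]# lvreduce -L 5G /dev/vg01/data # then shrink the logical volume to the same size
[root@RHEL5 /]# mount /dev/vg01/data /quota # remount the file system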

[root@RHEL5 /]# lvscan # confirm that the logical volume size is back to 6 GB

ACTIVE            '/dev/vg01/data' [6.00 GB] inherit
ACTIVE            '/dev/VolGroup00/LogVol00' [38.88 GB] inherit
ACTIVE            '/dev/VolGroup00/LogVol01' [1.00 GB] inherit


--------------------------------------------------------------------

Delete a logical volume:

[root@RHEL5 /]# lvremove /dev/vg01/data

--------------------------------------------------------------------

5. Mount logical volumes

1) Mount the logical volume to the /quota directory.

[root@RHEL5 /]# mount /dev/vg01/data /quota/ # mount the logical volume to /quota

[root@RHEL5 /]# df -hT

Filesystem           Type    Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                     ext3     38G   11G   26G  29% /
/dev/sda1            ext3     99M   12M   82M  13% /boot
tmpfs               tmpfs    233M     0  233M   0% /dev/shm
/dev/hdc          iso9660    224M  224M     0 100% /media/cdrom
/dev/mapper/vg01-data
                     ext3    6.9G  142M  6.5G   3% /quota

2) Set automatic mounting at startup.

[root@RHEL5 /]# vi /etc/fstab # set automatic mounting at startup

/dev/VolGroup00/LogVol00   /           ext3     defaults          1 1
LABEL=/boot                /boot       ext3     defaults          1 2
devpts                     /dev/pts    devpts   gid=5,mode=620    0 0
tmpfs                      /dev/shm    tmpfs    defaults          0 0
proc                       /proc       proc     defaults          0 0
sysfs                      /sys        sysfs    defaults          0 0
/dev/VolGroup00/LogVol01   swap        swap     defaults          0 0
/dev/vg01/data             /quota      ext3     defaults          0 0

6. The logical volume snapshot function can freeze the data in a volume, like taking a photo of the data at that moment; the frozen copy can then be kept for backup.

1) Create a volume snapshot.

[root@RHEL5 ~]# lvcreate -L 1G -s -n snaplv1 /dev/vg01/data # syntax: lvcreate -L <about 15%-20% of the source logical volume size> -s -n <snapshot name> <source logical volume>

Logical volume "snaplv1" created

Note: This is the same as creating an ordinary logical volume, except that the -s parameter is added.

[root@RHEL5 ~]# lvscan # the snapshot volume is listed with Snapshot status

ACTIVE   Original '/dev/vg01/data' [6.00 GB] inherit
ACTIVE   Snapshot '/dev/vg01/snaplv1' [1.00 GB] inherit
ACTIVE            '/dev/VolGroup00/LogVol00' [38.88 GB] inherit
ACTIVE            '/dev/VolGroup00/LogVol01' [1.00 GB] inherit

2) After the snapshot is created, it also needs a mount point.

[root@RHEL5 ~]# mkdir /snap # create a mount point for the snapshot
[root@RHEL5 ~]# mount /dev/vg01/snaplv1 /snap # mount the snapshot to /snap

Note: The files in /snap are identical to those in /quota at the moment the snapshot was taken. Even if files are added to or deleted from /quota afterwards, /snap remains unchanged, so you can back up /snap (see the example below).
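For example, a minimal backup sketch; the archive path /backup/quota-snapshot.tar.gz is only an illustration, not part of the original procedure:

[root@RHEL5 ~]# mkdir -p /backup # create a directory to hold the archive
[root@RHEL5 ~]# tar -czf /backup/quota-snapshot.tar.gz -C /snap . # archive the frozen copy of the data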

3) Each volume snapshot occupies part of the volume group space, so the more snapshots you create, the less space remains available in the volume group. After the backup is complete, you can delete the snapshot.

[root@RHEL5 quota]# umount /snap/ # unmount the snapshot
[root@RHEL5 quota]# lvremove /dev/vg01/snaplv1 # delete the snapshot

/dev/cdrom: open failed: Read-only file system
Do you really want to remove active logical volume "snaplv1"? [y/n]: y
Logical volume "snaplv1" successfully removed

7. What should you do if a physical disk partition fails one day and the disk has to be replaced? LVM provides the pvmove tool to transfer data from one physical volume to another.

1) Add a new physical volume to the volume group.

[root@RHEL5 /]# pvcreate /dev/sdc1 # convert a Linux partition into a physical volume

Physical volume "/dev/sdc1" successfully created

[root@RHEL5 /]# vgextend vg01 /dev/sdc1 # add the new physical volume to the vg01 volume group

Volume group "vg01" successfully extended

2) Move the data on the old physical volume to the new one.

[root@RHEL5 ~]# pvmove /dev/sdb1 /dev/sdc1 # move the data on /dev/sdb1 to /dev/sdc1

/dev/sdb1: Moved: 41.7%
/dev/sdb1: Moved: 84.2%
/dev/sdb1: Moved: 100.0%

Note: You can use pvscan to view the changes.

3) Remove the old physical volume from the volume group.

[root@RHEL5 ~]# vgreduce vg01 /dev/sdb1 # use vgreduce to remove /dev/sdb1 from the volume group vg01

Removed "/dev/sdb1" from volume group "vg01"

[root@RHEL5 ~]# pvremove /dev/sdb1 # delete the physical volume so the disk can be pulled for repair; if the disk holds several physical volumes, delete all of them

Labels on physical volume "/dev/sdb1" successfully wiped

8. To migrate an entire LVM disk to another computer, follow these steps:

1) Export the volume group on the original computer.

[root@RHEL5 ~]# umount /dev/vg01/data # unmount all logical volumes in the volume group before exporting it
[root@RHEL5 ~]# vgchange -a n vg01 # use vgchange to deactivate the volume group
[root@RHEL5 ~]# vgexport vg01 # use vgexport to export the volume group

2) Install the LVM disk in the target computer.

3) Import the volume group on the target computer.

[root@RHEL5 ~]# pvscan # scan all physical volumes so that Linux recognizes them
[root@RHEL5 ~]# vgimport vg01 # import the volume group
[root@RHEL5 ~]# vgchange -a y vg01 # activate the volume group

4) Mount the logical volume.

[root@RHEL5 ~]# mount /dev/vg01/data /quota # mount the logical volume into the file system

9. The above covers LVM setup on new disks. Normally LVM is used to partition the disks when the system is installed, and the space is adjusted later as needed. If one day you find that a file system is running out of space, you will need to resize it.

For example, suppose you need to install weblogic92 under /weblogic but there is not enough space; you can create a new logical volume and mount it on /weblogic.

1) Use df to view the size of each file system.

[root@tydic4f20 /]# df -hT # view the size of each file system

Filesystem           Type    Size  Used Avail Use% Mounted on
/dev/mapper/vg00-LogVol00
                     ext3    3.0G  2.0G  820M  71% /
/dev/mapper/vg00-lvopt
                     ext3    3.0G   69M  2.7G   3% /opt
/dev/mapper/vg00-lvusr
                     ext3    6.8G  4.3G  2.2G  67% /usr
/dev/mapper/vg00-lvhome
                     ext3    3.0G   75M  2.7G   3% /home
/dev/mapper/vg00-lvpublic
                     ext3     20G  1.8G   17G  10% /public
/dev/mapper/vg00-lvtmp
                     ext3    3.0G   70M  2.7G   3% /tmp
/dev/mapper/vg00-lvvar
                     ext3    3.0G  177M  2.6G   7% /var
/dev/sda1            ext3     99M   20M   75M  21% /boot
tmpfs               tmpfs    7.9G     0  7.9G   0% /dev/shm

2) Use lvscan to confirm that the volume group is named vg00.

[root@tydic4f20 /]# lvscan # view the logical volumes

ACTIVE            '/dev/vg00/LogVol00' [3.00 GB] inherit
ACTIVE            '/dev/vg00/lvopt' [3.00 GB] inherit
ACTIVE            '/dev/vg00/lvusr' [7.00 GB] inherit
ACTIVE            '/dev/vg00/lvhome' [3.00 GB] inherit
ACTIVE            '/dev/vg00/lvpublic' [20.00 GB] inherit
ACTIVE            '/dev/vg00/lvtmp' [3.00 GB] inherit
ACTIVE            '/dev/vg00/lvvar' [3.00 GB] inherit
ACTIVE            '/dev/vg00/LogVol01' [17.62 GB] inherit

3) [root@tydic4f20 /]# lvcreate -L 20G -n lvweblogic vg00 # carve a 20 GB logical volume named lvweblogic out of the volume group vg00

Logical volume "lvweblogic" created

4) Format the new logical volume.

[root@tydic4f20 /]# mkfs -t ext3 /dev/vg00/lvweblogic # format the logical volume with the ext3 file system

Note: The next step is mounting. To enable automatic mounting at startup, you also need to modify /etc/fstab; refer to the logical volume creation steps above or the sketch below.
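A minimal sketch of that final step, assuming the /weblogic mount point does not exist yet; the fstab line follows the same pattern as the /quota entry earlier in this article:

[root@tydic4f20 /]# mkdir /weblogic # create the mount point
[root@tydic4f20 /]# mount /dev/vg00/lvweblogic /weblogic # mount the new logical volume
[root@tydic4f20 /]# echo '/dev/vg00/lvweblogic /weblogic ext3 defaults 0 0' >> /etc/fstab # mount it automatically at startup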

10. Recommended order for creating and deleting logical volumes

Creation order: Linux partition --- physical volume (PV) --- volume group (VG) --- logical volume (LV) --- mount into the file system

Deletion order: unmount the file system --- logical volume (LV) --- volume group (VG) --- physical volume (PV) --- Linux partition (see the sketch below)
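A minimal sketch of the deletion order, assuming the vg01/data layout built earlier in this article and that none of the data is still needed:

[root@RHEL5 /]# umount /quota # 1) unmount the file system
[root@RHEL5 /]# lvremove /dev/vg01/data # 2) remove the logical volume
[root@RHEL5 /]# vgremove vg01 # 3) remove the volume group
[root@RHEL5 /]# pvremove /dev/sdb1 /dev/sdb2 # 4) remove the physical volumes
[root@RHEL5 /]# fdisk /dev/sdb # 5) finally, delete the Linux partitions with fdisk if the disk is no longer needed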
