Configure LVM in CentOS 6.3


I. Introduction

LVM is short for Logical Volume Manager, a mechanism for managing disk partitions in Linux. It is a logical layer built on top of hard disks and partitions that improves the flexibility of disk partition management.

The operating principle of LVM is actually quite simple: it abstracts and encapsulates the underlying physical hard disks and presents them to upper-layer applications as logical volumes. Under traditional disk management, upper-layer applications access the file system, which reads the underlying physical hard disk directly. With LVM, the underlying hard disks are encapsulated, so storage operations no longer target individual partitions; instead, the underlying disks are managed through logical volumes. For example, if I add a physical hard disk, the upper-layer services notice nothing, because what they see is still the same logical volume.

The biggest feature of LVM is its ability to manage disks dynamically: the size of a logical volume can be adjusted without losing existing data, and adding a new hard disk does not disturb the existing upper-layer logical volumes. As a dynamic disk management mechanism, logical volume technology greatly improves the flexibility of disk management.

Basic logical volume management concepts:

PV (Physical Volume)
Physical volumes sit at the bottom of logical volume management. A physical volume can be a partition on a physical hard disk, a whole physical hard disk, or a RAID device.

VG (Volume Group)
A volume group is created on top of physical volumes and must contain at least one physical volume; after creation, more physical volumes can be added dynamically. A logical volume management system can have one or more volume groups.

LV (Logical Volume)
A logical volume is created on top of a volume group; unallocated space in the volume group can be used to create new logical volumes. After creation, a logical volume can be dynamically grown or shrunk. Multiple logical volumes in the system can belong to the same volume group or to different volume groups.

The layering, from bottom to top, is: physical hard disks, partitions, or RAID devices -> PV -> VG -> LV -> file system.

PE (Physical Extent)

LVM uses 4 MB PEs by default, and an LV in the LVM1 format can contain at most 65534 PEs, so the maximum LV size under the defaults is 4 MB * 65534 / (1024 MB/GB) = 256 GB. The PE is the smallest allocation unit in LVM; data is written in units of PEs, so a PE is somewhat like the block size of a file system. Adjusting the PE size therefore affects the maximum capacity of an LV. Since CentOS 6.x, however, this restriction no longer applies, because the LVM2 format is used directly.
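The PE size is set when the VG is created. As a minimal sketch (the VG name vg_big and the device /dev/sdX1 are illustrative, not part of this article's setup), a 16 MB PE size would raise the LVM1-format limit fourfold, to about 1 TB:

# vgcreate -s 16M vg_big /dev/sdX1

# vgdisplay vg_big | grep "PE Size"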

II. System Environment

Lab environment: Oracle VM VirtualBox

System Platform: CentOS release 6.3 (Final)

mdadm version: mdadm v3.2.6 (25th October 2012)

LVM version: lvm2-2.02.100-8.el6.i686

Device types: partition, physical hard disk, RAID device

III. Disk Preparation

In this article we will create a VG from three types of devices: a RAID 5 array, a partition, and a whole physical hard disk. The RAID 5 array requires four hard disks, the partition and the whole disk need one hard disk each, and at least one more hard disk is needed later for resizing, so add eight hard disks to the virtual machine, 5 GB each.
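One quick way to confirm that the new disks are visible (a generic check, not shown in the original article):

# fdisk -l 2>/dev/null | grep "Disk /dev/sd"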

IV. Install the LVM Management Tools

4.1 Check whether the LVM management tools are installed

# rpm -qa | grep lvm

4.2 If not, install them with yum

# yum install lvm*

# rpm -qa | grep lvm

V. Create a RAID 5 Device

Simulate a software RAID 5 array using four physical hard disks: /dev/sdb, /dev/sdc, /dev/sdd, and /dev/sde.

# mdadm -C /dev/md5 -a yes -l 5 -n 3 -x 1 /dev/sd[b-e]

Write the RAID configuration file /etc/mdadm.conf and make the necessary modifications.

# echo DEVICE /dev/sd{b,c,d,e} > /etc/mdadm.conf

# mdadm -Ds >> /etc/mdadm.conf
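Before building LVM on top of the array, it is worth verifying it (a generic check, not shown in the original article):

# cat /proc/mdstat

# mdadm -D /dev/md5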

For more information, see the previous article: http://www.cnblogs.com/mchina/p/linux-centos-disk-array-software_raid.html

VI. Create a Partition

Use /dev/sdf to simulate a partition.

# fdisk /dev/sdf
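The interactive steps are not shown in the text (they presumably appeared as a screenshot in the original); a typical sequence to create a single partition for LVM use would be:

  n -> create a new primary partition (number 1, spanning the whole disk)
  t -> set the partition type to 8e (Linux LVM)
  w -> write the partition table and exit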

# fdisk -l /dev/sdf

The disks are now prepared. We will use three devices to complete the LVM experiment: /dev/md5, /dev/sdf1, and /dev/sdg.

VII. Create a PV

# pvcreate /dev/md5 /dev/sdf1 /dev/sdg

View the PVs:

# pvdisplay

You can also use the pvs and pvscan commands to view brief information.

# pvs

# pvscan

VIII. Create a VG

# vgcreate vg0 /dev/md5 /dev/sdf1 /dev/sdg

Note: vg0 is the name of the new VG and can be chosen freely. The three devices listed after it are the ones combined into vg0.

View the VG:

# vgdisplay

Note:

VG Name: the VG name
VG Size: the total size of the VG
PE Size: the size of each PE; the default is 4 MB
Total PE: the number of PEs; 5114 x 4 MB = 19.98 GB
Free PE / Size: the remaining space

You can also use the vgs and vgscan commands to view brief information.

# vgs

# vgscan

IX. Create an LV

# lvcreate -L 5G -n lv1 vg0

Note:

-L specifies the size of the LV to create
-l specifies the number of PEs for the LV to create
-n specifies the LV name
The preceding command allocates 5 GB of space from vg0 to lv1.
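Because the PE size is 4 MB, the same volume could equivalently be created by PE count instead of the -L form above; a minimal sketch (5 GB = 1280 x 4 MB):

# lvcreate -l 1280 -n lv1 vg0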

View the LV information:

# lvdisplay

Note:

LV Path: the full device path
LV Name: the LV name
VG Name: the VG to which the LV belongs
LV Size: the size of the LV

Let's look at the VG information again.

# vgs

VFree has dropped from 19.98 GB to 14.98 GB; the other 5 GB has been allocated to lv1.

X. Format the LV

# mkfs.ext4 /dev/vg0/lv1

XI. Mount and Use

# mkdir /mnt/lv1

# mount /dev/vg0/lv1 /mnt/lv1/

# df -TH

Write the mount information to /etc/fstab so the volume is mounted automatically at boot.
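The entry itself is not shown in the text; assuming the ext4 file system created above, a typical line would be:

/dev/vg0/lv1    /mnt/lv1    ext4    defaults    0 0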

XII. Add Test Data

Next we will expand and shrink the LVM, so first write some test data to /mnt/lv1 in order to verify LVM's dynamic disk management.

# touch /mnt/lv1/test_lvm_dynamic.disk

# touch /mnt/lv1/test_lvm_dynamic.disk2

# touch /mnt/lv1/test_lvm_dynamic.disk3

# ll /mnt/lv1/

XIII. LVM Expansion

The biggest advantage of LVM is that it can dynamically manage Disks without losing existing data.

If one day lv1's usage reaches 80% and it needs to be expanded, what do we do?

Since vg0 still has plenty of free space, we can allocate some of it to lv1.

13.1 LV Expansion

Check the remaining capacity of vg0; 14.98 GB is still available.

Expand lv1.

# lvextend -L +1G /dev/vg0/lv1

Note: this adds 1 GB on top of lv1's original size.

Check the remaining capacity of vg0: it has decreased by 1 GB.

Check the capacity of lv1: it has grown from 5 GB to 6 GB.

Run df -TH to view the actual usable capacity.

The usable capacity has not changed, because the file system does not yet know about the newly added space, so we need to resize the file system as well.

# resize2fs /dev/vg0/lv1

# df -TH

The current available capacity has increased to 5.9 GB.
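As a side note (not part of the original procedure), newer lvm2 releases can resize the file system in the same step via the -r (--resizefs) option, which calls fsadm internally:

# lvextend -r -L +1G /dev/vg0/lv1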

View the test data.

The data is intact, and the online dynamic expansion of lv1 is complete.

There is another case: what if vg0 itself runs out of space? Then we need to expand the VG.

13.2 VG Expansion

A VG can be expanded in either of two ways. The first is to add another physical volume:

A. Create a PV, using /dev/sdh.
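The command itself is not shown in the text; it would presumably be:

# pvcreate /dev/sdh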

B. Extend the VG

The current capacity of vg0 is 19.98 GB.

# vgextend vg0 /dev/sdh

# vgs

vg0 is now 24.97 GB, an increase of 5 GB, i.e. the capacity of one physical hard disk. The VG has been expanded successfully.

The second method is to expand the VG indirectly by growing the RAID device. It was introduced in the previous article, so we will not repeat it here. Note that after the size of /dev/md5 changes, you need to update the PV size as follows:

# pvresize /dev/md5

XIV. LVM Reduction

Reduction must be performed offline, with the file system unmounted.

14.1 LV Reduction

A. Unmount the file system
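The command is not shown in the text; it would presumably be:

# umount /mnt/lv1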

B. Shrink the file system

# resize2fs /dev/vg0/lv1 4G

The system prompts you to run a disk check first.

C. Check the disk

# e2fsck -f /dev/vg0/lv1

D. Run the shrink operation again

The file system has been shrunk successfully; next, reduce the LV itself.

E. Reduce the LV

# lvreduce -L 4G /dev/vg0/lv1

Note: the size in step E must match the size used in step D. Here 4G is the target size after reduction; if "-L -4G" were used instead, it would mean reducing the size by 4 GB.
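Putting the steps together, a sketch of the full reduction sequence with the names used in this article:

# umount /mnt/lv1

# e2fsck -f /dev/vg0/lv1

# resize2fs /dev/vg0/lv1 4G

# lvreduce -L 4G /dev/vg0/lv1

# mount /dev/vg0/lv1 /mnt/lv1/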

F. View the mount information

The LV has been reduced successfully.

G. View the test data

The data is intact.

14.2 VG Reduction

A. Unmount the file system

B. View the current PV details

C. Remove /dev/sdg from vg0

# vgreduce vg0 /dev/sdg

D. View the PVs again

/dev/sdg is no longer part of vg0.

E. View the status of vg0

The size of vg0 has decreased by 5 GB.

The VG has been reduced successfully.

XV. Delete the LVM

To remove the LVM completely, reverse the creation steps.

15.1 Unmount the file system

15.2 Remove the LV

# lvremove /dev/vg0/lv1

15.3 Remove the VG

# vgremove vg0

15.4 Remove the PVs

# pvremove /dev/md5 /dev/sdf1 /dev/sdg /dev/sdh

The LVM has been removed successfully.

XVI. LVM Snapshots

A snapshot records the state of the system at a moment in time, much like taking a photo. When data changes afterwards, the original data is moved into the snapshot area first, while regions that have not changed remain shared by the snapshot area and the file system.

How the snapshot area of an LVM system works (in the original diagram, the short dotted line is the file system and the long dotted line is the snapshot area):

The left side of the diagram shows the snapshot area just after creation. LVM reserves an area (the three PE blocks on the left) to store changed data. At this point the snapshot area holds no data of its own; it shares all PEs with the file system, so its contents are exactly the same as the file system's. After the system has run for a while, if the data in block A is modified (the right side of the diagram), the system first moves the original contents of that block into the snapshot area. The snapshot area then has one occupied PE holding the old A, while blocks B through I are still shared with the file system.

The snapshot and the LV being snapshotted must be in the same VG.

16.1 Create an LV

# lvcreate -L 100M -n lv1 vg0

# mkfs.ext4 /dev/vg0/lv1

# mount /dev/vg0/lv1 /mnt/lv1/

16.2 Write test data

# touch /mnt/lv1/test_lvm_snapshot_1

# touch /mnt/lv1/test_lvm_snapshot_2

# cp -a /etc/ /mnt/lv1/

# cp -a /boot/ /mnt/lv1/

16.3 Create a snapshot

# lvcreate -L 80M -s -n lv1snap /dev/vg0/lv1

Note: this creates an 80 MB snapshot of /dev/vg0/lv1 named lv1snap.

# lvdisplay

The LV Size reported for /dev/vg0/lv1snap is 100 MB (matching the origin volume), and its usage is 0.01%.

16.4 Mount the snapshot and view it
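The mount commands are not shown in the text; given the /mnt/snapshot path referenced below, they would presumably be:

# mkdir /mnt/snapshot

# mount /dev/vg0/lv1snap /mnt/snapshot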

/mnt/lv1 and /mnt/snapshot are identical.

16.5 Modify some files

16.6 View again

The snapshot usage has risen to 10.36%; the original versions of the changed data have been preserved in the snapshot.

16.7 Pack up the data in the snapshot as a backup, in preparation for restoring it
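A sketch of this step (the archive path /tmp/lv1_backup.tar.gz is illustrative, not from the original):

# tar -czf /tmp/lv1_backup.tar.gz -C /mnt/snapshot .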

16.8 Unmount and remove the snapshot
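The corresponding commands would presumably be:

# umount /mnt/snapshot

# lvremove /dev/vg0/lv1snap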

16.9 Unmount and reformat /mnt/lv1 to clear its data
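The corresponding commands would presumably be:

# umount /mnt/lv1

# mkfs.ext4 /dev/vg0/lv1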

16.10 Restore the data
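Remounting and unpacking the backup (matching the illustrative archive path above):

# mount /dev/vg0/lv1 /mnt/lv1/

# tar -xzf /tmp/lv1_backup.tar.gz -C /mnt/lv1

# ll /mnt/lv1/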

We can see that the original data has been restored successfully.

The LVM snapshot experiment is successful.
