Configuring LVM under CentOS 6.3 (Logical volume management)


1. Introduction

LVM is short for Logical Volume Manager, a mechanism for managing disk partitions in a Linux environment. LVM inserts a logical layer on top of hard disks and partitions to make disk partition management more flexible.

How LVM works is quite simple: it abstracts and encapsulates the underlying physical hard disks, then presents them to upper layers as logical volumes. Under traditional disk management, upper-level applications access the file system, which reads directly from the underlying physical disks. With LVM, the underlying disks are encapsulated, so storage operations are no longer performed against partitions but against logical volumes. For example, if I add a physical hard disk, the upper-level services notice nothing, because what is presented to them is still a logical volume.

The most important feature of LVM is that disks can be managed dynamically: the size of a logical volume can be adjusted on the fly without losing existing data, and adding a new hard disk does not disturb the existing logical volumes. As a dynamic disk management mechanism, logical volume technology greatly improves the flexibility of disk management.

Basic Logical Volume Management concepts:

PV (Physical Volume) - physical volume
The physical volume is at the very bottom of logical volume management. It can be a partition on a physical hard disk, an entire physical hard disk, or a RAID device.

VG (Volume Group) - volume group
A volume group is built on top of physical volumes. A volume group must include at least one physical volume, and more physical volumes can be added dynamically after it is created. A logical volume management system can have a single volume group or several.

LV (Logical Volume) - logical volume
A logical volume is built on top of a volume group. Unallocated space in the volume group can be used to create new logical volumes, and a logical volume can be dynamically grown or shrunk after creation. Multiple logical volumes in a system can belong to the same volume group or to different ones.

[Diagram: physical volumes (PV) are grouped into a volume group (VG), from which logical volumes (LV) are allocated.]

PE (Physical Extent) - physical extent

LVM uses a PE size of 4 MB by default, and an LV in LVM1 format can contain at most 65,534 PEs, so the default maximum LV size is 4 MB × 65,534 / 1,024 ≈ 256 GB. The PE is the smallest storage block in all of LVM; in other words, data is actually written in units of PEs. Simply put, a PE is a bit like a file system's block size, so adjusting the PE size affects the maximum capacity of an LVM volume. Since CentOS 6.x, however, the LVM2 format is used directly, so this limit no longer exists.
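
If a different PE size is wanted, it can be set when the volume group is created. A minimal sketch, using the volume group and devices built later in this article; the 16 MB value is an arbitrary example:

# vgcreate -s 16M vg0 /dev/md5 /dev/sdf1 /dev/sdg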

2. System Environment

Lab Environment: Oracle VM VirtualBox

System Platform: CentOS Release 6.3 (Final)

mdadm version: mdadm - v3.2.6 - 25th October 2012

LVM version: lvm2-2.02.100-8.el6.i686

Device type: partition, physical hard disk, RAID device

3. Disk Preparation

In this article we will create a VG from three types of devices: a RAID 5 array, a partition, and a whole physical hard disk. The RAID 5 array requires four hard disks, the partition and the whole-disk device take one hard disk each, and the expansion steps require at least one more, so eight hard disks are added to the virtual machine, 5 GB each.

4. Installing the LVM Management Tools

4.1 Check whether the LVM management tools are installed on the system

# rpm -qa | grep lvm

4.2 If they are not installed, install them using yum

# yum install lvm*

# rpm -qa | grep lvm

5. Creating a RAID 5 Device

Use four physical hard disks, /dev/sdb, /dev/sdc, /dev/sdd, and /dev/sde, to build a software RAID for the simulation.

# mdadm -C /dev/md5 -a yes -l 5 -n 3 -x 1 /dev/sd[b-e]

Write the RAID configuration file /etc/mdadm.conf and make the appropriate modifications.

# echo DEVICE /dev/sd{b,c,d,e} >> /etc/mdadm.conf

# mdadm -Ds >> /etc/mdadm.conf
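
After these two commands, /etc/mdadm.conf should look roughly like the following sketch; the metadata version, name, and UUID are placeholders and will differ on your system:

DEVICE /dev/sdb /dev/sdc /dev/sdd /dev/sde
ARRAY /dev/md5 metadata=1.2 spares=1 name=<hostname>:5 UUID=<your-array-uuid>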

For more information, please refer to the previous article: http://www.cnblogs.com/mchina/p/linux-centos-disk-array-software_raid.html

6. Creating a Partition

Use /dev/sdf to simulate a partition.

# fdisk /dev/sdf
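
fdisk is interactive; a typical dialogue for creating a single primary partition spanning the whole disk looks like this (a sketch of the usual prompts, accepting the default start and end cylinders):

n   (new partition)
p   (primary)
1   (partition number)
<Enter>   (default first cylinder)
<Enter>   (default last cylinder)
w   (write the partition table and exit)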

# fdisk -l /dev/sdf

With the preparation done, we will use three devices to carry out the LVM experiment: /dev/md5, /dev/sdf1, and /dev/sdg.

7. Creating PVs

# pvcreate /dev/md5 /dev/sdf1 /dev/sdg

View the PVs:

# pvdisplay

You can also use the pvs and pvscan commands to view brief information.

# pvs

# pvscan

8. Creating a VG

# vgcreate vg0 /dev/md5 /dev/sdf1 /dev/sdg

Description: vg0 is the name of the VG being created and can be chosen freely. It is followed by the three devices above; that is, the three devices are combined into one VG named vg0.

View the VG:

# vgdisplay

Description:

VG Name - the name of the VG

VG Size - the total size of the VG

PE Size - the size of each PE, 4 MB by default

Total PE - the number of PEs; 5114 × 4 MB ≈ 19.98 GB

Free PE / Size - the remaining free space

You can also use the vgs and vgscan commands to view it.

# vgs

# vgscan

9. Creating an LV

# lvcreate -L 5G -n lv1 vg0

Description:

-L specifies the size of the LV to create
-l specifies the number of PEs for the LV
-n specifies the name of the LV

The command above allocates 5 GB of space from vg0 for lv1 to use.

View information about the LV

# lvdisplay

Description:

LV Path - the full path of the LV

LV Name - the name of the LV

VG Name - the VG it belongs to

LV Size - the size of the LV

Look at the VG information again:

# vgs

VFree has dropped from 19.98 GB to 14.98 GB; the missing 5 GB was allocated to lv1.

10. Formatting the LV

# mkfs.ext4 /dev/vg0/lv1

11. Mounting the LV

# mkdir /mnt/lv1

# mount /dev/vg0/lv1 /mnt/lv1/

# df -Th

Write the mount information to /etc/fstab so the volume is mounted at boot.
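
The entry would look something like the following line; the mount options and fsck ordering here are common defaults, not taken from the original:

/dev/vg0/lv1    /mnt/lv1    ext4    defaults    0 0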

12. Adding Test Data

Below we will expand and shrink the LVM volume, so first write some test data to /mnt/lv1 to verify LVM's dynamic disk management.

# touch /mnt/lv1/test_lvm_dynamic.disk

# touch /mnt/lv1/test_lvm_dynamic.disk2

# touch /mnt/lv1/test_lvm_dynamic.disk3

# ll /mnt/lv1/

13. Expanding LVM

The greatest benefit of LVM is the ability to dynamically manage disks without losing existing data.

Suppose that one day lv1's usage reaches 80% and it needs to be expanded. What do we do?

Since there is plenty of space left in vg0, we can allocate some of it to lv1.

13.1 Expanding the LV

View the remaining capacity of vg0; 14.98 GB is still available.

Expand lv1:

# lvextend -L +1G /dev/vg0/lv1

Description: this adds 1 GB on top of lv1's original size.

View the remaining capacity of vg0 again; it has decreased by 1 GB.

Then look at the capacity of lv1, which has increased from 5 GB to 6 GB.

Use the df -Th command to view the actual disk capacity.

The usable capacity has not changed, because the file system does not yet cover the newly added space; the file system itself must be grown as well.

# resize2fs /dev/vg0/lv1

# df -Th

Now the usable capacity has increased to 5.9 GB.
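
As an aside, the two steps can usually be combined: assuming your lvm2 build supports it, lvextend's -r (--resizefs) option runs the file system resize for you:

# lvextend -r -L +1G /dev/vg0/lv1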

View the test data:

The data is intact, and the online expansion of lv1 is complete.

There is another scenario: what if vg0 itself runs out of space? In that case we need to expand the VG.

13.2 Expanding the VG

A VG can be expanded in two ways. The first method is to add another PV; the steps are as follows:

A. Create a PV, using /dev/sdh.
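
The command follows the same pattern as in section 7:

# pvcreate /dev/sdh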

B. Expand the VG.

The current vg0 capacity is 19.98 GB.

# vgextend vg0 /dev/sdh

# vgs

Now vg0's capacity is 24.97 GB, an increase of 5 GB, the capacity of one physical hard disk. The VG expansion succeeded.

The second method is to grow the RAID device, which indirectly adds capacity to the VG. That method was introduced in the previous article and is not repeated here. The point to note is that after /dev/md5 changes size, the PV must be resized to match:

# pvresize /dev/md5

14. Reducing LVM

Reduction operations must be performed offline, with the file system unmounted.

14.1 Reducing the LV

A. Unmount the file system.
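
Using the mount point from section 11:

# umount /mnt/lv1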

B. Shrink the file system.

# resize2fs /dev/vg0/lv1 4G

A message prompts you to run a file system check first.

C. Check the file system.

# e2fsck -f /dev/vg0/lv1

D. Run the shrink operation again.
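
This is the same resize2fs command as in step B:

# resize2fs /dev/vg0/lv1 4G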

Shrinking the file system now succeeds; next, reduce the size of the LV itself.

E. Reduce the LV.

# lvreduce -L 4G /dev/vg0/lv1

Note: The sizes used in steps D and E must match. Here 4G is the size to shrink to; if "-L -4G" were written instead, it would mean shrink by 4 GB.

F. Mount and view.

The LV reduction succeeded.

G. View the test data.

The data is intact.

14.2 Reducing the VG

A. Unmount the file system.

B. View the current PV details.
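
As in section 7:

# pvdisplay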

C. Remove /dev/sdg from vg0.

# vgreduce vg0 /dev/sdg

D. Review the PVs again.

/dev/sdg no longer belongs to vg0.

E. View the state of vg0.

The size of vg0 has been reduced by 5 GB.

The VG reduction succeeded.

15. Deleting LVM

To completely remove LVM, reverse the steps used to create it.

15.1 Unmount the file system

15.2 Remove the LV

# lvremove /dev/vg0/lv1

15.3 Remove the VG

# vgremove vg0

15.4 Remove the PVs

# pvremove /dev/md5 /dev/sdf1 /dev/sdg /dev/sdh

LVM removal succeeded.

16. LVM Snapshots

A snapshot records the state of the volume at the moment it is taken, like a photograph. If any data changes later, the original data is moved into the snapshot area first; areas that have not changed are shared between the snapshot area and the file system.

[Diagram: backup via the LVM snapshot area (short dashes: file system; long dashes: snapshot area).]

The left side of the figure shows the initial state: LVM reserves an area (the three PEs on the left of the figure) as the snapshot's data store. At first the snapshot area holds no data of its own; it shares every PE with the origin volume, so the snapshot's contents appear identical to the file system. After the system has run for a while, suppose the data in area A is changed (as pictured): before the change is applied, the system moves the original data into the snapshot area. So on the right side of the figure, one PE of the snapshot area is occupied by the old copy of A, while blocks B through I are still shared with the file system.

The snapshot and the LV being snapshotted must be in the same VG.

16.1 Create the LV

# lvcreate -L 100M -n lv1 vg0

# mkfs.ext4 /dev/vg0/lv1

# mount /dev/vg0/lv1 /mnt/lv1/

16.2 Write test data

# touch /mnt/lv1/test_lvm_snapshot_1

# touch /mnt/lv1/test_lvm_snapshot_2

# cp -a /etc/ /mnt/lv1/

# cp -a /boot/ /mnt/lv1/

16.3 Create a snapshot

# lvcreate -L 80M -s -n lv1snap /dev/vg0/lv1

Description: create a snapshot of /dev/vg0/lv1, 80 MB in size, named lv1snap.

# lvdisplay

The LV Size reported for /dev/vg0/lv1snap is 100 MB (the size of the origin volume), and its usage is 0.01%.

16.4 Mount the newly created snapshot and take a look
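
A sketch of the mount, using /mnt/snapshot as the mount point to match the comparison below:

# mkdir /mnt/snapshot

# mount /dev/vg0/lv1snap /mnt/snapshot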

/mnt/lv1 and /mnt/snapshot are identical.

16.5 Modify some files

16.6 View again

The snapshot's usage is now 10.36%; as the original data was changed, the pre-change copies were moved into the snapshot area.

16.7 Tar up the data in the snapshot as a backup, ready for restoration
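
A minimal sketch; the archive path /tmp/lv1_backup.tar.gz is an arbitrary choice, not taken from the original:

# tar -czf /tmp/lv1_backup.tar.gz -C /mnt/snapshot .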

16.8 Unmount and remove the snapshot
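
Following the names used above:

# umount /mnt/snapshot

# lvremove /dev/vg0/lv1snap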

16.9 Unmount and reformat /mnt/lv1, wiping its data
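
Again with the names from this section:

# umount /mnt/lv1

# mkfs.ext4 /dev/vg0/lv1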

16.10 Recover the data
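
Remount the volume and unpack the archive created in 16.7 (using the assumed archive path from above):

# mount /dev/vg0/lv1 /mnt/lv1/

# tar -xzf /tmp/lv1_backup.tar.gz -C /mnt/lv1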

As you can see, the original data has been successfully restored.

The LVM snapshot experiment was successful.

Note: Changes to lv1 must not exceed the size of the snapshot. Original data is moved into the snapshot area before each change, so if the volume of changed data exceeds the snapshot area's actual capacity, the snapshot area cannot hold it and the snapshot becomes invalid.

Resources

    • Bird Brother's Linux private cuisine: http://linux.vbird.org/linux_basic/0420quota.php#lvm
    • Feather Fly Blog: http://www.opsers.org/base/one-day-a-little-learning-linux-logical-volume-manager-lvm-on-the-rhel6.html
    • Configuring soft RAID (software RAID) under CentOS 6.3: http://www.cnblogs.com/mchina/p/linux-centos-disk-array-software_raid.html
