Next we will look at how to extend, shrink, and otherwise manage logical volumes. :)
Use lvextend -L <size> <LV name> to increase the size of a logical volume:
[root@NEWLFS mnt]# lvextend -L +1G /dev/lvm_vg/lv_home
Extending logical volume lv_home to 3.00 GB
Logical volume lv_home successfully resized
[root@NEWLFS mnt]#
-L +1G adds 1 GB of space to lv_home. Alternatively, use this form:
[root@NEWLFS mnt]# lvextend -L 3G /dev/lvm_vg/lv_home
Here you specify the new total size of lv_home; the effect is the same.
After changing the size of the logical volume, you should also resize the file system to match:
[root@NEWLFS mnt]# resize_reiserfs -f /dev/lvm_vg/lv_home
resize_reiserfs 3.6.19 (2003 www.namesys.com)
ReiserFS report:
Blocksize 4096
Blockcount 786432 (524288)
Freeblocks 778197 (516061)
Bitmap block count 24 (16)
Syncing... done
resize_reiserfs: Resizing finished successfully.
[root@NEWLFS mnt]# df
Filesystem            Size  Used  Avail  Use%  Mounted on
/dev/md0              5.4G  2.8G   2.7G   51%  /
/dev/hda1             6.4G  4.0G   2.5G   62%  /mnt/C
/dev/hda6              25G   22G   3.6G   86%  /mnt/E
/dev/hda7             9.7G  3.7G   5.6G   40%  /mnt/lfs
/dev/mapper/lvm_vg-lv_home
                      3.0G   33M   3.0G    2%  /mnt/lvm_home
[root@NEWLFS mnt]#
lv_home has been successfully extended to 3 GB, with no reboot and without even unmounting the file system.
ReiserFS really is easy to use! ^_^
Of course, ReiserFS also lets you unmount the file system first and then resize it, using the following commands:
[root@NEWLFS mnt]# umount /dev/lvm_vg/lv_home
Unmount the file system.
[root@NEWLFS mnt]# resize_reiserfs /dev/lvm_vg/lv_home
Resize it; the only difference is that the -f option is not needed.
[root@NEWLFS mnt]# mount -t reiserfs /dev/lvm_vg/lv_home lvm_home/
Remount it.
Creating and extending ext2/3:
[root@NEWLFS mnt]# lvcreate -L 2G -n lv_opt lvm_vg
Logical volume "lv_opt" created
Carve out a new logical volume named lv_opt.
[root@NEWLFS mnt]# mke2fs -j /dev/lvm_vg/lv_opt
mke2fs 1.35 (28-Feb-2004)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
262144 inodes, 524288 blocks
26214 blocks (5.00%) reserved for the super user
First data block=0
16 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 35 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
Create an ext3 file system.
[root@NEWLFS mnt]# mkdir lvm_opt
[root@NEWLFS mnt]# mount -t ext3 /dev/lvm_vg/lv_opt lvm_opt/
Create a mount point and mount the volume.
[root@NEWLFS mnt]# df
Filesystem            Size  Used  Avail  Use%  Mounted on
..............
/dev/mapper/lvm_vg-lv_home
                      3.0G   33M   3.0G    2%  /mnt/lvm_home
/dev/mapper/lvm_vg-lv_opt
                      2.0G   33M   1.9G    2%  /mnt/lvm_opt
[root@NEWLFS mnt]# lvextend -L +1G /dev/lvm_vg/lv_opt
Extending logical volume lv_opt to 3.00 GB
Logical volume lv_opt successfully resized
Add 1 GB of space to lv_opt.
[root@NEWLFS mnt]# umount lvm_opt/
To resize an ext2/3 file system you must unmount it first, unlike ReiserFS, which does not need to be unmounted.
[root@NEWLFS mnt]# resize2fs /dev/lvm_vg/lv_opt
resize2fs 1.35 (28-Feb-2004)
Please run 'e2fsck -f /dev/lvm_vg/lv_opt' first.
If you are prompted to run e2fsck -f first, do so. :)
resize2fs -f skips the e2fsck requirement, but running the check is a good idea anyway.
[root@NEWLFS mnt]# e2fsck -f /dev/lvm_vg/lv_opt
e2fsck 1.35 (28-Feb-2004)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/lvm_vg/lv_opt: 11/262144 files (0.0% non-contiguous), 16443/524288 blocks
[root@NEWLFS mnt]# resize2fs /dev/lvm_vg/lv_opt
resize2fs 1.35 (28-Feb-2004)
Resizing the filesystem on /dev/lvm_vg/lv_opt to 786432 (4k) blocks.
The filesystem on /dev/lvm_vg/lv_opt is now 786432 blocks long.
lv_opt is now 3 GB; mount it again!
[root@NEWLFS mnt]# mount -t ext3 /dev/lvm_vg/lv_opt lvm_opt/
[root@NEWLFS mnt]# df
Filesystem            Size  Used  Avail  Use%  Mounted on
.............
/dev/mapper/lvm_vg-lv_home
                      3.0G   33M   3.0G    2%  /mnt/lvm_home
/dev/mapper/lvm_vg-lv_opt
                      3.0G   33M   2.9G    2%  /mnt/lvm_opt
[root@NEWLFS mnt]#
OK. LVM really is convenient. ^_^
Now let's look at how to shrink a volume.
Note: when shrinking, reduce the file system size before reducing the logical volume size; otherwise data may be lost.
ReiserFS:
[root@NEWLFS mnt]# umount lvm_home/
First, unmount the ReiserFS file system.
[root@NEWLFS mnt]# resize_reiserfs -s -1G /dev/lvm_vg/lv_home
resize_reiserfs 3.6.19 (2003 www.namesys.com)
You are running BETA version of reiserfs shrinker.
This version is only for testing or very careful use.
Backup of your data is recommended.
Do you want to continue? [Y/N]: y
Processing the tree: 0%...20%...40%...60%...80%... left 0, 0/sec
nodes processed (moved):
int        0 (0),
leaves     1 (0),
unfm       0 (0),
total      1 (0).
Check for used blocks in truncated region
ReiserFS report:
Blocksize 4096
Blockcount 524288 (786432)
Freeblocks 516061 (778197)
Bitmap block count 16 (24)
Syncing... done
resize_reiserfs: Resizing finished successfully.
First the file system itself is shrunk; -s -1G means reduce it by 1 GB.
[root@NEWLFS mnt]# lvreduce -L -1G /dev/lvm_vg/lv_home
WARNING: Reducing active logical volume to 2.00 GB
This may destroy your data (filesystem etc.)
Do you really want to reduce lv_home? [y/n]: y
Reducing logical volume lv_home to 2.00 GB
Logical volume lv_home successfully resized
Then the LV itself is reduced; -L -1G shrinks it by 1 GB, matching the file system.
[root@NEWLFS mnt]# mount -t reiserfs /dev/lvm_vg/lv_home lvm_home/
[root@NEWLFS mnt]# df
Filesystem            Size  Used  Avail  Use%  Mounted on
/dev/md0              5.4G  2.8G   2.7G   51%  /
/dev/hda1             6.4G  4.0G   2.5G   62%  /mnt/C
/dev/hda6              25G   22G   3.6G   86%  /mnt/E
/dev/hda7             9.7G  3.7G   5.6G   40%  /mnt/lfs
/dev/mapper/lvm_vg-lv_opt
                      3.0G   33M   2.9G    2%  /mnt/lvm_opt
/dev/mapper/lvm_vg-lv_home
                      2.0G   33M   2.0G    2%  /mnt/lvm_home
OK, 1 GB has been successfully removed. As the output above shows, shrinking is riskier than growing,
so avoid shrinking partitions where possible and always back up important data first. :)
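To sum up the two procedures, growing and shrinking must be done in opposite orders. Here is a small shell sketch that merely prints the command sequences without running anything; the LV path and sizes are placeholders taken from the examples above:

```shell
# A sketch that only PRINTS the safe resize order for a ReiserFS LV;
# nothing is executed. LV path and sizes are placeholder values.
LV=/dev/lvm_vg/lv_home

plan_grow() {
    # Growing: enlarge the LV first, then the file system.
    echo "lvextend -L +1G $LV"
    echo "resize_reiserfs -f $LV"    # ReiserFS can grow while mounted
}

plan_shrink() {
    # Shrinking: shrink the file system first, then the LV.
    echo "umount $LV"
    echo "resize_reiserfs -s -1G $LV"
    echo "lvreduce -L -1G $LV"
    echo "mount -t reiserfs $LV /mnt/lvm_home"
}

plan_grow
plan_shrink
```

Getting this order wrong when shrinking is exactly what the warning above is about: reducing the LV below the end of the file system destroys data.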
ext2/3:
In LVM1 the e2fsadm program made shrinking an ext2/3 file system convenient, but it is not available in LVM2.
Shrinking ext2/3 under LVM2 is therefore more troublesome, because you must know the number of blocks the file system will have after shrinking.
[root@NEWLFS ~]# umount /mnt/lvm_opt/
First, unmount the file system.
[root@NEWLFS ~]# mke2fs -n /dev/lvm_vg/lv_opt
mke2fs 1.35 (28-Feb-2004)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
262144 inodes, 786432 blocks
26214 blocks (5.00%) reserved for the super user
First data block=0
16 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912
[root@NEWLFS ~]#
Since you need to know the block count after shrinking, use mke2fs -n to list the block information.
Note: with the -n option mke2fs does not actually create a file system; it only prints information about it.
Do not omit the -n, or all your data will be lost. :(
The block size is 4096 (4 KB) and there are currently 786432 blocks. We want to shrink by 1 GB;
1 GB occupies 262144 blocks in total, so the block count after shrinking should be 524288.
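This arithmetic can be double-checked with shell arithmetic; the numbers are the ones from the mke2fs -n output above:

```shell
# Compute the resize2fs target from the mke2fs -n figures above.
block_size=4096       # bytes per block
blocks_now=786432     # current block count (3 GB volume)
blocks_per_gb=$(( 1024 * 1024 * 1024 / block_size ))
echo "$blocks_per_gb"                    # blocks occupied by 1 GB: 262144
echo $(( blocks_now - blocks_per_gb ))   # target block count: 524288
```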
Shrink the file system:
[root@NEWLFS ~]# resize2fs /dev/lvm_vg/lv_opt 524288
resize2fs 1.35 (28-Feb-2004)
Resizing the filesystem on /dev/lvm_vg/lv_opt to 524288 (4k) blocks.
The filesystem on /dev/lvm_vg/lv_opt is now 524288 blocks long.
Shrink the LV:
[root@NEWLFS mnt]# lvreduce -L -1G /dev/lvm_vg/lv_opt
WARNING: Reducing active logical volume to 2.00 GB
This may destroy your data (filesystem etc.)
Do you really want to reduce lv_opt? [y/n]: y
Reducing logical volume lv_opt to 2.00 GB
Logical volume lv_opt successfully resized
OK. The size has been reduced by 1 GB; mount it again.
[root@NEWLFS mnt]# mount /dev/lvm_vg/lv_opt lvm_opt/
[root@NEWLFS mnt]# df
..........
/dev/mapper/lvm_vg-lv_opt
                      2.0G   33M   1.9G    2%  /mnt/lvm_opt
Next, let's look at how to delete an LV and a VG.
Deleting an LV:
[root@NEWLFS mnt]# umount /dev/lvm_vg/lv_opt
An LV must be unmounted before it can be removed.
[root@NEWLFS mnt]# lvremove /dev/lvm_vg/lv_opt
Do you really want to remove active logical volume "lv_opt"? [y/n]: y
Logical volume "lv_opt" successfully removed
[root@NEWLFS mnt]#
The lvremove command deletes a logical volume; here lv_opt has been removed.
Deleting a VG:
Before deleting a VG, make sure it no longer contains any logical volumes.
I have already removed lv_usr and lv_home with lvremove.
[root@NEWLFS mnt]# lvdisplay
[root@NEWLFS mnt]#
No output means no LVs remain.
[root@NEWLFS mnt]# vgchange -a n lvm_vg
0 logical volume(s) in volume group "lvm_vg" now active
The VG must be deactivated with vgchange -a n before it can be removed.
[root@NEWLFS mnt]# vgremove lvm_vg
Volume group "lvm_vg" successfully removed
Gone... and the world is quiet. ^_^
[root@NEWLFS mnt]# vgdisplay
[root@NEWLFS mnt]#
No output means no VGs remain.
Adding/removing a PV to/from a VG:
As mentioned earlier, a VG can be made up of multiple PVs (hda1, hda3, hda5, ...),
and even non-adjacent partitions can be combined, much like linear RAID.
Let's see how to add a PV to a VG.
Since I have no spare physical device for the demonstration, I use a loop device.
First, I run
[root@NEWLFS ~]# dd if=/dev/zero of=/root/lvm bs=4096 count=32768
to create a 128 MB file in /root (/root/lvm).
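As a quick sanity check on the dd parameters: it writes bs * count bytes, which works out to exactly 128 MiB:

```shell
# dd writes bs * count bytes; confirm the size of the loop file.
bs=4096
count=32768
bytes=$(( bs * count ))
echo "$bytes"                    # 134217728 bytes
echo $(( bytes / 1024 / 1024 ))  # 128 MiB
```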
Then I use the losetup command to attach the file to /dev/loop0, so that loop0 behaves just like /dev/hdaX,
an ordinary usable block device. Loop devices are really handy for experiments: try out ReiserFS,
RAID, LVM and so on with them before practicing on real partitions.
See how I use it :)
[root@NEWLFS ~]# losetup /dev/loop0 lvm
Attach the newly created lvm file to /dev/loop0; loop0 can now be used like any other physical device.
Initialize loop0 as a PV:
[root@NEWLFS ~]# pvcreate /dev/loop0
Physical volume "/dev/loop0" successfully created
[root@NEWLFS ~]#
Use the vgextend command to add /dev/loop0 to lvm_vg:
[root@NEWLFS ~]# vgextend lvm_vg /dev/loop0
Volume group "lvm_vg" successfully extended
[root@NEWLFS ~]#
/dev/loop0 has been added to lvm_vg successfully. Now you can create an LV on it:
[root@NEWLFS ~]# lvcreate -L 128M -n loop_lv lvm_vg /dev/loop0
Insufficient allocatable logical extents (31) for logical volume loop_lv: 32 required
By appending /dev/loop0, we ask for this LV to be placed on /dev/loop0 only. The full 128 MB cannot be
allocated because LVM metadata takes up part of the device, leaving only 31 free extents.
[root@NEWLFS ~]# lvcreate -L 100M -n loop_lv lvm_vg /dev/loop0
Logical volume "loop_lv" created
[root@NEWLFS ~]#
100 MB can be created successfully, and the LV loop_lv lies entirely on /dev/loop0.
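The failed lvcreate is simple extent arithmetic; the figures (4 MB extents, 124 MB usable) match the pvdisplay output shown further down, so this is just the calculation spelled out:

```shell
# Extent arithmetic for the failed 128 MB lvcreate: the PE size is 4 MB
# and the 128 MB loop PV has only 124 MB usable.
pe_mb=4
request_mb=128
usable_mb=124
echo $(( request_mb / pe_mb ))   # extents required: 32
echo $(( usable_mb / pe_mb ))    # extents available: 31
```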
Removing a PV:
Make sure the PV to be removed is not in use by any LV.
Check with pvdisplay /dev/loop0 (assuming loop0 is the PV to remove; loop_lv has already been removed again with lvremove):
[root@NEWLFS ~]# pvdisplay /dev/loop0
--- Physical volume ---
PV Name            /dev/loop0
VG Name            lvm_vg
PV Size            124.00 MB / not usable 0
Allocatable        yes
PE Size (KByte)    4096
Total PE           31
Free PE            31 ----> Free PE = Total PE
Allocated PE       0  ----> 0 PEs in use, so no LV is on loop0
PV UUID            K38G8y-G6b7-81O0-SFz5-HZii-Rp6t-sHq4ou
[root@NEWLFS ~]#
If the PV is still in use by an LV, you can first migrate its data to another PV with the pvmove command.
Then remove the PV from the VG with vgreduce:
[root@NEWLFS ~]# vgreduce lvm_vg /dev/loop0
Removed "/dev/loop0" from volume group "lvm_vg"
[root@NEWLFS ~]#
/dev/loop0 has been removed from lvm_vg. Easy, isn't it? ^_^
Mounting LVM partitions automatically at boot:
Edit /etc/fstab and add the LVM partitions, mount points, file system types, and so on:
/dev/lvm_vg/lv_home  /mnt/lvm_home  reiserfs  defaults  0 0
/dev/lvm_vg/lv_opt   /mnt/lvm_opt   ext3      defaults  0 0
Then add the commands that activate LVM to the system startup scripts.
LVM must be activated before the file systems in /etc/fstab are mounted; otherwise there would be nothing to mount. ^_^
I put them in the /etc/rc.d/init.d/mountfs script, which runs after fsck has checked each partition
and remounts / read-write before mounting the remaining partitions. The two lines to add are:
/sbin/vgscan
/sbin/vgchange -a y
The relevant part of the mountfs script (when passed the start argument) is:
echo "Remounting root file system in read-write mode..."
mount -n -o remount,rw /
echo "Recording existing mounts in /etc/mtab..."
> /etc/mtab
mount -f / || failed=1
mount -f /proc || failed=1
if grep -q '[[:space:]]sysfs' /proc/mounts; then
    mount -f /sys || failed=1
fi
echo "Mounting remaining file systems..."
############# LVM ##############
/sbin/vgscan          # -------> before the other file systems are mounted,
/sbin/vgchange -a y   # ----> LVM is activated; then
mount -a -O no_netdev # ---> the FSs are mounted according to /etc/fstab
This way LVM is activated, and its volumes mounted, automatically at boot.
One article on LVM adds these two commands to the file system check script instead, but when I tried
that it failed, probably because the system is still read-only during checkfs, and vgchange -a y
needs to write data but cannot.
The udev service starts before checkfs; it mounts /dev on a ramfs via mount -n -t ramfs, so /dev
lives entirely in memory and is writable whether or not the root partition is.
Does vgchange -a y write data somewhere else as well? Something to investigate. :)
In other distributions, such as Red Hat and Mandrake, checking the root file system, mounting the
various file systems, enabling the swap partition and so on are all done by /etc/rc.d/rc.sysinit
according to /etc/fstab. In LFS this script is split into several small scripts that run one by one,
which in my opinion gives a more intuitive view of the boot process and makes modification easier.
To activate LVM on those distributions, add the commands to /etc/rc.d/rc.sysinit, again before the
file systems in /etc/fstab are mounted; note that the system must be read-write at activation time.
Finally, I added a line to mountfs to deactivate the VG,
so that it is shut down automatically when the system halts or reboots:
/sbin/vgchange -a n ------> add this line
At shutdown/reboot, mountfs unmounts all file systems according to /etc/fstab.
The relevant part of the script (when passed the stop argument) is:
echo "Unmounting all other currently mounted filesystems..."
umount -a -d -r -t noramfs
/sbin/vgchange -a n # ---> added here
The VG can only be deactivated after its LVs have been unmounted; putting the line above the umount
does not work, as LVM complains that the LVs are still in use.
In other distributions, deactivating the VG is much simpler than activating it: since the halt/reboot
service scripts already unmount the file systems, you only need to add the line there. ^_^