Linux storage experiment 2: LVM operations

The previous experiment, on RAID operations, is at http://www.bkjia.com/OS/201303/195811.html.

(1) Create the LVM

Step 1: create five more SCSI partitions (/dev/sdc5 through /dev/sdc9). This is done exactly as in the previous RAID experiment, so refer to that article for details.

Step 2: build a RAID 5 from four of the partitions, with one hot spare:

[root@compute-0 mnt]# mdadm --create --auto=yes /dev/md1 --level=5 --raid-devices=4 --spare-devices=1 /dev/sdc{5,6,7,8,9}

Note: this time the array is named /dev/md1 (the previous experiment created /dev/md0).

Step 3: view the RAID composition:

[root@compute-0 mnt]# mdadm --detail /dev/md*
mdadm: /dev/md does not appear to be an md device
/dev/md0:                              # created in the previous RAID experiment
        Version : 1.2
  Creation Time : Fri Mar 22 04:41:39 2013
     Raid Level : raid5
     Array Size : 480768 (469.58 MiB 492.31 MB)
  Used Dev Size : 160256 (156.53 MiB 164.10 MB)
   Raid Devices : 4
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Fri Mar 22 05:44:28 2013
          State : clean
 Active Devices : 4
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

           Name : compute-0:0  (local to host compute-0)
           UUID : d81cdfce:03d33c89:ffe54067:44e19b55
         Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       21        0      active sync   /dev/sdb5
       1       8       22        1      active sync   /dev/sdb6
       2       8       23        2      active sync   /dev/sdb7
       5       8       24        3      active sync   /dev/sdb8

       4       8       25        -      spare          /dev/sdb9
/dev/md1:                              # the new array
        Version : 1.2
  Creation Time : Fri Mar 22 04:42:44 2013
     Raid Level : raid5
     Array Size : 480768 (469.58 MiB 492.31 MB)
  Used Dev Size : 160256 (156.53 MiB 164.10 MB)
   Raid Devices : 4
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Fri Mar 22 05:44:31 2013
          State : clean
 Active Devices : 4
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

           Name : compute-0:1  (local to host compute-0)
           UUID : da3b02e6:b77652b6:3688ad25:9c34dabc
         Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       37        0      active sync   /dev/sdc5
       1       8       38        1      active sync   /dev/sdc6
       2       8       39        2      active sync   /dev/sdc7
       5       8       40        3      active sync   /dev/sdc8

       4       8       41        -      spare          /dev/sdc9
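One thing worth doing before moving on: the volume group we are about to create will sit on /dev/md1, so the array should be assembled automatically at boot. A minimal sketch of how to persist it, assuming the config file lives at /etc/mdadm.conf (Debian-family systems use /etc/mdadm/mdadm.conf instead):

# Append ARRAY lines describing all running arrays so they are
# assembled under stable names (/dev/md0, /dev/md1) at the next boot
mdadm --detail --scan >> /etc/mdadm.conf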
Step 4: convert the RAID device to a physical volume (PV):

[root@compute-0 mnt]# pvscan          # first scan for existing physical volumes
  No matching physical volumes found
[root@compute-0 mnt]# pvcreate /dev/md1          # create a physical volume
  Writing physical volume data to disk "/dev/md1"
  Physical volume "/dev/md1" successfully created
[root@compute-0 mnt]# pvscan          # view the PV
  PV /dev/md1   lvm2 [469.50 MiB]
  Total: 1 [469.50 MiB] / in use: 0 [0   ] / in no VG: 1 [469.50 MiB]

Step 5: create a volume group (VG):

[root@compute-0 mnt]# vgscan
  Reading all physical volumes.  This may take a while...
  No volume groups found
[root@compute-0 mnt]# vgcreate -s 16M houqdvg /dev/md1
  Volume group "houqdvg" successfully created

Note: -s sets the PE (physical extent) size; houqdvg is the name of the volume group being created.

[root@compute-0 mnt]# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "houqdvg" using metadata type lvm2
[root@compute-0 mnt]# vgdisplay
  --- Volume group ---
  VG Name               houqdvg
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               464.00 MiB
  PE Size               16.00 MiB          # the PE size we set
  Total PE              29                 # total number of PEs in the volume group
  Alloc PE / Size       0 / 0
  Free  PE / Size       29 / 464.00 MiB
  VG UUID               vBlcd2-qt6Y-Bt1D-v63K-oIJv-a3Hm-tPzoue

Step 6: create a logical volume (LV) and a file system on it:

[root@compute-0 mnt]# lvcreate -l 29 -n houqdlv houqdvg
  Logical volume "houqdlv" created

Note: -l specifies the size as a number of PEs (-L would specify it directly as a size); -n specifies the name of the logical volume.

[root@compute-0 mnt]# lvdisplay
  --- Logical volume ---
  LV Name                /dev/houqdvg/houqdlv     # note: later operations on this LV must use this full path
  VG Name                houqdvg
  LV UUID                T0L4cu-dqNC-PRHz-QQ3u-0ubi-VAWe-YquY7t
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                464.00 MiB
  Current LE             29
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     6144
  Block device           253:0

[root@compute-0 mnt]# mkfs -t ext3 /dev/houqdvg/houqdlv          # create a file system
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=512 blocks, Stripe width=1536 blocks
118784 inodes, 475136 blocks
23756 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67633152
58 block groups
8192 blocks per group, 8192 fragments per group
2048 inodes per group
Superblock backups stored on blocks:
        8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 21 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@compute-0 mnt]#

Step 7: mount the logical volume. Mount it with the mount command first, then add an entry to /etc/fstab so it is mounted at every boot:

[root@compute-0 mnt]# mkdir -p /mnt/lvm
[root@compute-0 mnt]# mount /dev/houqdvg/houqdlv /mnt/lvm/          # mount it
[root@compute-0 mnt]# mount          # view all mounts
/dev/sda2 on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/sda1 on /boot type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
vmware-vmblock on /var/run/vmblock-fuse type fuse.vmware-vmblock (rw,nosuid,nodev,default_permissions,allow_other)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
nfsd on /proc/fs/nfsd type nfsd (rw)
/dev/mapper/houqdvg-houqdlv on /mnt/lvm type ext3 (rw)          # our new mount
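Before editing /etc/fstab, it does no harm to confirm what we just built. A quick sketch using standard LVM reporting commands (the exact numbers will vary with your partition sizes):

df -h /mnt/lvm        # human-readable capacity of the freshly mounted file system
vgs houqdvg           # one-line summary of the volume group
lvs houqdvg           # one-line summary of the logical volume(s) in it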
[root@compute-0 mnt]# vi /etc/fstab          # set up automatic mounting at every boot

#
# /etc/fstab
# Created by anaconda on Wed Feb 27 13:44:14 2013
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=...                                    /          ext4    defaults        1 1
UUID=...                                    /boot      ext4    defaults        1 2
UUID=a90215c9-be26-4884-bd0e-3799c7ddba7b   swap       swap    defaults        0 0
tmpfs                                       /dev/shm   tmpfs   defaults        0 0
devpts                                      /dev/pts   devpts  gid=5,mode=620  0 0
sysfs                                       /sys       sysfs   defaults        0 0
proc                                        /proc      proc    defaults        0 0
/dev/houqdvg/houqdlv                        /mnt/lvm   ext3    defaults        1 2

(2) LVM expansion

Step 1: convert the first RAID array to a physical volume:

[root@compute-0 mnt]# pvcreate /dev/md0
  Writing physical volume data to disk "/dev/md0"
  Physical volume "/dev/md0" successfully created

Step 2: add the physical volume to the existing volume group:

[root@compute-0 mnt]# vgextend houqdvg /dev/md0          # add /dev/md0 to the houqdvg volume group
  Volume group "houqdvg" successfully extended

Step 3: view the volume group's size:

[root@compute-0 mnt]# vgdisplay
  --- Volume group ---
  VG Name               houqdvg
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               928.00 MiB
  PE Size               16.00 MiB
  Total PE              58                 # note: the total number of PEs has doubled
  Alloc PE / Size       29 / 464.00 MiB
  Free  PE / Size       29 / 464.00 MiB    # note: 29 PEs are still unallocated
  VG UUID               vBlcd2-qt6Y-Bt1D-v63K-oIJv-a3Hm-tPzoue

Step 4: grow the logical volume by 200 MB and check its size:

[root@compute-0 mnt]# lvdisplay
  --- Logical volume ---
  LV Name                /dev/houqdvg/houqdlv
  VG Name                houqdvg
  LV UUID                T0L4cu-dqNC-PRHz-QQ3u-0ubi-VAWe-YquY7t
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                464.00 MiB        # the size before resizing
  Current LE             29
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     6144
  Block device           253:0

[root@compute-0 mnt]# lvresize -L +200M /dev/houqdvg/houqdlv
  Rounding up size to full physical extent 208.00 MiB
  Extending logical volume houqdlv to 672.00 MiB
  Logical volume houqdlv successfully resized

Note: -L specifies the size to add directly; alternatively, -l specifies it as a number of PEs. Since the PE size is 16 MiB, the requested 200 MB is rounded up to 13 PEs = 208 MiB.

[root@compute-0 mnt]# lvdisplay
  --- Logical volume ---
  LV Name                /dev/houqdvg/houqdlv
  VG Name                houqdvg
  LV UUID                T0L4cu-dqNC-PRHz-QQ3u-0ubi-VAWe-YquY7t
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                672.00 MiB        # the logical volume has indeed grown by ~200 MB
  Current LE             42
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     6144
  Block device           253:0
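As an aside: on LVM releases that support the -r/--resizefs option, the logical volume and the file system on it can be grown in a single step; lvresize then invokes fsadm, which in turn runs resize2fs for ext3. A minimal sketch, assuming your lvresize has this option:

# Grow the LV by 200 MiB and resize the ext3 file system on it in one step
lvresize -r -L +200M /dev/houqdvg/houqdlv

With the older tools used in this experiment we do it in two steps, as Step 5 shows next.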
Step 5: grow the file system by 200 MB:

[root@compute-0 mnt]# df /mnt/lvm/
Filesystem                   1K-blocks   Used  Available  Use%  Mounted on
/dev/mapper/houqdvg-houqdlv     460144  10543                3%  /mnt/lvm

Note: we have expanded the logical volume /dev/houqdvg/houqdlv, but the file system still shows only a bit over 400 MB.

[root@compute-0 mnt]# resize2fs /dev/houqdvg/houqdlv          # this is the actual file system resize

Note: the block size does not change; instead the number of block groups increases, which is what makes the usable space larger.

resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/houqdvg/houqdlv is mounted on /mnt/lvm; on-line resizing required
old desc_blocks = 2, new_desc_blocks = 3
Performing an on-line resize of /dev/houqdvg/houqdlv to 688128 (1k) blocks.
The filesystem on /dev/houqdvg/houqdlv is now 688128 blocks long.

[root@compute-0 mnt]# df /mnt/lvm/
Filesystem                   1K-blocks   Used  Available  Use%  Mounted on
/dev/mapper/houqdvg-houqdlv     666415  10789     621236    2%  /mnt/lvm

Note: the file system has now been expanded as well. That is as far as this week's experiment goes; next week, building on these two weeks of work, we may look at LVM snapshots.
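As a small preview of that snapshot topic, here is a minimal sketch, assuming the volume group still has free PEs and using a hypothetical snapshot name housnap:

# Create a 64 MiB copy-on-write snapshot of the logical volume
lvcreate -s -L 64M -n housnap /dev/houqdvg/houqdlv
# The snapshot appears as a regular LV and can be mounted read-only for inspection
mkdir -p /mnt/snap
mount -o ro /dev/houqdvg/housnap /mnt/snap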
