Linux storage experiment 2: LVM operations

Previous experiment: Linux storage experiment 1: RAID operations, http://www.2cto.com/os/201303/195811.html

(1) LVM creation

Step 1: Create another five 10 MB SCSI partitions (/dev/sdc5 through /dev/sdc9). This step is the same as the partition creation in the previous RAID experiment; see that article for details.

Step 2: Use four of the partitions for a RAID 5 array plus one hot spare:

mdadm --create --auto=yes /dev/md1 --level=5 --raid-devices=4 --spare-devices=1 /dev/sdc{5,6,7,8,9}

Note: this time the array is named /dev/md1 (last time it was /dev/md0).

Step 3: View the RAID composition:

[root@compute-0 mnt]# mdadm --detail /dev/md*
mdadm: /dev/md does not appear to be an md device
/dev/md0:                # created in the previous RAID experiment
        Version : 1.2
  Creation Time : Fri Mar 22 04:41:39 2013
     Raid Level : raid5
     Array Size : 480768 (469.58 MiB 492.31 MB)
  Used Dev Size : 160256 (156.53 MiB 164.10 MB)
   Raid Devices : 4
  Total Devices : 5
    Persistence : Superblock is persistent
    Update Time : Fri Mar 22 05:44:28 2013
          State : clean
 Active Devices : 4
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 1
         Layout : left-symmetric
     Chunk Size : 512K
           Name : compute-0:0 (local to host compute-0)
           UUID : d81cdfce:03d33c89:ffe54067:44e19b55
         Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       21        0      active sync   /dev/sdb5
       1       8       22        1      active sync   /dev/sdb6
       2       8       23        2      active sync   /dev/sdb7
       5       8       24        3      active sync   /dev/sdb8
       4       8       25        -      spare         /dev/sdb9

/dev/md1:                # the new array
        Version : 1.2
  Creation Time : Fri Mar 22 04:42:44 2013
     Raid Level : raid5
     Array Size : 480768 (469.58 MiB 492.31 MB)
  Used Dev Size : 160256 (156.53 MiB 164.10 MB)
   Raid Devices : 4
  Total Devices : 5
    Persistence : Superblock is persistent
    Update Time : Fri Mar 22 05:44:31 2013
          State : clean
 Active Devices : 4
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 1
         Layout : left-symmetric
     Chunk Size : 512K
           Name : compute-0:1 (local to host compute-0)
           UUID : da3b02e6:b77652b6:registrad25:9c34dabc
         Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       37        0      active sync   /dev/sdc5
       1       8       38        1      active sync   /dev/sdc6
       2       8       39        2      active sync   /dev/sdc7
       5       8       40        3      active sync   /dev/sdc8
       4       8       41        -      spare         /dev/sdc9

Step 4: Convert the new RAID array to a physical volume (PV):

[root@compute-0 mnt]# pvscan                  # first scan for existing physical volumes
  No matching physical volumes found
[root@compute-0 mnt]# pvcreate /dev/md1       # create a physical volume
  Writing physical volume data to disk "/dev/md1"
  Physical volume "/dev/md1" successfully created
[root@compute-0 mnt]# pvscan                  # view the PV
  PV /dev/md1         lvm2 [469.50 MiB]
  Total: 1 [469.50 MiB] / in use: 0 [0   ] / in no VG: 1 [469.50 MiB]
[root@compute-0 mnt]#
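On real hardware a freshly created RAID 5 array starts an initial resync, and it is safer to let that finish before layering LVM on top. Below is a minimal sketch, not part of the original experiment, that combines Steps 2 and 4 and waits for the sync first; the device names /dev/md1 and /dev/sdc5-9 follow the article, everything else is an assumption about your environment:

#!/bin/bash
# Hedged sketch: create the array, wait for the initial resync, then make a PV.
# Assumes /dev/sdc5..9 already exist as "Linux raid autodetect" partitions.
set -e

mdadm --create --auto=yes /dev/md1 \
      --level=5 --raid-devices=4 --spare-devices=1 \
      /dev/sdc{5,6,7,8,9}

# --wait blocks until any resync/recovery on the array has finished;
# ignore its exit status in case there was nothing to wait for.
mdadm --wait /dev/md1 || true

cat /proc/mdstat     # sanity check: the array should show [UUUU]

pvcreate /dev/md1
pvscan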
Step 5: Create a volume group (VG):

[root@compute-0 mnt]# vgscan
  Reading all physical volumes. This may take a while...
  No volume groups found
[root@compute-0 mnt]# vgcreate -s 16M houqdvg /dev/md1
  # Note: -s sets the PE size; houqdvg is the name of the volume group being created.
  Volume group "houqdvg" successfully created
[root@compute-0 mnt]# vgscan
  Reading all physical volumes. This may take a while...
  Found volume group "houqdvg" using metadata type lvm2
[root@compute-0 mnt]# vgdisplay
  --- Volume group ---
  VG Name               houqdvg
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               464.00 MiB
  PE Size               16.00 MiB       # the PE size
  Total PE              29              # total PEs in the volume group
  Alloc PE / Size       0 / 0
  Free  PE / Size       29 / 464.00 MiB
  VG UUID               vBlcd2-qt6Y-Bt1D-v63K-oIJv-a3Hm-tPzoue
[root@compute-0 mnt]#

Step 6: Create a logical volume (LV) and a file system:

[root@compute-0 mnt]# lvcreate -l 29 -n houqdlv houqdvg
  # Note: -l is followed by a number of PEs (-L would be followed by a size instead);
  # -n specifies the name of the logical volume to create.
  Logical volume "houqdlv" created
[root@compute-0 mnt]# lvdisplay
  --- Logical volume ---
  LV Name                /dev/houqdvg/houqdlv
  # Note: later uses of this logical volume must use the full name /dev/houqdvg/houqdlv.
  VG Name                houqdvg
  LV UUID                T0L4cu-dqNC-PRHz-QQ3u-0ubi-VAWe-YquY7t
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                464.00 MiB
  Current LE             29
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     6144
  Block device           253:0
[root@compute-0 mnt]# mkfs -t ext3 /dev/houqdvg/houqdlv    # create a file system
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=512 blocks, Stripe width=1536 blocks
118784 inodes, 475136 blocks
23756 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67633152
58 block groups
8192 blocks per group, 8192 fragments per group
2048 inodes per group
Superblock backups stored on blocks:
        8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409

Writing inode tables: done
Creating journal (blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 21 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@compute-0 mnt]#
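Counting PEs by hand, as the -l 29 in Step 6 does, is easy to get wrong once the VG changes size. As a hedged alternative sketch: lvcreate also accepts percentage units with -l, so you can allocate everything that is free without knowing the PE count. The VG and LV names follow the article; check lvcreate(8) on your system for %FREE support:

# Create the LV from all free extents, without counting PEs by hand.
lvcreate -l 100%FREE -n houqdlv houqdvg

# Verify: Current LE should equal the VG's former Free PE count (29 here).
lvdisplay /dev/houqdvg/houqdlv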
Step 7: Mount the logical volume with the mount command, then define it in /etc/fstab:

[root@compute-0 mnt]# mkdir -p /mnt/lvm
[root@compute-0 mnt]# mount /dev/houqdvg/houqdlv /mnt/lvm/    # mount it
[root@compute-0 mnt]# mount                                   # view all mounts
/dev/sda2 on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/sda1 on /boot type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
vmware-vmblock on /var/run/vmblock-fuse type fuse.vmware-vmblock (rw,nosuid,nodev,default_permissions,allow_other)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
nfsd on /proc/fs/nfsd type nfsd (rw)
/dev/mapper/houqdvg-houqdlv on /mnt/lvm type ext3 (rw)
[root@compute-0 mnt]# vi /etc/fstab    # set automatic mounting at each boot

#
# /etc/fstab
# Created by anaconda on Wed Feb 27 13:44:14 2013
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=...                                   /          ext4    defaults        1 1
UUID=...                                   /boot      ext4    defaults        1 2
UUID=a90215c9-be26-4884-bd0e-3799c7ddba7b  swap       swap    defaults        0 0
tmpfs                   /dev/shm           tmpfs      defaults        0 0
devpts                  /dev/pts           devpts     gid=5,mode=620  0 0
sysfs                   /sys               sysfs      defaults        0 0
proc                    /proc              proc       defaults        0 0
/dev/houqdvg/houqdlv    /mnt/lvm           ext3       defaults        1 2

(2) LVM resizing

Step 1: Convert the first RAID array to a physical volume:

[root@compute-0 mnt]# pvcreate /dev/md0
  Writing physical volume data to disk "/dev/md0"
  Physical volume "/dev/md0" successfully created
[root@compute-0 mnt]#

Step 2: Add the physical volume to the existing volume group:

[root@compute-0 mnt]# vgextend houqdvg /dev/md0    # add /dev/md0 to the houqdvg volume group
  Volume group "houqdvg" successfully extended

Step 3: View the size of the volume group:

[root@compute-0 mnt]# vgdisplay
  --- Volume group ---
  VG Name               houqdvg
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               928.00 MiB
  PE Size               16.00 MiB
  Total PE              58               # note: the total PE count has doubled
  Alloc PE / Size       29 / 464.00 MiB
  Free  PE / Size       29 / 464.00 MiB  # note: 29 PEs are still unallocated
  VG UUID               vBlcd2-qt6Y-Bt1D-v63K-oIJv-a3Hm-tPzoue

Step 4: Grow the logical volume by 200 MB and view its size:

[root@compute-0 mnt]# lvdisplay
  --- Logical volume ---
  LV Name                /dev/houqdvg/houqdlv
  VG Name                houqdvg
  LV UUID                T0L4cu-dqNC-PRHz-QQ3u-0ubi-VAWe-YquY7t
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                464.00 MiB      # the size before resizing
  Current LE             29
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     6144
  Block device           253:0
[root@compute-0 mnt]# lvresize -L +200M /dev/houqdvg/houqdlv
  # Note: -L is followed directly by a size; alternatively, -l lets you specify a number of PEs.
  Rounding up size to full physical extent 208.00 MiB
  Extending logical volume houqdlv to 672.00 MiB
  Logical volume houqdlv successfully resized
[root@compute-0 mnt]# lvdisplay
  --- Logical volume ---
  LV Name                /dev/houqdvg/houqdlv
  VG Name                houqdvg
  LV UUID                T0L4cu-dqNC-PRHz-QQ3u-0ubi-VAWe-YquY7t
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                672.00 MiB      # the logical volume did grow by roughly 200 MB
  Current LE             42
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     6144
  Block device           253:0
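At this point the logical volume has grown but the file system has not; Step 5 below fixes that with resize2fs. As a hedged aside, newer LVM2 releases can do both in one step: the -r/--resizefs option of lvextend/lvresize calls fsadm to grow the file system right after extending the LV. Availability depends on your LVM version, so this is a sketch, not the article's method:

# One-step alternative on newer LVM2: extend the LV and the ext3 file
# system it carries in a single command (-r drives fsadm/resize2fs).
lvextend -r -L +200M /dev/houqdvg/houqdlv

# Afterwards df should immediately show the extra space:
df -h /mnt/lvm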
Step 5: Grow the file system by 200 MB:

[root@compute-0 mnt]# df /mnt/lvm/
Filesystem                   1K-blocks    Used  Available Use% Mounted on
/dev/mapper/houqdvg-houqdlv     460144   10543               3% /mnt/lvm
# Note: we have grown the logical volume /dev/houqdvg/houqdlv, but the file
# system here still shows a bit more than 400 MB.
[root@compute-0 mnt]# resize2fs /dev/houqdvg/houqdlv
# Note: this is what actually resizes the file system. The block size itself
# does not increase; rather, the number of block groups increases, so the
# corresponding space becomes larger.
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/houqdvg/houqdlv is mounted on /mnt/lvm; on-line resizing required
old desc_blocks = 2, new_desc_blocks = 3
Performing an on-line resize of /dev/houqdvg/houqdlv to 688128 (1k) blocks.
The filesystem on /dev/houqdvg/houqdlv is now 688128 blocks long.

[root@compute-0 mnt]# df /mnt/lvm/
Filesystem                   1K-blocks    Used  Available Use% Mounted on
/dev/mapper/houqdvg-houqdlv     666415   10789     621236   2% /mnt/lvm
# Note: you can see that the file system has indeed been expanded.

That is as far as this week's experiment goes. Next week we may cover system snapshots, building on these two weeks of experiments.
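For reference, here is the whole workflow from this experiment condensed into one hedged recap sketch. The device, VG, and LV names follow the article, and it assumes the RAID arrays /dev/md0 and /dev/md1 from both experiments already exist:

#!/bin/bash
# Recap of the experiment: RAID -> PV -> VG -> LV -> ext3 -> grow.
set -e

pvcreate /dev/md1                        # (1) physical volume
vgcreate -s 16M houqdvg /dev/md1         # (2) volume group with 16 MiB PEs
lvcreate -l 29 -n houqdlv houqdvg        # (3) logical volume from all 29 PEs
mkfs -t ext3 /dev/houqdvg/houqdlv        # (4) file system
mkdir -p /mnt/lvm
mount /dev/houqdvg/houqdlv /mnt/lvm/     # (5) mount (add to /etc/fstab to persist)

pvcreate /dev/md0                        # (6) grow: second PV...
vgextend houqdvg /dev/md0                #     ...added to the same VG
lvresize -L +200M /dev/houqdvg/houqdlv   # (7) grow the LV (rounds up to full PEs)
resize2fs /dev/houqdvg/houqdlv           # (8) grow the mounted ext3 on-line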