Comprehensive use of RAID and LVM in Linux
LVM can flexibly resize filesystems, but it provides neither the performance boost nor the hardware redundancy of a disk array (a snapshot is a backup of data, not of hardware). A disk array (RAID) provides performance and redundancy, but cannot resize filesystems the way LVM can. To get both capabilities at once, we build LVM on top of RAID.
Objective: test an architecture with an LVM system built on top of a RAID array;
Requirement: disk-management skills covering both RAID and LVM;
What should we do? The following is a step-by-step walkthrough:
1. First, restore the system to a clean state:
Unmount any previously mounted filesystems with umount;
Remove their entries from /etc/fstab so they are not mounted automatically at boot;
Use fdisk to delete the old partitions.
When you are done, the system should look like the output below (each of /dev/sd{b,c,d,e,f} had previously been formatted as ext3 with mkfs):
Disk /dev/sdb: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/sdc: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdc doesn't contain a valid partition table
Disk /dev/sdd: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdd doesn't contain a valid partition table
Disk /dev/sde: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sde doesn't contain a valid partition table
Disk /dev/sdf: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdf doesn't contain a valid partition table
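The step-1 cleanup can be sketched as a small script. It is a sketch, not a definitive procedure: the device names and the /etc/fstab edits are the ones assumed in this walkthrough, and the function only prints the commands; pipe its output through sh as root to actually run them.

```shell
#!/bin/sh
# Sketch of the step-1 cleanup for the five disks used here.
# The function only PRINTS the commands it would run.
cleanup_cmds() {
    for dev in "$@"; do
        echo "umount $dev"                              # detach if mounted
        echo "sed -i '\\|^$dev |d' /etc/fstab"          # drop the fstab entry
        echo "dd if=/dev/zero of=$dev bs=512 count=1"   # wipe the partition table
    done
}
cleanup_cmds /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
```

Printing instead of executing makes the sketch safe to review first; zeroing the first sector removes the partition table without driving fdisk interactively.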
2. Create the RAID. We use the five 8 GB disks /dev/sd{b,c,d,e,f} to build a RAID-5 with four active devices and one spare disk:
[root@linux ~]# mdadm --create --auto=yes /dev/md0 --level=5 --raid-devices=4 --spare-devices=1 /dev/sd{b,c,d,e,f}
mdadm: /dev/sdb appears to contain an ext2fs file system
    size=8388608K  mtime=Thu Jan  1 08:00:00 1970
mdadm: /dev/sdc appears to contain an ext2fs file system
    size=8388608K  mtime=Thu Jan  1 08:00:00 1970
mdadm: /dev/sdd appears to contain an ext2fs file system
    size=8388608K  mtime=Thu Jan  1 08:00:00 1970
mdadm: /dev/sde appears to contain an ext2fs file system
    size=8388608K  mtime=Thu Jan  1 08:00:00 1970
mdadm: /dev/sdf appears to contain an ext2fs file system
    size=8388608K  mtime=Thu Jan  1 08:00:00 1970
Continue creating array? y
mdadm: array /dev/md0 started.
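The capacity arithmetic behind this layout: RAID-5 stores one device's worth of parity, so usable space is (active devices - 1) times the per-device size, and the spare contributes nothing until it takes over. A quick sanity check, using the per-device size mdadm reports for this array:

```shell
#!/bin/sh
# RAID-5 usable capacity = (active devices - 1) * per-device size.
# Sizes are in 1 KB blocks, as reported by `mdadm --detail` below.
raid_devices=4          # --raid-devices=4 (the spare does not count)
used_dev_size=8388544   # per-device payload
array_size=$(( (raid_devices - 1) * used_dev_size ))
echo "$array_size"      # 25165632, the Array Size mdadm reports
```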
3. Set up LVM on the array. We take all the defaults (including the PE size), name the volume group raidvg and the logical volume raidlv. The basic procedure:
[root@linux ~]# pvcreate /dev/md0
  Physical volume "/dev/md0" successfully created
[root@linux ~]# vgcreate raidvg /dev/md0
  /dev/cdrom: open failed: Read-only file system
  Attempt to close device '/dev/cdrom' which is not open.
  /dev/cdrom: open failed: Read-only file system
  Attempt to close device '/dev/cdrom' which is not open.
  Volume group "raidvg" successfully created
[root@linux ~]# lvcreate -l 6143 -n raidlv raidvg
  Logical volume "raidlv" created
[root@linux ~]# lvdisplay
  --- Logical volume ---
  LV Name                /dev/raidvg/raidlv
  VG Name                raidvg
  LV UUID                rBySS0-JxZ6-ANYe-Vp8G-xlUd-Rz1x-G6NjnT
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                24.00 GB
  Current LE             6143
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
    - currently set to     768
  Block device           253:0
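Where the "-l 6143" figure comes from: the PV carries 6143 free physical extents of the default 4 MB each (a little space goes to LVM metadata, so it is not a round 6144), and allocating all of them yields the 24.00 GB logical volume. The arithmetic as a sketch:

```shell
#!/bin/sh
# PE arithmetic for `lvcreate -l 6143 -n raidlv raidvg`.
pe_size_mb=4    # default PE size
total_pe=6143   # free PEs on the /dev/md0 PV
lv_size_mb=$(( total_pe * pe_size_mb ))
echo "$lv_size_mb MB"   # 24572 MB, which lvdisplay rounds to 24.00 GB
```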
4. Format and mount:
[root@linux ~]# mkdir /mnt/raidlvm
[root@linux ~]# mkfs -t ext3 /dev/raidvg/raidlv
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
3145728 inodes, 6290432 blocks
314521 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
192 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 21 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@linux ~]# mount /dev/raidvg/raidlv /mnt/raidlvm/
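A minimal post-mount check can be scripted; this is a sketch assuming the /mnt/raidlvm mount point used above. The helper reads a mount table on stdin so it can be exercised without the array present:

```shell
#!/bin/sh
# Reports whether a given mount point carries a mounted ext3 filesystem,
# by scanning the kernel's mount table.
is_ext3_mounted() {   # $1 = mount point; mount table lines on stdin
    grep -q " $1 ext3 "
}
if is_ext3_mounted /mnt/raidlvm < /proc/mounts; then
    echo "/mnt/raidlvm is mounted"
else
    echo "/mnt/raidlvm is not mounted"
fi
```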
5. Set up automatic assembly and mounting at boot:

[root@linux raidlvm]# mdadm --detail /dev/md0 | grep UUID
           UUID : 99de722a:bfd56556:7b4248e1:3bf3f4f9
[root@linux raidlvm]# cat /etc/mdadm.conf
ARRAY /dev/md0 UUID=99de722a:bfd56556:7b4248e1:3bf3f4f9
[root@linux raidlvm]# cat /etc/fstab | grep /mnt/raidlvm
/dev/raidvg/raidlv /mnt/raidlvm ext3 defaults 1 2

6. Check:
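The UUID comparison above can also be scripted: the UUID recorded in /etc/mdadm.conf must match the one the running array reports, or the array will not assemble at boot. A sketch (the helper name is made up; the sample strings are the values from this walkthrough):

```shell
#!/bin/sh
# Compares the UUID stored in an mdadm.conf ARRAY line against the
# UUID reported by `mdadm --detail`.
uuid_from_conf() {   # extracts UUID=... from an ARRAY line on stdin
    sed -n 's/.*UUID=\([0-9a-f:]*\).*/\1/p'
}
conf_line='ARRAY /dev/md0 UUID=99de722a:bfd56556:7b4248e1:3bf3f4f9'
detail_uuid='99de722a:bfd56556:7b4248e1:3bf3f4f9'
if [ "$(echo "$conf_line" | uuid_from_conf)" = "$detail_uuid" ]; then
    echo "UUID match"
else
    echo "UUID mismatch"
fi
```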
[root@linux raidlvm]# mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Fri Feb 17 22:26:44 2012
     Raid Level : raid5
     Array Size : 25165632 (24.00 GiB 25.77 GB)
  Used Dev Size : 8388544 (8.00 GiB 8.59 GB)
   Raid Devices : 4
  Total Devices : 5
Preferred Minor : 0
    Persistence : Superblock is persistent
    Update Time : Fri Feb 17 22:39:11 2012
          State : clean
 Active Devices : 4
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 1
         Layout : left-symmetric
     Chunk Size : 64K
           UUID : 99de722a:bfd56556:7b4248e1:3bf3f4f9
         Events : 0.2

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       3       8       64        3      active sync   /dev/sde
       4       8       80        -      spare   /dev/sdf

[root@linux raidlvm]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sde[3] sdf[4](S) sdd[2] sdc[1] sdb[0]
      25165632 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

unused devices: <none>
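Reading the "[4/4] [UUUU]" fields of /proc/mdstat: every "U" is a working member, and any "_" marks a failed slot, so a degraded array is easy to flag from a script. A sketch (the helper name is an assumption, not an mdadm tool):

```shell
#!/bin/sh
# Flags a degraded array from the "[n/m] [UU...]" status fields of
# a /proc/mdstat line: any "_" means a member has failed.
check_md_status() {   # $1 = the status fields
    case "$1" in
        *_*) echo "degraded" ;;
        *)   echo "clean" ;;
    esac
}
check_md_status "[4/4] [UUUU]"   # clean
```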
[root@linux raidlvm]# pvscan
  /dev/cdrom: open failed: Read-only file system
  Attempt to close device '/dev/cdrom' which is not open.
  PV /dev/md0   VG raidvg   lvm2 [24.00 GB / 0    free]
  Total: 1 [24.00 GB] / in use: 1 [24.00 GB] / in no VG: 0 [0   ]
[root@linux raidlvm]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/md0
  VG Name               raidvg
  PV Size               24.00 GB / not usable 3.81 MB
  Allocatable           yes (but full)
  PE Size (KByte)       4096
  Total PE              6143
  Free PE               0
  Allocated PE          6143
  PV UUID               KgwVH9-HwTG-q4it-z0Ps-ACac-Si1y-8RxTkx
[root@linux raidlvm]# vgscan
  Reading all physical volumes.  This may take a while...
  /dev/cdrom: open failed: Read-only file system
  Attempt to close device '/dev/cdrom' which is not open.
  Found volume group "raidvg" using metadata type lvm2
[root@linux raidlvm]# vgdisplay
  --- Volume group ---
  VG Name               raidvg
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               24.00 GB
  PE Size               4.00 MB
  Total PE              6143
  Alloc PE / Size       6143 / 24.00 GB
  Free  PE / Size       0 / 0
  VG UUID               zlM0TJ-fjR0-b2kO-rCpO-D6L9-zw0m-W3SVzp
[root@linux raidlvm]# lvscan
  ACTIVE            '/dev/raidvg/raidlv' [24.00 GB] inherit
[root@linux raidlvm]# lvdisplay
  --- Logical volume ---
  LV Name                /dev/raidvg/raidlv
  VG Name                raidvg
  LV UUID                rBySS0-JxZ6-ANYe-Vp8G-xlUd-Rz1x-G6NjnT
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                24.00 GB
  Current LE             6143
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
    - currently set to     768
  Block device           253:0
[root@linux ~]# df
Filesystem                1K-blocks      Used Available Use% Mounted on
/dev/sda3                   5991232   2662984   3019000  47% /
/dev/sda1                    101086     11373     84494  12% /boot
tmpfs                        517548         0    517548   0% /dev/shm
/dev/mapper/raidvg-raidlv  24766844    176204  23332556   1% /mnt/raidlvm
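For reference, df's Use% column is used / (used + available), rounded up (GNU df behaviour, stated here as an assumption). Checking the raidlv row with its 1K-block counts:

```shell
#!/bin/sh
# Reproduces df's Use% for /dev/mapper/raidvg-raidlv from its
# Used and Available 1K-block counts, using a ceiling division.
used=176204
avail=23332556
pct=$(( (used * 100 + used + avail - 1) / (used + avail) ))   # ceiling
echo "${pct}%"   # 1%
```

The gap between the 24766844-block total and used + available is the 5% of blocks ext3 reserves for root.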
[root@linux ~]# cd /mnt/raidlvm/
[root@linux raidlvm]# ll
total 20
drwx------ 2 root root 16384 02-17 22:37 lost+found
-rw-r--r-- 1 root root    60 02-17 22:38 tt
[root@linux raidlvm]# cat tt