Disk management: a small lab

Lab requirements:
1. Ensure data security: the failure of any single disk must not cause data loss, and I/O performance should also be taken into account.
2. Create two independent partitions, /web and /data.
3. The partition size must be dynamically expandable.

Implementation

Step 1: Partition the disks

[root@serv01 ~]# fdisk /dev/sdb
[root@serv01 ~]# fdisk /dev/sdc
[root@serv01 ~]# fdisk /dev/sdd
[root@serv01 ~]# fdisk /dev/sde
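The interactive fdisk dialog is omitted above; on each disk a single partition is created for use as a RAID member. As a rough sketch (assuming blank disks and that each partition should be a primary partition of type fd, Linux raid autodetect), the same result could be scripted by feeding fdisk its keystrokes:

# Sketch only: replay the fdisk keystrokes on each member disk.
# n = new partition, p = primary, 1 = partition number, the two blank
# lines accept the default first/last sectors, t/fd sets the type to
# Linux raid autodetect, and w writes the table and exits.
for d in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
  fdisk "$d" <<'EOF'
n
p
1


t
fd
w
EOF
done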
Step 2: Create the RAID 5 array

[root@serv01 ~]# mdadm -C /dev/md5 -l 5 -n 3 /dev/sdb1 /dev/sdc1 /dev/sdd1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md5 started.
[root@serv01 ~]# mkfs.ext4 /dev/md5
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
262144 inodes, 1047552 blocks
52377 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1073741824
32 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736

Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 27 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

Step 3: Create a physical volume

[root@serv01 ~]# pvcreate /dev/md5
  Physical volume "/dev/md5" successfully created

Step 4: Create a volume group

[root@serv01 ~]# vgcreate myvg /dev/md5
  Volume group "myvg" successfully created

Step 5: Create the logical volumes

# Create logical volume mylv01
[root@serv01 ~]# lvcreate -L 1000M -n mylv01 myvg
  Logical volume "mylv01" created
# Create logical volume mylv02
[root@serv01 ~]# lvcreate -L 1000M -n mylv02 myvg
  Logical volume "mylv02" created

Step 6: Create the related directories and configuration files

# Create the mdadm.conf file
[root@serv01 ~]# mdadm --detail --scan > /etc/mdadm.conf
# Create the /web directory
[root@serv01 ~]# mkdir /web
# Create the /data directory
[root@serv01 ~]# mkdir /data
# Append the mount information to the fstab file
[root@serv01 ~]# echo "/dev/myvg/mylv01 /web ext4 defaults 1 2" >> /etc/fstab
[root@serv01 ~]# echo "/dev/myvg/mylv02 /data ext4 defaults 1 2" >> /etc/fstab
[root@serv01 ~]# tail -n 2 /etc/fstab
/dev/myvg/mylv01 /web ext4 defaults 1 2
/dev/myvg/mylv02 /data ext4 defaults 1 2

Step 7: Format the logical volumes

# Format mylv01
[root@serv01 ~]# mkfs.ext4 /dev/myvg/mylv01
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
64000 inodes, 256000 blocks
12800 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=264241152
8 block groups
32768 blocks per group, 32768 fragments per group
8000 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376

Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 28 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

# Format mylv02
[root@serv01 ~]# mkfs.ext4 /dev/myvg/mylv02
mke2fs 1.41.12 (17-May-2010)
[... same output as for mylv01; the check interval here is 24 mounts ...]

Step 8: Mount the filesystems

# Mount /web
[root@serv01 ~]# mount /dev/myvg/mylv01 /web
# Mount /data
[root@serv01 ~]# mount /dev/myvg/mylv02 /data
# View the disk information
[root@serv01 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2             9.7G  1.1G  8.1G  12% /
tmpfs                 188M     0  188M   0% /dev/shm
/dev/sda1             194M   25M  160M  14% /boot
/dev/sda5             4.0G  137M  3.7G   4% /opt
/dev/sr0              3.4G  3.4G     0 100% /iso
/dev/mapper/myvg-mylv01
                      985M   18M  918M   2% /web
/dev/mapper/myvg-mylv02
                      985M   18M  918M   2% /data
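Since a typo in /etc/fstab only surfaces at the next boot, it is worth validating the new entries now. A minimal check, assuming nothing is holding /web or /data open:

# Unmount both volumes, then let mount -a remount everything listed
# in /etc/fstab; an error here points at a bad fstab line, caught now
# rather than at boot time.
umount /web /data
mount -a
# Both mounts should reappear:
df -h | grep -E '/(web|data)$'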
Step 9: Simulate a disk failure

# Copy some files to the web directory
[root@serv01 ~]# cp /boot/* /web/
# View the RAID 5 details
[root@serv01 ~]# mdadm -D /dev/md5
/dev/md5:
        Version : 1.2
  Creation Time : Fri Aug  2 00:35:07 2013
     Raid Level : raid5
     Array Size : 4190208 (4.00 GiB 4.29 GB)
  Used Dev Size : 2095104 (2046.34 MiB 2145.39 MB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Fri Aug  2 00:46:46 2013
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : serv01.host.com:5  (local to host serv01.host.com)
           UUID : 97c47faa:972aba90:2248d692:b7fc2b6f
         Events : 22

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       3       8       49        2      active sync   /dev/sdd1

# Wipe the partition table of /dev/sdb to simulate a failed disk
[root@serv01 ~]# fdisk /dev/sdb
# The files on /web are still intact
[root@serv01 ~]# ls /web/
config-2.6.32-131.0.15.el6.x86_64         symvers-2.6.32-131.0.15.el6.x86_64.gz
initramfs-2.6.32-131.0.15.el6.x86_64.img  System.map-2.6.32-131.0.15.el6.x86_64
lost+found                                vmlinuz-2.6.32-131.0.15.el6.x86_64
# View the details again; /dev/sdb is now marked as removed
[root@serv01 ~]# mdadm -D /dev/md5
/dev/md5:
        Version : 1.2
  Creation Time : Fri Aug  2 00:35:07 2013
     Raid Level : raid5
     Array Size : 4190208 (4.00 GiB 4.29 GB)
  Used Dev Size : 2095104 (2046.34 MiB 2145.39 MB)
   Raid Devices : 3
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Fri Aug  2 00:48:19 2013
          State : clean, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : serv01.host.com:5  (local to host serv01.host.com)
           UUID : 97c47faa:972aba90:2248d692:b7fc2b6f
         Events : 30

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       33        1      active sync   /dev/sdc1
       3       8       49        2      active sync   /dev/sdd1

Step 10: Add a replacement disk

# Add the /dev/sde1 partition to the array
[root@serv01 ~]# mdadm --manage /dev/md5 --add /dev/sde1
mdadm: added /dev/sde1
# Check again; /dev/sde1 is now marked as active
[root@serv01 ~]# mdadm -D /dev/md5
/dev/md5:
        Version : 1.2
  Creation Time : Fri Aug  2 00:35:07 2013
     Raid Level : raid5
     Array Size : 4190208 (4.00 GiB 4.29 GB)
  Used Dev Size : 2095104 (2046.34 MiB 2145.39 MB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Fri Aug  2 00:49:19 2013
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : serv01.host.com:5  (local to host serv01.host.com)
           UUID : 97c47faa:972aba90:2248d692:b7fc2b6f
         Events : 51

    Number   Major   Minor   RaidDevice State
       4       8       65        0      active sync   /dev/sde1
       1       8       33        1      active sync   /dev/sdc1
       3       8       49        2      active sync   /dev/sdd1

# View the RAID information
[root@serv01 ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md5 : active raid5 sde1[4] sdc1[1] sdd1[3]
      4190208 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>

Step 11: Extend a logical volume

# Increase the size of the logical volume
[root@serv01 ~]# lvextend -L +1G /dev/myvg/mylv01
  Extending logical volume mylv01 to 1.98 GiB
  Logical volume mylv01 successfully resized
# Grow the filesystem into the added space
[root@serv01 ~]# resize2fs /dev/myvg/mylv01
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/myvg/mylv01 is mounted on /web; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 1
Performing an on-line resize of /dev/myvg/mylv01 to 518144 (4k) blocks.
The filesystem on /dev/myvg/mylv01 is now 518144 blocks long.
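The two commands above can usually be combined: lvextend accepts a -r (--resizefs) flag that invokes the filesystem resizer itself after extending the volume. As a hypothetical sketch, applied here to mylv02 rather than repeating the step on mylv01 (not part of the original lab):

# Grow mylv02 by 500M and resize its ext4 filesystem in one step.
# -r asks lvextend to run the resize after extending the LV; this
# works online for a mounted ext4 filesystem.
lvextend -r -L +500M /dev/myvg/mylv02
df -h /data    # should now show roughly 1.5G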
# Check again; the filesystem is now larger
[root@serv01 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2             9.7G  1.1G  8.1G  12% /
tmpfs                 188M     0  188M   0% /dev/shm
/dev/sda1             194M   25M  160M  14% /boot
/dev/sda5             4.0G  137M  3.7G   4% /opt
/dev/mapper/myvg-mylv01
                      2.0G   36M  1.9G   2% /web
/dev/mapper/myvg-mylv02
                      985M   18M  918M   2% /data
/dev/sr0              3.4G  3.4G     0 100% /iso

Step 12: Grow the RAID 5 array and the physical volume

# Add the /dev/sdf1 partition; it joins the array as a spare
[root@serv01 ~]# mdadm --manage /dev/md5 --add /dev/sdf1
mdadm: added /dev/sdf1
[root@serv01 ~]# mdadm -D /dev/md5
/dev/md5:
        Version : 1.2
  Creation Time : Fri Aug  2 00:35:07 2013
     Raid Level : raid5
     Array Size : 4190208 (4.00 GiB 4.29 GB)
  Used Dev Size : 2095104 (2046.34 MiB 2145.39 MB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Fri Aug  2 00:56:13 2013
          State : clean
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

           Name : serv01.host.com:5  (local to host serv01.host.com)
           UUID : 97c47faa:972aba90:2248d692:b7fc2b6f
         Events : 52

    Number   Major   Minor   RaidDevice State
       4       8       65        0      active sync   /dev/sde1
       1       8       33        1      active sync   /dev/sdc1
       3       8       49        2      active sync   /dev/sdd1

       5       8       81        -      spare   /dev/sdf1

# Grow the array onto the new disk
[root@serv01 ~]# mdadm --grow /dev/md5 --raid-devices=4
[root@serv01 ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/md5
  VG Name               myvg
  PV Size               4.00 GiB / not usable 4.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              1022
  Free PE               266
  Allocated PE          756
  PV UUID               uZoEve-F3Dr-KSBL-tXpA-ZtX5-9ZPM-64uv06
# Resize the physical volume so that LVM sees the new space
[root@serv01 ~]# pvresize /dev/md5
  Physical volume "/dev/md5" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized
[root@serv01 ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/md5
  VG Name               myvg
  PV Size               5.99 GiB / not usable 0
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              1534
  Free PE               778
  Allocated PE          756
  PV UUID               uZoEve-F3Dr-KSBL-tXpA-ZtX5-9ZPM-64uv06
# You can watch the reshape progress as it runs
[root@serv01 ~]# watch cat /proc/mdstat
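Once the reshape completes, the Free PE count reported by pvdisplay (778 PEs at 4.00 MiB each, roughly 3 GiB) is available to either logical volume. A closing sketch of the remaining expansion, assuming the reshape has finished and /data should receive another gigabyte (this step is not shown in the original lab):

# Wait until /proc/mdstat no longer shows a reshape line, then use
# the freed extents to grow /data as well.
lvextend -L +1G /dev/myvg/mylv02
resize2fs /dev/myvg/mylv02    # online resize; /data can stay mounted
df -h /data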