Disk Management: A Small Experiment

I. Requirements
1. Keep the data safe: the failure of any single disk must not lead to data loss, and I/O performance must also be taken into account.
2. Carve out two separate volumes, mounted at /web and /data.
3. The volumes must be able to grow dynamically.

II. Implementation

Step 1: Partition the disks

[root@serv01 ~]# fdisk /dev/sdb
[root@serv01 ~]# fdisk /dev/sdc
[root@serv01 ~]# fdisk /dev/sdd
[root@serv01 ~]# fdisk /dev/sde

Step 2: Create the RAID 5 array

[root@serv01 ~]# mdadm -C /dev/md5 -l 5 -n 3 /dev/sdb1 /dev/sdc1 /dev/sdd1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md5 started.
[root@serv01 ~]# mkfs.ext4 /dev/md5
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
262144 inodes, 1047552 blocks
52377 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1073741824
32 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736

Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 27 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

Step 3: Create the physical volume

[root@serv01 ~]# pvcreate /dev/md5
  Physical volume "/dev/md5" successfully created

Step 4: Create the volume group

[root@serv01 ~]# vgcreate myvg /dev/md5
  Volume group "myvg" successfully created

Step 5: Create the logical volumes

# Create logical volume mylv01
[root@serv01 ~]# lvcreate -L 1000M -n mylv01 myvg
  Logical volume "mylv01" created
# Create logical volume mylv02
[root@serv01 ~]# lvcreate -L 1000M -n mylv02 myvg
  Logical volume "mylv02" created
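At this point every layer of the stack exists: the md array, the physical volume, the volume group, and its two logical volumes. A minimal verification sketch, not part of the original session, that checks each layer with standard md/LVM commands (the exact output will differ per system):

# Hypothetical checks; run as root on the same host
cat /proc/mdstat                    # md5 should be an active raid5 with three members
mdadm -D /dev/md5 | grep -i state   # expect "clean" once the initial build finishes
pvs /dev/md5                        # the physical volume sitting on the array
vgs myvg                            # the volume group built on that PV
lvs myvg                            # the two 1000M logical volumes, mylv01 and mylv02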
Step 6: Create the directories and configuration files

# Generate the mdadm.conf file
[root@serv01 ~]# mdadm --detail --scan > /etc/mdadm.conf
# Create the /web directory
[root@serv01 ~]# mkdir /web
# Create the /data directory
[root@serv01 ~]# mkdir /data
# Write the mount entries into fstab
[root@serv01 ~]# echo "/dev/myvg/mylv01 /web ext4 defaults 1 2" >> /etc/fstab
[root@serv01 ~]# echo "/dev/myvg/mylv02 /data ext4 defaults 1 2" >> /etc/fstab
[root@serv01 ~]# tail -n 2 /etc/fstab
/dev/myvg/mylv01 /web ext4 defaults 1 2
/dev/myvg/mylv02 /data ext4 defaults 1 2

Step 7: Format the logical volumes

# Format mylv01
[root@serv01 ~]# mkfs.ext4 /dev/myvg/mylv01
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
64000 inodes, 256000 blocks
12800 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=264241152
8 block groups
32768 blocks per group, 32768 fragments per group
8000 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376

Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 28 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

# Format mylv02
[root@serv01 ~]# mkfs.ext4 /dev/myvg/mylv02
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
64000 inodes, 256000 blocks
12800 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=264241152
8 block groups
32768 blocks per group, 32768 fragments per group
8000 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376

Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 24 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

Step 8: Mount the volumes

# Mount /web
[root@serv01 ~]# mount /dev/myvg/mylv01 /web
# Mount /data
[root@serv01 ~]# mount /dev/myvg/mylv02 /data
# Check the disk usage
[root@serv01 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2             9.7G  1.1G  8.1G  12% /
tmpfs                 188M     0  188M   0% /dev/shm
/dev/sda1             194M   25M  160M  14% /boot
/dev/sda5             4.0G  137M  3.7G   4% /opt
/dev/sr0              3.4G  3.4G     0 100% /iso
/dev/mapper/myvg-mylv01
                      985M   18M  918M   2% /web
/dev/mapper/myvg-mylv02
                      985M   18M  918M   2% /data

Step 9: Simulate a disk failure

# Copy some files into /web
[root@serv01 ~]# cp /boot/* /web/
# Check the RAID 5 details
[root@serv01 ~]# mdadm -D /dev/md5
/dev/md5:
        Version : 1.2
  Creation Time : Fri Aug  2 00:35:07 2013
     Raid Level : raid5
     Array Size : 4190208 (4.00 GiB 4.29 GB)
  Used Dev Size : 2095104 (2046.34 MiB 2145.39 MB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Fri Aug  2 00:46:46 2013
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : serv01.host.com:5  (local to host serv01.host.com)
           UUID : 97c47faa:972aba90:2248d692:b7fc2b6f
         Events : 22

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       3       8       49        2      active sync   /dev/sdd1

# Wipe /dev/sdb (in fdisk, use the o command to write a new empty partition table) to simulate a failed disk
[root@serv01 ~]# fdisk /dev/sdb
[root@serv01 ~]# ls /web/
config-2.6.32-131.0.15.el6.x86_64         lost+found                             System.map-2.6.32-131.0.15.el6.x86_64
initramfs-2.6.32-131.0.15.el6.x86_64.img  symvers-2.6.32-131.0.15.el6.x86_64.gz  vmlinuz-2.6.32-131.0.15.el6.x86_64
# Check again: /dev/sdb is now marked as removed
[root@serv01 ~]# mdadm -D /dev/md5
/dev/md5:
        Version : 1.2
  Creation Time : Fri Aug  2 00:35:07 2013
     Raid Level : raid5
     Array Size : 4190208 (4.00 GiB 4.29 GB)
  Used Dev Size : 2095104 (2046.34 MiB 2145.39 MB)
   Raid Devices : 3
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Fri Aug  2 00:48:19 2013
          State : clean, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : serv01.host.com:5  (local to host serv01.host.com)
           UUID : 97c47faa:972aba90:2248d692:b7fc2b6f
         Events : 30

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       33        1      active sync   /dev/sdc1
       3       8       49        2      active sync   /dev/sdd1
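Wiping the partition table with fdisk is just one way to fake a failure. A more conventional sequence, shown here only as a hedged sketch (these commands were not run in the original session), is to mark the member faulty and remove it explicitly before adding a replacement:

# Hypothetical alternative to wiping the disk; same device names as above
mdadm --manage /dev/md5 --fail /dev/sdb1     # mark the member as faulty
mdadm --manage /dev/md5 --remove /dev/sdb1   # detach it from the array
mdadm -D /dev/md5                            # state should now read "clean, degraded"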
Step 10: Add a disk

# Add /dev/sde1 to the array
[root@serv01 ~]# mdadm --manage /dev/md5 --add /dev/sde1
mdadm: added /dev/sde1
# Check again: /dev/sde1 is now marked as active
[root@serv01 ~]# mdadm -D /dev/md5
/dev/md5:
        Version : 1.2
  Creation Time : Fri Aug  2 00:35:07 2013
     Raid Level : raid5
     Array Size : 4190208 (4.00 GiB 4.29 GB)
  Used Dev Size : 2095104 (2046.34 MiB 2145.39 MB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Fri Aug  2 00:49:19 2013
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : serv01.host.com:5  (local to host serv01.host.com)
           UUID : 97c47faa:972aba90:2248d692:b7fc2b6f
         Events : 51

    Number   Major   Minor   RaidDevice State
       4       8       65        0      active sync   /dev/sde1
       1       8       33        1      active sync   /dev/sdc1
       3       8       49        2      active sync   /dev/sdd1

# Check the RAID status
[root@serv01 ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md5 : active raid5 sde1[4] sdc1[1] sdd1[3]
      4190208 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>
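After --add, the array rebuilds onto /dev/sde1 in the background, and full redundancy is only restored once that recovery completes. A hedged sketch, not part of the original session, for watching and waiting for the rebuild and for getting alerts on future failures (the mail alert assumes a working mail setup on the host):

# Hypothetical monitoring commands
cat /proc/mdstat                                 # shows a "recovery = ...%" line while rebuilding
mdadm --wait /dev/md5                            # block until the resync/recovery has finished
mdadm --monitor --scan --daemonise --mail=root   # daemon that mails on future degraded/fail events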
Step 11: Extend the logical volume and grow the array

# The logical volume can be extended online
[root@serv01 ~]# lvextend -L +1G /dev/myvg/mylv01
  Extending logical volume mylv01 to 1.98 GiB
  Logical volume mylv01 successfully resized
# Grow the filesystem so the extra space takes effect
[root@serv01 ~]# resize2fs /dev/myvg/mylv01
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/myvg/mylv01 is mounted on /web; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 1
Performing an on-line resize of /dev/myvg/mylv01 to 518144 (4k) blocks.
The filesystem on /dev/myvg/mylv01 is now 518144 blocks long.

# Check again: the filesystem is larger
[root@serv01 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2             9.7G  1.1G  8.1G  12% /
tmpfs                 188M     0  188M   0% /dev/shm
/dev/sda1             194M   25M  160M  14% /boot
/dev/sda5             4.0G  137M  3.7G   4% /opt
/dev/mapper/myvg-mylv01
                      2.0G   36M  1.9G   2% /web
/dev/mapper/myvg-mylv02
                      985M   18M  918M   2% /data
/dev/sr0              3.4G  3.4G     0 100% /iso

# Add one more disk to the array
[root@serv01 ~]# mdadm --manage /dev/md5 --add /dev/sdf1
mdadm: added /dev/sdf1
[root@serv01 ~]# mdadm -D /dev/md5
/dev/md5:
        Version : 1.2
  Creation Time : Fri Aug  2 00:35:07 2013
     Raid Level : raid5
     Array Size : 4190208 (4.00 GiB 4.29 GB)
  Used Dev Size : 2095104 (2046.34 MiB 2145.39 MB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Fri Aug  2 00:56:13 2013
          State : clean
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

           Name : serv01.host.com:5  (local to host serv01.host.com)
           UUID : 97c47faa:972aba90:2248d692:b7fc2b6f
         Events : 52

    Number   Major   Minor   RaidDevice State
       4       8       65        0      active sync   /dev/sde1
       1       8       33        1      active sync   /dev/sdc1
       3       8       49        2      active sync   /dev/sdd1

       5       8       81        -      spare   /dev/sdf1

# Grow the array onto the new disk
[root@serv01 ~]# mdadm --grow /dev/md5 --raid-devices=4

[root@serv01 ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/md5
  VG Name               myvg
  PV Size               4.00 GiB / not usable 4.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              1022
  Free PE               266
  Allocated PE          756
  PV UUID               uZoEve-F3Dr-KSBL-tXpA-ZtX5-9ZPM-64uv06

# Resize the physical volume so LVM sees the new space
[root@serv01 ~]# pvresize /dev/md5
  Physical volume "/dev/md5" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized
[root@serv01 ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/md5
  VG Name               myvg
  PV Size               5.99 GiB / not usable 0
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              1534
  Free PE               778
  Allocated PE          756
  PV UUID               uZoEve-F3Dr-KSBL-tXpA-ZtX5-9ZPM-64uv06

# The reshape progress can be monitored with
[root@serv01 ~]# watch cat /proc/mdstat
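Once the reshape finishes and pvresize has published the new extents to LVM, the extra space can be handed to either logical volume. On reasonably recent LVM, lvextend can also run the filesystem resize itself via -r, collapsing the lvextend and resize2fs pair used above into one step. A hedged follow-up sketch, not part of the original session:

# Hypothetical follow-up once the reshape is complete
vgs myvg                              # confirm the free extents from the grown PV
lvextend -r -L +1G /dev/myvg/mylv02   # -r also resizes the mounted ext4 filesystem
df -h /data                           # /data should now be roughly 1 GiB larger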
