Implementing Software RAID (soft array) on a CentOS System


1. Create test users and modify the mount parameters


[root@localhost ~]# useradd user1    -- create two new users
[root@localhost ~]# useradd user2
[root@localhost ~]# mount -o remount,usrquota,grpquota /mnt/sdb    -- remount with the quota options
[root@localhost ~]# mount -l    -- view the mount options
/dev/mapper/VolGroup-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
/dev/sda1 on /boot type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
/dev/sdb1 on /mnt/sdb type ext4 (rw,usrquota,grpquota)
[root@localhost ~]# quotacheck -avug -mf    -- generate the two quota files
quotacheck: Your kernel probably supports journaled quota but you are not using it. Consider switching to journaled quota to avoid running quotacheck after an unclean shutdown.
quotacheck: Scanning /dev/sdb1 [/mnt/sdb] done
quotacheck: Cannot stat old user quota file: No such file or directory
quotacheck: Cannot stat old group quota file: No such file or directory
quotacheck: Cannot stat old user quota file: No such file or directory
quotacheck: Cannot stat old group quota file: No such file or directory
quotacheck: Checked 2 directories and 0 files
quotacheck: Old file not found.
quotacheck: Old file not found.
[root@localhost ~]# ll /mnt/sdb    -- view the two generated quota files
total 26
-rw-------. 1 root root  6144 Jan  9 17:59 aquota.group
-rw-------. 1 root root  6144 Jan  9 17:59 aquota.user
drwx------. 2 root root 12288 Jan  9 17:55 lost+found
[root@localhost ~]# quotaon -avug    -- enable the quota function
/dev/sdb1 [/mnt/sdb]: group quotas turned on
/dev/sdb1 [/mnt/sdb]: user quotas turned on
[root@localhost ~]# edquota -u user1    -- user1 may use at most 20 MB under /mnt/sdb (10 MB soft, 20 MB hard, in 1 KB blocks)
Disk quotas for user user1 (uid 500):
  Filesystem    blocks    soft    hard    inodes    soft    hard
  /dev/sdb1          0   10000   20000         0       0       0
[root@localhost ~]# edquota -u user2    -- user2 is limited to 10 MB (5 MB soft, 10 MB hard)
Disk quotas for user user2 (uid 501):
  Filesystem    blocks    soft    hard    inodes    soft    hard
  /dev/sdb1          0    5000   10000         0       0       0
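
Note that mount -o remount only lasts until the next reboot. To make the quota options permanent, the usual approach is an /etc/fstab entry; a minimal sketch, assuming the /dev/sdb1 on /mnt/sdb layout used above:

/dev/sdb1   /mnt/sdb   ext4   defaults,usrquota,grpquota   0 0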

2. Verify the quota


[root@localhost ~]# su - user1
[user1@localhost ~]$ cd /mnt/sdb
[user1@localhost sdb]$ dd if=/dev/zero of=12 bs=1M count=5    -- a 5 MB file is created with no warning, as expected
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.0525754 s, 99.7 MB/s
[user1@localhost sdb]$ ll -h 12
-rw-rw-r--. 1 user1 user1 5.0M Jan  9 12
[user1@localhost sdb]$ dd if=/dev/zero of=123 bs=1M count=21    -- trying to write 21 MB triggers the warning and fails at the 20 MB hard limit
sdb1: warning, user block quota exceeded.
sdb1: write failed, user block limit reached.
dd: writing `123': Disk quota exceeded
20+0 records in
19+0 records out
20475904 bytes (20 MB) copied, … s, 102 MB/s
[user1@localhost sdb]$ ll -h 123
-rw-rw-r--. 1 user1 user1 0 Jan  9 123
[user1@localhost sdb]$ exit
logout
[root@localhost ~]# su - user2    -- now test user2
[user2@localhost ~]$ cd /mnt/sdb
[user2@localhost sdb]$ dd if=/dev/zero of=23 bs=1M count=8    -- an 8 MB file is written successfully; only the soft-limit warning appears
sdb1: warning, user block quota exceeded.
8+0 records in
8+0 records out
8388608 bytes (8.4 MB) copied, 0.0923618 s, 90.8 MB/s
[user2@localhost sdb]$ ll -h 23    -- view the file size
-rw-rw-r--. 1 user2 user2 8.0M Jan  9 23
[user2@localhost sdb]$ dd if=/dev/zero of=23 bs=1M count=11    -- writing 11 MB fails at the 10 MB hard limit
sdb1: warning, user block quota exceeded.
sdb1: write failed, user block limit reached.
dd: writing `23': Disk quota exceeded
10+0 records in
9+0 records out
10235904 bytes (10 MB) copied, 0.106298 s, 96.3 MB/s
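
While logged in as a test user, current usage against the limits can also be checked directly with the quota command; the -s flag prints sizes in human-readable units:

[user2@localhost sdb]$ quota -s    -- shows blocks used, the soft/hard limits, and any running grace period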

3. View the quota configuration, modify the grace period, and disable quotas


[root@localhost ~]# quota -vu user1 user2    -- query quota information for the specified users
Disk quotas for user user1 (uid 500):
     Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
      /dev/sdb1       0   10000   20000               0       0       0
Disk quotas for user user2 (uid 501):
     Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
      /dev/sdb1   8193*    5000   10000   6days       1       0       0
[root@localhost ~]# repquota -av    -- report quota information for all users
*** Report for user quotas on device /dev/sdb1
Block grace time: 7days; Inode grace time: 7days
                        Block limits                File limits
User            used    soft    hard  grace    used  soft  hard  grace
----------------------------------------------------------------------
root      --      13       0       0              2     0     0
user1     --       0   10000   20000              0     0     0
user2     +-    8193    5000   10000  6days       1     0     0

Statistics:
Total blocks: 7
Data blocks: 1
Entries: 3
Used average: 3.000000
[root@localhost ~]# edquota -t    -- modify the grace period (block days and inode days)
Grace period before enforcing soft limits for users:
Time units may be: days, hours, minutes, or seconds
  Filesystem         Block grace period     Inode grace period
  /dev/sdb1                7days                  7days
[root@localhost ~]# vim /etc/warnquota.conf    -- edit the warning message sent to over-quota users
[root@localhost ~]# quotaoff /mnt/sdb    -- disable the quota function
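
For scripted setups, the same limits and grace periods can be set without the interactive editor using setquota from the quota package; a sketch reproducing the limits from step 1 (grace times are given in seconds, so 604800 = 7 days):

[root@localhost ~]# setquota -u user1 10000 20000 0 0 /dev/sdb1    -- block soft/hard, then inode soft/hard
[root@localhost ~]# setquota -u user2 5000 10000 0 0 /dev/sdb1
[root@localhost ~]# setquota -t -u 604800 604800 /dev/sdb1    -- block and inode grace periods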

4. Partition the disks and set the partition type for software RAID


[root@localhost ~]# sfdisk -l    -- check the hard disks in the system
Disk /dev/sda: 1044 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start   End   #cyls   #blocks   Id  System
/dev/sda1   *     0+    63-    64-    512000   83  Linux
/dev/sda2        63+  1044-   981-   7875584   8e  Linux LVM
/dev/sda3         0      -      0          0    0  Empty
/dev/sda4         0      -      0          0    0  Empty

Disk /dev/sdb: 74 cylinders, 255 heads, 63 sectors/track    -- second hard disk
Disk /dev/sdc: 79 cylinders, 255 heads, 63 sectors/track    -- third hard disk
Disk /dev/sdd: 74 cylinders, 255 heads, 63 sectors/track    -- fourth hard disk
Disk /dev/mapper/VolGroup-lv_root: 849 cylinders, 255 heads, 63 sectors/track
Disk /dev/mapper/VolGroup-lv_swap: 130 cylinders, 255 heads, 63 sectors/track

[root@localhost ~]# fdisk -cu /dev/sdb    -- partition the disk (/dev/sdc and /dev/sdd are partitioned the same way and are not shown here)
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x2255ec93.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First sector (2048-1196031, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-1196031, default 1196031): +100M

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 2
First sector (206848-1196031, default 206848):
Using default value 206848
Last sector, +sectors or +size{K,M,G} (206848-1196031, default 1196031): +100M

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 3
First sector (411648-1196031, default 411648):
Using default value 411648
Last sector, +sectors or +size{K,M,G} (411648-1196031, default 1196031): +100M

Command (m for help): t
Partition number (1-4): 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): fd
Changed system type of partition 2 to fd (Linux raid autodetect)

Command (m for help): t
Partition number (1-4): 3
Hex code (type L to list codes): fd
Changed system type of partition 3 to fd (Linux raid autodetect)

Command (m for help): p

Disk /dev/sdb: 612 MB, 612368384 bytes
255 heads, 63 sectors/track, 74 cylinders, total 1196032 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x2255ec93

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048      206847      102400   fd  Linux raid autodetect
/dev/sdb2          206848      411647      102400   fd  Linux raid autodetect
/dev/sdb3          411648      616447      102400   fd  Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
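
Since /dev/sdc and /dev/sdd need the same layout, the interactive dialog above can be scripted by feeding the answers to fdisk on standard input. A sketch, destructive by nature, assuming the target disk is empty (the blank lines accept the default first sector); scripted fdisk is version-sensitive, so verify on a test disk first:

fdisk -cu /dev/sdc <<'EOF'
n
p
1

+100M
n
p
2

+100M
n
p
3

+100M
t
1
fd
t
2
fd
t
3
fd
w
EOF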
[root@localhost ~]# partx -a /dev/sdb    -- ask the kernel to re-read the partition table ("device busy" here just means the kernel has these partitions registered already)
BLKPG: Device or resource busy
error adding partition 1
BLKPG: Device or resource busy
error adding partition 2
BLKPG: Device or resource busy
error adding partition 3
[root@localhost ~]# partx -a /dev/sdc
BLKPG: Device or resource busy
error adding partition 1
BLKPG: Device or resource busy
error adding partition 2
BLKPG: Device or resource busy
error adding partition 3
[root@localhost ~]# partx -a /dev/sdd
BLKPG: Device or resource busy
error adding partition 1
BLKPG: Device or resource busy
error adding partition 2
BLKPG: Device or resource busy
error adding partition 3
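
If the kernel keeps reporting stale partitions, partprobe from the parted package (or simply a reboot) is another way to ask it to reload the table:

[root@localhost ~]# partprobe /dev/sdb /dev/sdc /dev/sdd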

5. Make the first partition of the second and third hard disks into RAID 0.


[root@localhost ~]# mdadm --create /dev/md0 --raid-devices=2 --level=0 /dev/sd{b,c}1    -- RAID 0 from the first partitions of the second and third hard disks
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@localhost ~]# cat /proc/mdstat    -- view RAID information
Personalities : [raid0]
md0 : active raid0 sdc1[1] sdb1[0]
      224256 blocks super 1.2 512k chunks

unused devices: <none>
[root@localhost ~]# mkfs.ext4 /dev/md0    -- format the array
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=512 blocks, Stripe width=1024 blocks
56224 inodes, 224256 blocks
11212 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67371008
28 block groups
8192 blocks per group, 8192 fragments per group
2008 inodes per group
Superblock backups stored on blocks:
        8193, 24577, 40961, 57345, 73729, 204801, 221185

Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 38 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@localhost ~]# mount /dev/md0 /mnt/sdb
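
Beyond /proc/mdstat, mdadm itself can report the array's level, chunk size, and per-member state:

[root@localhost ~]# mdadm --detail /dev/md0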

6. Make the second partitions of the second and third hard disks into RAID 1.


[root@localhost ~]# mdadm --create /dev/md1 --raid-devices=2 --level=1 /dev/sd{b,c}2
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? (y/n) y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
[root@localhost ~]# mkfs.ext4 /dev/md1
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=0 blocks, Stripe width=0 blocks
28112 inodes, 112320 blocks
5616 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67371008
14 block groups
8192 blocks per group, 8192 fragments per group
2008 inodes per group
Superblock backups stored on blocks:
        8193, 24577, 40961, 57345, 73729

Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 35 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@localhost ~]# mount /dev/md1 /mnt/sdb1/
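
Arrays are normally reassembled by scanning at boot, but recording them in mdadm's configuration file keeps the device names stable across reboots; a sketch using the file location conventional on CentOS 6 (verify the path on your system):

[root@localhost ~]# mdadm --detail --scan >> /etc/mdadm.conf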

7. Make the third partition of the second, third, and fourth hard disks into RAID 5.


[root@localhost ~]# mdadm --create /dev/md2 --raid-devices=3 --level=5 /dev/sd{b,c,d}3
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md2 started.
[root@localhost ~]# mkfs.ext4 /dev/md2
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=512 blocks, Stripe width=1024 blocks
56224 inodes, 224256 blocks
11212 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67371008
28 block groups
8192 blocks per group, 8192 fragments per group
2008 inodes per group
Superblock backups stored on blocks:
        8193, 24577, 40961, 57345, 73729, 204801, 221185

Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 35 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@localhost ~]# mount /dev/md2 /mnt/sdb2/
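
A RAID 5 array can also be created with a hot spare so a failed member is replaced automatically. A sketch, where /dev/sdd2, unused in this walkthrough, stands in as the spare (any free partition of matching size works):

[root@localhost ~]# mdadm --create /dev/md2 --raid-devices=3 --spare-devices=1 --level=5 /dev/sd{b,c,d}3 /dev/sdd2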

8. View RAID information


[root@localhost ~]# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]
md2 : active raid5 sdd3[3] sdc3[1] sdb3[0]
      224256 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

md1 : active raid1 sdc2[1] sdb2[0]
      112320 blocks super 1.2 [2/2] [UU]

md0 : active raid0 sdc1[1] sdb1[0]
      224256 blocks super 1.2 512k chunks

unused devices: <none>
[root@localhost ~]# df -TH
Filesystem                   Type   Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root ext4   6.9G  6.4G  166M  98% /
tmpfs                        tmpfs  262M     0  262M   0% /dev/shm
/dev/sda1                    ext4   508M   48M  435M  10% /boot
/dev/md0                     ext4   223M  6.4M  205M   3% /mnt/sdb
/dev/md1                     ext4   112M  5.8M  100M   6% /mnt/sdb1
/dev/md2                     ext4   223M  6.4M  205M   3% /mnt/sdb2
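
During the initial sync of md1 and md2, or any later rebuild, progress shows up in the same file and can be polled:

[root@localhost ~]# watch -n 1 cat /proc/mdstat    -- refresh every second; Ctrl-C to quit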

9. RAID fault recovery and using LVM on a RAID device


[root@localhost ~]# mdadm -a /dev/md2 /dev/sdd1    -- add a spare partition to the array
mdadm: added /dev/sdd1
[root@localhost ~]# mdadm -f /dev/md2 /dev/sdd3    -- mark the third partition in the RAID 5 array as faulty
mdadm: set /dev/sdd3 faulty in /dev/md2
[root@localhost ~]# mdadm -r /dev/md2 /dev/sdd3    -- remove the faulty partition from the array
mdadm: hot removed /dev/sdd3 from /dev/md2
[root@localhost ~]# cat /proc/mdstat    -- the spare /dev/sdd1 has taken over in md2
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]
md2 : active raid5 sdd1[4] sdc3[1] sdb3[0]
      224256 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

md1 : active raid1 sdc2[1] sdb2[0]
      112320 blocks super 1.2 [2/2] [UU]

md0 : active raid0 sdc1[1] sdb1[0]
      224256 blocks super 1.2 512k chunks

unused devices: <none>
[root@localhost ~]# pvcreate /dev/md2    -- turn the RAID 5 device into a physical volume
  Physical volume "/dev/md2" successfully created
[root@localhost ~]# vgcreate vg0 /dev/md2    -- create a volume group from the physical volume
  Volume group "vg0" successfully created
[root@localhost ~]# lvcreate -L 150M -n test /dev/vg0    -- carve out a logical volume
  Rounding up size to full physical extent 152.00 MiB
  Logical volume "test" created
[root@localhost ~]# mkfs.ext4 /dev/vg0/test    -- format the logical volume
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=512 blocks, Stripe width=1024 blocks
38912 inodes, 155648 blocks
7782 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67371008
19 block groups
8192 blocks per group, 8192 fragments per group
2048 inodes per group
Superblock backups stored on blocks:
        8193, 24577, 40961, 57345, 73729

Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 36 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@localhost ~]# mount /dev/vg0/test /mnt/sdb2/    -- mount the logical volume
[root@localhost ~]# df -TH
Filesystem                   Type   Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root ext4   6.9G  6.4G  166M  98% /
tmpfs                        tmpfs  262M     0  262M   0% /dev/shm
/dev/sda1                    ext4   508M   48M  435M  10% /boot
/dev/md0                     ext4   223M  6.4M  205M   3% /mnt/sdb
/dev/md1                     ext4   112M  5.8M  100M   6% /mnt/sdb1
/dev/mapper/vg0-test         ext4   155M  5.8M  141M   4% /mnt/sdb2
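
One payoff of layering LVM on top of the array is online growth; a minimal sketch, assuming vg0 still has free extents for the ext4 volume created above:

[root@localhost ~]# lvextend -L +50M /dev/vg0/test    -- grow the logical volume
[root@localhost ~]# resize2fs /dev/vg0/test    -- grow the ext4 filesystem to match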


Implementation of logical volumes (LVM) in Linux: http://tongcheng.blog.51cto.com/6214144/1350144
