Linux RAID5 + Backup Disk Test


RAID5 disk array technology requires at least 3 disks, plus 1 backup (spare) disk. The spare is normally idle and is automatically swapped in if a drive in the RAID array group fails. So a total of 4 hard disk devices need to be added to the virtual machine.
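Before building the array it can be worth confirming that all four disks are visible. A minimal check (the device names sdb through sde match the transcript below, but they may differ on other systems):

lsblk /dev/sd[b-e]    # list the four 2 GB disks that will form the array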

[root@victory ~]# fdisk -l

Disk /dev/sdb: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sde: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdd: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sda: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0006ae1e

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048      616447      307200   83  Linux
/dev/sda2          616448    10485759     4934656   8e  Linux LVM

Disk /dev/sdc: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/rhel-root: 3976 MB, 3976200192 bytes, 7766016 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/rhel-swap: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Now create the RAID5 disk array group with a backup disk. The -n 3 parameter is the number of disks used for the RAID5 array itself, the -l 5 parameter is the RAID level, and the -x 1 parameter adds 1 backup disk. When you inspect the /dev/md0 disk array group with mdadm -D, you can see the backup disk waiting on standby.

[root@victory ~]# mdadm -Cv /dev/md0 -n 3 -l 5 -x 1 /dev/sdb /dev/sdc /dev/sdd /dev/sde
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 2095104K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@victory ~]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Aug  1 23:31:26 2017
     Raid Level : raid5
     Array Size : 4190208 (4.00 GiB 4.29 GB)
  Used Dev Size : 2095104 (2046.34 MiB 2145.39 MB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Tue Aug  1 23:31:39 2017
          State : clean, degraded, recovering
 Active Devices : 2
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 2

         Layout : left-symmetric
     Chunk Size : 512K

 Rebuild Status : 65% complete

           Name : victory.rusky.com:0  (local to host victory.rusky.com)
           UUID : ca1f08c6:07e51bc7:668168b7:2bb84496
         Events : 11

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       4       8       48        2      spare rebuilding   /dev/sdd   <-- the array is still being built
       3       8       64        -      spare   /dev/sde
[root@victory ~]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Aug  1 23:31:26 2017
     Raid Level : raid5
     Array Size : 4190208 (4.00 GiB 4.29 GB)
  Used Dev Size : 2095104 (2046.34 MiB 2145.39 MB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Tue Aug  1 23:31:47 2017
          State : clean
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

           Name : victory.rusky.com:0  (local to host victory.rusky.com)
           UUID : ca1f08c6:07e51bc7:668168b7:2bb84496
         Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       4       8       48        2      active sync   /dev/sdd
       3       8       64        -      spare   /dev/sde
[root@victory ~]#
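While the initial build is running, progress can also be watched through /proc/mdstat; a minimal check (the exact output shape varies by kernel version):

cat /proc/mdstat              # shows md0, its member disks, and a progress bar while syncing
watch -n 1 cat /proc/mdstat   # refresh every second until the resync finishes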

Format the RAID5 disk array with the XFS file system and mount it to a directory, so it can be used.
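The formatting command itself is not preserved in the transcript below; on this setup it would have been something along these lines (note that mkfs.xfs destroys any data already on the device):

mkfs.xfs /dev/md0    # create an XFS file system on the whole array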

[root@victory ~]# echo "/dev/md0 /fuckraid xfs defaults 0 0" >> /etc/fstab
[root@victory ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/rhel-root   /                       xfs     defaults        0 0
UUID=e7987771-c54c-4b36-8a5c-8e71f129c3fe /boot xfs     defaults        0 0
/dev/mapper/rhel-swap   swap                    swap    defaults        0 0
/dev/md0                /fuckraid               xfs     defaults        0 0
[root@victory ~]# mkdir /fuckraid
[root@victory ~]# mount -a
[root@victory ~]# df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/rhel-root  3.7G  896M  2.9G  24% /
devtmpfs               910M     0  910M   0% /dev
tmpfs                  920M     0  920M   0% /dev/shm
tmpfs                  920M  8.4M  912M   1% /run
tmpfs                  920M     0  920M   0% /sys/fs/cgroup
/dev/sda1              297M  114M  184M  39% /boot
tmpfs                  184M     0  184M   0% /run/user/0
/dev/md0               4.0G   33M  4.0G   1% /fuckraid
[root@victory ~]#
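One step worth adding, not shown in the original transcript: record the array in /etc/mdadm.conf so it is reliably assembled as /dev/md0 at boot. Without it, the kernel may auto-assemble the array under a different name (such as /dev/md127) and the fstab entry would then fail:

mdadm --detail --scan >> /etc/mdadm.conf   # append an ARRAY line keyed to the array's UUID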

Now move the hard drive device /dev/sdb out of the disk array group (marking it faulty simulates a failure), then quickly check the status of the /dev/md0 disk array group: the backup disk has automatically taken its place. This is very practical, and it further improves reliability on top of the data-security guarantee the RAID array group already provides.

[root@victory ~]# cd /fuckraid/
[root@victory fuckraid]# ll
total 0
[root@victory fuckraid]# cd ~
[root@victory ~]# cd /fuckraid/
[root@victory fuckraid]# touch testaddfile
[root@victory fuckraid]# ll
total 0
-rw-r--r--. 1 root root 0 Aug  1 23:55 testaddfile
[root@victory fuckraid]# mdadm /dev/md0 -f /dev/sdb
mdadm: set /dev/sdb faulty in /dev/md0
[root@victory fuckraid]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Aug  1 23:31:26 2017
     Raid Level : raid5
     Array Size : 4190208 (4.00 GiB 4.29 GB)
  Used Dev Size : 2095104 (2046.34 MiB 2145.39 MB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Tue Aug  1 23:55:30 2017
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : victory.rusky.com:0  (local to host victory.rusky.com)
           UUID : ca1f08c6:07e51bc7:668168b7:2bb84496
         Events : 37

    Number   Major   Minor   RaidDevice State
       3       8       64        0      active sync   /dev/sde
       1       8       32        1      active sync   /dev/sdc
       4       8       48        2      active sync   /dev/sdd
       0       8       16        -      faulty   /dev/sdb
[root@victory fuckraid]# ll
total 0
-rw-r--r--. 1 root root 0 Aug  1 23:55 testaddfile
[root@victory fuckraid]#
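To finish the recovery cycle after a real failure, the faulty member would normally be removed from the array and a replacement added back as the new standby disk. A sketch of those steps, using /dev/sdb here only because it is the disk marked faulty above:

mdadm /dev/md0 -r /dev/sdb    # remove the faulty disk from the array group
mdadm /dev/md0 -a /dev/sdb    # add it back (or add a replacement disk) as the new spare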

