CentOS 7.2 software RAID 5 experiment, explained in detail
Preparation:
1. Add four 20 GB hard drives to the VMware virtual machine.
2. View the hard disks (after adding new disks, run partprobe so the kernel re-reads the partition tables).
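A one-line example of that re-read (with no arguments, partprobe simply asks the kernel to rescan every disk's partition table):
[root@centos7-67 ~]# partprobe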
[root@centos7-67 ~]# fdisk -l
Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    41943039    20970496   83  Linux
Disk /dev/sdc: 21.5 GB, 21474836480 bytes, 41943040 sectors
   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048    41943039    20970496   83  Linux
Disk /dev/sdd: 21.5 GB, 21474836480 bytes, 41943040 sectors
   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1            2048    41943039    20970496   83  Linux
Disk /dev/sde: 21.5 GB, 21474836480 bytes, 41943040 sectors
   Device Boot      Start         End      Blocks   Id  System
/dev/sde1            2048    41943039    20970496   83  Linux
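The listing above assumes each new disk already carries one partition spanning the whole drive. If the disks are still blank, an equivalent partition can be created on each of them first; a minimal sketch using parted (not part of the original walkthrough, any partitioning tool will do), followed by the partprobe shown earlier:
[root@centos7-67 ~]# for d in /dev/sd{b,c,d,e}; do
>     parted -s "$d" mklabel msdos mkpart primary 1MiB 100%
> done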
Start building:
Create the RAID:
1. Install mdadm (yum install mdadm).
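For example, from the standard CentOS repositories (-y only skips the confirmation prompt):
[root@centos7-67 ~]# yum install -y mdadm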
2. Create the RAID: --create names the array device to build, --level sets the RAID level; 3 disks are active members and 1 serves as a hot spare.
[root@centos7-67 ~]# mdadm --create --auto=yes /dev/md0 --level=5 --raid-devices=3 --spare-devices=1 /dev/sd[b-e]1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
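Right after creation the array starts its initial parity sync in the background; progress can be checked at any point (purely informational, the array is already usable):
[root@centos7-67 ~]# cat /proc/mdstat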
3. View RAID Details
[root@centos7-67 ~]# mdadm -D /dev/md0
4. Note: for the RAID to be assembled automatically at boot, a RAID configuration file is needed. Its default name is mdadm.conf; the file does not exist by default and has to be created by hand. Its main purpose is to have the soft RAID loaded automatically when the system starts, and it also makes later management easier.
The mdadm.conf file consists mainly of the following sections: the DEVICE option lists all the devices that make up the RAID, and the ARRAY option records the array's device name, the RAID level, the number of active devices in the array, and the array's UUID.
5. [root@centos7-67 ~]# mdadm --detail --scan > /etc/mdadm.conf
[root@centos7-67 ~]# cat /etc/mdadm.conf
DEVICE /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
ARRAY /dev/md0 metadata=1.2 spares=1 name=centos7-67:0 UUID=c5795cff:9c3f8dfb:1bdf421d:fd03a587
6. Create a file system on /dev/md0
[root@centos7-67 ~]# mkfs.ext4 /dev/md0
7. Mount /dev/md0
[root@centos7-67 ~]# mkdir /backup
8. [root@centos7-67 ~]# mount /dev/md0 /backup/
9. Add an entry to /etc/fstab so the array is mounted automatically at boot
[root@centos7-67 ~]# vi /etc/fstab
/dev/md0    /backup    ext4    defaults    0 0
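To verify the entry without rebooting, unmount the array and let fstab mount it again; any error here means the line above needs fixing:
[root@centos7-67 ~]# umount /backup
[root@centos7-67 ~]# mount -a
[root@centos7-67 ~]# df -h /backup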
10. What happens if one of the hard drives fails? The system automatically marks that drive as failed and lets the hot-spare drive take over its work. Test it:
[root@centos7-67 ~]# cp -r /data/package/ /backup/
First make one of the disks stop working (mark it as failed):
[root@centos7-67 ~]# mdadm /dev/md0 --fail /dev/sdc1
[root@centos7-67 ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd1[4] sde1[3] sdc1[1](F) sdb1[0]    # (F) indicates the disk has failed
Checking the files under /backup/, everything is still intact.
[root@centos7-67 ~]# mdadm -D /dev/md0    # the failed disk can also be seen here
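While the hot spare is being rebuilt into the array, the recovery progress can be followed live (a convenience only, the exact output varies by kernel version):
[root@centos7-67 ~]# watch -n 1 cat /proc/mdstat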
11. Disk management
Remove the failed hard drive:
mdadm /dev/md0 --remove /dev/sdc1
Add a new hard drive:
mdadm /dev/md0 --add /dev/sdc1
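After the replacement disk is added back (in this scenario the former spare is now an active member), the new disk becomes the hot spare, which can be confirmed from the detail view:
[root@centos7-67 ~]# mdadm -D /dev/md0 | grep -iE 'state|spare'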
12. Test: all 3 working hard drives in the RAID 5 array were stopped, and after rebooting the server's network was abnormal and it could not be reached or logged into.
Resolution:
1. Comment out the RAID 5 entry in /etc/fstab, then reboot.
2. Rebuild the RAID 5 array:
3. mdadm --stop /dev/md0
4. mdadm --create --auto=yes /dev/md0 --level=5 --raid-devices=3 --spare-devices=1 /dev/sd[b-e]1
5. mount /dev/md0 /backup/
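Note that the mdadm --create in step 4 builds a brand-new array, so before the mount in step 5 will work, the file system has to be recreated and the config file refreshed; a short sketch of those extra steps, reusing the layout from above:
mkfs.ext4 /dev/md0
mdadm --detail --scan > /etc/mdadm.conf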