RAID configuration and management in Linux
I. Lab Environment
1): Virtual Machine
2): Configure the Linux system on the Virtual Machine
3): Use Linux to implement the RAID configuration
4): Add 6 hard disks to the Virtual Machine
II. Lab Objectives
1): Become familiar with several commonly used RAID levels
2): Become familiar with the configuration commands for RAID 0, RAID 1, and RAID 5
3): Understand the differences between, and the usage of, the commonly used RAID levels
4): Gain an understanding of several less common RAID levels
5): Understand and remember the RAID lab requirements
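As a quick reference for the levels named above, the general mdadm creation syntax is sketched below; the device names and the RAID 5 example are illustrative placeholders, not part of this lab:
mdadm -C -v /dev/mdX -l <level> -n <active-disks> [-x <spare-disks>] <member-devices...>
# e.g. a hypothetical RAID5 array from three members plus one hot spare:
mdadm -C -v /dev/md5 -l 5 -n 3 -x 1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1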
III. Experiment Steps
1): Configure RAID0
1: Environment:
Add an sdb hard disk and create two 1 GB primary partitions on it: sdb1 and sdb2
2: Steps
Create two 1 GB primary partitions for the sdb hard drive
Create RAID0
Export the array configuration file
Format and mount to a specified directory
Modify /etc/fstab for permanent mounting
3: Experiment steps
1): Create two 1 GB primary partitions on the sdb hard disk.
[root@xuegod63 ~]# fdisk /dev/sdb # enter fdisk and divide the disk into two primary partitions
n
p    # create a primary partition
1    # this primary partition becomes sdb1
+1G  # set the size to 1 GB
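The keystrokes above only create the first partition; assuming the same 1 GB size for the second one, the remaining fdisk keystrokes would look roughly like this:
n    # new partition
p    # primary partition
2    # this primary partition becomes sdb2
+1G  # set the size to 1 GB
w    # write the partition table and exit fdisk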
[root@localhost ~]# ll /dev/sdb* # view the partitions; "*" matches sdb and all partitions after it
brw-rw----. 1 root disk 8, 16 Jun 28 20:13 /dev/sdb
brw-rw----. 1 root disk 8, 17 Jun 28 20:13 /dev/sdb1
brw-rw----. 1 root disk 8, 18 Jun 28 20:13 /dev/sdb2
[root@localhost ~]# ls /dev/sdb*
/dev/sdb /dev/sdb1 /dev/sdb2
# Two ways of viewing; either way you can clearly see the three device files under /dev.
2: Create RAID0
[root@localhost ~]# mdadm -C -v /dev/md0 -l 0 -n 2 /dev/sdb1 /dev/sdb2
# create a level-0 array named md0 from two devices: /dev/sdb1 and /dev/sdb2
mdadm: chunk size defaults to 512K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started. # the newly created array md0 is now running
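As an optional sanity check (not shown in the original transcript), the kernel's software-RAID status can also be inspected at this point:
[root@localhost ~]# cat /proc/mdstat # md0 should appear as an active raid0 made of sdb1 and sdb2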
[root@localhost ~]# mdadm -Ds # scan the newly created array; its name is /dev/md0
ARRAY /dev/md0 metadata=1.2 name=localhost.localdomain:0 UUID=0293bd32:6821c095:686fd2b9:0471cbab
[root@localhost ~]# mdadm -D /dev/md0 # view the array details
Number Major Minor RaidDevice State
0      8     17    0          active sync /dev/sdb1
1      8     18    1          active sync /dev/sdb2
[root@localhost ~]# mdadm -Ds > /etc/mdadm.conf # write the RAID configuration to /etc/mdadm.conf
[root@localhost ~]# cat !$ # view the generated configuration file
cat /etc/mdadm.conf
ARRAY /dev/md0 metadata=1.2 name=localhost.localdomain:0 UUID=0293bd32:6821c095:686fd2b9:0471cbab
[root@localhost ~]# fdisk /dev/md0 # partition the array
[root@localhost ~]# ll /dev/md0* # view the partition
brw-rw----. 1 root disk 9, 0 Jun 28 20:32 /dev/md0
brw-rw----. 1 root disk 259, 0 Jun 28 20:32 /dev/md0p1 # the newly created partition
3: Format and mount to the specified directory
Format the new partition (/dev/md0p1)
[root@localhost ~]# mkfs.ext4 /dev/md0p1
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
Create a directory and mount
[root@localhost ~]# mkdir /raid0 # create a mount-point directory named after the RAID0 array
[root@localhost ~]# mount /dev/md0p1 /raid0 # mount /dev/md0p1 on /raid0
Enable mounting at boot
[root@localhost ~]# vim /etc/fstab
/dev/md0p1 /raid0 ext4 defaults 0 0
Save
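Because md device numbers can change between reboots, a more robust variant (an alternative not used in this lab) is to mount by filesystem UUID as reported by blkid:
[root@localhost ~]# blkid /dev/md0p1 # prints the filesystem UUID
UUID=<uuid-from-blkid> /raid0 ext4 defaults 0 0 # example fstab line with a placeholder UUID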
View the mount
[root@localhost ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       9.7G  3.2G  6.1G  35% /
tmpfs          1000M  264K 1000M   1% /dev/shm
/dev/sda1       194M   28M  157M  15% /boot
/dev/md0p1      2.0G   68M  1.9G   4% /raid0
Mounted successfully
RAID0 created successfully
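If you want a rough feel for the striping benefit of RAID0, an optional write test could look like the sketch below; the file name and sizes are arbitrary choices, not part of the original lab:
[root@localhost ~]# dd if=/dev/zero of=/raid0/testfile bs=1M count=500 oflag=direct # sequential write, bypassing the page cache
[root@localhost ~]# rm -f /raid0/testfile # remove the test file afterwards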
2): Configure RAID1
1: Environment:
Create three 1 GB partitions on sdc: sdc1, sdc2, and sdc3
2: Steps
Create RAID1
Add a 1 GB hot-spare disk
Simulate a disk fault and the automatic replacement of the faulty disk
Detach the array (see the sketch at the end of this section)
3: Experiment steps
1: Create and view partitions
[root@localhost ~]# fdisk /dev/sdc # create the partitions
[root@localhost ~]# ll /dev/sdc* # view the four device files
brw-rw----. 1 root disk 8, 32 Jun 28 20:46 /dev/sdc
brw-rw----. 1 root disk 8, 33 Jun 28 20:46 /dev/sdc1
brw-rw----. 1 root disk 8, 34 Jun 28 20:46 /dev/sdc2
brw-rw----. 1 root disk 8, 35 Jun 28 20:46 /dev/sdc3
2: Create RAID1
[root@localhost ~]# mdadm -C -v /dev/md1 -l 1 -n 2 -x 1 /dev/sdc1 /dev/sdc2 /dev/sdc3
# create a level-1 array named md1 from three devices: /dev/sdc1 and /dev/sdc2 as active members, with /dev/sdc3 as one hot spare (-x 1)
mdadm: size set to 1059222K
Continue creating array? y # answer y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
[root@localhost ~]# ll /dev/md1 # view the array md1
brw-rw----. 1 root disk 9, 1 Jun 28 20:56 /dev/md1
[root@localhost ~]# cat /proc/mdstat # the newly created md0 and md1 are both running
Personalities : [raid0] [raid1]
md1 : active raid1 sdc3[2](S) sdc2[1] sdc1[0]
      1059222 blocks super 1.2 [2/2] [UU]
md0 : active raid0 sdb2[1] sdb1[0]
      2117632 blocks super 1.2 512k chunks
[root@localhost ~]# mdadm -Ds > /etc/mdadm.conf # regenerate the configuration file
[root@localhost ~]# cat !$
cat /etc/mdadm.conf
ARRAY /dev/md0 metadata=1.2 name=localhost.localdomain:0 UUID=0293bd32:6821c095:686fd2b9:0471cbab
ARRAY /dev/md1 metadata=1.2 spares=1 name=localhost.localdomain:1 UUID=f7c34545:ecab8452:d826598e:e68c64f3
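To double-check that the spare was registered (an extra step, not in the original transcript), the array details can be queried again:
[root@localhost ~]# mdadm -D /dev/md1 | grep -i spare # should report one spare device, /dev/sdc3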
Partition, verify, and format
[root@localhost ~]# fdisk /dev/md1 # partition
n
p
Partition number (1-4): 1
First cylinder (1-264805, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-264805, default 264805):
Using default value 264805
Command (m for help): w
[root@localhost ~]# ll /dev/md1* # verify
brw-rw----. 1 root disk 9, 1 Jun 28 21:13 /dev/md1
brw-rw----. 1 root disk 259, 1 Jun 28 21:13 /dev/md1p1
# Partitioning the md1 array automatically creates the md1p1 partition under md1.
[root@localhost ~]# mkfs.ext4 /dev/md1p1 # format
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
Create a directory and mount
[root@localhost ~]# mkdir /raid1 # create a directory
[root@localhost ~]# mount /dev/md1p1 /raid1 # mount it on the /raid1 directory
[root@localhost ~]# df -h # view the mounts
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       9.7G  3.2G  6.1G  35% /
tmpfs          1000M  288K 1000M   1% /dev/shm
/dev/sda1       194M   28M  157M  15% /boot
/dev/md0p1      2.0G   68M  1.9G   4% /raid0
/dev/md1p1     1019M   34M  934M   4% /raid1
# You can see that md1p1 is now mounted on /raid1.
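To make this mount survive a reboot, as was done for RAID0, a matching /etc/fstab entry could be added (a sketch following the same pattern):
[root@localhost ~]# vim /etc/fstab
/dev/md1p1 /raid1 ext4 defaults 0 0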
[root@localhost ~]# cat /proc/mdstat # verify that the arrays are still running
Personalities : [raid0] [raid1]
md1 : active raid1 sdc3[2](S) sdc2[1] sdc1[0]
      1059222 blocks super 1.2 [2/2] [UU]
3: Fault simulation
[root@localhost ~]# vim /etc/mdadm.conf
ARRAY /dev/md1 metadata=1.2 spares=1 name=localhost.localdomain:1 UUID=f7c34545:ecab8452:d826598e:e68c64f3
# spares=1 indicates one idle hot-spare (backup) disk
Before the failure
[root@localhost ~]# watch -n 1 cat /proc/mdstat
Every 1.0s: cat /proc/mdstat Sun Jun 28 21:43:43 2015
Personalities : [raid0] [raid1]
md1 : active raid1 sdc3[2](S) sdc2[1] sdc1[0]
      1059222 blocks super 1.2 [2/2] [UU]
# Here you can clearly see that the md1 array is running normally; sdc3[2](S) is the hot-spare disk, held in reserve.
Mark /dev/sdc1 in /dev/md1 as faulty
[root@localhost ~]# mdadm -f /dev/md1 /dev/sdc1
mdadm: set /dev/sdc1 faulty in /dev/md1
After the failure
Every 1.0s: cat /proc/mdstat Sun Jun 28 21:44:15 2015
Personalities : [raid0] [raid1]
md1 : active raid1 sdc3[2] sdc2[1] sdc1[0](F)
      1059222 blocks super 1.2 [2/2] [UU]
At this point you can see that sdc1[0] is now followed by (F), meaning that disk has failed, and the hot-spare disk sdc3[2] no longer carries the (S) flag: it has taken the place of the failed disk.
Remove the faulty disk
[root@localhost ~]# mdadm -r /dev/md1 /dev/sdc1
mdadm: hot removed /dev/sdc1 from /dev/md1 # /dev/sdc1 has been removed from /dev/md1
View
[root@localhost ~]# watch -n 1 cat /proc/mdstat
Every 1.0s: cat /proc/mdstat Sun Jun 28 21:50:15 2015
Personalities : [raid0] [raid1]
md1 : active raid1 sdc3[2] sdc2[1]
      1059222 blocks super 1.2 [2/2] [UU]
The faulty sdc1 is no longer listed.
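Once the failed disk has been repaired or replaced, it could be returned to the array as a fresh hot spare; a minimal sketch, assuming the replacement device is again /dev/sdc1:
[root@localhost ~]# mdadm -a /dev/md1 /dev/sdc1 # hot-add the disk; it becomes the new spare
[root@localhost ~]# watch -n 1 cat /proc/mdstat # sdc1 should reappear with the (S) flag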
Note: after the removal, regenerate the configuration file so it does not cause problems later.
[root@localhost ~]# mdadm -Ds > /etc/mdadm.conf # regenerate the configuration file
[root@localhost ~]# cat !$
cat /etc/mdadm.conf
ARRAY /dev/md0 metadata=1.2 name=localhost.localdomain:0 UUID=0293bd32:6821c095:686fd2b9:0471cbab
ARRAY /dev/md1 metadata=1.2 name=localhost.localdomain:1 UUID=f7c34545:ecab8452:d826598e:e68c64f3
/dev/md1 no longer shows spares=1; the hot spare is gone as well, because it is now an active member of the array.
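The RAID1 step list also mentions detaching the array. The original transcript does not show this, but a minimal sketch, assuming the data is no longer needed, would be:
[root@localhost ~]# umount /raid1 # unmount the filesystem first
[root@localhost ~]# mdadm -S /dev/md1 # stop (detach) the array
[root@localhost ~]# mdadm --zero-superblock /dev/sdc2 /dev/sdc3 # wipe the RAID superblocks from the remaining members
# Also remove the corresponding lines from /etc/fstab and /etc/mdadm.conf.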