I. RAID introduction
RAID (Redundant Array of Inexpensive Disks) combines multiple disks into one logical device. Starting with the Linux 2.4 kernel, software RAID is available in Linux, so you do not have to buy expensive hardware RAID controllers and accessories (typically provided, along with hot-swappable hard drives, on mid- and high-end servers). RAID improves read and write performance by processing multiple independent I/O requests in parallel, and increases the reliability of data storage by adding redundant information.
II. Common RAID levels
RAID 0
Striping with no redundancy: good read and write performance, but data reliability is lower than a single disk (the failure of any member loses the whole array).
RAID 1
Mirroring: read performance is good and write performance is roughly that of a single disk; data reliability is high, but so is cost (usable capacity is half the raw capacity).
RAID 2
Parallel access with redundancy implemented via Hamming codes: the disks rotate synchronously, errors can be corrected, reliability is high, and read/write performance is good, but only one I/O request can be serviced at a time.
RAID 3
Parallel access with redundancy implemented via parity: the disks rotate synchronously, errors can be detected, reliability is high, and read/write performance is good, but only one I/O request can be serviced at a time.
RAID 4
Independent access: parity is computed per block and stored on a dedicated check (parity) disk. Data reliability is high and read performance is good, but write performance is poor, because every write must also update the check disk, which becomes a performance bottleneck.
RAID 5
Builds on RAID 4 but rotates the parity blocks across all disks, removing the single check disk as a performance bottleneck; read/write performance and reliability are otherwise similar to RAID 4.
III. Hardware RAID and software RAID
Hardware RAID
(1) An integrated or dedicated array card with a hardware RAID controller manages the hard drives.
(2) High access performance and data protection capability, but also high cost.
(3) Linux sees a hardware disk array as an ordinary hard disk, with device names such as /dev/sd[a-p].
Software RAID
(1) Using the Software RAID function provided by the operating system.
(2) Suitable for low-cost scenarios.
(3) Linux treats a software disk array as a multi-disk (MD) device, with device names /dev/md0, /dev/md1, and so on.
IV. RAID configuration in Linux
In a Linux system, mainly three RAID levels are provided: RAID 0, RAID 1, and RAID 5. mdadm is the standard software RAID management tool on Linux. It is a modal tool (it does different jobs in different modes); the program runs in user space and provides the interface through which the user operates the kernel RAID (md) module to carry out all of the functions below.
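As a quick reference, the modes mentioned above are selected by mdadm's first option; the sketch below lists the major ones (the device names are examples, requires root and real member devices to run):

```shell
# mdadm picks its mode from the first option on the command line (device names are examples):
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1  # Create mode: build a new array
mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1                           # Assemble mode: re-activate an existing array
mdadm /dev/md0 --fail /dev/sdc1                                         # Manage mode: fail/remove/add members
mdadm --monitor --scan --delay=180                                      # Monitor mode: watch arrays for events
mdadm --detail /dev/md0                                                 # Misc mode: query array state
```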
1. RAID 1 Configuration
(1) Create two RAID partitions of the same size and set the partition ID to fd.
(2) Setting up RAID devices
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
(3) Setting up the mdadm configuration file /etc/mdadm.conf
DEVICE /dev/sdb1 /dev/sdc1
ARRAY /dev/md0 devices=/dev/sdb1,/dev/sdc1
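Instead of typing the ARRAY line by hand, mdadm can generate it from the running array; a minimal sketch (requires root and the array created in step (2)):

```shell
# Emit ARRAY lines for all currently active arrays and append them to the config.
mdadm --detail --scan >> /etc/mdadm.conf

# Review the result.
cat /etc/mdadm.conf
```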
(4) Creating a file system
mkfs -t ext3 /dev/md0
(5) Mounting the RAID 1 device
mkdir /mnt/raid1
mount /dev/md0 /mnt/raid1
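To have the array mounted automatically at boot, a matching /etc/fstab entry can be added; a minimal sketch (requires root; the mount options are plain defaults, adjust as needed):

```shell
# Append an fstab entry so /dev/md0 is mounted on /mnt/raid1 at boot.
# Fields: device, mount point, fstype, options, dump, fsck order.
echo '/dev/md0  /mnt/raid1  ext3  defaults  0 0' >> /etc/fstab
```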
(6) Managing the RAID 1 array
# Simulate a member disk failure
mdadm /dev/md0 --fail /dev/sdc1
# Remove the failed member from the RAID 1 array
mdadm /dev/md0 --remove /dev/sdc1
# Prepare a replacement disk and add the new disk to the array
mdadm /dev/md0 --add /dev/sdd1
# View real-time array information
cat /proc/mdstat
mdadm --detail /dev/md0
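The /proc/mdstat format is terse; the small awk sketch below pulls the array name, level, and member count out of it. It runs against a captured sample here (so it works without a live array), but on a real system you would pipe `cat /proc/mdstat` in instead:

```shell
# A sample /proc/mdstat status block, as the RAID 1 above might report it.
sample='md0 : active raid1 sdc1[1] sdb1[0]
      1048512 blocks [2/2] [UU]'

# Print "name level member-count" for each array line.
printf '%s\n' "$sample" | awk '/^md/ {
    n = 0
    for (i = 5; i <= NF; i++)           # member fields look like sdb1[0]
        if ($i ~ /\[[0-9]+\]$/) n++
    print $1, $4, n                     # prints: md0 raid1 2
}'
```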
2. RAID 5 Configuration
(1) Prepare four array members (create RAID partitions)
(2) Create the RAID device node: the system provides only the md0 device by default; others must be created manually.
ls -l /dev/md0      # view the MD device type and its major and minor device numbers
mknod /dev/md1 b 9 1    # create the device file
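In `mknod /dev/md1 b 9 1`, `b` means block device, 9 is the major number of the Linux md driver, and 1 is the minor number. The major/minor numbers of any existing device node can be checked with stat; this sketch uses /dev/null (character device 1:3 on Linux) because it exists on every system:

```shell
# %t and %T print a device node's major and minor numbers (in hex).
stat -c 'major=%t minor=%T' /dev/null   # prints: major=1 minor=3
```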
(3) Create a RAID 5 device
mdadm --create /dev/md1 --level=5 --raid-devices=3 --spare-devices=1 /dev/sdd[5-8]
mdadm --detail /dev/md1
(4) Setting up the mdadm configuration file /etc/mdadm.conf
DEVICE /dev/sdd5 /dev/sdd6 /dev/sdd7 /dev/sdd8
ARRAY /dev/md1 devices=/dev/sdd5,/dev/sdd6,/dev/sdd7,/dev/sdd8
(5) Creating a file system
mkfs.ext3 /dev/md1
(6) Mounting a RAID 5 device
mkdir /mnt/raid5
mount /dev/md1 /mnt/raid5
(7) Managing the RAID 5 array
# Rebuild RAID 5 using the spare disk
mdadm /dev/md1 --fail /dev/sdd6
mdadm --detail /dev/md1
# You can see that the spare disk automatically takes part in rebuilding the array,
# and the failed disk is marked as faulty.
# Note: wait for the RAID rebuild to complete before replacing the failed disk.
# Remove the failed disk and add a new disk
mdadm /dev/md1 --remove /dev/sdd6
mdadm /dev/md1 --add /dev/sde1
mdadm --detail /dev/md1
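While the spare is rebuilding, /proc/mdstat shows a "recovery =" progress line. The sketch below extracts the completion percentage from a captured sample; on a live system you would read /proc/mdstat directly (for example under watch):

```shell
# A recovery line as /proc/mdstat prints it during a rebuild (captured sample).
sample='      [==>..................]  recovery = 12.6% (37043392/292945152) finish=127.5min speed=33440K/sec'

# Pull out just the completion percentage.
printf '%s\n' "$sample" | awk '/recovery/ {
    for (i = 1; i <= NF; i++)
        if ($i ~ /%$/) print $i   # prints: 12.6%
}'
```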
(8) Starting/stopping/monitoring RAID devices
# Stop the RAID device (unmount it before stopping)
mdadm --stop /dev/md0
# Start (assemble) the RAID device
mdadm --assemble --scan /dev/md0
# Monitor the RAID device
mdadm --monitor --mail [email protected] --delay=180 /dev/md0
# Run the monitoring task in the background
nohup mdadm --monitor --mail [email protected] --delay=180 /dev/md0 &
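As an alternative to nohup, mdadm's monitor mode has a built-in daemon flag; a minimal sketch (requires root; events go to syslog unless a mail address is configured):

```shell
# Run the monitor as a background daemon; it scans /etc/mdadm.conf for arrays
# and logs events (add --mail to also be notified by e-mail).
mdadm --monitor --scan --daemonise --delay=180
```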
(9) If you want to delete a RAID multi-disk device (optional)
Each MD device can only be created once. If the create command (mdadm --create) fails, the MD device can be left unusable, and you must remove the faulty MD device before it can be re-created, following the steps below.
# 1. Deactivate the RAID device
mdadm --stop /dev/md0
# 2. Zero the superblock on each member partition
mdadm --zero-superblock /dev/sdb1
mdadm --zero-superblock /dev/sdc1
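To confirm the superblocks are really gone before re-creating the array, mdadm --examine can be run on each member; a minimal sketch (requires root and the partitions above):

```shell
# --examine reads the RAID superblock from a member partition;
# after --zero-superblock it should report that no superblock was found.
mdadm --examine /dev/sdb1
mdadm --examine /dev/sdc1
```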
This article is from the "IT" blog; please keep this source: http://wang119.blog.51cto.com/9428009/1795382