A disk array combines many inexpensive disks into one large disk group and uses the individual disks together to serve data, which improves the performance of the disk system as a whole. With this technology the data is cut into many sections and stored across the drives, so that (at the redundant RAID levels) a single damaged disk causes no data loss and the array can stay in use.
Introduction to RAID (disk array):
What is RAID: multiple disks combined into one "array" to provide better performance, redundancy, or both;
Features of RAID:
Improved I/O capability:
disks are read and written in parallel;
Improved durability:
achieved through disk redundancy;
Level: the different ways in which the member disks are organized to work together;
How RAID is implemented:
External disk array: adapter capability is provided through an expansion (RAID controller) card;
Internal RAID: RAID controller integrated on the motherboard;
configured in the BIOS before the OS is installed;
Software RAID: implemented by the operating system (on Linux, the md driver managed with mdadm);
RAID levels:
RAID-0: striped volume, stripe
RAID-1: mirrored volume, mirror
RAID-2
..
RAID-5
RAID-6
RAID-10
RAID-01
Introduction to RAID levels:
RAID-0:
Read and write performance improved;
Available space: N*min(S1,S2,...);
No fault tolerance;
Minimum number of disks: 2, 2+;
RAID-1:
Read performance improved, write performance slightly decreased;
Available space: 1*min(S1,S2,...);
Has redundancy (fault tolerance);
Minimum number of disks: 2, 2N;
RAID-4:
The XOR of the blocks on the data disks is computed and stored on a dedicated parity disk; for example, with data blocks D1, D2, D3 the parity is P = D1 XOR D2 XOR D3, and any single lost block can be rebuilt by XOR-ing the remaining blocks with the parity;
RAID-5:
Read and write performance improved;
Available space: (N-1)*min(S1,S2,...);
Fault tolerance: at most 1 failed disk;
Minimum number of disks: 3;
RAID-6:
Read and write performance improved;
Available space: (N-2)*min(S1,S2,...);
Fault tolerance: at most 2 failed disks;
Minimum number of disks: 4, 4+;
RAID-10:
Read and write performance improved;
Available space: N*min(S1,S2,...)/2;
Fault tolerance: each mirrored pair can tolerate at most one failed disk;
Minimum number of disks: 4, 4+;
RAID-01, RAID-50
RAID-7: can be understood as an independent storage computer with its own operating system and management tools; it runs on its own and is, in theory, the highest-performing RAID mode;
JBOD: Just a Bunch Of Disks;
Function: combines the space of multiple disks into one large contiguous space;
Available space: sum(S1,S2,...);
Common levels: RAID-0, RAID-1, RAID-5, RAID-10, RAID-50, JBOD (a worked capacity example follows);
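To make the space formulas above concrete, here is a worked example (an illustration only, assuming four disks of 1 TB each, so N=4 and min(S1,...,S4)=1 TB):
RAID-0: 4*1 TB = 4 TB usable, no failed disk tolerated;
RAID-1: 1*1 TB = 1 TB usable, the rest holds mirror copies;
RAID-5: (4-1)*1 TB = 3 TB usable, at most 1 failed disk tolerated;
RAID-6: (4-2)*1 TB = 2 TB usable, at most 2 failed disks tolerated;
RAID-10: 4*1 TB/2 = 2 TB usable, one failed disk per mirrored pair tolerated;
JBOD: 1 TB + 1 TB + 1 TB + 1 TB = 4 TB usable, no failed disk tolerated.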
Soft RAID implementations:
mdadm: provides the management interface for soft RAID;
spare disks can be used to add redundancy;
works with the md (multi devices) support in the kernel;
RAID devices can be named /dev/md0, /dev/md1, /dev/md2, /dev/md3, and so on;
Implementation of software RAID:
mdadm: a mode-based (modal) tool;
Command syntax: mdadm [mode] <raiddevice> [options] <component-devices>;
Supported RAID levels: LINEAR, RAID0, RAID1, RAID4, RAID5, RAID6, RAID10;
Mode:
Create mode: -C (an example combining these options follows the list)
-n #: create the RAID with # member devices;
-l #: the RAID level to create;
-a {yes|no}: automatically create device files for the target RAID device;
-c CHUNK_SIZE: the chunk (stripe unit) size;
-x #: the number of spare disks;
Display mode: -D (show RAID details)
mdadm -D /dev/md#
Management mode: -f, -r, -a
-f: mark the specified disk as faulty
-a: add a disk
-r: remove a disk
Assemble mode: -A
Monitor mode: -F
<raiddevice>: /dev/md#
<component-devices>: any block device
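A sketch of how the create-mode options fit together (the device names are illustrative, not taken from the walkthrough below): build a RAID-5 array with three members, one spare, and a 512 KB chunk size:
mdadm -C /dev/md0 -a yes -l 5 -n 3 -x 1 -c 512 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1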
Observe the status of MD:
cat /proc/mdstat
To stop the MD device:
mdadm -S /dev/md#
Soft RAID Management:
Generate the configuration file: mdadm -D -s >> /etc/mdadm.conf
Stop a device: mdadm -S /dev/md0
Activate (assemble) a device: mdadm -A -s /dev/md0
Delete RAID metadata: mdadm --zero-superblock /dev/sdb1
Soft RAID Testing and remediation:
Simulate a disk failure:
# mdadm /dev/md0 -f /dev/sda1
To remove a disk:
# mdadm /dev/md0 -r /dev/sda1
To repair a failed disk in a software RAID:
replace the failed disk and power the machine back on;
rebuild the partition on the replacement drive;
# mdadm /dev/md0 -a /dev/sda1
Check mdadm -D, /proc/mdstat, and the system log for status information
To implement a software RAID instance:
1. fdisk, then t, then fd (convert the disk or partition to the Linux RAID partition type; a sketch of the interactive session follows)
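A minimal sketch of the interactive fdisk session (the device /dev/sdb and a single partition are assumptions for illustration):
fdisk /dev/sdb
n     (create a new partition, accepting the defaults or giving a size)
t     (change the partition type, selecting the partition number if asked)
fd    (type fd = Linux raid autodetect)
w     (write the partition table and exit)
partprobe /dev/sdb    (if available, re-reads the partition table without a reboot)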
2. Create a RAID device
mdadm -C /dev/md0 -a yes -l 5 -n 4 -x 1 /dev/sd{b2,c1,c2,d1,d2} (create RAID device md0; -a yes agrees to create the device file automatically, -l is the RAID level, -n is how many disks make up the RAID, -x is the number of spare disks)
mdadm -D /dev/md0 (show the created RAID)
3. mkfs.ext4 /dev/md0 (create a file system on md0)
4. vim /etc/fstab (edit the configuration file and add the mount entry; a sample line follows)
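A sample fstab entry for the new array (the mount point /mnt/raid is taken from the umount step later in this walkthrough):
/dev/md0    /mnt/raid    ext4    defaults    0 0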
5. mdadm -Ds /dev/md0 > /etc/mdadm.conf (generate the configuration file; a sample of its contents follows)
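The generated file typically contains an ARRAY line of roughly this form (the UUID here is a placeholder, not the value from this array):
ARRAY /dev/md0 metadata=1.2 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx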
6. Start the RAID
mdadm -A /dev/md0
7. Test
mdadm /dev/md0 -f /dev/sdb2 (simulate damage to a member)
mdadm /dev/md0 -r /dev/sdb2 (remove the failed member)
mdadm /dev/md0 -a /dev/sdf1 (add a new member; the resynchronization can be watched as shown below)
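While the new member is being synchronized, the rebuild progress can be followed with, for example:
watch -n 1 cat /proc/mdstat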
mdadm -G /dev/md0 -n 6 -a /dev/sdd4 (grow the array to 6 members while adding a disk; see the file-system note that follows)
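Growing the array enlarges the block device but not the file system on it; assuming the ext4 file system created in step 3, it could be expanded once the reshape has finished with:
resize2fs /dev/md0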
8. Delete the RAID
umount /mnt/raid (unmount it)
mdadm -S /dev/md0 (stop the RAID)
rm -f /etc/mdadm.conf (delete the configuration file)
vi /etc/fstab (remove the mount entry added earlier)
fdisk /dev/sda (delete the RAID member partitions)
mdadm --zero-superblock /dev/sdd1 (clear the RAID metadata from each member)