Basic Concepts
RAID is used mainly for its high reliability and its high data transfer rate. Bit-level striping: the bits of each byte are spread across multiple disks. Block-level striping: the blocks of a file are spread across multiple disks; with n disks, block i of a file is stored on disk (i mod n) + 1.
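To make the block-placement rule concrete, here is a minimal shell sketch (the disk count and block indices are made up for illustration) that applies (i mod n) + 1 to a few blocks:

n=4                                     # assumed number of disks in the array
for i in 0 1 2 3 4 5 6 7; do
    # block i of the file is written to disk (i mod n) + 1
    echo "block $i -> disk $(( (i % n) + 1 ))"
done

With four disks, blocks 0-3 land on disks 1-4 and block 4 wraps back to disk 1, which is what lets consecutive blocks be read in parallel.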
RAID Levels
RAID 0: block-level striping with no redundancy.
RAID 1: disk mirroring; data is duplicated on a set of mirror disks.
RAID 2: also known as the memory-style error-correcting-code (ECC) organization. Memory systems implement error detection with parity: each byte in memory has an associated parity bit that records whether the number of 1 bits in the byte is even (parity = 0) or odd (parity = 1). If one bit of the byte is corrupted (flips from 0 to 1 or from 1 to 0), the parity of the byte changes and no longer matches the stored parity bit, so the error is detected.
RAID 3: also known as bit-interleaved parity. It improves on level 2: unlike a memory system, a disk controller can detect whether a sector was read correctly, so a single parity bit is enough for both error detection and error correction. Recovery works as follows: if one sector is damaged, we know exactly which sector it is; by computing the parity of the corresponding bits on the other disks, we can determine whether each lost bit is 0 or 1. If the parity of the remaining bits equals the stored parity, the missing bit is 0; otherwise it is 1.
RAID 4: also known as block-interleaved parity. It uses block-level striping, as in level 0, and additionally keeps a parity block for the corresponding blocks of the N data disks on a separate disk.
RAID 5: also known as block-interleaved distributed parity. Unlike level 4, it spreads data and parity across all N + 1 disks instead of storing data on N disks and parity on one. For each block, one disk holds the parity and the others hold data. For example, in an array of five disks, the parity for the n-th block is stored on disk (n mod 5) + 1, and the n-th blocks of the other four disks store the actual data for that block.
RAID 6: the P + Q redundancy scheme. It is similar to level 5 but stores extra redundant information to guard against failures of multiple disks; instead of parity it uses error-correcting codes.
RAID 0+1: a combination of level 0 and level 1; striping (level 0) first, then mirroring (level 1).
RAID 1+0: a combination of level 1 and level 0; mirroring (level 1) first, then striping (level 0).
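The parity-based recovery described for RAID 3 and RAID 5 boils down to XOR: the lost data is whatever value makes the XOR of the whole stripe equal the stored parity. A minimal shell sketch with made-up byte values:

# three hypothetical data bytes belonging to one stripe
d0=0xA5; d1=0x3C; d2=0x5A
# parity byte written to the parity disk
p=$(( d0 ^ d1 ^ d2 ))
printf 'parity     = 0x%02X\n' "$p"
# if the disk holding d1 fails, XOR the survivors with the parity to rebuild it
rebuilt=$(( d0 ^ d2 ^ p ))
printf 'rebuilt d1 = 0x%02X (original was 0x3C)\n' "$rebuilt"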
Selecting a RAID Level
RAID 0 is used for high-performance applications where data loss is not critical. RAID 1 is used where high reliability and fast recovery are required. RAID 5 is used to store large amounts of data. RAID 0+1 and RAID 1+0 are used for important applications that need both performance and reliability.
Differences Between Software and Hardware Disk Arrays
A hardware disk array uses a RAID controller card; a dedicated chip on the card handles the RAID processing, so performance is better. A software disk array emulates the array in software, so it consumes a significant amount of system resources.
Software Disk Array Application
Note: here we use mdadm, the software disk array tool provided with the Linux operating system. Hardware disk array device file names are /dev/sd[a-p]; the software disk array device file name is /dev/md[0-n].

mdadm [options]
Options and parameters:
--create: create a RAID
--auto=yes: automatically create the software disk array device, i.e. /dev/md0
--raid-devices=N: use N disks as the active devices of the array
--spare-devices=N: use N disks as spare devices
--level=N: set the RAID level of the array; levels 0, 1 and 5 are generally recommended
--detail: followed by the disk array device name; prints the array's details
--manage: manage the RAID
--add: add the following device to the md device
--remove: remove the following device from the md device
--fail: mark the following device as faulty

RAID 5 environment: each disk is 5 GB; three disks form the RAID 5 array and two more disks are set as spare disks; the RAID 5 array is mounted on the /mnt/raid directory.

Create the disk devices:
[root@localhost ~]# fdisk /dev/sdf   (disk device name)
Command (m for help): n
(intermediate content omitted; see fdisk usage for details)
Command (m for help): w

Create the RAID with mdadm:
[root@localhost ~]# mdadm --create --auto=yes /dev/md0 --level=5 --raid-devices=3 --spare-devices=2 /dev/sd{b,c,d,e,f}

View the RAID information (creating the RAID takes some time):
[root@localhost ~]# mdadm --detail /dev/md0
(intermediate content omitted)
State : clean
Active Devices : 3
Working Devices : 5
Failed Devices : 0
Spare Devices : 2

View the RAID status:
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd[2] sdf[3](S) sde[4](S) sdc[1] sdb[0]
      10485632 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
Line 2 shows that md0 is a RAID 5 built from the sdb, sdc and sdd disks; the number in brackets after each device is its order within the RAID, and the (S) after sdf and sde marks them as spare disks. Line 3 shows that the array has 10485632 blocks (each block is 1 KB), so the total capacity is about 10 GB (only two disks' worth of space holds data; the remaining disk's worth is used for parity); it uses RAID level 5, a 64 KB chunk size, and disk array algorithm 2. [m/n] means the array needs m devices and n of them are running normally; the [UUU] at the end shows the state of each of the m devices, where U means normal and _ means the device is not working properly.

Format and mount the RAID:
[root@localhost ~]# mkfs -t ext3 /dev/md0   (format)
[root@localhost ~]# mkdir /mnt/raid
[root@localhost ~]# mount /dev/md0 /mnt/raid   (mount)

Start the RAID at boot and mount it automatically:
[root@localhost ~]# mdadm --detail /dev/md0 | grep -i uuid   (show the UUID information)
[root@localhost ~]# vim /etc/mdadm.conf   (add the following line)
ARRAY /dev/md0 UUID=(replace with the corresponding UUID)
[root@localhost ~]# vim /etc/fstab   (add the following line)
/dev/md0 /mnt/raid ext3 defaults 1 2

Stop the RAID:
[root@localhost ~]# mdadm --stop /dev/md0

Simulate a disk error:
[root@localhost raid]# mdadm --manage /dev/md0 --fail /dev/sdb
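After the simulated failure, one of the spare disks is rebuilt into the array automatically. A minimal sketch of the typical follow-up steps, assuming the same device names as above (/dev/sdb is the failed disk and has been physically replaced before being re-added), using the --remove and --add options listed earlier:

[root@localhost ~]# cat /proc/mdstat                              # watch the spare being rebuilt into the array
[root@localhost ~]# mdadm --manage /dev/md0 --remove /dev/sdb     # remove the failed disk from md0
[root@localhost ~]# mdadm --manage /dev/md0 --add /dev/sdb        # add the replaced disk back as a new spare
[root@localhost ~]# mdadm --detail /dev/md0                       # confirm the active and spare device counts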