Application of RAID disk arrays
RAID, usually translated as "disk array," stands for Redundant Arrays of Inexpensive Disks. It began as a project by a research team at the University of California, Berkeley, who hoped to build a cheap, highly available storage array out of a large number of inexpensive hard disks. Although RAID development has since drifted from that original goal of low cost, it delivers another benefit: by choosing the proper RAID level, you can build a logical disk with higher availability and better fault tolerance.
RAID implementations fall into two camps: software RAID and hardware RAID. Software RAID is implemented by the operating system, which adds CPU overhead, so it is less common in production scenarios. Hardware RAID uses a dedicated controller and control chip, and its overall performance is superior to software RAID. RAID also comes in many specifications, but there are two general categories: standard RAID and hybrid (nested) RAID. Many of the specifications were only transitional, experimental designs and are rarely used in actual production environments.
1. Standard RAID
1.1 RAID 0
RAID 0, also known as striping, concatenates two or more hard disks into one large-capacity volume and spreads data across all of them. Because data can be read and written in parallel, RAID 0 is the fastest of all RAID levels. However, RAID 0 provides neither redundancy nor fault tolerance: if one physical disk fails, all data is lost. RAID 0 is therefore only used in scenarios with low data-safety requirements but high speed requirements, such as video and image workstations.
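The striping idea can be sketched in a few lines of Python. This is a toy model (the `stripe_write`/`stripe_read` helpers are hypothetical, not a real RAID implementation) that shows both the round-robin chunk layout and why losing one disk destroys the whole stream:

```python
# Conceptual sketch of RAID 0 striping (illustration only).
CHUNK = 4  # stripe chunk size in bytes (real arrays use e.g. 64 KB)

def stripe_write(data: bytes, n_disks: int):
    """Distribute fixed-size chunks round-robin across n_disks."""
    disks = [[] for _ in range(n_disks)]
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    for i, chunk in enumerate(chunks):
        disks[i % n_disks].append(chunk)
    return disks

def stripe_read(disks):
    """Reassemble the original byte stream by reading chunks in order."""
    n = len(disks)
    total = sum(len(d) for d in disks)
    return b"".join(disks[i % n][i // n] for i in range(total))

data = b"ABCDEFGHIJKLMNOP"
disks = stripe_write(data, 2)
assert stripe_read(disks) == data   # both disks healthy: full data
disks[0] = None                     # one disk fails...
# ...and every other chunk is gone, so the stream cannot be rebuilt.
```

Reads and writes touch all disks in parallel, which is where the speed comes from; it is also why a single failure is fatal.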
1.2 RAID 1
RAID 1, known as mirroring, requires two or more hard disks that mirror each other: the data on the primary disk is kept identical to the data on the mirror disk. Read speed improves considerably on a multi-threaded operating system, because reads can be served by either disk. The reliability of RAID 1 is very high: data integrity is guaranteed as long as one disk remains healthy. Its disadvantage is that RAID 1 wastes a lot of storage space, since at most half of the raw capacity is usable.
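Mirroring is simple enough to sketch directly (again a toy model with hypothetical helpers, not real driver code): every write goes to all copies, and any surviving copy can serve a read.

```python
# Conceptual sketch of RAID 1 mirroring (illustration only).
def mirror_write(data: bytes, n_disks: int = 2):
    """Every disk holds a full, identical copy of the data."""
    return [data for _ in range(n_disks)]

def mirror_read(disks):
    """Any surviving copy is enough to serve the read."""
    for copy in disks:
        if copy is not None:
            return copy
    raise IOError("all mirrors failed")

disks = mirror_write(b"important data")
disks[0] = None                                  # primary disk fails
assert mirror_read(disks) == b"important data"   # mirror still serves it
```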
RAID 2 through RAID 4 are essentially experimental designs and are rarely used in actual production environments.
1.3 RAID 5
RAID 5 introduces data parity, with the parity blocks distributed across all member disks. RAID 5 is effectively a compromise between speed and reliability and is widely used in practice. Compared with RAID 1, its storage cost is relatively low, since only one disk's worth of capacity goes to parity. RAID 5 requires at least three disks.
1.4 RAID 6
Compared with RAID 5, RAID 6 adds a second, independent parity block. The two parity systems use different algorithms, so data reliability is very high: even if two disks fail at the same time, the data remains usable. RAID 6 requires at least four disks.
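Why a second, *differently computed* parity lets RAID 6 survive two failures can be shown with a simplified per-byte sketch in GF(2^8), the finite field commonly used for P+Q parity (this is an illustration of the math, not mdadm's actual code): P is a plain XOR, Q weights each data byte by a power of the generator 2, and two lost bytes become two equations in two unknowns.

```python
# Simplified sketch of RAID 6 dual parity over GF(2^8) (illustration only).
def gf_mul(a: int, b: int) -> int:
    """Multiply two bytes in GF(2^8), reduction polynomial 0x11D."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1D
        b >>= 1
    return p

def gf_pow(a: int, n: int) -> int:
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

def gf_inv(a: int) -> int:
    return gf_pow(a, 254)  # a^254 = a^-1, since a^255 = 1 in GF(2^8)*

data = [0x11, 0x22, 0x33, 0x44]       # one byte per data disk
P = 0
for d in data:
    P ^= d                            # P parity: plain XOR
Q = 0
for k, d in enumerate(data):
    Q ^= gf_mul(gf_pow(2, k), d)      # Q parity: XOR weighted by 2^k

# Disks 1 and 3 both fail: solve the two parity equations.
i, j = 1, 3
Pp = P ^ data[0] ^ data[2]                        # = d_i ^ d_j
Qp = Q ^ data[0] ^ gf_mul(gf_pow(2, 2), data[2])  # = 2^i*d_i ^ 2^j*d_j
gi, gj = gf_pow(2, i), gf_pow(2, j)
di = gf_mul(Qp ^ gf_mul(gj, Pp), gf_inv(gi ^ gj))
dj = Pp ^ di
assert (di, dj) == (data[1], data[3])
```

With a single XOR parity the two failures would collapse into one unsolvable equation; the independent Q parity is what makes the system solvable.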
2. Hybrid RAID
2.1 JBOD
Strictly speaking, JBOD (Just a Bunch Of Disks) is not a RAID level. JBOD has no strict specification; it simply concatenates the space of independent hard disks into one large logical disk. If a disk fails, the data on it cannot be recovered, and if the first disk fails, all data may be lost, so the risk is no lower than RAID 0. Still, JBOD has its use cases: Hadoop, for example, encourages JBOD, because Hadoop provides its own fault-tolerance mechanism.
2.2 RAID 01
RAID 01 is a combination of RAID 0 and RAID 1: data is first striped into groups (RAID 0), and the striped sets are then mirrored against each other (RAID 1). In other words, RAID 0 is applied first, then RAID 1.
2.3 RAID 10
RAID 10 is the opposite of RAID 01: the disks are mirrored first (RAID 1), and the mirrored pairs are then striped (RAID 0).
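The practical difference shows up in fault tolerance. A small enumeration over a hypothetical four-disk array (disks a, b, c, d; the `raid10_ok`/`raid01_ok` predicates are illustrative models, not exhaustive of real failure modes) shows that RAID 10 survives more two-disk failure combinations than RAID 01:

```python
# Sketch: two-disk failure tolerance of RAID 10 vs RAID 01 on 4 disks.
from itertools import combinations

def raid10_ok(failed):
    # Stripe of mirrors: each mirror pair needs at least one survivor.
    return not {"a", "b"} <= failed and not {"c", "d"} <= failed

def raid01_ok(failed):
    # Mirror of stripes: at least one whole stripe set must survive.
    return not failed & {"a", "b"} or not failed & {"c", "d"}

pairs = list(combinations("abcd", 2))
survive10 = sum(raid10_ok(set(p)) for p in pairs)
survive01 = sum(raid01_ok(set(p)) for p in pairs)
assert survive10 > survive01  # RAID 10 tolerates more 2-disk failures
```

Of the six possible two-disk failures, the RAID 10 model here survives four, the RAID 01 model only two, which is one reason RAID 10 is usually preferred in practice.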
2.4 RAID 50
RAID 50 combines RAID 5 with RAID 0: data is first organized into multiple RAID 5 groups, and those groups are then striped together with RAID 0. Since each RAID 5 group requires at least three hard disks and RAID 50 consists of at least two such groups, RAID 50 requires at least six hard disks. If one disk fails in any (or even every) underlying RAID 5 group, RAID 50 can still operate; but if two disks in the same group fail at the same time, the entire RAID 50 array is lost.
Implementing software RAID on Linux:
On Linux, software RAID is implemented mainly through mdadm.
mdadm is a modal command. Its main modes are:
Create mode
Manage mode
Monitor mode
Grow mode
Assemble mode
The basic mdadm syntax is:
# mdadm [mode] <raid-device> [options] <component-devices>
The following options are commonly used when creating an array (create mode):
-l: specifies the RAID level;
-n: specifies the number of devices, that is, the number of member disks;
-a: automatically creates the device file for the array (e.g. -a yes);
-c, --chunk: specifies the chunk (data block) size.
1. Implement RAID 0:
Preparations:
Two 1 GB disk partitions.
[root@local ~]# mdadm -C /dev/md0 -a yes -l 0 -n 2 /dev/sdb{1,2}
mdadm: /dev/sdb1 appears to contain an ext2fs file system
    size=104388K  mtime=Thu Jan  1 08:00:00 1970
Continue creating array? (y/n) y
Create a file system for it:
[root@local ~]# mke2fs -j /dev/md0
Mount a file system:
[root@local mnt]# mount /dev/md0 /mnt/raid
View the information of the mounted file system:
[root@local mnt]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                       18G  3.3G   14G  20% /
/dev/sda1              99M   13M   82M  13% /boot
tmpfs                 252M     0  252M   0% /dev/shm
/dev/sr0              3.3G  3.3G     0 100% /mnt/cdrom
.host:/                56G   44G   12G  79% /mnt/hgfs
/dev/md0              1.9G   35M  1.8G   2% /mnt/raid
Note that the capacity shown is slightly less than 2 GB, because RAID reserves some space for its own metadata.
2. Implement RAID 1:
Preparations:
Two 1 GB disk partitions.
[root@local ~]# mdadm -C /dev/md1 -a yes -l 1 -n 2 /dev/sdb{5,6}
View status information
[root@local ~]# cat /proc/mdstat
Personalities : [raid0] [raid1]
md1 : active raid1 sdb6[1] sdb5[0]
      987840 blocks [2/2] [UU]
md0 : active raid0 sdb2[1] sdb1[0]
      1975744 blocks 64k chunks
unused devices: <none>
Create a file system:
[root@local ~]# mke2fs -j /dev/md1
View detailed information of a specified RAID device
[root@local ~]# mdadm -D /dev/md1
/dev/md1:
        Version : 0.90
  Creation Time : Tue Mar  3 17:26:24 2015
     Raid Level : raid1
     Array Size : 987840 (964.85 MiB 1011.55 MB)
  Used Dev Size : 987840 (964.85 MiB 1011.55 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Tue Mar  3 17:30:22 2015
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : f6a3844f:282828af:45d573d8:5f0aa269
         Events : 0.4

    Number   Major   Minor   RaidDevice State
       0       8       21        0      active sync   /dev/sdb5
       1       8       22        1      active sync   /dev/sdb6
Simulate a disk failure (manage mode):
[root@local ~]# mdadm /dev/md1 --fail /dev/sdb5
mdadm: set /dev/sdb5 faulty in /dev/md1
Remove the damaged disk (-r is short for --remove):
[root@local ~]# mdadm /dev/md1 -r /dev/sdb5
mdadm: hot removed /dev/sdb5
Add a replacement disk (its partition should match the failed one):
[root@local ~]# mdadm /dev/md1 -a /dev/sdb7
mdadm: added /dev/sdb7
Stop the disk array:
[root@local ~]# mdadm -S /dev/md1
mdadm: stopped /dev/md1
Reassemble the disk array:
[root@local ~]# mdadm -A --run /dev/md1 /dev/sdb5 /dev/sdb6
mdadm: /dev/md1 has been started with 1 drive (out of 2).
Save the scanned array configuration so it can be assembled automatically in the future:
[root@local ~]# mdadm -D --scan > /etc/mdadm.conf
3. Implement RAID 5:
Preparations:
Three disk partitions of equal size.
[root@local ~]# mdadm -C /dev/md5 -l 5 -n 3 /dev/sdb{8,9,10}