Basic Linux RAID operations
RAID stands for Redundant Array of Inexpensive Disks; the name comes from the paper "A Case for Redundant Arrays of Inexpensive Disks (RAID)", published by UC Berkeley in 1987. The main idea of RAID is to combine several small physical disks into one large-capacity virtual storage device, improving the read and write performance of disk storage while providing redundancy to improve the safety of the stored data.
Depending on the intended use, RAID is divided into different levels; the most commonly used are RAID-0, RAID-1, RAID-5, and RAID-10.
RAID-0 is also called strip mode (striping). It distributes consecutive data across multiple disks, so at least two disks are required. When the system issues a data request, several disks can serve it concurrently, each disk handling its own portion. This kind of parallel access makes full use of the bus bandwidth and significantly improves overall disk access performance. Because reads and writes are performed in parallel across the devices, both read and write performance increase, which is the main reason for using RAID-0. However, RAID-0 has no data redundancy: if one drive fails, no data can be recovered. Therefore, RAID-0 is generally used in applications that demand high read performance but have no requirement for data safety.
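For comparison with the RAID-5 array built later in this article, a minimal sketch of creating a two-disk RAID-0 stripe with mdadm could look like the following (the device names /dev/sdb and /dev/sdc and the array name /dev/md0 are placeholders, not the disks used below):
mdadm -C /dev/md0 -l 0 -n 2 /dev/sdb /dev/sdc   # -l 0 selects striping, -n 2 uses two member disks
mkfs.ext4 /dev/md0                              # put a filesystem on the stripe and it is ready to mount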
RAID-1 is also called mirroring. RAID-1 requires an even number of disks (a multiple of two, at least two) and can use zero or more spare disks. Every time data is written, it is written to the mirror disk at the same time. This kind of array is highly reliable, but its effective capacity is only half of the total capacity. The disks should also be the same size; otherwise the usable capacity is limited by the smallest disk. This approach fully backs up the data: write speed drops slightly and disk utilization is only 1/2, but the advantages are good fault tolerance and a large improvement in read performance.
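A minimal sketch of a two-way mirror with one hot spare, again with placeholder device names:
mdadm -C /dev/md1 -l 1 -n 2 -x 1 /dev/sdb /dev/sdc /dev/sdd   # two mirrored members plus one spare (-x 1)
If either active member fails, the spare is pulled in automatically and the mirror is rebuilt onto it.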
RAID-5 has a data read speed close to RAID-0, and its disk space utilization is higher than RAID-1. It is a compromise between RAID-0 and RAID-1: a storage solution that balances performance, data safety, and cost. Because its storage cost is relatively low, it is a widely used solution.
Data distribution in a RAID-5 array is similar to RAID-0: data is striped across all disks. However, RAID-5 does not have a dedicated parity disk; instead, the parity data is rotated cyclically across all of the disks, so any N-1 disks hold a complete copy of the data and the equivalent of one disk's capacity is consumed by parity information. RAID-5 can therefore keep serving data normally when one disk goes offline, without affecting data integrity. After the damaged disk is replaced, the array automatically uses the remaining data and parity information to rebuild the contents of that disk, maintaining the high reliability of RAID-5.
RAID-5 requires at least three disks and can use zero or more spare disks. Its data safety is lower than RAID-1, and its write speed is slower than a single disk. If two or more disks go offline at the same time, or the RAID metadata is corrupted, the array fails and the data must be reorganized. In addition, all disks in a RAID-5 array should be the same size; when they are not, the smallest disk determines the usable capacity per member. It is also recommended that the disks have the same speed, otherwise performance suffers.
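As a quick worked example, with three active disks of roughly 128 GB each (the size used later in this article), the usable capacity of a RAID-5 array is roughly (3 - 1) x 128 GB = 256 GB: the equivalent of one disk is consumed by the rotating parity. This matches the Array Size of about 257.56 GB that mdadm -D reports for the array built below.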
RAID 1+0 is also known as RAID-10. It combines the RAID-1 and RAID-0 standards: disks are first mirrored (RAID-1) and the mirrors are then striped (RAID-0), providing redundancy while increasing speed.
Data is split into stripes and read from or written to multiple disks in parallel, while every disk has a mirror copy for redundancy. RAID-10 therefore combines the extraordinary speed of RAID-0 with the high data reliability of RAID-1, but CPU usage is higher and disk utilization is relatively low. Because it offers the high read/write efficiency of RAID-0 together with the strong data protection and recovery capability of RAID-1, RAID-10 has become a cost-effective level, and almost all RAID controller cards currently support it.
However, RAID-10's storage utilization is as low as RAID-1's, only 50%. RAID-10 is thus a highly reliable and efficient disk structure: it is both a striped structure and a mirrored structure, achieving high performance and high reliability at the same time. RAID-10 provides better performance than RAID-5, but it is more expensive to use.
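A sketch of creating a four-disk RAID-10 array with mdadm (device names are placeholders; the md raid10 personality needs at least four members for a classic mirrored-stripe layout):
mdadm -C /dev/md10 -l 10 -n 4 /dev/sd{b,c,d,e}   # stripes across two mirrored pairs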
Because RAID is so widely used, recent kernels ship with drivers for common RAID cards by default. In Linux, software RAID devices appear as /dev/mdN, where N is a number, and the mdadm command is used to configure them.
Currently, RAID comes in two forms: hardware-based RAID and software-based RAID. In Linux, the RAID function can be implemented entirely in software. Because the RAID function is implemented by software, configuration is flexible and management is convenient. With software RAID you can likewise combine several physical disks into one larger virtual device, gaining both better performance and data redundancy.
Linux soft RAID is a software RAID configured on the Linux operating system. Although it can also protect data, in a real production environment we recommend using a disk array with hardware RAID in the storage layer to provide disk redundancy. Hardware-based RAID solutions are superior to software RAID in performance and serviceability, and they are also safer with respect to detecting and repairing multi-bit errors, automatically detecting failed disks, and rebuilding the array.
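Before building a software array it is worth confirming that the kernel has md support available; on most current distributions the RAID personalities are built in or loaded automatically, and either of these commands will show them:
cat /proc/mdstat    # lists the loaded personalities, e.g. [raid6] [raid5] [raid4]
lsmod | grep raid   # shows raid modules such as raid456, raid1, raid10 if they are loaded as modules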
Summary of basic operation commands:
Creation mode:
-C: create an array (e.g. /dev/md0)
-n #: number of disks used to create the raid
-l #: raid level to create
-c #: chunk size
-x #: number of spare (idle) disks
-a {yes|no}: automatically create the device file for the target raid device
Management mode:
-f: mark the specified disk as faulty (simulate damage)
-a: add a disk to the raid
-r: remove a disk
-S: stop the array
-A -s: activate (assemble) the arrays
-D -s: generate the configuration file (mdadm -D -s > /etc/mdadm.conf)
mdadm --zero-superblock /dev/sdb1 (delete the raid information from a member disk)
Monitor mode:
-F: monitor the array (generally not used by hand very often)
Assembly mode:
Soft raid metadata lives on the member disks themselves, so when the original system is damaged the raid can be re-assembled on a new system.
-A (for example, mdadm -A /dev/md1 /dev/sdb5 /dev/sdb6)
Grow mode:
Used to add disks and resize an array (see the sketch after this summary).
-G (for example, [root@localhost ~]# mdadm -G /dev/md2 -n 4)
View:
mdadm -D /dev/md# (display raid array details)
cat /proc/mdstat (view raid status)
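As referenced under grow mode above, here is a sketch of expanding an existing array and then the filesystem on it; the array /dev/md2, the new disk /dev/sdf, and the ext4 filesystem are all assumptions for illustration:
mdadm /dev/md2 -a /dev/sdf    # add the new disk to the array as a spare first
mdadm -G /dev/md2 -n 4        # grow the array from 3 to 4 active members (the reshape runs in the background; some versions need --backup-file=)
resize2fs /dev/md2            # once the reshape finishes, grow the ext4 filesystem into the new space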
--------------------------------------------------------------------------------
Next, let's try the actual operations!
1. After adding four disks, check that the system sees them:
[root@bkjia.com ~]# fdisk -l
Disk /dev/sda: 128.8 GB, 128849018880 bytes, 251658240 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000271fa
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 976895 487424 83 Linux
/dev/sda2 976896 196288511 97655808 83 Linux
/dev/sda3 196288512 200194047 1952768 82 Linux swap / Solaris
/dev/sda4 200194048 251658239 25732096 5 Extended
/dev/sda5 200196096 239257599 19530752 83 Linux
Disk /dev/sdd: 128.8 GB, 128849018880 bytes, 251658240 sectors (fourth empty disk)
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdc: 128.8 GB, 128849018880 bytes, 251658240 sectors (third empty disk)
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdb: 128.8 GB, 128849018880 bytes, 251658240 sectors (second empty disk)
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sde: 128.8 GB, 128849018880 bytes, 251658240 sectors (fifth empty disk)
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
2. Create a raid array from the idle disks.
[root@bkjia.com ~]# mdadm -C /dev/md0 -n 3 -l 5 -x 1 /dev/sd{b,c,d,e}
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
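At this point it is common practice to record the array definition so that it is reassembled consistently after a reboot, and to watch the initial synchronization; a sketch using the config-file command from the summary above (on Debian-based systems the file is usually /etc/mdadm/mdadm.conf instead):
mdadm -D -s > /etc/mdadm.conf   # save the array definition to the mdadm configuration file
watch -n 1 cat /proc/mdstat     # follow the progress of the initial resync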
3. Check the status of the created raid array.
[root@bkjia.com ~]# cat /proc/mdstat # display the array status
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd[4] sde[3](S) sdc[1] sdb[0]
251527168 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
bitmap: 0/1 pages [0KB], 65536KB chunk # here we can see that the array has already finished synchronizing!
[root@bkjia.com ~]# mdadm -D /dev/md0 # display details of our array
/dev/md0:
Version : 1.2
Creation Time : Sat Jun 4 10:17:02 2016
Raid Level : raid5 # raid level
Array Size : 251527168 (239.88 GiB 257.56 GB)
Used Dev Size : 125763584 (119.94 GiB 128.78 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Sat Jun 4 10:27:34 2016
State : clean # normal
Active Devices : 3 # number of active member disks
Working Devices : 4 # total number of working disks
Failed Devices : 0 # no damaged disks
Spare Devices : 1 # number of spare disks
Layout : left-symmetric
Chunk Size : 512K
Name : bkjia.com:0 (local to host bkjia.com)
UUID : 0ad970f7:f655d497:bbeeb6ad:aca1241d
Events : 127
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 32 1 active sync /dev/sdc
4 8 48 2 active sync /dev/sdd
3 8 64 - spare /dev/sde # this disk is the idle spare
4. Format the array.
[root@bkjia.com ~]# mkfs.ext4 /dev/md0
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
15720448 inodes, 62881792 blocks
3144089 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2210398208
1919 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done # formatting succeeded!
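A quick sanity check on the mkfs output: with a 512 KB chunk and a 4 KB ext4 block, the stride is 512 KB / 4 KB = 128 blocks, and with 3 - 1 = 2 data disks the stripe width is 2 x 128 = 256 blocks, exactly the values mke2fs reports above. mke2fs normally reads these values from the md device automatically, so no extra options are needed here.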
5. Mount the device and check that it works normally.
[root@bkjia.com ~]# mkdir /md0dir
[root@bkjia.com ~]# mount /dev/md0 /md0dir/
[root@bkjia.com ~]# mount
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=100136k,mode=700)
/dev/md0 on /md0dir type ext4 (rw,relatime,seclabel,stripe=256,data=ordered) # the temporary mount succeeded
[root@bkjia.com ~]# vim /etc/fstab # set the device to mount automatically at boot
# /etc/fstab
# Created by anaconda on Wed May 11 18:44:18 2016
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=267aae0a-088b-487f-a470-fec8fcdf772f / xfs defaults 0 0
UUID=d8d9403c-8fa1-4679-be9b-8e236d3ae57b /boot xfs defaults 0 0
UUID=7f62d6d9-9eda-4871-b2d7-2cbd2bc4cc89 /testdir xfs defaults 0 0
UUID=abba10f4-18b3-4bc3-8cca-22ad619fadef swap swap defaults 0 0
/dev/md0 /md0dir ext4 defaults 0 0
~
[root@bkjia.com ~]# mount -a # mount everything in fstab that is not yet mounted
[root@bkjia.com ~]# cd /md0dir/ # creating a file in the mount directory works normally
[root@bkjia.com md0dir]# ls
lost+found
[root@bkjia.com md0dir]# touch 1.txt
[root@bkjia.com md0dir]# ls
1.txt lost+found
[root@bkjia.com md0dir]#
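A small aside: mounting by filesystem UUID is generally more robust than using /dev/md0, because md device numbering can change between boots. A sketch of the alternative fstab entry (the UUID value is a placeholder to be taken from blkid):
blkid /dev/md0                                        # print the UUID of the ext4 filesystem on the array
UUID=<uuid-from-blkid>  /md0dir  ext4  defaults  0 0  # fstab line using the UUID instead of /dev/md0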
6. Now let's simulate a disk failure and see what the raid will do.
[root@bkjia.com md0dir]# mdadm /dev/md0 -f /dev/sdd # mark /dev/sdd as faulty
mdadm: set /dev/sdd faulty in /dev/md0
[root@bkjia.com md0dir]# mdadm -D /dev/md0 # view raid information
/dev/md0:
Version : 1.2
Creation Time : Sat Jun 4 10:17:02 2016
Raid Level : raid5
Array Size : 251527168 (239.88 GiB 257.56 GB)
Used Dev Size : 125763584 (119.94 GiB 128.78 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Sat Jun 4 11:55:39 2016
State : clean, degraded, recovering
Active Devices : 2
Working Devices : 3
Failed Devices : 1 # the state has changed here; this counts the failed disks
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Rebuild Status : 0% complete
Name : bkjia.com:0 (local to host bkjia.com)
UUID : 0ad970f7:f655d497:bbeeb6ad:aca1241d
Events : 129
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 32 1 active sync /dev/sdc
3 8 64 2 spare rebuilding /dev/sde
# at this point the spare /dev/sde starts rebuilding the data
4 8 48 - faulty /dev/sdd # /dev/sdd is marked as damaged
[root@bkjia.com md0dir]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd[4](F) sde[3] sdc[1] sdb[0]
251527168 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
[>....................] recovery = 2.2% (2847492/125763584) finish=10.0min speed=203392K/sec # data synchronization has started!
bitmap: 0/1 pages [0KB], 65536KB chunk
unused devices: <none>
[root@bkjia.com md0dir]# cd
[root@bkjia.com ~]# cd /md0dir/
[root@bkjia.com md0dir]# ls
1.txt lost+found
[root@bkjia.com md0dir]# touch 2.txt
[root@bkjia.com md0dir]# ls
1.txt 2.txt lost+found # everything still works normally
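To be notified of this kind of failure automatically instead of checking by hand, mdadm's monitor mode (the -F option from the summary) can run as a daemon; a sketch, assuming local mail delivery to root works:
mdadm --monitor --scan --daemonise --delay=300 --mail=root   # check all arrays every 300 seconds and mail root on failure or degradation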
7. Next, we will remove the damaged disk.
[root@bkjia.com md0dir]# mdadm /dev/md0 -r /dev/sdd
mdadm: hot removed /dev/sdd from /dev/md0
[root@bkjia.com md0dir]# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sat Jun 4 10:17:02 2016
Raid Level : raid5
Array Size : 251527168 (239.88 GiB 257.56 GB)
Used Dev Size : 125763584 (119.94 GiB 128.78 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Sat Jun 4 12:07:12 2016
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : bkjia.com:0 (local to host bkjia.com)
UUID : 0ad970f7:f655d497:bbeeb6ad:aca1241d
Events : 265
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 32 1 active sync /dev/sdc
3 8 64 2 active sync /dev/sde
# now only three disks are left in the raid array
8. If another disk fails now, the data will be at risk, so let's add a spare disk back to the array.
[root@bkjia.com md0dir]# mdadm /dev/md0 -a /dev/sdd # since there are no extra disks available, add back the one that was just removed
mdadm: re-added /dev/sdd
[root@bkjia.com md0dir]# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sat Jun 4 10:17:02 2016
Raid Level : raid5
Array Size : 251527168 (239.88 GiB 257.56 GB)
Used Dev Size : 125763584 (119.94 GiB 128.78 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Sat Jun 4 12:11:54 2016
State : clean
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Name : bkjia.com:0 (local to host bkjia.com)
UUID : 0ad970f7:f655d497:bbeeb6ad:aca1241d
Events : 266
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 32 1 active sync /dev/sdc
3 8 64 2 active sync /dev/sde
4 8 48 - spare /dev/sdd # OK, we have a spare disk again
Next, let's stop the raid device. Because it was mounted earlier, we unmount it first and then stop the array.
[root@bkjia.com ~]# umount /md0dir/
[root@bkjia.com ~]# mdadm -S /dev/md0
mdadm: stopped /dev/md0
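If the array were being retired permanently rather than just stopped, the remaining steps would be to remove the /dev/md0 lines from /etc/fstab and /etc/mdadm.conf and to wipe the member superblocks, as mentioned in the command summary; a sketch:
mdadm --zero-superblock /dev/sd{b,c,d,e}   # erase the RAID metadata from each former member disk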