Introduction to RAID (Redundant Array of Independent Disks) on Linux

Source: Internet
Author: User

RAID (Redundant Array of Independent Disks) technology was proposed at the University of California, Berkeley, in 1987. The original idea was to combine small, inexpensive disks to replace large, expensive ones, while providing enough data protection that a disk failure would not cut off access to the data. Although a RAID array contains multiple hard disks, it appears to the operating system as a single large storage device. RAID exploits the combined strengths of multiple drives: it can increase disk speed, enlarge capacity, and provide fault tolerance to keep data safe; it is easy to manage, and if any one disk fails the array can continue to operate without being affected by the damaged drive.

Common RAID levels

RAID 0

RAID 0, also known as stripe or striping, offers the highest storage performance of all RAID levels. RAID 0 improves performance by spreading consecutive data across multiple disks, so that data requests can be executed in parallel, with each disk handling its own portion of the request. This parallelism makes full use of the bus bandwidth and significantly improves overall disk access performance.

Usage: with 2 disks, each disk stores half of the data
Advantages: fast reads and writes; suitable for non-critical data
Disadvantage: no redundancy, so one failed disk destroys the whole array
Free space: N * min(device size)
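The chunk-by-chunk distribution described above can be sketched with ordinary files standing in for disks. This is a toy model to show the round-robin placement, not real RAID; the file names are made up:

```shell
# Toy RAID 0: stripe a file across two "disk" files in 4-byte chunks.
printf 'AAAABBBBCCCCDDDDEEEEFFFFGGGGHHHH' > data.bin
: > disk0.img
: > disk1.img
chunk=4
size=$(wc -c < data.bin)
i=0
while [ $((i * chunk)) -lt "$size" ]; do
    # chunk i goes to disk (i mod 2), round-robin
    dd if=data.bin bs=$chunk skip=$i count=1 2>/dev/null >> "disk$((i % 2)).img"
    i=$((i + 1))
done
cat disk0.img   # even-numbered chunks: AAAACCCCEEEEGGGG
cat disk1.img   # odd-numbered chunks:  BBBBDDDDFFFFHHHH
```

Because the two "disks" can in principle be read at the same time, sequential throughput roughly doubles, which is the whole point of striping.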

RAID 1

RAID 1 provides data redundancy through disk mirroring: data is duplicated on pairs of independent disks. When the original disk is busy, data can be read directly from the mirrored copy, so RAID 1 can improve read performance. RAID 1 has the highest unit cost of any RAID level, but it provides high data security and availability. When a disk fails, the system automatically switches reads and writes to the mirrored disk, with no need to reconstruct the failed data.

Usage: with 2 disks, each disk holds a complete copy of the data
Advantages: improved read performance, data redundancy
Disadvantage: write performance drops slightly
Free space: 1 * min(device size)
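Mirroring can be sketched the same way, with files standing in for disks (again a toy model, not real RAID): every write goes to both copies, and a read can be served from either one.

```shell
# Toy RAID 1: every write is duplicated to both "disks".
printf 'important data' | tee mirror0.img > mirror1.img
# Both copies are identical, so either one can serve reads:
cmp -s mirror0.img mirror1.img && echo "copies match"
# If one "disk" is lost, the data survives on the other:
rm mirror0.img
cat mirror1.img   # important data
```

This is also why usable space is only 1 * min(device size): every byte is stored twice.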

RAID 4

RAID 4 is an independent-access array with a dedicated parity (check) disk, and it is very similar to RAID 3. In an independent-access array each disk operates on its own, so different I/O requests can be served in parallel, which improves throughput.

Usage: with 3 disks, 2 hold data and 1 is a dedicated parity (check) disk used to reconstruct lost data
Advantages: redundancy, improved read/write performance, tolerates one failed disk
Disadvantage: the dedicated parity disk carries too much load and becomes a bottleneck
Free space: (N-1) * min(device size)
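The parity disk holds the XOR of the data disks, which is what makes reconstruction possible: XOR-ing the surviving data with the parity yields the missing data. A minimal sketch with single bytes (shell arithmetic standing in for block I/O):

```shell
# Toy parity: P = D1 XOR D2, stored on the dedicated parity disk.
d1=$(( 0x5A ))
d2=$(( 0xC3 ))
p=$(( d1 ^ d2 ))
# If the disk holding d1 fails, its content is recovered from d2 and p:
recovered=$(( p ^ d2 ))
echo "$recovered $d1"    # both print 90 (0x5A)
```

The same XOR relationship underlies RAID 5 and (as one of two checks) RAID 6; only the placement of the parity blocks differs.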

RAID 5

RAID 5 is a storage solution that balances storage performance, data security, and storage cost, and can be understood as a compromise between RAID 0 and RAID 1. RAID 5 provides data security for the system, though with a lower level of protection than mirroring and with higher disk space utilization. RAID 5 has read speeds similar to RAID 0; because each write must also update one piece of parity information, writes are slower than writing to a single disk. At the same time, because multiple data blocks share one parity block, RAID 5 uses disk space more efficiently than RAID 1 and its storage cost is relatively low, which makes it a widely used solution.

Usage: with 3 disks, the parity role rotates: in each stripe, 2 disks hold data and 1 holds parity
Advantages: the parity load is spread across all disks, relieving the bottleneck of RAID 4
Disadvantage: tolerates only one failed disk
Free space: (N-1) * min(device size)
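The "take turns" rotation can be sketched. Assuming the left-symmetric layout that mdadm uses by default, the parity block for stripe s on an n-disk array sits on disk (n - 1 - s mod n); the disk count here is a hypothetical example:

```shell
# Which disk holds the parity for each stripe (left-symmetric layout)?
n=3                             # number of disks in the array
for s in 0 1 2 3 4 5; do
    p=$(( n - 1 - s % n ))      # parity rotates backwards across the disks
    echo "stripe $s: parity on disk $p"
done
```

Every disk therefore holds parity for one stripe in every n, so no single disk becomes the parity hot spot that RAID 4 suffers from.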

RAID 6

RAID 6 is designed to further strengthen data protection on top of RAID 5; it is in effect an extended RAID 5 level. Unlike RAID 5, which keeps a single XOR check block per stripe, RAID 6 adds a second, independent check block for each stripe. The check data for a given disk's blocks is not stored on that same disk but is interleaved across the others, so every data block is protected by two verification barriers, and the data redundancy of RAID 6 is accordingly very good. However, because of the added check block, write efficiency is worse than RAID 5, the design of the control system is more complex, and the second check block also reduces the effective storage space.

Usage: with 4 disks, 2 hold data and 2 hold check information used to reconstruct lost data
Advantages: better fault tolerance than RAID 5 (tolerates two failed disks)
Disadvantage: write performance is worse than RAID 5
Free space: (N-2) * min(device size)

RAID 10

RAID 10 is a combination of RAID 1 and RAID 0: it stripes data across mirrored pairs, so it inherits RAID 0's speed and RAID 1's safety. RAID 1 provides the redundant backup here, while RAID 0 is responsible for striping the data for reads and writes. In a typical layout, the data path is first split and striped (the data is segmented), and each of those paths is then divided into two mirrored copies.

Usage: with 4 disks, 2 RAID 1 mirrored pairs are combined into a RAID 0 stripe
Advantages: improved reads and writes, redundancy; each RAID 1 pair can tolerate one failed disk
Free space: (N/2) * min(device size)
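The "Free space" formulas above can be collected into one sketch. Assuming 4 disks of 1024 MB each (hypothetical numbers), usable capacity per level works out as:

```shell
# Usable capacity for each level, given n disks of size min (in MB).
n=4
min=1024
echo "RAID 0:   $(( n * min )) MB"           # all space holds data
echo "RAID 1:   $(( min )) MB"               # every disk is a full copy
echo "RAID 4/5: $(( (n - 1) * min )) MB"     # one disk's worth of parity
echo "RAID 6:   $(( (n - 2) * min )) MB"     # two disks' worth of parity
echo "RAID 10:  $(( n / 2 * min )) MB"       # half the disks are mirrors
```

Note that every formula uses the size of the smallest member disk: mixing sizes wastes the extra space on the larger disks.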

Software implementation of RAID 5 with mdadm

mdadm common options

Supported RAID levels: LINEAR, RAID0, RAID1, RAID4, RAID5, RAID6, RAID10
Modes:
-C: create mode
|---- -n #: use # block devices to build this RAID
|---- -l #: the RAID level to create
|---- -a {yes|no}: automatically create a device file for the target RAID device
|---- -c chunk_size: the chunk (stripe unit) size
|---- -x #: the number of spare disks
-D: detail mode, displays RAID details
In manage mode:
-f: mark the specified disk as faulty
-a: add a disk
-r: remove a disk

The experiment first creates four 1 GB partitions: 2 to store data, 1 to store parity, and 1 as a spare, and then builds a RAID 5 array from them.

# Create 4 partitions; note that the partition type must be fd
[root@localhost sh]# fdisk /dev/sdb
Command (m for help): t
Partition number (1-7, default 7): 5
Hex code (type L to list all codes): fd
Changed type of partition "Linux" to "Linux raid autodetect"

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    16779263     8388608    5  Extended
/dev/sdb5            4096     2101247     1048576   fd  Linux raid autodetect
/dev/sdb6         2103296     4200447     1048576   fd  Linux raid autodetect
/dev/sdb7         4202496     6299647     1048576   fd  Linux raid autodetect
/dev/sdb8         6301696     8398847     1048576   fd  Linux raid autodetect

View the software RAID status on this machine

[root@localhost sh]# cat /proc/mdstat
Personalities :
unused devices: <none>

Create

[root@localhost sh]# mdadm -C /dev/md0 -a yes -n 3 -x 1 -l 5 /dev/sdb{5,6,7,8}
# -x 1: one spare disk    -l 5: RAID 5    -n 3: three active disks
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

Check Again

[root@localhost sh]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdb7[4] sdb8[3](S) sdb6[1] sdb5[0]
      2095104 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
# active: the array is up and usable; (S) marks the spare disk

Format and mount

[root@localhost sh]# mke2fs -t ext4 /dev/md0
[root@localhost sh]# mount /dev/md0 /mnt/t2
[root@localhost sh]# df -h /mnt/t2/      # about 2 GB usable
Filesystem      Size  Used Avail Use% Mounted on
/dev/md0        2.0G  6.0M  1.9G    1% /mnt/t2

View the array details:

[root@localhost t2]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Mon Apr 23 03:14:26 2018
        Raid Level : raid5
        Array Size : 2095104 (2046.00 MiB 2145.39 MB)
     Used Dev Size : 1047552 (1023.00 MiB 1072.69 MB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Mon Apr 23 03:20:48 2018
             State : clean
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : 24dec4a5:d57dbe03:f9aa2729:b2997a05
            Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       21        0      active sync   /dev/sdb5
       1       8       22        1      active sync   /dev/sdb6
       4       8       23        2      active sync   /dev/sdb7

       3       8       24        -      spare   /dev/sdb8

Manually mark a disk as faulty

[root@localhost t2]# mdadm /dev/md0 -f /dev/sdb6
mdadm: set /dev/sdb6 faulty in /dev/md0
[root@localhost t2]# cat /proc/mdstat
# At this stage you can watch the data being rebuilt onto the spare, until the array is active again
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdb7[4] sdb8[3] sdb6[1](F) sdb5[0]
      2095104 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>

Observe again

[root@localhost t2]# mdadm -D /dev/md0
# The output now shows: 1  8  22  -  faulty  /dev/sdb6   (the failed disk),
# while sdb8 has moved from spare to normal use.
# If another disk is removed, "State : clean" changes to "clean, degraded".

Manually remove the failed partition

[root@localhost t2]# mdadm /dev/md0 -r /dev/sdb6
mdadm: hot removed /dev/sdb6 from /dev/md0

Add again

[root@localhost t2]# mdadm /dev/md0 -a /dev/sdb6
mdadm: added /dev/sdb6

Stop the RAID 5 array

[root@localhost mnt]# umount /mnt/t2
[root@localhost mnt]# mdadm -S /dev/md0
mdadm: stopped /dev/md0

