First, let's look at the features and principles of the various RAID volumes.
1. RAID 0 features: With striping, data is split across several disks: it is divided into chunks, and each chunk is written to a different disk. As a result, each disk's workload is reduced, which speeds up data transfer. RAID 0 makes storage more responsive, especially for e-mail, database, and Internet applications. A minimum of two drives is required to implement RAID 0. Advantages: improved system performance by distributing the I/O load across multiple drives, and simple implementation. It is important to note that RAID 0 provides no data protection and is not suitable for critical data.
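As a toy sketch of the striping rule just described (our own illustration, not real RAID code): with n member disks, chunk i lands on disk i mod n.

```shell
# Toy illustration of RAID-0 striping: with ndisks members, chunk i is
# written to disk (i mod ndisks), spreading sequential I/O over all disks.
ndisks=3
for chunk in 0 1 2 3 4 5; do
    echo "chunk $chunk -> disk $((chunk % ndisks))"
done
```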
2. RAID 1 features: RAID 1 is implemented by disk mirroring and is primarily used to ensure data reliability. The same data is replicated to different disks, so if one disk fails, the data can still be found on another disk in the array, making recovery easy. Mirroring not only creates redundant data and provides high availability, it also keeps critical applications up and running. Advantages: read performance is improved, while write performance is no different from a single disk; 100% data redundancy means there is no need to rebuild data after a disk failure. It is important to note the inefficient use of disk capacity: the highest overhead of all RAID types (100%).
3. RAID 10 features: RAID 10 is a combination of RAID 1 and RAID 0. This configuration requires at least 4 hard drives and offers an excellent balance of performance, protection, and capacity among the common RAID levels. RAID 10 consists of mirrored pairs of disks whose data is striped across the array. In most cases RAID 10 can withstand the failure of multiple disks (one per mirrored pair), so the system keeps running and data is least likely to be lost. Advantages: the same redundancy as RAID 1 (mirroring), making it an ideal choice for data protection. It is important to note that the cost can be high because of the mirrored disks.
4. RAID 5 features: RAID 5 maintains data redundancy through a technique called parity. As data is striped across multiple disks, parity data is also computed and distributed across all the disks in the array. The parity data is used to maintain data integrity and to rebuild after a disk failure: if one disk in the array fails, the lost data can be rebuilt from the parity data on the remaining disks. A RAID 5 configuration requires at least 3 hard drives. Advantages: the most efficient use of disk capacity among the redundant RAID configurations, while maintaining good read and write performance. It is important to note that a disk failure reduces throughput, and rebuilding after a failure takes longer than in a mirrored configuration.
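The parity mechanism can be seen in miniature with plain shell arithmetic (a toy demo of the XOR principle, not of mdadm itself): the parity block is the XOR of the data blocks, so any one missing block can be rebuilt by XOR-ing the survivors with the parity.

```shell
# Toy demo of RAID-5 parity: parity = XOR of all data blocks, so any
# single lost block can be rebuilt from the remaining blocks plus parity.
d1=170 d2=85 d3=204              # three "data blocks" (arbitrary byte values)
parity=$((d1 ^ d2 ^ d3))         # the parity block stored on the array

rebuilt=$((d1 ^ d3 ^ parity))    # pretend the disk holding d2 failed
echo "lost=$d2 rebuilt=$rebuilt" # the rebuilt value matches d2
```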
5. RAID 50 features: RAID 50 is a combination of RAID 5 and RAID 0: data, including parity information, is striped across the disks of each RAID 5 sub-group, and each RAID 5 sub-group requires at least three hard disks. RAID 50 has higher fault tolerance because one disk in each sub-group can fail without causing data loss. And because the parity is kept within each RAID 5 sub-group, rebuild speed is greatly improved. Advantages: higher fault tolerance with the potential for faster read rates. It is important to note that a disk failure still reduces throughput, and rebuilding after a failure takes longer than in a mirrored configuration.
6. RAID 6 features:
RAID 6 performance:
(1) Random read: very good (when using large data blocks).
(2) Random write: poor, because every write must update two independent parity blocks in addition to the data itself.
(3) Sequential read: good (when using small data blocks).
(4) Sequential write: fair.
(5) Advantages: fast read performance and higher fault tolerance (it survives two simultaneous disk failures).
(6) Disadvantages: slow write speeds, and RAID 6 controllers are more complex and expensive to design.
7. RAID 60 features:
Higher fault tolerance (it can survive disk failures in each RAID 6 sub-group) and higher read performance. The technology still has some rough edges, is not fully mature, and has few users at present.
Now for a quick summary of the principles, for easy reference:
RAID 0: two or more disks of the same size (high-concurrency reads and writes; not easy to extend).
Total capacity equals the sum of all disks; the disks must be the same size.
Disadvantage: all data is lost as soon as one disk is damaged.
RAID 1: two or more disks of the same size (high reliability).
Total capacity = 1/2 of the combined disk capacity (for two disks); the disks must be the same size.
If a disk is damaged the data can still be recovered; the broken mirrored volume degrades to a simple volume.
RAID 5: three or more disks of the same size (high read/write performance, high safety).
Total capacity = (n-1) disks, i.e. the sum of all disk capacities minus one disk.
The disks must be the same size; data can be restored after one disk is damaged.
RAID 6: four or more disks of the same size (very high reliability).
Total capacity = the sum of all disk capacities minus two disks.
Can survive two damaged disks at a time.
Write performance is inferior to RAID 5.
RAID 1+0: four or more disks of the same size (high read/write performance, high reliability).
Total capacity = half of the combined disk capacity.
Data is lost only if both disks of the same mirrored pair fail.
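The capacity rules in this summary can be condensed into one small shell function (raid_capacity is our own helper name, not a system command); n is the number of disks and s the size of each in GB.

```shell
# Usable capacity in GB for n same-size disks of s GB, per the rules above.
raid_capacity() {
    level=$1; n=$2; s=$3
    case $level in
        0)  echo $((n * s)) ;;           # sum of all disks
        1)  echo "$s" ;;                 # one disk's worth (full mirror)
        5)  echo $(((n - 1) * s)) ;;     # total minus one disk (parity)
        6)  echo $(((n - 2) * s)) ;;     # total minus two disks (dual parity)
        10) echo $((n / 2 * s)) ;;       # half of the total (mirrored pairs)
        *)  echo "unknown level" >&2; return 1 ;;
    esac
}
raid_capacity 5 3 20    # three 20 GB disks in RAID 5: prints 40
```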
------------------------------------------------------------------------------------------------------------------
Next is the experimental section; I'll pick a few representative volumes to configure.
Experimental environment: a machine running a RedHat-series Linux (CentOS 6.5); virtual machines are used here, with 12 disks added.
Operating system: CentOS 6.5
IP address: 192.168.10.100
Hostname: zred
Disks 1-12 [20 GB each]: /dev/sd[b-m]
=== RAID 5 ===
Creating a RAID 5 array requires a minimum of 3 disks (you can add more); here we use the mdadm package to create a software RAID.
mdadm is a package that allows us to configure and manage RAID devices under Linux.
One. Here we use disks b-d. Your system may be missing the mdadm package; if so, install it with the command for your Linux distribution:
# yum install mdadm        [on RedHat/CentOS systems]
# apt-get install mdadm    [on Debian systems]
Two. First use the fdisk command to list the hard drives we added to the system.
[root@zred ~]# fdisk -l | grep sd
Three. Now check whether the three disks already contain any RAID blocks, using the following command.
[root@zred ~]# mdadm -E /dev/sd[b-d]
The output shows that no superblocks have been detected, so no RAID is defined on these three disks. We can now start creating one.
Four. Partition the disks.
The disks must be partitioned before the RAID is created, so use the fdisk command to partition them before proceeding to the next step.
Follow the steps below to create a partition on the /dev/sdb hard disk:
[root@zred ~]# fdisk /dev/sdb    // start creating
1. Press n to create a new partition.
2. Then press p to select a primary partition (we choose primary because no partitions have been defined yet).
3. Next choose partition number 1 (the default).
4. For the cylinder range we do not specify a size: the RAID should use the whole disk, so just press Enter twice to accept the defaults and allocate the entire capacity.
5. Then press p to print the partition just created.
6. Press t to change the partition type.
7. At the prompt, press L to list all available types.
8. Enter fd to set the type to Linux raid autodetect.
9. Then press p again to review the changes.
10. Save the changes with w.
Repeat the above steps so that sdc and sdd also get partitions.
Afterwards, check again:
[root@zred ~]# mdadm -E /dev/sd[b-d]    // view the changes on the three disks
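The interactive keystrokes above can also be fed to fdisk non-interactively for the remaining disks. This is only a sketch: the piped fdisk line rewrites the partition table, so it is left commented out; uncomment it only on the lab disks.

```shell
# Same keystrokes as the interactive session: n, p, 1, Enter, Enter, t, fd, w.
for d in c d; do
    echo "partitioning /dev/sd$d"
    # printf 'n\np\n1\n\n\nt\nfd\nw\n' | fdisk /dev/sd$d   # uncomment to apply
done
```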
Five. Now create a RAID device "md0" (that is, /dev/md0) using all the newly created partitions (sdb1, sdc1, and sdd1).
The command is as follows:
[root@zred ~]# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
Then use:
[root@zred ~]# cat /proc/mdstat    // view the creation status
While waiting for the build to finish, you can also use:
[root@zred ~]# watch -n1 cat /proc/mdstat    // monitor the creation progress with watch
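Instead of watching by hand, the wait can be scripted: poll /proc/mdstat until the resync/recovery line disappears (a sketch; on a machine without md arrays the loop simply exits at once).

```shell
# Poll /proc/mdstat until the initial array build finishes.
while grep -Eq 'resync|recovery' /proc/mdstat 2>/dev/null; do
    sleep 5                      # check again every 5 seconds
done
echo "array build complete"
```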
Six. Verification.
Verify the member disks with the following command:
[root@zred ~]# mdadm -E /dev/sd[b-d]1
Verify the RAID array:
[root@zred ~]# mdadm --detail /dev/md0
Seven. Format and mount.
Create an ext4 file system on the md0 device (format it):
[root@zred ~]# mkfs.ext4 /dev/md0
Under /mnt, create the directory raid5, then mount the file system at /mnt/raid5/. Here we use automatic mounting so the mount is not lost at boot.
[root@zred ~]# mkdir /mnt/raid5    // create the raid5 directory
[root@zred ~]# vim /etc/fstab    // edit the auto-mount configuration file
/dev/md0  /mnt/raid5  ext4  defaults  0 0    // add this line at the end of fstab
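A note on the fstab entry: md device names can change across reboots, so mounting by filesystem UUID is more robust. This is a sketch, not part of the original walkthrough; get the UUID from blkid and substitute it yourself (the <uuid> placeholder below is not a real value).

```shell
# blkid /dev/md0       # prints the filesystem UUID of the array
# /etc/fstab line using the UUID instead of the device name:
# UUID=<uuid-printed-by-blkid>  /mnt/raid5  ext4  defaults  0 0
```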
Verify the mount:
[root@zred ~]# mount -a    // mount everything listed in the configuration file
[root@zred ~]# df -h    // view the mounted file systems
To save the RAID 5 configuration:
[root@zred ~]# mdadm --detail --scan --verbose >> /etc/mdadm.conf
[root@zred ~]# cat /etc/mdadm.conf    // view the configuration
That completes the RAID 5 configuration!
---- RAID 10 ----
Requirements: RAID 10 needs at least 4 disks, organized as mirrored pairs (RAID 1) striped together (RAID 0).
After the detailed RAID 5 walkthrough above, the procedure should be familiar; below we use /dev/sd[e-h] to configure a simple RAID 10.
One. Partition and inspect the disks.
Use the fdisk command to create a new partition on each of the 4 disks (/dev/sde, /dev/sdf, /dev/sdg, and /dev/sdh).
# fdisk /dev/sde
# fdisk /dev/sdf
# fdisk /dev/sdg
# fdisk /dev/sdh
After setup, take a look:
[root@zred ~]# fdisk -l /dev/sd[e-h]
Check whether the disks already contain RAID:
[root@zred ~]# mdadm -E /dev/sd[e-h]
Two. Create the RAID 10 volume.
Once you are sure the mdadm tool is installed, start creating the RAID:
[root@zred ~]# mdadm --create /dev/md10 --level=10 --raid-devices=4 /dev/sd[e-h]1
Check it:
[root@zred ~]# mdadm --detail /dev/md10
Three. Format and mount.
Format:
[root@zred ~]# mkfs.ext4 /dev/md10
Add an automatic mount:
[root@zred ~]# mkdir /mnt/raid10    // create the mount directory
[root@zred ~]# vim /etc/fstab    // edit the configuration file
[root@zred ~]# mount -a    // mount it
[root@zred ~]# df -h    // view
Finally:
[root@zred ~]# mdadm --detail --scan --verbose >> /etc/mdadm.conf    // save the RAID configuration
Summary
This article mainly describes the features and principles of the various RAID volumes, and explains in detail how to configure RAID 5 and RAID 10. After setup is complete, you can create some files under the mount point, add content to them, and then check the content to verify the array.
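As a further check of the redundancy, a failed-disk drill can be sketched with mdadm's --fail/--remove/--add options. The run wrapper below is our own dry-run helper that only prints each command; change its body to "$@" to execute the drill on the real array.

```shell
# Dry-run drill: fail one RAID-5 member, confirm data survives, re-add it.
run() { echo "+ $*"; }                  # print instead of execute

run mdadm /dev/md0 --fail /dev/sdb1     # mark one member disk as failed
run mdadm /dev/md0 --remove /dev/sdb1   # remove it from the array
run cat /mnt/raid5/test.txt             # data is still readable via parity
run mdadm /dev/md0 --add /dev/sdb1      # re-add; the array rebuilds it
```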