Configure a Disk Array (RAID) on a RHEL 6 System


RAID stands for Redundant Array of Inexpensive Disks, that is, a redundant array built from relatively cheap disks. Using software or hardware, RAID combines several smaller disks into one larger logical disk device. This larger device not only expands the available storage space; it can also provide data protection.

 

Depending on the RAID level, the combined device behaves differently. The common levels are described below:

 

 

RAID Levels

 

RAID 0: striping

This mode is usually built from disks of the same model and capacity. RAID 0 divides each member disk into chunks of the same size. When a file is written to the RAID device, it is cut into pieces according to the chunk size, and the pieces are distributed across the disks in turn. Because data is stored on the disks in this interleaved way, every write to the array is spread evenly over all of them.

Therefore, RAID 0 has the following features:

1. The more disks there are, the larger the capacity of the RAID device.

2. The total capacity is the sum of the capacities of all the disks.

3. The more disks there are, the better the write performance.

4. If the disks are not the same size, then once the smaller disk is full, data is written only to the disks that still have free space.

5. At least 2 disks are required, and disk utilization is 100%.

The drawback is that if any one of the disks fails, all of the data is lost, because every file is scattered across all the disks.
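As a minimal sketch (the device names /dev/md0, /dev/sdb and /dev/sdc here are hypothetical and not part of the demo later in this article), a two-disk RAID 0 array could be created with mdadm roughly like this:

# Sketch only: create a 2-disk striped (RAID 0) array
mdadm -C /dev/md0 -l 0 -n 2 /dev/sdb /dev/sdc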

 

 

 

RAID 1: mirroring

This mode stores the same data on every member disk, so each disk holds a complete copy. Because every write has to go to all of the disks, write performance drops noticeably when a lot of data is written to a RAID 1 device. With hardware RAID (a disk array card), the card makes the extra copy itself instead of sending it across the system's I/O bus, so the impact on performance is small; with a software RAID, write performance is clearly reduced.

RAID 1 has the following features:

1. Data security is ensured, because every disk holds a full copy of the data.

2. The capacity of a RAID 1 device is half of the total capacity of its disks.

3. When disks of different sizes form a RAID 1 device, the usable capacity is limited by the smallest disk.

4. Read performance improves, because the same data exists on every disk; when several processes read at the same time, RAID balances the reads across the disks.

5. The number of disks must be a multiple of 2, and disk utilization is 50%.

The drawback is the reduced write performance.
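A similar hedged sketch of a two-disk mirror (again with hypothetical device names):

# Sketch only: create a 2-disk mirrored (RAID 1) array
mdadm -C /dev/md1 -l 1 -n 2 /dev/sdb /dev/sdc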

 

RAID 5: a balance between performance and data protection

RAID 5 requires at least three disks. Data is written to the array in stripes, much like RAID 0, but in every stripe one chunk of parity data is also written to one of the disks. This parity records enough information about the data on the other disks to rebuild it if a disk is damaged.

 

Features:

1. If any single disk is damaged, its data can be rebuilt from the parity on the other disks, so data security is significantly improved.

2. Because of the parity data, the total usable capacity of RAID 5 is the capacity of the whole array minus one disk.

3. If two or more disks are damaged, the RAID 5 data is lost, because RAID 5 tolerates the failure of only one disk.

4. Read/write performance is close to that of RAID 0.

5. At least 3 disks are required, and the usable capacity is that of N-1 disks.

Drawback: write performance does not necessarily improve, because every write to RAID 5 requires the parity to be calculated first, so write performance depends heavily on the hardware. With a software RAID in particular, the parity is calculated by the CPU rather than by a dedicated controller, and performance drops noticeably while data is being checked or rebuilt.
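To make the capacity rule concrete: with the three 2 GB disks used in the demo below, the usable space is roughly (3 - 1) × 2 GB = 4 GB, and after a fourth active disk is added later it becomes (4 - 1) × 2 GB = 6 GB, which matches the Array Size values shown in the mdadm output further on.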

(Diagram of the RAID 0, RAID 1, and RAID 5 data layouts omitted.)

 

RAID 01 and RAID 10

These levels combine RAID 0 and RAID 1 so that the strengths of each offset the weaknesses of the other.

RAID 01 means: (1) first group the disks into RAID 0 arrays, then (2) combine those RAID 0 arrays into a RAID 1. This is RAID 0 + 1.

RAID 10 is the reverse: first build RAID 1 mirrors, then stripe them together with RAID 0. This is RAID 1 + 0.

Features and drawbacks: thanks to RAID 0, performance improves; thanks to RAID 1, the data is mirrored. However, because of RAID 1's drawback, half of the total capacity is used for the mirror copies.
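As another hedged sketch (hypothetical device names, not from the demo below), mdadm can also build its native RAID 10 level directly from four disks:

# Sketch only: create a 4-disk RAID 10 array in one step
mdadm -C /dev/md10 -l 10 -n 4 /dev/sd{b,c,d,e}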

 

(Diagram of the RAID 10 data layout omitted.)

 

Because RAID 5 can tolerate only one failed disk, a further level was developed: RAID 6. It uses the capacity of two disks to store parity, so the usable capacity is the total minus two disks, but up to two disks may fail at the same time and the data can still be recovered. RAID 6 requires at least four disks, and the usable capacity is that of N-2 disks.
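A corresponding sketch for RAID 6 (hypothetical device names; four disks, two of which are consumed by parity):

# Sketch only: create a 4-disk RAID 6 array, which tolerates two failed disks
mdadm -C /dev/md6 -l 6 -n 4 /dev/sd{b,c,d,e}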

 

Spare disk: the hot spare

Its role: when a disk in the array fails, the hot spare immediately takes the failed disk's place, the array rebuilds itself automatically, and all the data is restored. One or more hot spares are not counted among the array's normal member disks; they only come into play when a member disk is damaged.
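While a hot spare is being rebuilt into the array, the progress can be watched through the kernel's md status file; this is standard Linux software RAID behaviour, although it is not shown in the demo below:

# Lists every md array, its member disks, and any rebuild/recovery progress
cat /proc/mdstat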

 

That covers the theory; many more combinations can of course be derived, but once you understand the levels above, the others are not difficult, since they are just different combinations. From the explanation above, I believe everyone can see the advantages of a disk array: 1. better data security; 2. better read/write performance; 3. an effective way to expand disk capacity. Don't forget its drawback, though: the cost goes up. But compared with the value of the data, I think that cost is nothing!

 

 

 

The following uses RAID 5 as an example.

1. Add hard disks

I added 6 new hard disks to the virtual machine, 2 GB each. Yes, that many, just for the experiments!

 

 

[root@yufei ~]# ls /dev/sd*
/dev/sda   /dev/sda2  /dev/sdc  /dev/sde  /dev/sdg
/dev/sda1  /dev/sdb   /dev/sdd  /dev/sdf

Apart from the existing sda, all of the rest are newly added. You can also check them with fdisk -l. The new disks have not been partitioned yet, so fdisk shows a message like "Disk /dev/sdb doesn't contain a valid partition table" for each of them!

 

First, three disks (sdb, sdc, and sdd) are used to build the RAID 5, which is the minimum number of disks RAID 5 requires. To be safer, we also add sde as a hot spare; that is the most secure setup, although RAID 5 works just as well without a hot spare.

Note: you could also build the RAID from partitions instead of whole disks, but that is less clean. Another point: you can either change the partition type to fd (Linux raid autodetect) or leave it as is; it does not seem to matter, and you can test both ways and compare.
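If you do use partitions and want to tag them with type fd, a rough sketch of the interactive fdisk session looks like this (the disk name is hypothetical; the demo below uses whole disks instead):

# Sketch only: mark a partition as type fd (Linux raid autodetect)
fdisk /dev/sdb
# then, inside fdisk:
#   t   - change a partition's system type
#   fd  - hex code for "Linux raid autodetect"
#   w   - write the partition table and exit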

 

2. Create a RAID device file

[root@yufei ~]# mdadm -C /dev/md5 -l 5 -n 3 -x 1 /dev/sd{b,c,d,e}
mdadm: partition table exists on /dev/sdb but will be lost or
       meaningless after creating array
mdadm: partition table exists on /dev/sdc but will be lost or
       meaningless after creating array
mdadm: partition table exists on /dev/sdd but will be lost or
       meaningless after creating array
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md5 started.

Here -C creates the array, -l 5 sets the RAID level to 5, -n 3 uses three active disks, and -x 1 adds one hot spare. At this point the md5 device file is created under /dev/, along with an md directory; /dev/md contains a symlink to the md device and a device map file.

[root@yufei ~]# ls -l /dev/md*
brw-rw----. 1 root disk 9, 5 May 31 /dev/md5

/dev/md:
total 4
lrwxrwxrwx. 1 root root  8 May 31 md5 -> ../md127
-rw-------. 1 root root 53 May 31 00:19 md-device-map

Run the following command to view the RAID device status:

[root@yufei ~]# mdadm -D /dev/md5
/dev/md5:
        Version : 1.2
  Creation Time : Tue May 31 00:19:11 2011
     Raid Level : raid5
     Array Size : 4191232 (4.00 GiB 4.29 GB)
  Used Dev Size : 2095616 (2046.84 MiB 2145.91 MB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Tue May 31 00:19:22 2011
          State : clean
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

           Name : yufei:5  (local to host yufei)
           UUID : 69443d97:7e32415d:7f3843c5:4d5015cf
         Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       4       8       48        2      active sync   /dev/sdd

       3       8       64        -      spare   /dev/sde

 

From here on we can treat it like an ordinary hard disk, even though it is actually a combination of several disks: we can partition it, format it, and mount it.

 

3. Partition, format, and mount the RAID device

You can partition it if you want; the resulting partitions are named md5p1, md5p2, and so on. It can also be used perfectly well without partitioning, depending on your purpose. I will not partition it here; I will format it directly.

 

[root@yufei ~]# mkfs.ext4 /dev/md5
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
262144 inodes, 1047808 blocks
52390 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1073741824
32 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736

Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 31 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

If you do want to partition it, you can run:
[root@yufei ~]# fdisk /dev/md5
I will not partition it here.

 

Mount it and use it:
[root@yufei ~]# mount /dev/md5 /mnt
[root@yufei ~]# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda1             15118728   7014960   7335768  49% /
tmpfs                   255784         0    255784   0% /dev/shm
/dev/md5               4125376     73720   3842096   2% /mnt

Write data files to it.

[root@yufei ~]# touch /mnt/testfil1
[root@yufei ~]# touch /mnt/testfil2
[root@yufei ~]# ls /mnt
lost+found  testfil1  testfil2

4. Disk Damage Simulation

We simulate a failure of the /dev/sdb disk.
[root@yufei ~]# mdadm /dev/md5 -f /dev/sdb
mdadm: set /dev/sdb faulty in /dev/md5
Check the md5 status again:

[root@yufei ~]# mdadm -D /dev/md5
(output omitted)
    Number   Major   Minor   RaidDevice State
       3       8       64        0      active sync   /dev/sde
       1       8       32        1      active sync   /dev/sdc
       4       8       48        2      active sync   /dev/sdd

       0       8       16        -      faulty spare   /dev/sdb

We can see that the hot spare has taken over, and /dev/sdb is now marked as faulty.

 

Note: while the hot spare is replacing the damaged disk, the performance of the RAID device drops significantly, because the data has to be checked and rebuilt onto the spare.

 

Check whether the files are still there:
[root@yufei ~]# ls /mnt
lost+found  testfil1  testfil2
Everything is normal.

The next steps remove the damaged hard disk and add a new one to act as the hot spare. Note: even if I did not add a new hot spare at this point, the data would still be fine if one more disk in md5 failed, because a RAID 5 can keep running with a single failed disk; you can test this yourself.

 

5. Remove the damaged disk
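The disk is removed with mdadm's -r (remove) option; a command along these lines produces the message shown next:

mdadm /dev/md5 -r /dev/sdb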

mdadm: hot removed /dev/sdb from /dev/md5
Check the md5 information again:
[root@yufei ~]# mdadm -D /dev/md5
    Number   Major   Minor   RaidDevice State
       3       8       64        0      active sync   /dev/sde
       1       8       32        1      active sync   /dev/sdc
       4       8       48        2      active sync   /dev/sdd

       removed

 

6. Add a new hot spare disk
[root@yufei ~]# mdadm /dev/md5 -a /dev/sdf
mdadm: added /dev/sdf
View md5 again:
[root@yufei ~]# mdadm -D /dev/md5
    Number   Major   Minor   RaidDevice State
       3       8       64        0      active sync   /dev/sde
       1       8       32        1      active sync   /dev/sdc
       4       8       48        2      active sync   /dev/sdd

       5       8       80        -      spare   /dev/sdf

The hot spare disk has been added.

 

[root@yufei ~]# ls /mnt/
lost+found  testfil1  testfil2

 

Add storage disks to the RAID

If the RAID space we prepared turns out to be too small, we can add another hard disk to enlarge it. By default, any disk we add to the array is treated as a hot spare. So how do we turn an added disk into an active storage disk? The following demo shows how.

We continue from where the previous steps left off.

 

[root@yufei ~]# mdadm -D /dev/md5
/dev/md5:
        Version : 1.2
  Creation Time : Tue May 31 19:46:20 2011
     Raid Level : raid5
     Array Size : 4191232 (4.00 GiB 4.29 GB)
  Used Dev Size : 2095616 (2046.84 MiB 2145.91 MB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Tue May 31 19:49:07 2011
(output omitted)
    Number   Major   Minor   RaidDevice State
       3       8       64        0      active sync   /dev/sde
       1       8       32        1      active sync   /dev/sdc
       4       8       48        2      active sync   /dev/sdd

       5       8       80        -      spare   /dev/sdf

Add another hard disk to the array:

 

[root@yufei ~]# mdadm /dev/md5 -a /dev/sdg
mdadm: added /dev/sdg
[root@yufei ~]# mdadm -D /dev/md5
/dev/md5:
        Version : 1.2
  Creation Time : Tue May 31 19:46:20 2011
     Raid Level : raid5
     Array Size : 4191232 (4.00 GiB 4.29 GB)
  Used Dev Size : 2095616 (2046.84 MiB 2145.91 MB)
   Raid Devices : 3
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Tue May 31 19:53:53 2011
(output omitted)
    Number   Major   Minor   RaidDevice State
       3       8       64        0      active sync   /dev/sde
       1       8       32        1      active sync   /dev/sdc
       4       8       48        2      active sync   /dev/sdd

       5       8       80        -      spare   /dev/sdf
       6       8       96        -      spare   /dev/sdg

The newly added disk has become just another hot spare; it has not been added as a storage disk. We need to make one of the hot spares serve as a storage disk. The operation below uses mdadm -G (grow mode) with -n4 to reshape the array onto four active disks.

 

[root@yufei ~]# mdadm -G /dev/md5 -n4
mdadm: Need to backup 3072K of critical section..

 

[root@yufei ~]# mdadm -D /dev/md5
/dev/md5:
        Version : 1.2
  Creation Time : Tue May 31 19:46:20 2011
     Raid Level : raid5
     Array Size : 4191232 (4.00 GiB 4.29 GB)
  Used Dev Size : 2095616 (2046.84 MiB 2145.91 MB)
   Raid Devices : 4
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Tue May 31 20:02:34 2011
(output omitted)
    Number   Major   Minor   RaidDevice State
       3       8       64        0      active sync   /dev/sde
       1       8       32        1      active sync   /dev/sdc
       4       8       48        2      active sync   /dev/sdd
       6       8       96        3      active sync   /dev/sdg

       5       8       80        -      spare   /dev/sdf

We can see that a storage disk has been added: Raid Devices has gone from 3 to 4. But note the line Array Size : 4191232 (4.00 GiB 4.29 GB): although we have added a disk, the usable RAID space has not grown yet. Below is how to make it available: once the reshape has finished, grow the ext4 filesystem with resize2fs /dev/md5.

First, take a look at the data in md5:
[root@yufei ~]# ls /mnt
lost+found  testfile1  testfile2

 

[root@yufei ~]# resize2fs /dev/md5
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/md5 is mounted on /mnt; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 1
Performing an on-line resize of /dev/md5 to 1571712 (4k) blocks.
The filesystem on /dev/md5 is now 1571712 blocks long.

 

[root@yufei ~]# mdadm -D /dev/md5
/dev/md5:
        Version : 1.2
  Creation Time : Tue May 31 20:21:36 2011
     Raid Level : raid5
     Array Size : 6286848 (6.00 GiB 6.44 GB)
  Used Dev Size : 2095616 (2046.84 MiB 2145.91 MB)
   Raid Devices : 4
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Tue May 31 20:26:15 2011
(remaining output omitted)

Now we can see Array Size : 6286848 (6.00 GiB 6.44 GB), which is 2 GB more than before.

 

[root@yufei ~]# ls /mnt
lost+found  testfile1  testfile2

And the data in it is normal.

 

 

Mount the RAID device at boot

We also need to do the following so that the RAID device can still be used normally after the next reboot.

 

1. Write the mounting information to fstab.

 

[root@yufei ~]# vim /etc/fstab
Add the following line:
/dev/md5    /mnt    ext4    defaults    0 0
[root@yufei ~]# mount -a
No errors, which means the entry is written correctly!

 

2. Write our raid information to the configuration file.

 

Let's look at what the /etc/mdadm.conf file currently contains:

 

[root@yufei ~]# cat /etc/mdadm.conf
# mdadm.conf written out by anaconda
MAILADDR root
AUTO +imsm +1.x -all

There is already some content in the file, but no array information, so we need to append our RAID information to it; otherwise the RAID device will not come up under its expected name at the next boot (you may find it as /dev/md127 instead).

 

[root@yufei ~]# mdadm -D -s >> /etc/mdadm.conf
[root@yufei ~]# cat /etc/mdadm.conf
# mdadm.conf written out by anaconda
MAILADDR root
AUTO +imsm +1.x -all
ARRAY /dev/md5 metadata=1.2 spares=1 name=yufei:5 UUID=69443d97:7e32415d:7f3843c5:4d5015cf
The RAID information has been written. Note: if the system has several arrays, this command collects the information for all of them and appends it to the file, so after appending with >> you may still need to edit the result and keep only what you want.

 

3. Restart the system to check that everything still works

After restarting, check:

 

[root@yufei ~]# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda1             15118728   7015236   7335492  49% /
tmpfs                   255784         0    255784   0% /dev/shm
/dev/md5               4125376     73720   3842096   2% /mnt
[root@yufei ~]# ls /mnt
lost+found  testfil1  testfil2

Everything is normal.

 

 

 

Delete a RAID device

Many of the removal instructions found on the internet are not complete, so today I will give a detailed explanation.

Most online tutorials go roughly like this:

 

Unmount the RAID device with umount.

Edit the configuration files, including:
/etc/mdadm.conf
/etc/fstab

Stop the RAID device:
mdadm -S /dev/md5

 

And that is supposed to be it. On RHEL 6, however, you will find that these steps alone are not enough: after the system is restarted, it automatically creates a device such as /dev/md127 (the number can differ), and you still cannot reuse the disks that were in the RAID. If you run into this situation, it means the RAID has not been completely deleted. So let's look at how I delete it completely.

 

1. Unmount the RAID device
[root@yufei ~]# umount /dev/md5
2. Stop the RAID device
[root@yufei ~]# mdadm -S /dev/md5
mdadm: stopped /dev/md5

By the way, here is how to start the device again after it has been stopped; just a small digression.

 

[root@yufei ~]# mdadm -A -s /dev/md5
mdadm: /dev/md5 has been started with 3 drives and 1 spare.
Before stopping it again, check which hard disks are in the RAID; this information will be needed later, and it is critical!

 

[root@yufei ~]# mdadm -D /dev/md5
    Number   Major   Minor   RaidDevice State
       3       8       64        0      active sync   /dev/sde
       1       8       32        1      active sync   /dev/sdc
       4       8       48        2      active sync   /dev/sdd

       5       8       80        -      spare   /dev/sdf

OK, now stop it again:
[root@yufei ~]# mdadm -S /dev/md5
mdadm: stopped /dev/md5

3. Wipe the RAID metadata from the disks (this is the key step, but many tutorials leave it out)

Clear the RAID superblock on every disk that belonged to the array.

The array must already be stopped when you run this step; otherwise you will see an error message like the following:

mdadm: Couldn't open /dev/sde for write - not zeroing

 

[root@yufei ~]# mdadm --misc --zero-superblock /dev/sde
[root@yufei ~]# mdadm --misc --zero-superblock /dev/sdc
[root@yufei ~]# mdadm --misc --zero-superblock /dev/sdd
[root@yufei ~]# mdadm --misc --zero-superblock /dev/sdf
OK, now the RAID metadata has been erased from all of the member disks.

 

4. Delete the RAID information from the configuration files
[root@yufei ~]# vim /etc/mdadm.conf
Delete the line we added:
ARRAY /dev/md5 metadata=1.2 spares=1 name=yufei:5 UUID=69443d97:7e32415d:7f3843c5:4d5015cf
[root@yufei ~]# vim /etc/fstab
Delete the line we added:
/dev/md5    /mnt    ext4    defaults    0 0

After these four steps, the RAID is completely deleted. Restart, and no trace of it remains.

 

If you want to become more proficient, I recommend building RAID 5, RAID 0, RAID 1, and combinations such as RAID 5 + RAID 0 yourself; it is not difficult. For mdadm's command-line parameters, see its help output and man page.
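As one hedged sketch of such a combination (hypothetical device names, four spare disks assumed), a RAID 1+0 can be built by nesting md devices: first create two mirrors, then stripe across them.

# Sketch only: two RAID 1 mirrors...
mdadm -C /dev/md1 -l 1 -n 2 /dev/sdb /dev/sdc
mdadm -C /dev/md2 -l 1 -n 2 /dev/sdd /dev/sde
# ...striped together into a RAID 0
mdadm -C /dev/md3 -l 0 -n 2 /dev/md1 /dev/md2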
