Software RAID disk arrays in CentOS


RAID overview

RAID is short for Redundant Array of Inexpensive Disks.

With RAID, multiple disks are combined into an array that can be used as if it were a single disk. Depending on the level used, a RAID disk array can improve data read/write efficiency and provide data redundancy (backup): when a disk in the array fails, the data can be rebuilt from the remaining disks using parity or mirror information, greatly enhancing the read/write performance and reliability of application data.

Common RAID technologies include the following levels:

• RAID 0: the most basic array type, which combines at least two disks into one large logical disk. Data is written to the member disks simultaneously in stripes (segments), which greatly improves throughput. However, RAID 0 provides no redundancy: if any member disk fails, data is lost.

• RAID 1: disk mirroring, which requires at least two disks (disk utilization 1/n). Data written to the array is mirrored to every member disk, so if any single disk fails, no data is lost.

• RAID 5: introduces parity (data verification) to provide redundancy and requires at least three disks (disk utilization (n-1)/n). The parity is not stored on a fixed disk; instead it is distributed in segments across all member disks, so if any one disk fails, its contents can be rebuilt from the data and parity on the remaining disks. By combining redundancy with simultaneous writes to multiple disks, RAID 5 offers both reliability and good performance, and it is widely used. With four 2 GB members, for example, the usable capacity is roughly (4 - 1) × 2 GB = 6 GB.

RAID that is implemented without a dedicated hardware RAID controller is usually referred to as software RAID (soft RAID). The rest of this article uses partitions on different disks to configure a software RAID 5 array on an RHEL 5 (or CentOS 5) system.
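mdadm normally loads the kernel RAID driver it needs on its own, so this step is optional; as a quick sanity check you can confirm that the md driver is available before you begin. A minimal sketch, assuming the raid456 module name used by the 2.6.18-series kernels of RHEL/CentOS 5:

[root@localhost ~]# cat /proc/mdstat          # the file exists once the md driver is loaded
[root@localhost ~]# modprobe raid456          # load the RAID 4/5/6 personality manually if needed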


Build a disk array using software RAID

In RHEL 5 (and CentOS 5), a software RAID array is configured with the mdadm package. This package is usually installed by default; if it is missing, locate it on the RHEL 5 installation disc and install it.

[root@localhost ~]# mount /dev/cdrom /media/cdrom/
mount: block device /dev/cdrom is write-protected, mounting read-only
[root@localhost ~]# rpm -ivh /media/cdrom/Server/mdadm-2.5.4-3.el5.i386.rpm
Preparing...                ########################################### [100%]
   1:mdadm                  ########################################### [100%]
[root@localhost ~]# rpm -qi mdadm | grep "Summary"
Summary     : mdadm controls Linux md devices (software RAID arrays)
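On a CentOS system with access to the distribution repositories, the package can also be installed with yum instead of from the installation disc. A brief sketch, assuming the default repositories are configured:

[root@localhost ~]# yum -y install mdadm      # install mdadm from the configured repositories
[root@localhost ~]# rpm -q mdadm              # confirm the installed version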


The following describes how to configure and use a RAID 5 disk array.


Prepare partitions for RAID arrays

Each partition used to build the RAID array should be located on a different physical disk; otherwise the array has little practical value. The partitions should preferably be the same size; if necessary, an entire disk can be made into a single partition.

Add four SCSI hard disks to the Linux server and use the fdisk tool to create one 2 GB partition on each disk: /dev/sdb1, /dev/sdc1, /dev/sdd1, and /dev/sde1. Before partitioning, make sure no other program is using the corresponding disks. The following sections use these four partitions as the example (RAID 5 requires at least three disks or partitions).

The partition type ID of each partition should also be changed to "fd", which corresponds to "Linux raid autodetect" and indicates that the partition will be used in a RAID disk array.

[root@localhost ~]# fdisk /dev/sdb
......
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-522, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-522, default 522): +2048M

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): p
......
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         250     2008093+  fd  Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@localhost ~]#
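The same steps have to be repeated for /dev/sdc, /dev/sdd, and /dev/sde. Because fdisk reads its answers from standard input, the keystrokes shown above can also be fed to it from a here-document; the loop below is only a sketch of that idea (it assumes each disk is empty and mirrors the interactive session exactly), and partitioning each disk by hand works just as well:

[root@localhost ~]# for disk in /dev/sdc /dev/sdd /dev/sde; do
fdisk $disk <<EOF
n
p
1

+2048M
t
fd
w
EOF
done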


After creating the four partitions, run "partprobe" to re-read the partition tables (or restart the system), then verify the partition type, capacity, and other information.

[root@localhost ~]# partprobe
[root@localhost ~]# fdisk -l /dev/sd[b-e] | grep "^/dev/sd"
/dev/sdb1               1         250     2008093+  fd  Linux raid autodetect
/dev/sdc1               1         250     2008093+  fd  Linux raid autodetect
/dev/sdd1               1         250     2008093+  fd  Linux raid autodetect
/dev/sde1               1         250     2008093+  fd  Linux raid autodetect


Create a RAID device

The mdadm tool combines multiple RAID partitions into a single array device, named, for example, "/dev/md0" or "/dev/md1".

[root@localhost ~]# mdadm -Cv /dev/md0 -a yes -n4 -l5 /dev/sd[b-e]1
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 64K
mdadm: /dev/sdb1 appears to be part of a raid array:
    level=raid5 devices=4 ctime=Sat Jul 25 08:44:50 2009
mdadm: /dev/sdc1 appears to be part of a raid array:
    level=raid5 devices=4 ctime=Sat Jul 25 08:44:50 2009
mdadm: /dev/sdd1 appears to be part of a raid array:
    level=raid5 devices=4 ctime=Sat Jul 25 08:44:50 2009
mdadm: /dev/sde1 appears to be part of a raid array:
    level=raid5 devices=4 ctime=Sat Jul 25 08:44:50 2009
mdadm: size set to 2008000K
Continue creating array? y
mdadm: array /dev/md0 started.
[root@localhost ~]#

In the preceding command, "/dev/md0" is the name of the new RAID array device, and "/dev/sd[b-e]1" expands to the member partitions /dev/sdb1, /dev/sdc1, /dev/sdd1, and /dev/sde1. The other options and arguments have the following meanings:

• -C, equivalent to --create: create a new array device.

• -v, equivalent to --verbose: print details during execution.

• -a, equivalent to --auto=: with "yes", the corresponding device file is created automatically if it does not exist.

• -n, equivalent to --raid-devices=: the number of partition devices that make up the array; "-n4" means four devices.

• -l, equivalent to --level=: the RAID level to use; "-l5" means RAID 5.

For more options of the mdadm command, see "man mdadm" help.
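As a variation, the same four partitions could be assembled as a three-member RAID 5 with one hot spare, so that a rebuild starts automatically as soon as a member fails. This is only a sketch of an alternative layout, not the configuration used in the rest of this article:

[root@localhost ~]# mdadm -Cv /dev/md0 -a yes -l5 -n3 -x1 /dev/sd[b-e]1
                    # -x1 (--spare-devices=1) keeps the last listed partition as a hot spare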

After the md0 array device is created, it is activated automatically. Run "cat /proc/mdstat" to observe its running status.

[root@localhost ~]# ls -l /dev/md0
brw------- 1 root root 9, 0 07-25 /dev/md0
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sde1[3] sdd1[2] sdc1[1] sdb1[0]
      6024000 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

The first "4" in "[4/4]" indicates the number of member devices, and the "4" behind it indicates the number of active devices.

Number. UUUU corresponds to the status of the member device. For example, if "[4/3] [UUU _]" is displayed

The member device (/dev/sde1) is faulty.
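While the array is initializing or rebuilding, this status changes continuously; instead of re-running the command by hand, it can be refreshed periodically. A small sketch:

[root@localhost ~]# watch -n 2 cat /proc/mdstat     # redisplay the array status every 2 seconds; quit with Ctrl+C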


Create a file system on the RAID device

After the array device file "/dev/md0" has been created, a file system can be created on it. On an RHEL 5 system, use the mkfs command to format the device as an ext3 file system.

[root@localhost ~]# mkfs -t ext3 /dev/md0
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
753664 inodes, 1506000 blocks
75300 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1543503872
46 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 33 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
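Optionally, ext3 can be told about the RAID chunk size so that its block allocator aligns data with the stripes. With the 64 KB chunk and 4 KB block size shown above, the stride is 64 / 4 = 16. The following is only a sketch of that tuning and is not required for the array to work:

[root@localhost ~]# mkfs -t ext3 -E stride=16 /dev/md0     # stride = chunk size / file system block size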

Mount and use the file system

Create the mount point directory "/mdata" and mount the file system created above on this directory.

[root@localhost ~]# mkdir /mdata
[root@localhost ~]# mount /dev/md0 /mdata
[root@localhost ~]# df -T | grep "md0"        // verify the mounted "/mdata" file system
/dev/md0      ext3     5929360    142976   5485184   3% /mdata

Because the capacity of one member device is used for parity, the effective storage space of the array is the sum of the capacity of three member devices (about 2 GB × 3 = 6 GB).

To mount the array device automatically at every boot, add a corresponding entry to the "/etc/fstab" file.

[root@localhost ~]# vi /etc/fstab
/dev/md0        /mdata          ext3    defaults        0 0
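To test the new entry without rebooting, one quick check (a sketch) is to unmount the file system and let mount read it back from /etc/fstab:

[root@localhost ~]# umount /mdata
[root@localhost ~]# mount -a                  # mount everything listed in /etc/fstab that is not yet mounted
[root@localhost ~]# df -hT /mdata             # confirm that /dev/md0 is mounted on /mdata again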



Array management and device recovery


Basic management operations

1. Scan or view disk array information

With the mdadm command, the "-D" option (equivalent to "--detail") shows the details of the scan results, and the "-s" option (equivalent to "--scan") scans for array devices. If no array device file is specified, the array configuration information and RAID device list of the current system are displayed.

[root@localhost ~]# mdadm -vDs
ARRAY /dev/md0 level=raid5 num-devices=4
    UUID=35bcffa1:cdc5ba41:0c5b5702:e32a3259
    devices=/dev/sdb1,/dev/sdc1,/dev/sdd1,/dev/sde1

When an array device is specified as an argument, detailed parameters of that array are printed, including the number of active devices, the number of working devices, the update time, and the list of member devices.

[root@localhost ~]# mdadm -vDs /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Sat Jul 25 11:23:07 2009
     Raid Level : raid5
     Array Size : 6024000 (5.74 GiB 6.17 GB)
    Device Size : 2008000 (1961.27 MiB 2056.19 MB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Sat Jul 25 11:26:01 2009
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 35bcffa1:cdc5ba41:0c5b5702:e32a3259
         Events : 0.6

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1
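Besides querying the assembled array, the RAID superblock written on each member partition can be inspected with the "-E" option (equivalent to "--examine"). A brief sketch:

[root@localhost ~]# mdadm -E /dev/sdb1        # print the RAID superblock stored on this member partition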


2. Create the configuration file mdadm.conf

The configuration file of mdadm is "/etc/mdadm.conf". It exists only for the administrator's convenience and does not affect the operation of the disk array. The configuration information of several disk arrays can be stored in the same file, and the basic information it needs can be obtained with the "mdadm -vDs" command described above.

[root@localhost ~]# vi /etc/mdadm.conf
DEVICE /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
ARRAY /dev/md0 level=raid5 num-devices=4 UUID=35bcffa1:cdc5ba41:0c5b5702:e32a3259 devices=/dev/sdb1,/dev/sdc1,/dev/sdd1,/dev/sde1
CREATE owner=root group=root mode=0640

In the file above, the "ARRAY", "UUID", and "devices" parts are written on one line, and the "CREATE" line at the end sets the owner, group, and default permissions used when array device files are created automatically. For more configuration items of mdadm.conf, see the "man mdadm.conf" help.
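Instead of typing the ARRAY line by hand, the scan output can be redirected into the file and then tidied up in an editor. A sketch; the DEVICE and CREATE lines shown above still have to be added or adjusted manually:

[root@localhost ~]# mdadm -vDs >> /etc/mdadm.conf     # append the scanned ARRAY definition
[root@localhost ~]# vi /etc/mdadm.conf                # review and adjust the result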

3. Start/Stop the RAID array

An array device can be stopped as long as no program is reading from or writing to it, so unmount its file system first (see the command below). Then run the mdadm command with the "-S" option (equivalent to "--stop"). This disables the corresponding array device and releases the related resources.
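In this example the array still carries the mounted "/mdata" file system, so unmount it first (a small sketch):

[root@localhost ~]# umount /mdata             # release the file system before stopping the array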

[root@localhost ~]# mdadm -S /dev/md0
mdadm: stopped /dev/md0

With the "-A" option (equivalent to "--assemble"), the corresponding disk array device can be reassembled and started.

[root@localhost ~]# mdadm -A /dev/md0
mdadm: /dev/md0 has been started with 4 drives.
[root@localhost ~]# mount /dev/md0 /mdata/


Device Recovery

1. Simulate an array device failure

For a running disk array, the "-f" option of the mdadm command (equivalent to "--fail") can be used to simulate the failure of a member device. For example, mark "/dev/sde1" in the array as a faulty device.

[root@localhost ~]# mdadm /dev/md0 -f /dev/sde1
mdadm: set /dev/sde1 faulty in /dev/md0

When a member device in the array fails, the array marks it as inactive. At this point, "/proc/mdstat" reflects the loss of the faulty device (/dev/sde1).

[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sde1[3](F) sdd1[2] sdc1[1] sdb1[0]
      6024000 blocks level 5, 64k chunk, algorithm 2 [4/3] [UUU_]


2. Replace the faulty device and restore the data

A faulty device can be removed with the "-r" option (equivalent to "--remove"); after it has been replaced with a good device, add it back to the array with the "-a" option (equivalent to "--add").

[root@localhost ~]# mdadm /dev/md0 -r /dev/sde1
mdadm: hot removed /dev/sde1                        // remove the faulty device
[root@localhost ~]# mdadm /dev/md0 -a /dev/sde1
mdadm: re-added /dev/sde1                           // add the good device back to the array

The RAID 5 disk array rebuilds and restores the data within a short period of time. During this period, you can check the array status repeatedly to observe the recovery progress.

[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sde1[3] sdd1[2] sdc1[1] sdb1[0]
      6024000 blocks level 5, 64k chunk, algorithm 2 [4/3] [UUU_]
      [===>.................]  recovery = 16.3% (328192/2008000) finish=2.6min speed=10586K/sec

unused devices: <none>

After the data recovery is complete, check the array status again; it should be back to normal.

[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sde1[3] sdd1[2] sdc1[1] sdb1[0]
      6024000 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

unused devices: <none>
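For day-to-day operation, mdadm can also watch the arrays in the background and report failures by e-mail. The following is only a sketch; it assumes local mail delivery works and that the address is adjusted to your environment:

[root@localhost ~]# mdadm --monitor --scan --daemonise --delay=300 --mail=root@localhost
                    # monitor all arrays from /etc/mdadm.conf, polling every 300 seconds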

