Creation and maintenance of Linux soft RAID


RAID (Redundant Array of Inexpensive Disks) is, as the name says, a redundant array built from inexpensive disks. The basic idea of RAID is to combine multiple cheap, small disks into a single disk group so that the overall performance and capacity can match or exceed those of a single large, expensive disk.

Currently, RAID comes in two forms: hardware-based RAID and software-based RAID. In Linux, RAID can be implemented with the kernel's built-in software, which greatly improves disk I/O performance and reliability without buying expensive hardware RAID controllers and accessories. Because the RAID function is implemented in software, configuration is flexible and management is convenient. With software RAID you can also combine several physical disks into a larger virtual device, gaining both better performance and data redundancy. Of course, hardware RAID solutions still outperform software RAID in performance and serviceability, for example in detecting and repairing multiple-bit errors, automatically detecting failed disks, and rebuilding arrays. This section describes how to create and maintain soft RAID on Red Flag Linux Server.

Introduction to RAID levels

With the continuous development of RAID technology there are now seven basic levels, from RAID 0 to RAID 6, as well as a combination of RAID 0 and RAID 1 known as RAID 10. The level number does not indicate how advanced the technology is: RAID 2 and RAID 4 are basically no longer used, and RAID 3 is rarely used because it is complex to implement. The commonly used RAID levels are all supported by the Linux kernel. This section takes the Linux 2.6 kernel as an example; its soft RAID supports the following levels: RAID 0, RAID 1, RAID 4, RAID 5, and RAID 6. In addition to these RAID levels, Linux 2.6 supports LINEAR (linear mode) soft RAID, in which two or more disks are combined into one physical device. The disks do not have to be the same size. When data is written to the RAID device, disk A is filled first, then disk B, and so on.
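For illustration only, a minimal creation command for linear mode might look like the following; the md device name and the member partitions are assumptions, not part of the original text:

# mdadm --create /dev/md0 --level=linear --raid-devices=2 /dev/sdb1 /dev/sdc1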

  • RAID 0

RAID 0 is also called stripe mode (striped): consecutive data is distributed across multiple disks for access, as shown in Figure 1. When the system issues data requests, they can be executed by several disks in parallel, with each disk handling its own part of the request. This kind of parallel operation makes full use of the bus bandwidth and significantly improves overall disk access performance. Because reads and writes are done in parallel across the devices, both read and write performance increase, which is usually the main reason for running RAID 0. However, RAID 0 has no data redundancy: if a drive fails, no data can be recovered.

 

Figure 1 RAID 0
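As a sketch, a two-disk RAID 0 array could be created with mdadm roughly as follows; the device names and the 64 KB chunk size are assumptions for illustration:

# mdadm --create /dev/md0 --level=0 --chunk=64 --raid-devices=2 /dev/sdb1 /dev/sdc1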

  • RAID 1

RAID 1, also known as mirroring, is a fully redundant mode, as shown in Figure 2. RAID 1 can be used on two (or 2×N) disks, with zero or more spare disks. Every time data is written, it is simultaneously written to the mirror disk. This kind of array is highly reliable, but its effective capacity is only half of the total capacity. The disks should also be of equal size; otherwise the total capacity is limited by the smallest disk.

 

Figure 2 RAID 1
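A minimal sketch of creating a two-disk mirror with one spare might look as follows; the device names are assumed for illustration:

# mdadm --create /dev/md0 --level=1 --raid-devices=2 --spare-devices=1 /dev/sdb1 /dev/sdc1 /dev/sdd1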

  • RAID 4

Creating RAID 4 requires three or more disks. It stores parity information on one dedicated drive and writes data to the other disks in RAID 0 fashion, as shown in Figure 3. Because one disk is reserved for parity, the size of the array is (N-1) × S, where S is the smallest drive size in the array. As with RAID 1, the disks should be of equal size.

If one drive fails, the parity information can be used to reconstruct all data; if two drives fail, all data is lost. This level is not used very often, because the parity information is stored on a single drive and must be updated every time data is written to any other disk. When writing large amounts of data, the parity disk therefore easily becomes a bottleneck, so RAID at this level is rarely used.

 

Figure 3 RAID 4
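Although rarely used, a RAID 4 array can be created with mdadm in the same way as the other levels; a sketch with three assumed member partitions:

# mdadm --create /dev/md0 --level=4 --raid-devices=3 /dev/sd[b-d]1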

  • RAID 5

RAID 5 is probably the most useful RAID mode when you want to combine a larger number of physical disks while still retaining some redundancy. RAID 5 can be used on three or more disks, with zero or more spare disks. Like RAID 4, the size of a RAID 5 device is (N-1) × S.

The biggest difference between RAID 5 and RAID 4 is that the parity information is distributed evenly across all drives, as shown in Figure 4, which avoids the bottleneck of RAID 4. If one of the disks fails, all data remains intact thanks to the parity information. If a spare disk is available, data synchronization onto it starts immediately after a device fails. If two disks fail at the same time, however, all data is lost: RAID 5 can tolerate the failure of one disk, but not of two or more.

 

Figure 4 RAID 5

  • RAID 6

RAID 6 is an extension of RAID 5. As in RAID 5, data and parity are divided into blocks and stored across the disks of the array; RAID 6 simply adds one more disk's worth of parity to back up the parity distributed across the disks, as shown in Figure 5. As a result, a RAID 6 array can tolerate two disks failing at the same time, and it therefore requires at least four hard disks.

 

Figure 5 RAID 6
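A sketch of creating a four-disk RAID 6 array (device names assumed for illustration); its usable size is (N-2) × S, since two disks' worth of capacity is used for parity:

# mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sd[b-e]1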

Create a soft RAID

Red Flag Linux Server uses the mdadm tool to create and maintain soft RAID; mdadm makes creating and managing soft RAID convenient and flexible. Common mdadm parameters include:

  • --create or -C: creates a new soft RAID array; it is followed by the name of the RAID device, for example /dev/md0 or /dev/md1.

  • --assemble or -A: assembles (loads) an existing array; it is followed by the array name and the member device names.

  • --detail or -D: prints detailed information about the specified RAID device.

  • --stop or -S: stops the specified RAID device.

  • --level or -l: sets the RAID level; for example, "--level=5" creates a RAID 5 array.

  • --raid-devices or -n: specifies the number of active disks in the array.

  • --scan or -s: scans the configuration file or the /proc/mdstat file for soft RAID configuration information. This parameter cannot be used on its own; it is only used together with other parameters.

The following example shows how to set up soft RAID with mdadm.

[Example 1]

A machine has four idle hard disks: /dev/sdb, /dev/sdc, /dev/sdd, and /dev/sde. These four disks are used to create a RAID 5 array. The procedure is as follows:

1. Create partitions

First, use the "fdisk" command to create a partition on each hard disk. The operation is as follows:

# fdisk /dev/sdb

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel

Building a new DOS disklabel. Changes will remain in memory only,

until you decide to write them. After that, of course, the previous

content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n

Command action

   e   extended

   p   primary partition (1-4)

p

Partition number (1-4): 1

First cylinder (1-102, default 1):

Using default value 1

Last cylinder or +size or +sizeM or +sizeK (1-102, default 102):

Using default value 102

Command (m for help): w

The partition table has been altered!

Calling ioctl() to re-read partition table.

Syncing disks.

Perform the same operation on the remaining hard disks (a scripted variant is sketched below). If you create the RAID device directly on whole disks rather than on partitions, this step can be skipped.
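If the remaining disks should receive the same single whole-disk partition, the interactive fdisk answers can also be fed in from a pipe in a loop. This is only a sketch, assuming each disk gets one primary partition spanning the entire device:

# for d in /dev/sdc /dev/sdd /dev/sde; do echo -e "n\np\n1\n\n\nw" | fdisk $d; done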

2. Create RAID 5

After the four partitions /dev/sdb1, /dev/sdc1, /dev/sdd1, and /dev/sde1 have been created, you can create the RAID 5 array, using /dev/sde1 as the spare device and the others as active devices. The spare device can take over immediately if an active device is damaged. The command is as follows:

# mdadm --create /dev/md0 --level=5 --raid-devices=3 --spare-devices=1 /dev/sd[b-e]1

mdadm: array /dev/md0 started.

"-- Spare-devices = 1" indicates that there is only one backup device in the current array, that is, "/dev/sde1" as the backup device. If there are multiple backup devices, set the value of "-- spare-devices" to the corresponding number. After the RAID device is successfully created, run the following command to view the RAID details:

# mdadm --detail /dev/md0

/dev/md0:

Version: 00.90.01

Creation Time: Mon Jan 22 10:55:49 2007

Raid Level: raid5

Array Size: 208640 (203.75 MiB 213.65 MB)

Device Size: 104320 (101.88 MiB 106.82 MB)

Raid Devices: 3

Total Devices: 4

Preferred Minor: 0

Persistence: Superblock is persistent

Update Time: Mon Jan 22 10:55:52 2007

State: clean

Active Devices: 3

Working Devices: 4

Failed Devices: 0

Spare Devices: 1

Layout: left-symmetric

Chunk Size: 64K

Number   Major   Minor   RaidDevice   State

   0       8       17        0        active sync   /dev/sdb1

   1       8       33        1        active sync   /dev/sdc1

   2       8       49        2        active sync   /dev/sdd1

   3       8       65       -1        spare         /dev/sde1

UUID: b372436a:6ba09b3d:2c80612c:efe19d75

Events: 0.6

3. Create a RAID configuration file

The RAID configuration file is named "mdadm.conf"; it does not exist by default, so it must be created manually. The main purpose of this configuration file is to have the soft RAID loaded automatically at system startup and to simplify day-to-day management. The "mdadm.conf" file contains: all devices used for soft RAID, given by the DEVICE option, and, for the ARRAY option, the device name of the array, its RAID level, the number of active devices in the array, and the UUID of the device. Run the following command to generate a RAID configuration file:

# mdadm --detail --scan > /etc/mdadm.conf

However, the generated "mdadm.conf" does not yet follow the required format, so it will not take effect. You need to edit the file manually into the following format:

# vi /etc/mdadm.conf

DEVICE /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

ARRAY /dev/md0 level=raid5 num-devices=3 UUID=b372436a:6ba09b3d:2c80612c:efe19d75

If you have not created the RAID configuration file, the soft RAID must be assembled manually after every system startup before it can be used. The command to assemble it manually is as follows:

# mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

mdadm: /dev/md0 has been started with 3 drives and 1 spare.
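Once "/etc/mdadm.conf" is in place, the same assembly can be done without listing every member device; a minimal sketch that lets mdadm read the configuration file:

# mdadm --assemble --scan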

4. Create a File System

Next, you only need to create a file system on the RAID device. Creating a file system on a RAID device is done in the same way as on an ordinary partition or disk. Run the following command to create an ext3 file system on the device "/dev/md0":

# mkfs.ext3 /dev/md0

After the file system has been created, you can mount the device and use it normally (a minimal mount sketch follows below). If you want to create a RAID array of another level, the steps are basically the same as for RAID 5; the only difference is that the "--level" value must be set to the corresponding level.
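For illustration only, mounting the new file system might look like this; the mount point /mnt/raid is an assumed example:

# mkdir /mnt/raid

# mount /dev/md0 /mnt/raid

To mount it automatically at boot, a line such as "/dev/md0 /mnt/raid ext3 defaults 0 0" can be added to /etc/fstab.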

Maintain soft RAID

Although soft RAID can ensure data reliability to a large extent, in day-to-day operation you may still need to adjust the array or deal with problems such as damage to the physical media of the RAID member devices.

In such cases the "mdadm" command can also be used to carry out these operations. The following example describes how to replace a faulty disk in a RAID array.

[Example 2]

It builds on [Example 1] above: assume that the "/dev/sdc1" device fails and is replaced by a new disk. The whole process is as follows:

1. Simulate a faulty disk

In practice, when soft RAID detects a faulty disk, it automatically marks the disk as failed and stops reading from and writing to it. Here, therefore, /dev/sdc1 is marked as a faulty disk to simulate the failure. The command is as follows:

# mdadm /dev/md0 --fail /dev/sdc1

mdadm: set /dev/sdc1 faulty in /dev/md0

Because the RAID 5 array in Example 1 has a spare device, the spare automatically replaces the disk as soon as it is marked faulty, and the array can be rebuilt within a short time. You can view the current state of the array through the "/proc/mdstat" file, as shown below:

# cat /proc/mdstat

Personalities : [raid5]

md0 : active raid5 sde1[3] sdb1[0] sdd1[2] sdc1[4](F)

208640 blocks level 5, 64k chunk, algorithm 2 [3/2] [U_U]

[=====>...] recovery = 26.4% (28416/104320) finish=0.0min speed=28416K/sec

unused devices: <none>

The information above shows that the array is being rebuilt. When a device fails or is marked as faulty, "(F)" is appended after it, as in "sdc1[4](F)". The first number in "[3/2]" is the number of devices the array contains and the second is the number of active devices; because one device is currently faulty, the second number is 2. At this point the array runs in degraded mode: it is still usable, but it has no data redundancy. "[U_U]" indicates that the devices currently working normally are /dev/sdb1 and /dev/sdd1; if /dev/sdb1 had failed instead, it would read "[_UU]".

After the rebuild finishes, viewing the array status again shows that the RAID device has returned to normal, as shown below:

# cat /proc/mdstat

Personalities : [raid5]

md0 : active raid5 sde1[1] sdb1[0] sdd1[2] sdc1[3](F)

208640 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>

2. Remove the faulty disk

Since "/dev/sdc1" is faulty, remove the device as follows:

# mdadm /dev/md0 --remove /dev/sdc1

mdadm: hot removed /dev/sdc1

"-Remove" indicates removing a disk from a specified RAID device. You can also use "-r" to replace this parameter.

3. Add a new hard disk

Before adding the new hard disk, you must also create a partition on it. Assuming the new partition's device name is again "/dev/sdc1", perform the following operation:

# mdadm /dev/md0 --add /dev/sdc1

mdadm: hot added /dev/sdc1

"-- Add" is the opposite of "-- remove". It is used to add a disk to a specified device and can be replaced by "-.

Because the RAID 5 array in Example 1 is configured with a spare device, it keeps running normally even without this operation. However, if another disk were to fail, the array would be left without data redundancy, which is too risky for devices storing important data. The "/dev/sdc1" partition just added therefore appears as a spare device in the array, as shown below:

# mdadm --detail /dev/md0

/dev/md0:

......

......

Number   Major   Minor   RaidDevice   State

   0       8       17        0        active sync   /dev/sdb1

   1       8       65        1        active sync   /dev/sde1

   2       8       49        2        active sync   /dev/sdd1

   3       8       33       -1        spare         /dev/sdc1

UUID: b372436a:6ba09b3d:2c80612c:efe19d75

Events: 0.133

 
