Operating RAID in Linux


I. What is RAID?

In the early days, RAID stood for Redundant Array of Inexpensive Disks; today the acronym is usually expanded as Redundant Array of Independent Disks. It is a storage technique that combines multiple hard disks into a single logical device, and most levels add fault tolerance. You can think of it simply as several small disks joined into one large disk that can survive a drive failure. RAID comes in several levels, such as RAID 0, RAID 1, RAID 4, RAID 5, RAID 6, RAID 10, and RAID 01. These levels are not a ranking; each number simply denotes a different on-disk layout. RAID is typically used in projects with strict requirements on data safety and read/write performance.

RAID can be implemented in hardware or in software. The two share the same logical structure and similar mechanisms. The difference is that hardware RAID has a dedicated, independent controller chip responsible for data processing, so it is more capable and performs noticeably better. Software RAID does all processing on the CPU, so its performance is weaker; it is acceptable for emergencies or low-stakes use, but we do not recommend it otherwise.

II. Operations on Software RAID in Linux

In Linux, mdadm is the usual tool for building software RAID on disk partitions. mdadm drives the kernel's md module, and with md any block device, including an ordinary partition, can serve as a RAID member. When building RAID 0, we do not recommend using two partitions on the same physical disk: RAID 0 exists to stripe data across disks, and if both members live on one disk the data still lands on a single spindle, so the arrangement gains nothing. The steps below are performed in a virtual machine purely as an experiment, so that concern is set aside here.
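As a quick sanity check before creating anything, you can confirm that the md driver is available and see any existing arrays. This is a minimal sketch; the module names below are the standard kernel ones:

```shell
# Show current arrays and loaded RAID personalities; this file exists
# whenever the md driver is available.
cat /proc/mdstat

# If a needed personality is missing, load it explicitly (requires root).
modprobe raid0
modprobe raid1
```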
mdadm is a modal tool and has the following main modes:

-A: assemble mode
-C: create mode
-F: follow/monitor mode

Common parameters in create (-C) mode:

-n #: number of active devices in the array
-x #: number of hot-spare disks
-l level: the RAID level; either raid0 or 0 is accepted
-a yes: automatically create the device file for the new array
-c chunk_size: the chunk size in KB; the default is 512
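Putting the create-mode parameters together, a RAID 5 array with one hot spare might look like the sketch below. The device name /dev/md1 and the member partitions /dev/sdb{1,2,3,4} are assumptions for illustration:

```shell
# Create /dev/md1 as RAID 5: 3 active members (-n 3), 1 hot spare (-x 1),
# a 64 KB chunk (-c 64), and an auto-created device file (-a yes).
mdadm -C /dev/md1 -a yes -l 5 -n 3 -x 1 -c 64 /dev/sdb{1,2,3,4}
```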

Other standalone mdadm parameters:

-f: mark a device as faulty (simulate damage)
-r: remove a device from the array (simulate pulling the bad disk)
-a: add a new device to the array
-S: stop the array; the filesystem must be unmounted first
-D (--detail): display the details of the array

Next, we will create a 12 GB RAID 0 array.
1. First, use the system fdisk tool to partition the disk: create two 6 GB partitions and set their type to Linux raid autodetect (type fd).

Here, my two partitions are /dev/sdb1 and /dev/sdb2.
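If you prefer to script step 1 instead of answering fdisk prompts, sfdisk can create the two partitions non-interactively. This is a sketch under the example's assumptions (disk /dev/sdb, two 6 GB members):

```shell
# Two 6 GiB partitions of type "fd" (Linux raid autodetect) on /dev/sdb.
sfdisk /dev/sdb <<EOF
,6G,fd
,6G,fd
EOF
```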

2. Use the kpartx and partx commands to refresh the kernel's partition table records.

kpartx -af /dev/sdb
partx -a /dev/sdb

Then view the partition table records; once the new partitions appear in the list, the RAID can be created.

cat /proc/partitions

3. Run mdadm -C to create the RAID 0 array.
mdadm -C /dev/md0 -a yes -l 0 -n 2 /dev/sdb{1,2}

-C /dev/md0: create an array named /dev/md0
-a yes: automatically answer yes when asked to create the device file
-l 0: RAID level 0
-n 2: the array has 2 member devices
/dev/sdb{1,2}: shell expansion for /dev/sdb1 and /dev/sdb2
At this point the system sometimes warns that a partition is in use, but the array can still be created successfully. You can keep watching the RAID status while it assembles.
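While the array is assembling or resyncing, you can watch the progress refresh in place rather than re-running cat by hand:

```shell
# Refresh the md status every second (Ctrl-C to quit).
watch -n 1 cat /proc/mdstat
```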

4. View the RAID status

cat /proc/mdstat

View the RAID details

mdadm -D /dev/md0

5. Now that the RAID is created successfully, you can use the device normally.

mke2fs -t ext4 /dev/md0
mount /dev/md0 /web

Format, mount, and use the device as a normal partition.
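To have the array and mount point survive a reboot, you can record the array in mdadm's config file and add an fstab entry. This is a sketch reusing the example's /dev/md0 and /web; the config path varies by distribution (e.g. /etc/mdadm.conf or /etc/mdadm/mdadm.conf):

```shell
# Capture the array definition so it is assembled at boot.
mdadm -Ds >> /etc/mdadm.conf

# Mount /dev/md0 on /web automatically at boot.
echo '/dev/md0 /web ext4 defaults 0 0' >> /etc/fstab
```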

6. Other RAID operations
Because of hardware limits on this machine, creating a RAID 5 array takes a very long time while the disks synchronize.
Therefore this article does not demonstrate hot spares and disk replacement with screenshots; only the commands are shown.
-f: simulate device damage

# Simulate damage to /dev/sdb1. If a hot-spare disk exists,
# the hot spare automatically takes its place.
mdadm /dev/md0 -f /dev/sdb1

-r: remove the bad disk

mdadm /dev/md0 -r /dev/sdb1

-a: add a new disk

mdadm /dev/md0 -a /dev/sdb2

7. When the RAID is no longer needed, stop it with -S. The filesystem must be unmounted first.

mdadm -S /dev/md0
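If you want to dismantle the array completely rather than just stop it, also wipe the md superblocks from the members so they are no longer detected as RAID. The partition names match the example above:

```shell
umount /web                                  # detach the filesystem first
mdadm -S /dev/md0                            # stop the array
mdadm --zero-superblock /dev/sdb1 /dev/sdb2  # erase the RAID metadata
```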

Conclusion: the above covers the basic operations of software RAID. Software RAID is not widely used in production, so a working knowledge of it is usually enough.

