[How to Create a Soft RAID in Linux]

Source: Internet
Author: User


[Operating system version]: Red Hat Enterprise Linux 6.3

Tip: in Linux, disk device files are found in the /dev/ directory.

Hard disks on an IDE interface appear there with names starting with hd.

Hard disks on a SATA interface appear with names starting with sd.

Hard disks on a SCSI interface also appear with names starting with sd.

USB disks likewise appear with names starting with sd.

[Related commands]

[fdisk]: disk management and partitioning command

[mdadm]: software RAID management command (can build a RAID array from any block devices)

1. Create a soft RAID

Note: RAID normally requires multiple disks. For ease of demonstration, this document adds only one disk, creates several partitions on it, and then builds the RAID from those partitions. In practice, building a software RAID from partitions of a single disk makes no sense: it cannot improve read or write speed, because the speed of the one disk is fixed; and if that disk fails, every member of the RAID fails with it, so the redundant copies are lost at the same time.

Prerequisites: Prepare a hard disk.

A hard disk, sdb, has been prepared; it can be seen under /dev.

[Example 1: Create a 20 GB RAID0 consisting of two partitions]

The RAID0 model is shown as follows:

Note: given how RAID0 works, you must first prepare two 10 GB partitions; the resulting RAID0 device will then be about 20 GB.

First, create two 10 GB partitions on the sdb disk.

# fdisk /dev/sdb

You can enter the partition management mode of the sdb disk.

Enter p to view disk partition information.

Because the disk has not yet been partitioned, first create an extended partition spanning the full size of the disk.

Input: n

A prompt is displayed: enter "e" to create an extended partition, or "p" to create a primary partition.

Input: e

Next, enter the partition number. A disk can hold at most four primary partitions, because the MBR partition table has only four 16-byte entries; an extended partition occupies one of these entries.

Input: 1

Enter the start cylinder of the partition. The default is 1; enter 1 or simply press Enter.

Input: 1

Now enter the end cylinder. To make full use of the disk, give all the cylinders to the extended partition: accept the default (13054 on this disk) by entering 13054 or simply pressing Enter.

Press ENTER

Now we can use p to view details.

Input: p

The size of the created extended partition is displayed here.

Then you can create a partition in the extended partition.

Input: n

This time the "e" option is gone, because a disk can hold only one extended partition. An "l" option now appears: it creates a logical partition inside the extended partition just created. Note: logical partition numbers start from 5 by default.

Input: l

Specify the start cylinder of the logical partition. To avoid leaving gaps on the disk, simply press Enter to accept the default.

Enter

Now enter the end cylinder of the logical partition. Because it is hard to judge the size from cylinder numbers, you can instead enter +10G to make the new partition 10 GB.

Input: +10G

You can use p to view the created partitions.

Input: p

Next, create the second 10 GB partition in the same way.

Now adjust the partition type. Software RAID members must use the Linux raid autodetect partition type, so enter t to change the type.

Input: t

Here, enter the number of the partition whose type is to be changed, i.e. the first partition just created:

Input: 5

fdisk now asks for the type code. If you do not know the code, enter l to list all of them.

Enter fd (Linux raid autodetect) to change the type.

Input: fd

Enter p to view the modified result.

Next, change the partition type of sdb6 in the same way.

The modification has been completed. Enter w to save and exit.

Input: w

Next, notify the kernel to re-read the partition table of sdb.

Input: partprobe /dev/sdb

Check the /proc/partitions file to verify the result.

Input: cat /proc/partitions

Here we can see sdb5 and sdb6.

Input: mdadm -C /dev/md0 -l 0 -a yes -n 2 /dev/sdb5 /dev/sdb6

The creation is successful:

Input: cat /proc/mdstat to verify

The RAID0 device has been created. To use it, you must first create a file system on it.

Note: because sdb5 and sdb6 now form the RAID0 device md0, the file system must be created on md0 itself, not on the individual partitions.

Input: mke2fs -j /dev/md0

Create a file system for md0

The file system has been created. To use it, mount /dev/md0.
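mke2fs works on any block device; here is a sketch of the same formatting step, run against a scratch image file instead of /dev/md0 so it needs no root privileges (-F forces mke2fs to accept a regular file):

```shell
# Scratch file standing in for /dev/md0 (sketch; 100M keeps it quick).
truncate -s 100M md0-demo.img

# -j adds an ext3 journal, -F allows a non-block-device target, -q is quiet.
mke2fs -j -q -F md0-demo.img

# Confirm the journal was created.
dumpe2fs -h md0-demo.img | grep 'Filesystem features'
```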

Enter fdisk -l to view the disk information, including md0.

The size shows as about 21.5 GB, so creation is complete. Because partitions are allocated in whole cylinders, the device cannot be made exactly 20 GB; a small discrepancy in size is normal.

NOTE: if either of the two members sdb5 and sdb6 is damaged, the whole RAID0 device becomes unusable, because RAID0 has no redundancy.

 

[Example 2: Create a 2 GB RAID1]

Analysis: given how RAID1 is structured, building a 2 GB RAID1 requires two 2 GB disks.

The RAID1 model is shown as follows:

For demonstration, first create two 2 GB partitions to stand in for the disks.

Input: fdisk /dev/sdb

The steps for creating the partitions were demonstrated in Example 1, so they are not repeated here.

Modify partition type

Save and exit

Input: partprobe /dev/sdb

Notify the kernel to re-read the sdb Partition Table

An error is reported here, possibly because a virtual machine is being used; the exact cause is unclear, but partx can be used instead to re-read the partition table.

Input: partx -a /dev/sdb

These errors have little impact; you only need to check whether the kernel has picked up the new partitions.

Input: cat /proc/partitions

As shown in the following figure, sdb7 and sdb8 have been created; an extra 2 GB partition, sdb9, has also been created for the later damage simulation.

Now use mdadm to create RAID1

Input: mdadm -C /dev/md1 -a yes -l 1 -n 2 /dev/sdb7 /dev/sdb8

Enter y here to confirm

Input: y

The following figure shows that the md1 device has been created.

Input: cat /proc/mdstat

Verify

It shows that md1 has been created from sdb7 and sdb8.

Input: mke2fs -j /dev/md1

Create a file system on the device (i.e. format it).

Formatting is complete: a 2 GB RAID1 has been created successfully. It can now be mounted and used.

Input: fdisk -l

You can view the md1 device details.

Here we can see that the RAID1 built from the two 2 GB partitions sdb7 and sdb8 has a capacity of only 2 GB, because in RAID1 one member of the pair holds the mirror copy.
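The usable capacities seen in both examples follow simple arithmetic. A sketch (the raid_capacity helper is hypothetical, for illustration only; it assumes n equal members of size s, in GB):

```shell
# Usable capacity, in GB, of n equal members of size s at a given RAID level.
# raid_capacity is a hypothetical helper, not part of any tool.
raid_capacity() {
  level=$1; n=$2; s=$3
  case $level in
    0) echo $(( n * s )) ;;        # RAID0 stripes: capacity is the sum
    1) echo "$s" ;;                # RAID1 mirrors: one member's worth
    5) echo $(( (n - 1) * s )) ;;  # RAID5: one member's worth goes to parity
  esac
}

raid_capacity 0 2 10   # Example 1: two 10 GB members -> 20
raid_capacity 1 2 2    # Example 2: two 2 GB members  -> 2
```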

 

[Demo: simulating damage to a disk in RAID1]

The RAID devices md0 and md1 were created above.

Now mount md1

Input: mount /dev/md1 /ftpftp/gandian/gz1

The md1 device has been mounted on the /ftpftp/gandian/gz1 directory.

Input: ls /ftpftp/gandian/gz1

View this directory

Input: cp /etc/inittab /ftpftp/gandian/gz1/

Copy the inittab file into the gz1 directory.

Input: mdadm --detail /dev/md1

View the composition of the RAID1 array md1.

Here we can see that both sdb7 and sdb8 are active and usable.

Now we can simulate damage to sdb8 and check whether md1 can still be used.

Input: mdadm /dev/md1 -f /dev/sdb8

The output shows that sdb8 is now marked as faulty (simulated damage).

Input: mdadm --detail /dev/md1

Here we can see that sdb8 is marked as faulty, and only sdb7 is currently working.

Use cd /ftpftp/gandian/gz1/ to enter the md1 mount directory and view the files.

Input: cd /ftpftp/gandian/gz1/

Input: ls -l

The files can still be read: this is the redundancy RAID1 provides. When one disk fails, the data remains available; however, if the failed disk is not replaced in time and the other disk then fails as well, the data can no longer be accessed.

With real hardware, you would simply unplug the failed disk and insert a new one of the same size. Since this demo uses software simulation, the replacement is done with mdadm, as follows:

Input: mdadm /dev/md1 -r /dev/sdb8

This removes the damaged sdb8 from the array.

Input: mdadm --detail /dev/md1

View the RAID information of md1

We can see that only one disk, sdb7, is in use.

Now add the replacement disk: the sdb9 partition created earlier.

Input: mdadm /dev/md1 -a /dev/sdb9

This adds the new disk to the md1 device.

Input: mdadm --detail /dev/md1

View the mdadm information again

We can see that sdb9 is automatically synchronizing data.

Input: cat /proc/mdstat

You can view the synchronization progress.
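The /proc/mdstat progress line can be picked apart with standard tools. A sketch against a saved sample (the device names and percentage below are illustrative, not from a live system):

```shell
# Sample of what /proc/mdstat looks like mid-rebuild (values illustrative).
cat > mdstat.sample <<'EOF'
Personalities : [raid1]
md1 : active raid1 sdb9[2] sdb7[0]
      2096128 blocks [2/1] [U_]
      [===>.................]  recovery = 18.3% (384448/2096128) finish=0.1min speed=192224K/sec

unused devices: <none>
EOF

# Extract just the rebuild percentage.
grep -o 'recovery = [0-9.]*%' mdstat.sample
```

On a live system you would run the same grep against /proc/mdstat itself, or simply `watch cat /proc/mdstat`.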

Because the partitions are small and the machine is fast, synchronization finishes quickly; by the time the command was entered, the device had already synchronized.

Enter mdadm --detail /dev/md1 again

View md1 Information

You can see that sdb9 can be used.

This completes the disk replacement after a failure on md1.

Even if sdb7 were now to fail in turn, the files would still be accessible.

Additional: --------------------------------

The mdadm command

[mdadm]: builds RAID arrays from any block devices

mdadm is a modal command; its modes are:

Creation Mode

[-C]

Options specific to create mode:

[-l]: RAID level

[-n]: number of member devices

-a {yes|no}: automatically create the device file

[-c]: chunk size (the data block size); must be a power of 2, default 64 KB

-x: number of spare (idle) disks

For example:

mdadm -C /dev/md0 -a yes -l 0 -n 2 /dev/sdb{5,6}
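The {5,6} in this example is bash brace expansion: the shell rewrites /dev/sdb{5,6} into the two device paths before mdadm ever runs. A quick way to see the expansion, using echo in place of mdadm so nothing is actually created:

```shell
# Brace expansion happens in bash before the command runs; echo shows the result.
bash -c 'echo mdadm -C /dev/md0 -a yes -l 0 -n 2 /dev/sdb{5,6}'
# prints: mdadm -C /dev/md0 -a yes -l 0 -n 2 /dev/sdb5 /dev/sdb6
```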

Management Mode

Used directly, without specifying a mode option.

Options such as [--add] and [--remove] indicate management operations.

By default, mdadm works in management mode.

[-D] / [--detail]: view RAID array details

[-f] / [--fail] / [--set-faulty]: mark a disk as faulty (simulate damage)

Example: mdadm /dev/md# --fail /dev/sdb7 (marks sdb7 in md# as faulty)

mdadm /dev/md1 -r /dev/sdb7 (removes the damaged disk)

[-S] / [--stop]: stop the array

Example: mdadm -S /dev/md#

Monitor Mode

[-F]

Grow Mode

[-G]

Assemble Mode

[-A]

[mdadm -D /dev/md#]: view RAID array details

(-D is short for --detail.)

[mdadm -D --scan]: view information about all md devices on the current system.

If you save this output with mdadm -D --scan > /etc/mdadm.conf, you will not need to specify the member disks when the device is assembled at the next startup.
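A sketch of what that saved file typically contains; the exact fields vary with the mdadm version, and the UUIDs below are placeholders, not real values:

```
# /etc/mdadm.conf -- generated by: mdadm -D --scan > /etc/mdadm.conf
ARRAY /dev/md0 level=raid0 num-devices=2 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
```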

Prerequisites for software RAID:

1. Required kernel module: md

2. User-space management tool: mdadm
