Implementation of Software RAID in Linux

Source: Internet
Author: User

As a network operating system, Linux needs a redundant array of independent disks (RAID) as an essential feature. Starting with the 2.4 kernel, Linux provides software RAID, eliminating the need to buy expensive hardware RAID controllers and accessories (generally only mid-range and high-end servers provide such devices along with hot-swappable hard drives). This greatly enhances the I/O performance and reliability of Linux disks, and it can also combine several smaller disks into one larger disk space. Software RAID here does not mean implementing RAID on a single physical hard disk: to get real RAID benefits, it is best to use multiple hard disks, and disks with a SCSI interface will work even better.
RAID functions and main usage types

RAID combines ordinary hard disks into a disk array. When the host writes data, the RAID controller splits the data into blocks and writes them to the disks of the array in parallel. When the host reads data, the RAID controller concurrently reads the blocks distributed across the disks and reassembles them before returning them to the host. These parallel read and write operations improve the throughput of the storage system. In addition, a RAID array improves the system's fault tolerance and ensures data reliability through mirroring, parity, and similar measures. RAID can usually be configured while installing the Linux operating system.

You can also configure RAID manually later, based on application requirements, provided the raidtools package is installed. The latest raidtools-1.00.3.tar.gz can be downloaded from http://people.redhat.com/mingo/raidtools. Unpack the package as the root user, then enter the following commands:

# cd raidtools-1.00.3
# ./configure
# make
# make install

Once raidtools-1.00.3 is installed, RAID can be set up at any time.

Linux supports RAID 0, RAID 1, and RAID 5. RAID 0, also known as stripe or striping, distributes the data to be accessed as evenly as possible across multiple hard disks in the form of stripes. Because reads and writes hit multiple hard disks at the same time, the data transfer speed improves. Another purpose of RAID 0 is to obtain a larger "single" disk capacity.
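The round-robin placement that striping performs can be sketched as follows. This is a simplified illustration (the function name and the 3-disk layout are invented for the example, not part of the md driver):

```python
# Sketch of RAID 0 striping: consecutive logical blocks are distributed
# round-robin across N disks, so they can be read or written in parallel.

def stripe_location(logical_block, n_disks):
    """Return (disk index, block offset on that disk) for a logical block."""
    return logical_block % n_disks, logical_block // n_disks

# With 3 disks, logical blocks 0..5 land alternately on disks 0, 1, 2:
layout = [stripe_location(b, 3) for b in range(6)]
print(layout)  # [(0, 0), (1, 0), (2, 0), (0, 1), (1, 1), (2, 1)]
```

Note that no block is stored twice: RAID 0 gains speed and capacity but offers no redundancy at all.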

RAID 1 is also known as mirror or mirroring. This mode is designed for data security: data written to one hard disk is automatically copied to another hard disk or to a different area (the mirror). When reading, the system first reads from the source disk of the RAID 1 pair; if the read succeeds, the backup disk is not touched. If reading from the source disk fails, the system automatically reads from the backup disk instead, without interrupting the user's work. RAID 1 provides the highest data security of all RAID levels. However, because all data is duplicated, the backup copy takes up half of the total storage space, so the disk space utilization of mirroring is low and the storage cost is high.
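The RAID 1 read path described above can be modeled in a few lines. This is a toy sketch (the dictionaries standing in for disks and the function names are invented for illustration; this is not the md driver's code):

```python
# Toy model of RAID 1: every write goes to both copies; reads try the
# source disk first and silently fall back to the mirror on failure.

def mirrored_write(disks, block, data):
    for disk in disks:            # the write is duplicated on every mirror
        disk[block] = data

def mirrored_read(disks, block):
    for disk in disks:            # source disk first, then the backup
        if block in disk:
            return disk[block]
    raise IOError("block lost on all mirrors")

source, mirror = {}, {}
mirrored_write([source, mirror], 0, b"payload")
del source[0]                     # simulate a failed read on the source disk
assert mirrored_read([source, mirror], 0) == b"payload"
```

The fallback happens transparently, which is why a RAID 1 disk failure does not interrupt the user's work.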

RAID 5 is a storage solution that balances storage performance, data security, and storage cost, and it is the most widely used RAID level. Each member disk is divided into stripes, and for each stripe a parity chunk (the exclusive OR of the data chunks) is computed; the parity chunks are distributed evenly across all the disks. A RAID 5 array built from n hard disks provides the capacity of n-1 disks, so its storage space utilization is high. RAID 5 does not keep a full backup of the stored data; instead, it stores the data and the corresponding parity information across the disks that make up the array, with the parity information and its corresponding data always on different disks. When the data on any one disk of a RAID 5 array is lost, it can be recalculated from the remaining data and the parity. RAID 5 thus offers data security, fast reads, and high space utilization, which is why it is so widely used. Its disadvantage is that when one hard disk fails, the performance of the whole array drops considerably. RAID 5 is less secure than mirroring but uses disk space more efficiently. Its read speed is similar to RAID 0, but because the parity information must be updated on every write, its write speed is slightly slower than that of a single disk. Because many data chunks share one parity chunk, the disk space utilization of RAID 5 is higher than that of RAID 1, and the storage cost is correspondingly lower.
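The exclusive-OR parity that RAID 5 relies on can be demonstrated directly: XOR-ing all chunks of a stripe together recovers any single missing chunk. This is a pure illustration of the arithmetic (the function name and sample chunks are invented for the example):

```python
# XOR parity as used by RAID 5: parity = D0 ^ D1 ^ ... ^ Dn-1.
# Any one lost chunk equals the XOR of all the surviving chunks.

def xor_parity(chunks):
    """XOR a list of equal-length byte strings together."""
    out = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, byte in enumerate(chunk):
            out[i] ^= byte
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]   # data chunks of one stripe
parity = xor_parity(data)            # stored on one disk of the stripe

# Simulate losing the second disk and rebuilding its chunk:
rebuilt = xor_parity([data[0], data[2], parity])
assert rebuilt == data[1]
```

Because XOR is its own inverse, the same operation serves both to compute the parity and to reconstruct a lost chunk, which is exactly what the array does during a rebuild.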

Creating RAID in Linux

In practice, RAID is usually created from multiple independent disks. It can, however, also be created from several partitions of a single disk, and the procedure is the same. The example below uses a single disk.

1. Log on as the root user

2. Use the fdisk tool to create RAID partitions

(1) Run fdisk /dev/hda. This assumes the hard disk on the IDE1 master interface has sufficient free space.

(2) Use the n command to create several new partitions of the same size. At least two partitions are required for RAID 0 or RAID 1, and at least three for RAID 5. For each partition, enter n, then the starting cylinder (you can simply press Enter), then the partition size; repeat until you have created the number of RAID partitions you want. The result looks like this:

Disk /dev/hda: 240 heads, 63 sectors, 3876 cylinders
Units = cylinders of 15120 * 512 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/hda1   *         1      1221   9230728+   c  Win95 FAT32 (LBA)
/dev/hda2          1222      1229     60480   83  Linux
/dev/hda3          1230      1906   5118120   83  Linux
/dev/hda4          1907      3876  14893200    f  Win95 Ext'd (LBA)
/dev/hda5          1907      1960    408208+  82  Linux swap
/dev/hda6          1961      2231   2048728+   b  Win95 FAT32
/dev/hda7          2709      3386   5125648+   b  Win95 FAT32
/dev/hda8          3387      3876   3704368+   7  HPFS/NTFS
/dev/hda9          2232      2245    105808+  83  Linux
/dev/hda10         2246      2259    105808+  83  Linux
/dev/hda11         2260      2273    105808+  83  Linux
/dev/hda12         2274      2287    105808+  83  Linux

The n command was run four times to create four Linux partitions, and the p command displays the partition table. Here, /dev/hda9, /dev/hda10, /dev/hda11, and /dev/hda12 are the four newly created Linux partitions.

(3) Use the t command to change the partition type to the software RAID type: enter t, then the partition number, then fd (the partition type); repeat for each of the four partitions. After the types are changed, the listing shows:

/dev/hda9          2232      2245    105808+  fd  Linux raid autodetect
/dev/hda10         2246      2259    105808+  fd  Linux raid autodetect
/dev/hda11         2260      2273    105808+  fd  Linux raid autodetect
/dev/hda12         2274      2287    105808+  fd  Linux raid autodetect

(4) Run the w command to save the partition table.

3. Restart to make the partition table take effect

4. Use man raidtab to view the structure of the configuration file

5. Use an editor to write the configuration into /etc/raidtab

As follows:

raiddev /dev/md0
        raid-level              5
        nr-raid-disks           3
        nr-spare-disks          1
        persistent-superblock   1
        parity-algorithm        left-symmetric
        chunk-size              8

        device          /dev/hda9
        raid-disk       0
        device          /dev/hda10
        raid-disk       1
        device          /dev/hda11
        raid-disk       2
        device          /dev/hda12
        spare-disk      0

This creates a RAID 5 array with three RAID disks and one spare disk. Note that the chunk-size line must be specified: it sets the block size used by the RAID 5 array to 8 KB. The RAID 5 volume is written in 8 KB chunks, that is, the first 8 KB of the RAID volume is on hda9, the second 8 KB on hda10, and so on. The device name can be md0, md1, and so on. The spare-disk is reserved as a backup: once a member disk is damaged, the spare can immediately take its place.
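The left-symmetric parity placement named in the raidtab above rotates the parity chunk across the disks from stripe to stripe, with the data chunks starting just after the parity chunk and wrapping around. A hedged sketch of that layout (function name and output format are invented for illustration; consult the md documentation for the authoritative definition):

```python
# Sketch of left-symmetric parity placement for an N-disk RAID 5 array:
# the parity chunk moves one disk to the left on each successive stripe,
# and data chunks fill the remaining disks starting after the parity disk.

def left_symmetric_layout(stripe, n_disks):
    """Return (ordered data-disk indices, parity-disk index) for one stripe."""
    parity_disk = (n_disks - 1 - stripe) % n_disks
    data_disks = [(parity_disk + 1 + i) % n_disks for i in range(n_disks - 1)]
    return data_disks, parity_disk

# For a 3-disk array, the parity disk cycles 2, 1, 0, 2, 1, 0, ...
for s in range(3):
    print(s, left_symmetric_layout(s, 3))
```

Spreading the parity this way is what lets RAID 5 avoid the dedicated-parity-disk bottleneck: no single disk absorbs all the parity writes.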

6. Create the RAID array using mkraid /dev/md0

Here, md (multiple devices) is the kernel's name for a software RAID device. The result is as follows:

[root@localhost root]# mkraid /dev/md0
handling MD device /dev/md0
analyzing super-block
disk 0: /dev/hda9, 105808kB, raid superblock at 105728kB
disk 1: /dev/hda10, 105808kB, raid superblock at 105728kB
disk 2: /dev/hda11, 105808kB, raid superblock at 105728kB
disk 3: /dev/hda12, 105808kB, raid superblock at 105728kB
md0: WARNING: hda10 appears to be on the same physical disk as hda9. True
     protection against single-disk failure might be compromised.
md0: WARNING: hda11 appears to be on the same physical disk as hda10. True
     protection against single-disk failure might be compromised.
md0: WARNING: hda12 appears to be on the same physical disk as hda11. True
     protection against single-disk failure might be compromised.
md: md0: raid array is not clean -- starting background reconstruction
   8regs     :  2206.800 MB/sec
   32regs    :  1025.200 MB/sec
   pII_mmx   :  2658.400 MB/sec
   p5_mmx    :  2818.400 MB/sec
raid5: using function: p5_mmx (2818.400 MB/sec)
raid5: raid level 5 set md0 active with 3 out of 3 devices, algorithm 2

7. Use lsraid -a /dev/md0 to view the status of the RAID array

The result is as follows:

[root@localhost root]# lsraid -a /dev/md0
[dev   9,   0] /dev/md0         86391738.19bedd09.8f02c37b.51584dba online
[dev   3,   9] /dev/hda9        86391738.19bedd09.8f02c37b.51584dba good
[dev   3,  10] /dev/hda10       86391738.19bedd09.8f02c37b.51584dba good
[dev   3,  11] /dev/hda11      86391738.19bedd09.8f02c37b.51584dba good
[dev   3,  12] /dev/hda12      86391738.19bedd09.8f02c37b.51584dba spare

8. Run mkfs.ext3 /dev/md0 to format the RAID device with the ext3 filesystem

The result is as follows:

[root@localhost root]# mkfs.ext3 /dev/md0
mke2fs 1.27 (8-Mar-2002)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
53040 inodes, 211456 blocks
10572 blocks (5.00%) reserved for the super user
First data block=1
26 block groups
8192 blocks per group, 8192 fragments per group
2040 inodes per group
Superblock backups stored on blocks:
        8193, 24577, 40961, 57345, 73729, 204801

RAID5: switching cache buffer size, 4096 --> 1024
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 22 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.

9. Run mount /dev/md0 /mnt/md0

Create the md0 subdirectory in the /mnt directory first.

Once all the steps above are complete, the /mnt/md0 directory is backed by the RAID array.

Testing the RAID array

To verify that the RAID works, follow these steps.

1. Run dd if=/dev/zero of=/dev/hda9 bs=100000000 count=10

This overwrites hda9, the first partition of the RAID array, with zeros. bs is the number of bytes written per operation, and count is the number of operations. The data written here must be larger than the partition's capacity; otherwise, the data would simply be restored automatically by the RAID. The output is as follows:

[root@localhost root]# dd if=/dev/zero of=/dev/hda9 bs=100000000 count=10
dd: writing '/dev/hda9': No space left on device
2+0 records in
1+0 records out

2. Use lsraid -a /dev/md0 to confirm that /dev/hda9 has been wiped

As follows:

[root@localhost root]# lsraid -a /dev/md0
lsraid: Device "/dev/hda9" does not have a valid raid superblock
lsraid: Device "/dev/hda9" does not have a valid raid superblock
lsraid: Device "/dev/hda9" does not have a valid raid superblock
lsraid: Device "/dev/hda9" does not have a valid raid superblock
[dev   9,   0] /dev/md0         86391738.19bedd09.8f02c37b.51584dba online
[dev   ?,   ?] (unknown)        00000000.00000000.00000000.00000000 missing
[dev   3,  10] /dev/hda10       86391738.19bedd09.8f02c37b.51584dba good
[dev   3,  11] /dev/hda11      86391738.19bedd09.8f02c37b.51584dba good
[dev   3,  12] /dev/hda12      86391738.19bedd09.8f02c37b.51584dba spare

3. Run raidstop /dev/md0

4. Run raidstart /dev/md0

After the array restarts, the data on /dev/hda9 is recovered normally, which shows that the RAID parity and recovery mechanism has taken effect.

When using Linux, you can create RAID at any time to improve data reliability and I/O performance, and you can even combine the leftover space of several hard disks into one large volume.
