Blog 8: Implementation of a RAID Array


How RAID arrays are composed:

1. RAID: Redundant Arrays of Independent Disks
1) Improves I/O capability and increases durability
2) Level: the different ways multiple disks are organized to work together
3) How RAID is implemented:
External disk array: RAID capability provided through an expansion adapter
Internal RAID: RAID controller integrated on the motherboard
Soft RAID: implemented in software by the operating system kernel
4) RAID levels:
RAID-0: striped volume
RAID-1: mirrored volume
...
RAID-5: striping with distributed parity
RAID-10: mirror first (bottom layer), then stripe on top (common in enterprise use)
RAID-01: stripe first (bottom layer), then mirror on top
JBOD: a simple technology that concatenates multiple disks into one volume
2. RAID-0:
Read and write performance improved;
Usable space: n*min(S1,S2,...)
No fault tolerance
Minimum number of disks: 2

RAID-1:
Read performance improved, write performance slightly decreased;
Usable space: 1*min(S1,S2,...)
Has redundancy
Minimum number of disks: 2

RAID-4:
Dedicated parity disk; the parity block is the bitwise XOR of the data blocks, e.g. 1101 XOR 0110 = 1011
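The bit patterns above illustrate XOR parity: the parity block is the bitwise XOR of the data blocks, and any one lost block can be rebuilt by XOR-ing the survivors. A minimal shell sketch (illustration only, not part of any RAID tool):

```shell
# RAID-4/5 parity demo using the data blocks from the example above.
d1=$(( 2#1101 ))          # first data block (13 in decimal)
d2=$(( 2#0110 ))          # second data block (6 in decimal)
parity=$(( d1 ^ d2 ))     # parity block: 1011 in binary, 11 in decimal
echo "parity=$parity"
# If the disk holding d2 fails, XOR the surviving blocks to rebuild it:
rebuilt=$(( d1 ^ parity ))
echo "rebuilt=$rebuilt"   # prints rebuilt=6, i.e. 0110 recovered
```

This single-XOR scheme is exactly why RAID-4/5 can tolerate one failed disk but not two.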

RAID-5:
Read and write performance improved
Usable space: (n-1)*min(S1,S2,...)
Fault tolerance: 1 disk
Minimum number of disks: 3

RAID-6:
Read and write performance improved
Usable space: (n-2)*min(S1,S2,...)
Fault tolerance: 2 disks
Minimum number of disks: 4


Mixed (nested) types
RAID-10:
Read and write performance improved
Usable space: n*min(S1,S2,...)/2
Fault tolerance: each mirrored pair can lose at most one disk;
Minimum number of disks: 4

JBOD: Just a Bunch Of Disks
Function: combines the space of multiple disks into one large contiguous space;
Usable space: sum(S1,S2,...)
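To make the capacity formulas listed above concrete, here is a small shell function (a hypothetical helper for illustration, not part of any RAID tool) that computes usable capacity for n disks of equal size:

```shell
# raid_capacity LEVEL N SIZE -> usable capacity, assuming N equal disks
# of SIZE (e.g. in GB). Follows the formulas listed above.
raid_capacity() {
    level=$1; n=$2; s=$3
    case $level in
        0)    echo $(( n * s )) ;;        # striping, no redundancy
        1)    echo "$s" ;;                # everything mirrored
        5)    echo $(( (n - 1) * s )) ;;  # one disk's worth of parity
        6)    echo $(( (n - 2) * s )) ;;  # two disks' worth of parity
        10)   echo $(( n * s / 2 )) ;;    # half the space goes to mirrors
        jbod) echo $(( n * s )) ;;        # plain concatenation
        *)    echo "unknown level" >&2; return 1 ;;
    esac
}

raid_capacity 5 4 500    # four 500 GB disks in RAID-5  -> prints 1500
raid_capacity 10 4 500   # the same disks in RAID-10    -> prints 1000
```

Note how RAID-6 and RAID-10 cost the same space at 4 disks, but differ in which two-disk failures they survive.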


How soft RAID is implemented, and the related commands
1. The kernel contains a module, md (multi disks), which implements soft RAID.
2. The module has its own userspace management tool: mdadm.
mdadm is a modal tool with the following syntax:

mdadm [mode] <raiddevice> [options] <component-devices>

Supported RAID levels: LINEAR, RAID0, RAID1, RAID4, RAID5, RAID6, RAID10;

Modes:
Create: -C
Assemble: -A
Monitor: -F
Manage: -f, -r, -a

<raiddevice>: /dev/md#
<component-devices>: any block device


-C: create mode
-n #: use # member devices to create the RAID;
-l #: the RAID level to create;
-a {yes|no}: automatically create device files for the target RAID device;
-c CHUNK_SIZE: specify the chunk size;
-x #: the number of spare (idle) disks;


-D: display details of a RAID device;
mdadm -D /dev/md#

Management mode:
-f: mark the specified disk as faulty;
-a: add a disk;
-r: remove a disk;

Observe md status:
cat /proc/mdstat

To stop an md device:
mdadm -S /dev/md#

The watch command:
-n #: refresh interval, in seconds;

watch -n # 'COMMAND'


Soft RAID in practice:

1. First, prepare four partitions: three will be RAID-5 members, and the remaining one will be a spare disk, so that if a member disk fails it can be replaced immediately.
[root@localhost ~]# fdisk /dev/sda
Command (m for help): n
First cylinder (8513-15665, default 8513): 5
Value out of range.
First cylinder (8513-15665, default 8513):
Using default value 8513
Last cylinder, +cylinders or +size{K,M,G} (8513-15665, default 15665): +5G


Command (m for help): t
Partition number (1-7): 5
Hex code (type L to list codes): fd
Changed system type of partition 5 to fd (Linux raid autodetect)


Command (m for help): p

Disk /dev/sda: 128.8 GB, 128849018880 bytes
255 heads, 63 sectors/track, 15665 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0008ea5a

   Device Boot  Start    End     Blocks   Id System
/dev/sda1   *       1              204800 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2                7859   62914560  8e Linux LVM
/dev/sda3        7859    8512    5252256  fd Linux raid autodetect
/dev/sda4        8513   15665   57456472+  5 Extended
/dev/sda5        8513    9166    5253223+ fd Linux raid autodetect
/dev/sda6        9167    9820    5253223+ fd Linux raid autodetect
/dev/sda7        9821   10474    5253223+ fd Linux raid autodetect

The commands above prepare the partitions we will use. Then re-read the partition table:

[root@localhost ~]# partx -a /dev/sda


2. Next, create the RAID-5 array.
[root@localhost ~]# mdadm -C /dev/md0 -a yes -n 3 -x 1 -l 5 /dev/sda{3,5,6,7}

View the md devices on the system:
[root@localhost ~]# cat /proc/mdstat
During creation the array runs an initial recovery (resync), which brings the member disks into bitwise agreement; this is what gives the RAID its redundancy and enables bitwise XOR reconstruction.

With the array assembled, create a filesystem on it:
[root@localhost ~]# mke2fs -t ext4 /dev/md0

Mount it:
[root@localhost ~]# mkdir /mydata
[root@localhost ~]# mount /dev/md0 /mydata

Verify the result:
[root@localhost ~]# df -lh
[root@localhost ~]# blkid /dev/md0

With that, a complete, working RAID-5 array is in place.
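One step the walkthrough does not cover is making the array and its mount point persistent across reboots. A sketch of the usual approach (the config path is distro-dependent: /etc/mdadm.conf on CentOS 6, /etc/mdadm/mdadm.conf on Debian; adjust to your system):

```shell
# Record the array so it reassembles under the same name at boot
mdadm --detail --scan >> /etc/mdadm.conf

# Mount it automatically by adding a line like this to /etc/fstab:
# /dev/md0  /mydata  ext4  defaults  0 0
```

Without a recorded configuration, the kernel may still auto-assemble the array, but possibly under a different device name (e.g. /dev/md127).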

3. What happens if we deliberately fail one of the partitions?
[root@localhost ~]# mdadm /dev/md0 -f /dev/sda7

At this point the RAID resynchronizes automatically; view it with:
[root@localhost ~]# cat /proc/mdstat
For a continuously updating view, use the watch command:
[root@localhost ~]# watch -n 1 'cat /proc/mdstat'
Checking the RAID again, you will find that the failed disk has been replaced by the spare.
[root@localhost ~]# mdadm -D /dev/md0
Note that the RAID-5 can still survive one more disk failure, but it would then run in degraded mode. Finally, remove the failed disk from the array:
[root@localhost ~]# mdadm /dev/md0 -r /dev/sda7

4. A disk added to an existing array can only serve as a spare; to make it an active part of the RAID-5 you need grow mode (-G), which we will not go into here.
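For completeness, a minimal sketch of what grow mode looks like, assuming /dev/md0 and a re-added /dev/sda7 as in the walkthrough above (this triggers a long online reshape and should not be run casually):

```shell
# Add the repaired disk back to the array; it joins as a spare
mdadm /dev/md0 -a /dev/sda7

# Grow the array from 3 to 4 active members; mdadm reshapes the
# data online, after which the filesystem can be resized to match
mdadm -G /dev/md0 -n 4
```

After the reshape completes, the extra capacity becomes available to the filesystem only after a resize (e.g. resize2fs for ext4).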



This article is from the "Fante" blog; please keep the source: http://8755097.blog.51cto.com/8745097/1690279

