Implementing software RAID with mdadm on a Linux server

Mdadm is a tool for managing MD (software RAID) devices. Because it supports multiple modes of operation and is a single tool that does not depend on any configuration file, it is a good replacement for the older raidtools package, and it is now shipped by almost all distributions.
First, installation and compilation
Source download:
http://www.cse.unsw.edu.au/~neilb/source/mdadm/
Compile:
tar xzvf ./mdadm-1.6.0.tgz
cd mdadm-1.6.0
make install
RPM installation:
rpm -ivh mdadm-1.6.0-3.rpm
※ The latest source release is 2.5; since I am using the 1.6.0 RPM package, that version is used as the example throughout.
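After installation, a quick sanity check is to ask the tool for its version (the exact output string depends on the package installed):

mdadm --version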
Second, the modes
Mdadm operates in several distinct modes. Create and Assemble set up and activate arrays; Manage manipulates devices within an active array; Follow (or Monitor) lets administrators set up event notifications and actions for active arrays; Build is used for legacy arrays from the old version of the MD driver; Grow can extend an array; the remainder is the Misc mode, which covers a variety of internal tasks and operations that do not belong to any particular mode.
System platform: Red Hat AS4 for x86
Third, deployment
1. Prepare the disks
Software RAID arrays can only be built from partitioned disks, so the first step is to partition them properly. As mentioned above, apart from the system disk sda, we will operate on sdb, sdc, and sdd.
a) Partition sdb:
fdisk /dev/sdb
n, create a new partition;
p, make it a primary partition;
w, write the table and exit.
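As a rough sketch of the interactive dialogue (prompts abbreviated; cylinder numbers depend on the disk), the session for sdb looks like this, and the same steps are then repeated for sdc and sdd. The optional t/fd step, which marks the partition type as Linux raid autodetect, is an extra not mentioned above:

fdisk /dev/sdb
# Command (m for help): n      <- new partition
# Command action: p            <- primary partition
# Partition number (1-4): 1    <- press Enter at the cylinder prompts
#                                 to use the whole disk
# Command (m for help): t      <- (optional) change the partition type
# Hex code: fd                 <- fd = Linux raid autodetect
# Command (m for help): w      <- write the table and exit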

2. Create the array
Mdadm supports the linear, RAID0 (striping), RAID1 (mirroring), RAID4, RAID5, RAID6, and multipath array modes.
The creation command has the following format:
mdadm [mode] [options]
For example, to create a RAID0 device:
mdadm --create --verbose /dev/md0 --level=0 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
--level specifies the array level to create, and --raid-devices specifies the number of disks in the array.
The same command can be written more briefly:
mdadm -Cv /dev/md0 -l0 -n3 /dev/sd[bcd]1
You can also add the -c128 parameter to specify a chunk size of 128K (the default is 64K).
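Creation returns at once while the kernel builds the array in the background; two standard commands (both reused later in this article) let you watch the progress and confirm the array came up:

cat /proc/mdstat            # shows build/resync progress
mdadm --detail /dev/md0     # shows state, level and member disks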
3. Configuration file
Mdadm does not rely on /etc/mdadm.conf as a primary configuration file; it can do entirely without one, with no effect on the array's normal functioning.
The primary role of this configuration file is to make it easy to keep track of the soft RAID configuration. Setting it up is recommended, but not required.
It is usually built like this:

echo 'DEVICE /dev/sd[bcd]1' > /etc/mdadm.conf
mdadm -Ds >> /etc/mdadm.conf
The second command is the short form of:
mdadm --detail --scan >> /etc/mdadm.conf
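The resulting /etc/mdadm.conf then contains something like the following (reusing, for illustration, the UUID from the --examine example later in this article; yours will differ):

DEVICE /dev/sd[bcd]1
ARRAY /dev/md0 level=raid0 num-devices=3 UUID=8ba81579:e20fb0e8:e040da0e:f0b3fec8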

4. Format the array
From this point on, you can treat /dev/md0 as an ordinary standalone device:

mkfs.ext3 /dev/md0
mkdir /mnt/test
mount /dev/md0 /mnt/test
5. To mount it automatically at boot, add the following to /etc/fstab:
/dev/md0 /mnt/test auto defaults 0 0
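To verify the fstab entry without rebooting, mount everything listed in it and check the result:

mount -a                    # mount all filesystems listed in /etc/fstab
df -h /mnt/test             # confirm /dev/md0 is mounted with the expected size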
Fourth, monitoring and management
Mdadm makes it convenient to monitor and manage the operation of arrays, including common maintenance such as stopping and starting them.
1. View
To view the status of all arrays using the MD driver:
cat /proc/mdstat
To view detailed information about a specified array (--detail, or -D):
mdadm --detail /dev/md0
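For the three-disk RAID0 created above, the /proc/mdstat output is shaped roughly like this (block count invented for illustration):

Personalities : [raid0]
md0 : active raid0 sdd1[2] sdc1[1] sdb1[0]
      35535360 blocks 64k chunks
unused devices: <none>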

2. Stop
To stop the specified array and release its disks (--stop):
mdadm -S /dev/md0
※ Note: after the array is stopped, its member disks become idle; once these disks are written to, it will no longer be possible to reactivate the original array.
3. Start
To start the specified array, which can also be understood as assembling an array back into the system (--assemble):
mdadm -A /dev/md0 /dev/sd[bcd]1
If you have set up the /etc/mdadm.conf file as above, you can also let -s (--scan) look the devices up:
mdadm -As /dev/md0

4. Detection
If you have not set up the /etc/mdadm.conf file and have forgotten which array a disk belongs to, you can detect it (--examine):
mdadm -E /dev/sdb1
After obtaining the UUID, you can activate the array like this:
mdadm -Av /dev/md0 --uuid=8ba81579:e20fb0e8:e040da0e:f0b3fec8 /dev/sd*
As long as the disks are not damaged, this way of assembling is very convenient.
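If all you need is the UUID line from the --examine output, a grep spares you reading the rest:

mdadm -E /dev/sdb1 | grep -i uuid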
5. Add and remove disks
In manage mode, mdadm can add disks to and remove disks from a running array. This is often used to mark failed disks, add spare (redundant) disks, and replace disks.
For example, starting from a normally running array, you can mark a bad disk with --fail and take it out with --remove:
mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1

※ Note that for some array modes, such as RAID0, --fail and --remove cannot be used.
To add a new disk to the array:
mdadm /dev/md0 --add /dev/sdc1
※ Note that, likewise, for some array modes such as RAID0, --add cannot be used.
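Putting the two operations together, a full disk replacement on a redundant array looks roughly like this (assuming, hypothetically, that sdc has failed and its replacement has been partitioned the same way):

mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1   # retire the bad disk
# ... physically swap the disk and recreate sdc1 with fdisk as above ...
mdadm /dev/md0 --add /dev/sdc1                       # bring in the new disk
cat /proc/mdstat                                     # watch the rebuild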

6. Monitoring
In follow (monitor) mode, mdadm can watch over arrays, for example sending mail to an administrator when an array has a problem, or automatically swapping in a spare when a disk fails.
nohup mdadm --monitor --mail=sysadmin --delay=300 /dev/md0 &
This means: poll every 300 seconds, and send mail to the sysadmin user when an array error occurs. Because monitor mode does not exit automatically after starting, nohup and & are added so that it keeps running in the background.
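To check the mail path without waiting for a real failure, monitor mode can be run once with a test alert (both flags are standard monitor-mode options in reasonably recent mdadm versions):

mdadm --monitor --scan --oneshot --test --mail=sysadmin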
Follow mode also allows a spare disk to be shared between arrays.
For example, suppose we have two arrays, /dev/md0 and /dev/md1, and /dev/md0 has a spare disk. We define something like the following in /etc/mdadm.conf:
DEVICE /dev/sd*
ARRAY /dev/md0 level=raid1 num-devices=3 spare-group=database
   uuid=410a299e:4cdd535e:169d3df4:48b7144a
ARRAY /dev/md1 level=raid1 num-devices=2 spare-group=database
   uuid=59b6e564:739d4d28:ae0aa308:71147fe7
Both arrays are placed in the same spare-group, and the monitor-mode command above is left running. With this in place, when one of the disks making up /dev/md1 has a problem, mdadm automatically removes the spare disk from /dev/md0 and adds it to /dev/md1, with no manual intervention required. (Note that this only works for array modes with redundancy, such as RAID1 and RAID5; it does not apply to modes such as RAID0.)
Fifth, other
1. Adding a spare disk
Spare disks can be specified at creation time:
mdadm -Cv /dev/md0 -l1 -n2 -x1 /dev/sd[bcd]1
The -x (--spare-devices) parameter specifies the number of spare disks.
Additionally, for an array that is already full (for example, a RAID1 whose two active disks are both present), --add can be used directly, and mdadm will automatically treat the newly added disk as a spare.
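To confirm that the spare was picked up, the detail output lists it (a minimal check; the grep pattern assumes English output):

mdadm --detail /dev/md0 | grep -i spare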

2. Delete an array
Stop the array:
mdadm -S /dev/md0
and remove the device file:
rm /dev/md0
Then modify configuration files such as /etc/mdadm.conf and /etc/fstab, removing the relevant entries; finally, use fdisk to repartition the disks.
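If you want to be certain that no old RAID metadata lingers on the disks before reusing them, mdadm can erase the superblocks; do this only after the array has been stopped:

mdadm --zero-superblock /dev/sdb1 /dev/sdc1 /dev/sdd1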
3. Rebuilding an array
Disks that have been used in an array before, but are not currently part of any active array, can also be made into a new array directly, without repartitioning them with fdisk:
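Re-creating over previously used disks is an ordinary --create; for example (a sketch reusing the partitions from above):

mdadm -Cv /dev/md1 -l1 -n2 /dev/sd[bc]1
# mdadm notices the old superblocks, warns that the devices appear to be
# part of an existing array or contain a filesystem, and asks for
# confirmation before creating the new array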
