Examples of configuring and maintaining software RAID under Linux


Configuring software RAID under Linux


I. Set up the disks
Here we take RAID 1 as an example; the other RAID levels are configured in much the same way. RAID 1 requires two hard drives, so I added two 80 GB hard drives to the VPS. In the system, use the fdisk -l command to view them.
II. Install mdadm
mdadm is short for "multiple devices admin" and is the standard software RAID management tool under Linux.
1. First check whether mdadm is already installed:

# rpm -qa | grep mdadm

2. If it is not installed, install it with yum:


# yum install mdadm
III. Create the RAID

1. Create partitions
fdisk -l shows the two newly added hard drives as /dev/sdb and /dev/sdc; partition them first.

# fdisk /dev/sdb    # partition sdb
PS: For the detailed partitioning procedure, see the earlier article on CentOS disk partitioning and mounting. Note that after creating the partition you must enter "w" to save it; enter "p" to view the partition table.
2. Change the partition type
The default type of a new partition is Linux, code 83; we need to change it to a RAID type. Enter "t", then enter "L" to list all partition type codes. Here we want "fd Linux raid autodetect": enter "fd", then enter "p" to confirm that the partition type is now Linux raid autodetect.
3. Save the partition table
Enter "w" to save the partition table.
Partition /dev/sdc in the same way.
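The interactive fdisk steps above can be scripted for both disks at once. A minimal sketch, assuming the example's hypothetical disks /dev/sdb and /dev/sdc; since real partitioning is destructive, the script defaults to a dry run that only prints what it would do:

```shell
#!/bin/sh
# Dry run by default; set DRY_RUN=0 to really partition (DESTRUCTIVE).
part_raid() {
    disk=$1
    if [ "${DRY_RUN:-1}" = 1 ]; then
        echo "would partition $disk as type fd"
        return 0
    fi
    # Feed fdisk its interactive answers: n=new, p=primary, 1=first partition,
    # two blank lines accept default start/end, t=change type,
    # fd=Linux raid autodetect, w=write and exit.
    printf 'n\np\n1\n\n\nt\nfd\nw\n' | fdisk "$disk"
}
part_raid /dev/sdb
part_raid /dev/sdc
```

With a single partition on the disk, fdisk's "t" command selects it automatically, which is why no partition number follows "t" in the answer string.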
4. Synchronize the partition tables
Use the partprobe command to make the kernel re-read the partition tables.


# partprobe

Warning: Unable to open /dev/hdc read-write (read-only file system). /dev/hdc has been opened read-only.
5. View the partitions

# fdisk -l /dev/sdb /dev/sdc

Disk /dev/sdb: 85.8 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1       10443    83883366   fd  Linux raid autodetect

Disk /dev/sdc: 85.8 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1       10443    83883366+  fd  Linux raid autodetect
6. Create the RAID


# mdadm -C /dev/md0 -a yes -l 1 -n 2 /dev/sd[b,c]1

mdadm: largest drive (/dev/sdc1) exceeds size (83883264K) by more than 1%

Continue creating array? (y/n) y

mdadm: array /dev/md0 started.
PS:
-C or --create: create a new array;
-a or --auto: consent to creating the device node; without this option the RAID device node would have to be created manually with mknod, so using -a yes once at creation is recommended;
-l or --level: the array level; supported levels are linear, raid0, raid1, raid4, raid5, raid6, raid10, multipath, faulty, and container;
-n or --raid-devices: the number of active disks in the array; this number plus the number of spare disks should equal the total number of disks in the array;
/dev/md0: the device name of the array;
/dev/sd[b,c]1: the disks used to build the array.
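For readability, the short options above map one-to-one onto long ones. The sketch below only echoes the equivalent long-form command so it is safe to run as-is; drop the leading "echo" to execute it for real:

```shell
# Long-option equivalent of "mdadm -C /dev/md0 -a yes -l 1 -n 2 /dev/sd[b,c]1".
# Echoed rather than executed, since running it would build an array.
echo mdadm --create /dev/md0 --auto=yes --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
```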
The commands to create other RAID levels are similar:
RAID 0

# mdadm -C /dev/md0 -a yes -l 0 -n 2 /dev/sd[b,c]1
RAID 5

# mdadm -C /dev/md0 -a yes -l 5 -n 3 -x 1 /dev/sd[b,c,d,e]1
PS: "-x 1" or "--spare-devices=1" means the array has one hot spare; if there are more hot spares, set "--spare-devices" to the appropriate number.
7. View the RAID status

# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdc1[1] sdb1[0]
      83883264 blocks [2/2] [UU]
      [>....................]  resync =  2.8% (2396800/83883264) finish=6.2min speed=217890K/sec

unused devices: <none>

# mdadm -D /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Wed Oct 28 11:12:48 2015
     Raid Level : raid1
     Array Size : 83883264 (80.00 GiB 85.90 GB)
  Used Dev Size : 83883264 (80.00 GiB 85.90 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Wed Oct 28 11:12:48 2015
          State : clean, resyncing
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

 Rebuild Status : 5% complete

           UUID : 6c1ffaa0:53fc5fd9:59882ec1:0fc5dd2b
         Events : 0.1

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
PS:
Raid Level: the array's RAID level
Array Size: the array's capacity
Raid Devices: the number of RAID members
Total Devices: the total number of devices in the array, including redundant hard drives or partitions (spares) that can be promoted into the RAID at any time
State: the array state; clean means normal, degraded means a member has failed, and resyncing/recovering means the array is resyncing or rebuilding
Active Devices: the number of active RAID members
Working Devices: the number of working RAID members
Failed Devices: the number of failed RAID members
Spare Devices: the number of spare RAID members; when a member fails, a spare hard drive or partition takes its place and the RAID rebuilds, and until the rebuild finishes the replacement is still counted as a spare
UUID: the RAID's UUID, unique within the system
8. Create the RAID configuration file
The RAID configuration file is /etc/mdadm.conf. It does not exist by default and has to be created manually.
Its main role is to assemble the software RAID automatically at boot, and it also makes later management easier. It is not strictly required, but creating it is recommended: without it, the md0 you created will come back as md127 after a reboot.

# echo DEVICE /dev/sd{b,c}1 >> /etc/mdadm.conf

# mdadm -Ds >> /etc/mdadm.conf
9. Fix the RAID configuration file
The generated /etc/mdadm.conf does not yet match the required format, so it will not take effect. Edit mdadm.conf into the following form (that is, remove the metadata parameter):

# cat /etc/mdadm.conf

DEVICE /dev/sdb1 /dev/sdc1

ARRAY /dev/md0 level=raid1 num-devices=2 UUID=5160ea40:cb2b44f1:c650d2ef:0db09fd0
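The manual edit in step 9 can be automated. A sketch, assuming only that mdadm -Ds emits an optional "metadata=" token that we strip with sed; the output file name mdadm.conf.new is made up here so you can review the result before copying it to /etc/mdadm.conf:

```shell
#!/bin/sh
# Remove the "metadata=..." token that some mdadm.conf parsers reject.
strip_metadata() { sed 's/ *metadata=[^ ]*//'; }

{
    echo "DEVICE /dev/sdb1 /dev/sdc1"
    # Only scan arrays if mdadm is actually installed on this host.
    command -v mdadm >/dev/null && mdadm -Ds 2>/dev/null | strip_metadata || true
} > mdadm.conf.new
# Review mdadm.conf.new, then copy it to /etc/mdadm.conf as root.
```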
10. Format the array

# mkfs.ext4 /dev/md0
11. Create a mount point and mount the array

# mkdir /raid1

# mount /dev/md0 /raid1/
12. Write it to /etc/fstab
Write the mount information to /etc/fstab so that the RAID device is mounted automatically on the next boot:

/dev/md0 /raid1 ext4 defaults 0 0
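A more robust variant (my suggestion, not part of the original) is to mount by filesystem UUID, so the fstab entry keeps working even if the array renumbers to md127. The sketch only prints the candidate line; appending it to /etc/fstab is left to you:

```shell
#!/bin/sh
# Build an fstab line from a filesystem UUID.
fstab_line() { printf 'UUID=%s /raid1 ext4 defaults 0 0\n' "$1"; }

# Look up the filesystem UUID of the array; empty if /dev/md0 does not exist.
uuid=$(blkid -s UUID -o value /dev/md0 2>/dev/null)
[ -n "$uuid" ] && fstab_line "$uuid" || true
```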

Examples of software RAID maintenance

I. Hard disk failure recovery
1. When the soft RAID detects a disk failure, it automatically marks the disk as failed and stops reading from and writing to it. Here we simulate a failure by marking /dev/sdb1 as faulty:

# mdadm /dev/md0 -f /dev/sdb1

mdadm: set /dev/sdb1 faulty in /dev/md0
2. View the RAID status

# cat /proc/mdstat

Personalities : [raid1]

md0 : active raid1 sdb1[2](F) sdc1[1]

      16771712 blocks [2/1] [_U]

unused devices: <none>
PS:
a. sdb1 is marked (F), which means that drive has failed.
b. "[_U]" means the only usable device in the array is /dev/sdc1; if /dev/sdc1 failed instead, it would show [U_].
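For scripted monitoring, mdadm --detail --test reports array health through its exit status (0 means clean, 1 means degraded, higher values mean worse). A sketch with a hypothetical wrapper function raid_state that I added for illustration:

```shell
#!/bin/sh
# Map "mdadm --detail --test" exit codes to a short state string.
raid_state() {
    mdadm --detail --test "$1" >/dev/null 2>&1
    case $? in
        0) echo clean ;;
        1) echo degraded ;;
        *) echo failed-or-unknown ;;  # also covers "mdadm not installed"
    esac
}
raid_state /dev/md0
```

A cron job could call raid_state and send mail whenever the result is not "clean".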
3. Remove the failed disk

# mdadm /dev/md0 -r /dev/sdb1

mdadm: hot removed /dev/sdb1
4. Look at the md0 status again: Total Devices has dropped to 1 and sdb1 has been removed, but the array size is unchanged.

# mdadm -D /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Thu Oct 14:32:00 2015
     Raid Level : raid1
     Array Size : 16771712 (15.99 GiB 17.17 GB)
  Used Dev Size : 16771712 (15.99 GiB 17.17 GB)
   Raid Devices : 2
  Total Devices : 1
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Thu Oct 15:35:16 2015
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           UUID : c136a5bf:590fd311:e20a494f:f3c508b2
         Events : 0.26

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       33        1      active sync   /dev/sdc1
5. Add a new hard disk
In real production, a newly added hard drive would also need to be partitioned as above. For convenience here, we simply add the "damaged" drive we just removed back into RAID 1:

# mdadm /dev/md0 -a /dev/sdb1
View the RAID again: RAID 1 is recovering; wait for it to complete.

# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[2] sdc1[1]
      16771712 blocks [2/1] [_U]
      [>....................]  recovery =  4.0% (672640/16771712) finish=2.7min speed=96091K/sec
unused devices: <none>
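"Wait for completion" can be automated by polling /proc/mdstat. A minimal sketch; the helper name md_busy and its optional file argument are mine, added so the parsing can be exercised on sample text:

```shell
#!/bin/sh
# True while the mdstat file still shows a resync/recovery in progress.
md_busy() { grep -Eq 'resync|recovery' "${1:-/proc/mdstat}" 2>/dev/null; }

# Poll every 10 seconds until the array is idle.
while md_busy; do
    sleep 10
done
echo "array idle (or no md arrays present)"
```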
II. RAID expansion
If the available RAID space is still not enough, we can add another hard disk to the array.
1. Add a hard disk and partition it the same way as before.
2. Add the new partition to RAID 1:

# mdadm /dev/md0 -a /dev/sdd1
3. View the RAID status

# mdadm -D /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Thu Oct 14:32:00 2015
     Raid Level : raid1
     Array Size : 16771712 (15.99 GiB 17.17 GB)
  Used Dev Size : 16771712 (15.99 GiB 17.17 GB)
   Raid Devices : 2
  Total Devices : 3
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Thu Oct 16:13:13 2015
          State : clean
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1

           UUID : c136a5bf:590fd311:e20a494f:f3c508b2
         Events : 0.34

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1

       2       8       49        -      spare   /dev/sdd1

By default, a disk added to the RAID is treated as a hot spare, so we still need to promote the hot spare to an active disk.
4. Convert the hot spare to an active disk

# mdadm -G /dev/md0 -n 3

PS: -n 3 means the array now uses 3 active disks; the system rebuilds automatically.
5. Grow the file system
After the rebuild the array is in place, but note that growing a RAID 1 from 2 to 3 active disks produces a three-way mirror: it adds redundancy, not capacity. When an expansion does increase the array capacity (for example, growing a RAID 5), the file system does not grow with it and must still be expanded separately:

# df -Th

# resize2fs /dev/md0
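Steps 4 and 5 together look like the sketch below. To keep it safe to run as-is, it only prints the plan unless DRY_RUN=0 is set (a convention I added for this sketch, not part of mdadm):

```shell
#!/bin/sh
# Print (or, with DRY_RUN=0, execute) the grow-then-resize sequence.
run() {
    echo "+ $*"
    if [ "${DRY_RUN:-1}" = 0 ]; then "$@"; fi
}
run mdadm --grow /dev/md0 --raid-devices=3
run resize2fs /dev/md0
```

With DRY_RUN=0 you would normally also wait for the rebuild to finish between the two commands, as shown in the /proc/mdstat polling above step II.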

6. Update the RAID configuration file

Add the new hard disk sdd1 to /etc/mdadm.conf and update the device count:

DEVICE /dev/sdb1 /dev/sdc1 /dev/sdd1

ARRAY /dev/md0 level=raid1 num-devices=3 UUID=c136a5bf:590fd311:e20a494f:f3c508b2
