First, the purpose of the experiment
1. Master the implementation of software RAID on a Linux system;
2. Master the configuration process of RAID5;
3. Become familiar with the characteristics of RAID5 through experiment.
Second, the contents and steps of the experiment
1. Create a Linux virtual machine in VMware.
2. Add 4 virtual disks to the Linux VM (select SCSI type, 2 GB each).
3. Use mdadm in Linux to create RAID5: three disks for the RAID5 array and one disk as a spare.
4. Format and mount the RAID5 array.
5. Create some files and folders on the RAID5 array for fault detection.
6. Modify the configuration file so that the RAID5 array is loaded automatically at boot.
7. Shut down the system.
8. Remove one hard drive (you may remove the second hard drive that was added).
9. Restart the system to see if the data in the RAID5 volume is missing.
10. Summarize the experiment according to the experimental results.
Third, the experimental requirements
1. Observe, record, and compare the experimental results carefully; if a result does not match expectations, find the reason.
2. Everything that needs a name in the experiment should be named after yourself, using different suffixes to tell the names apart. For example, a student named Zhang San could name two servers zhangsans1 and zhangsans2.
Experimental steps and experimental process:
1. First install Red Hat Enterprise Linux 6 in VMware.
2. Add 4 virtual disks (SCSI, 2 GB) to Red Hat Enterprise Linux 6: three for RAID5 and one as a spare disk.
3. Start Red Hat Enterprise Linux 6.
4. Configure RAID5.
5. Shut down Red Hat Enterprise Linux 6.
6. Delete one of the added disks (you can delete the second disk you added).
7. Restart Red Hat Enterprise Linux 6.
8. Test RAID5 after the disk failure; you will find that the data can still be accessed normally.
Number of disks required: three or more. Here we add six hard drives: four of them make up the disk array, one is a prepared (spare) disk, and one is reserved for standby. Note: usable RAID5 capacity is N-1 disks, which means four 100 GB hard disks give 300 GB of free space.
Experimental steps:
1. Check the disk devices in the system: fdisk -l
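On a VM set up as above, fdisk -l should list the new blank disks roughly as follows (an illustrative, trimmed output; device names and sizes depend on your configuration):
Disk /dev/sda: 21.5 GB, 21474836480 bytes    # the system disk
Disk /dev/sdb: 2147 MB, 2147483648 bytes     # the new blank disks for the array
Disk /dev/sdc: 2147 MB, 2147483648 bytes
Disk /dev/sdd: 2147 MB, 2147483648 bytes
Disk /dev/sde: 2147 MB, 2147483648 bytes
Disk /dev/sdf: 2147 MB, 2147483648 bytes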
2. Next, create the RAID. The command used is mdadm; if it is not installed, please install the mdadm package first (it is included on the RHEL installation CD).
mdadm --create --auto=yes /dev/md0 --level=5 --raid-devices=4 --spare-devices=1 /dev/sd[b-f]
Parameter explanation:
--create            # create a RAID array
--auto=yes /dev/md0 # the newly created software RAID device is md0; the md number can be 0-9
--level=5           # RAID level of the array, here RAID5
--raid-devices=4    # number of active devices in the array
--spare-devices=1   # number of disks added as prepared (spare) disks
/dev/sd[b-f]        # devices used by the array; can also be written out as "/dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf"
The whole command can also be abbreviated as: mdadm -C /dev/md0 -l5 -n4 -x1 /dev/sd[b-f]
There are two ways to check whether the RAID was created successfully and is running properly.
Detailed view: the command mdadm --detail /dev/md0 shows the RAID details.
Simpler view: you can look directly at the /proc/mdstat file to see how the RAID is running:
cat /proc/mdstat
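For reference, a healthy array looks roughly like this in /proc/mdstat (an illustrative sketch; block counts, chunk size, and member order will differ):
Personalities : [raid5]
md0 : active raid5 sdf[4](S) sde[3] sdd[2] sdc[1] sdb[0]
      6289152 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
Here (S) marks the spare disk, and [UUUU] means all four active members are up.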
Format, mount, and use the created RAID:
mkfs.ext3 /dev/md0          # format the RAID5 device with ext3
mkdir /mnt/raid5            # create the raid5 folder under /mnt to mount md0 on
mount /dev/md0 /mnt/raid5   # mount md0 on /mnt/raid5
Check the mount status:
df -hT
Try whether the new RAID works by writing some files to it.
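A minimal check, assuming the mount above succeeded (the file and directory names here are arbitrary examples):
cp /etc/fstab /mnt/raid5/                # copy an existing file onto the array
mkdir /mnt/raid5/testdir                 # create a folder for later fault detection
echo "raid5 test" > /mnt/raid5/test.txt  # write a small test file
ls /mnt/raid5                            # all of the above should be listed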
Set the RAID to start automatically at boot and to mount automatically
To have the RAID start at boot: the RAID configuration file is named mdadm.conf. This file does not exist by default and has to be created by yourself. The main role of this configuration file is to load the software RAID automatically when the system starts, and it also makes later management easier. Note that the mdadm.conf file consists mainly of the following parts: the DEVICE option specifies all the devices that make up the RAID, and the ARRAY option specifies the device name, RAID level, number of active devices in the array, and the device's UUID.
Auto-start the RAID
First create the /etc/mdadm.conf file:
mdadm --detail --scan > /etc/mdadm.conf
Then edit this file: vi /etc/mdadm.conf
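After the redirect, the file contains one ARRAY line per array; a common edit is to add a DEVICE line in front of it. The result looks roughly like this (the UUID below is a placeholder; keep the one mdadm printed for your array):
DEVICE /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
ARRAY /dev/md0 level=raid5 num-devices=4 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx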
Then set up automatic mounting for the RAID.
Modify the file: vi /etc/fstab
Add a line: /dev/md0 /mnt/raid5 ext3 defaults 0 0
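To test the fstab entry without rebooting, a quick sketch:
umount /mnt/raid5     # unmount the array if it is currently mounted
mount -a              # mount everything listed in /etc/fstab
df -hT | grep md0     # /dev/md0 should now appear mounted on /mnt/raid5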
Simulate disk corruption in the RAID5 array and verify the function of the spare disk (RAID5 allows one disk to fail; the spare disk we set up immediately replaces the broken disk and the array rebuilds, keeping the data safe):
mdadm --manage /dev/md0 --fail /dev/sdd   # mark the status of sdd as failed
Check the disk information: mdadm --detail /dev/md0
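While the spare takes over, the detail output shows the rebuild in progress; an illustrative excerpt (the percentage and device names will vary):
State : clean, degraded, recovering
Rebuild Status : 40% complete
with /dev/sdd listed as faulty and the former spare listed as "spare rebuilding".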
Or simply watch the RAID status in /proc/mdstat; once the rebuild completes, RAID5 has automatically recovered.
Check that RAID5 is still working properly!
Write some data into it.
Remove the failed disk and add a new disk as the backup spare disk
First remove the damaged disk sdd:
mdadm --manage /dev/md0 --remove /dev/sdd   # remove the broken disk sdd from the RAID
Then add a new disk as a spare:
mdadm --manage /dev/md0 --add /dev/sdg      # add the new disk sdg
RAID5 expansion: using grow mode
mdadm --manage /dev/md0 --add /dev/sdg   # add a new hard drive
In the mdadm --detail output:
active sync /dev/sdb1                    # a true (active) member of the RAID
spare /dev/sdc1                          # a spare member of the RAID
mdadm -G /dev/md0 -n "x"                 # -G is grow mode; "x" is the number of true (active) RAID members
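Putting the grow steps together, a minimal sketch (the resize2fs step is an addition not in the notes above, but ext3 needs it so the filesystem can use the new space):
mdadm --manage /dev/md0 --add /dev/sdg   # add the new disk as a spare
mdadm -G /dev/md0 -n 5                   # grow from 4 to 5 active members; the spare is pulled in
cat /proc/mdstat                         # wait for the reshape to finish
resize2fs /dev/md0                       # then grow the ext3 filesystem to match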
Command parameters
-A = --assemble       assemble (activate) an existing array
-S = --stop           stop an array
-D = --detail         view RAID details
-C = --create         create a RAID device
-v = --verbose        show details of the build process
-l = --level          RAID level
-n = --raid-devices   number of RAID devices
-s = --scan           scan for RAID devices
-f = --fail           mark a hard drive as failed
-a = --add            add a hard drive
-r = --remove         remove a failed hard drive
Now look at the disk information again: mdadm --detail /dev/md0
How to turn off the RAID:
Unmount /dev/md0 directly and comment out its entry in the /etc/fstab file:
umount /dev/md0   # unmount it
vi /etc/fstab     # comment out the automatic mount entry in fstab
# /dev/md0 /mnt/raid5 ext3 defaults 0 0
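To stop the array itself as well, a sketch (the --zero-superblock step is optional and permanently erases the RAID metadata on the member disks):
mdadm -S /dev/md0                      # stop the array
mdadm --zero-superblock /dev/sd[b-f]   # optional: wipe member superblocks so the disks can be reused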
Experimental results:
The following points were accomplished through experiments:
1. Mastered the implementation of software RAID on a Linux system;
2. Mastered the configuration process of RAID5;
3. Became familiar with the characteristics of RAID5 through experiment.
The experiment covered: creating a Linux VM in VMware; adding 4 virtual disks (SCSI type, 2 GB); using mdadm to create RAID5 with three disks plus one spare; formatting and mounting the RAID5 array; creating files and folders on it for fault detection; modifying the configuration file so the RAID5 array loads automatically at boot; shutting down the system; removing a hard drive (the second disk that was added); restarting the system to check whether the data in the RAID5 volume was lost; and summarizing the experiment according to the results.
Experiment Summary:
Learned some basic operations and understood the basic principles of the RAID implementation:
The steps to configure RAID1 are similar to those for RAID5, but when using mdadm it is important not to create multiple partitions on a single hard disk and then combine those partitions into an array: this not only fails to increase access speed but reduces overall system performance. The correct approach is to divide each hard disk into one or more partitions and then combine partitions from several different hard disks into an array. In addition, system directories such as /usr should not be placed on the array, because once the array has a problem the system will no longer function properly.
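For illustration, a minimal RAID1 sketch following that advice (device names are examples; each partition sits on a different physical disk):
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1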
Data occupies an important position in today's enterprises, and the security of data storage is one of the important issues people pay attention to when using computers. On the server side, various redundant disk array (RAID) technologies are often used to protect data. High-end servers generally provide expensive hardware RAID controllers, but many small and medium-sized enterprises cannot afford that cost. Is there a way to implement RAID through software? In fact, under Linux the RAID function of the hardware can be implemented in software, which saves the investment while still achieving good results.
Commands to start and stop the RAID1 array: to start it, directly execute "mdadm -As /dev/md0"; executing "mdadm -S /dev/md0" stops the RAID1 array. In addition, after adding the command "mdadm -As /dev/md0" to the rc.sysinit startup script file, the array will start with the system boot.
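As a compact reference (this assumes the /etc/mdadm.conf created earlier exists, since -A reads it to assemble the array):
mdadm -As /dev/md0   # assemble (start) the array using /etc/mdadm.conf
mdadm -S /dev/md0    # stop the array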