Linux soft RAID configuration 1

Data plays an important role in today's enterprises, and the safety of stored data is one of the issues anyone running computers must pay attention to. On the server side, people usually rely on some flavor of redundant array of independent disks (RAID) technology to protect data. High-end servers generally ship with expensive hardware RAID controllers, but many small and medium-sized businesses cannot afford that expense. Is there a way to implement RAID in software?

In fact, Linux can implement RAID entirely in software, which saves money and still achieves good results. This article first shows how to build a software RAID 1 (disk mirroring) array with a spare disk in a networked environment.
TIPS: What is RAID 1 (disk mirroring)? RAID 1 is a highly reliable way to store data: each disk has a corresponding mirror disk, and anything written to one disk is also copied to its mirror. The system can read data from either disk of a mirrored pair. Because the same data is always written twice, mirroring inevitably raises the cost of the system; the usable space is only half of the total disk capacity.
This article uses mdadm, which is usually included in RedHat Linux, so you can use it directly. If it is not installed on your system, download mdadm-1.8.1.tgz from http://www.cse.unsw.edu.au/~neilb/source/mdadm and compile and install it, or download mdadm-1.8.1-1.i386.rpm from http://www.cse.unsw.edu.au/~neilb/source/mdadm/RPM and install it directly.

As a server-oriented network operating system, Linux attaches great importance to data security and access speed. The Linux kernel has supported software RAID since version 2.4, which lets us enjoy enhanced disk I/O performance and reliability without buying expensive hardware RAID devices, further reducing the system's total cost of ownership. Let's look at a software RAID configuration example under RedHat Linux AS 4.

● The operating system is RedHat Linux AS 4;
● Kernel version 2.6.9-5.EL;
● Support for RAID 0, RAID 1, RAID 4, RAID 5, and RAID 6;
● Five 36 GB SCSI disks, where RedHat AS 4 is installed on the first disk and the other four form a RAID 5 array that stores an Oracle database.
In RedHat AS 4, software RAID is implemented through the mdadm tool (version 1.6.0 here). It is a single program, and creating and managing RAID arrays with it is convenient and stable. The raidtools package used in earlier Linux releases is no longer supported in RedHat AS 4 because it was difficult to maintain and limited in capability.

Implementation Process

1. Configure RAID 1
Step 1: Log on to the system as root and partition the disks.
# fdisk /dev/sdb
Allocate all of the space on /dev/sdb to a single primary partition, creating /dev/sdb1, and change the partition type to fd (Linux raid autodetect). Then repeat the same operation on the remaining disks so that you end up with three partitions: /dev/sdb1, /dev/sdc1, and /dev/sdd1.
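For reference, the interactive session looks roughly like this (an illustrative transcript; prompts and cylinder counts vary with the fdisk version and disk size):

# fdisk /dev/sdb
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-4462, default 1): <Enter>
Last cylinder or +size (default 4462): <Enter>
Command (m for help): t
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help): w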

Step 2: Create the RAID array
# mdadm -Cv /dev/md0 -l1 -n2 -x1 /dev/sd{b,c,d}1
The equivalent long-option form is:
# mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 --spare-devices=1 \
/dev/sdb1 /dev/sdc1 /dev/sdd1
Tip: The -C option creates an array, and /dev/md0 is the device name of the array. -l1 selects the array mode (level); you can choose 0, 1, 4, 5, or 6, corresponding to RAID 0, RAID 1, RAID 4, RAID 5, and RAID 6 respectively. -n2 is the number of active disks in the array; the number of active disks plus the number of spare disks must equal the total number of disks in the array. -x1 is the number of spare disks, so our array contains one spare. /dev/sd{b,c,d}1 names the disks that make up the array: three disks in all, two of which are the active mirror disks, while the spare takes over if one of them fails.

Step 3: View the RAID array
Creating the array takes a while because the disks need to synchronize. You can check the /proc/mdstat file, which shows the current RAID status and the time remaining for synchronization.
# cat /proc/mdstat
The system displays:
Personalities : [raid1]
read_ahead 1024 sectors
Event: 1
md0 : active raid1 sdb1[0] sdc1[1] sdd1[2]
      18432000 blocks [2/2] [UU]
unused devices: <none>
Once this output appears, the newly created RAID 1 array is ready for use.
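If you would rather follow the synchronization progress continuously instead of re-running cat, the standard watch utility can refresh the display every few seconds (press Ctrl-C to exit); a minimal sketch:

# watch -n 5 cat /proc/mdstat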

Step 4: Edit the array configuration file
mdadm's configuration file mainly supports day-to-day management; editing it makes the RAID work better for us. This step is not strictly necessary: the array will also work without a configuration file.
First, scan all arrays in the system.
# mdadm --detail --scan
The scan result shows each array's name, mode, and member disk names, and lists the array's UUID. The same UUID is also stored on every disk in the array; a disk without this UUID does not belong to the array.
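You can verify that the UUID really is stored in each member disk's superblock with mdadm's examine mode. An abbreviated, illustrative sketch of its output (the exact fields depend on the mdadm version):

# mdadm -E /dev/sdb1
/dev/sdb1:
          Magic : a92b4efc
           UUID : 2ed2ba37:d952280c:a5a9c282:a51b48da
     Raid Level : raid1
   Raid Devices : 2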
Next, edit the array's configuration file, /etc/mdadm.conf: adapt the scan output to the file format and append it to the end of the file.
# vi /etc/mdadm.conf
Add the following content to mdadm.conf:
DEVICE /dev/sdb1 /dev/sdc1 /dev/sdd1
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=2ed2ba37:d952280c:a5a9c282:a51b48da spare-group=group1
The configuration file defines the array's name and mode, the number and names of its active disks, and a spare-disk group, group1.

Step 5: Start and stop the RAID 1 array
Starting and stopping the RAID 1 array is simple: run "mdadm -As /dev/md0" to start it, and "mdadm -S /dev/md0" to stop it. In addition, you can add the command mdadm -As /dev/md0 to the rc.sysinit startup script so that the array starts together with the system.
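On RedHat the script lives at /etc/rc.d/rc.sysinit, so one way to append the line is the following sketch (you can equally well edit the file by hand):

# echo "mdadm -As /dev/md0" >> /etc/rc.d/rc.sysinit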

Summary: Configuring RAID 1 is less involved than RAID 5. One caution when using mdadm: do not carve several partitions out of a single hard disk and then combine those partitions into an array. That approach does not increase disk access speed; on the contrary, it lowers overall system performance. The correct method is to create one or more partitions per hard disk and combine partitions from different hard disks into an array. It is also advisable not to place system directories such as /usr in the array, because if the array fails the system will no longer run normally.

2. Configure RAID 5

1. Create the partitions

The five SCSI disks correspond to /dev/sda, /dev/sdb, /dev/sdc, /dev/sdd, and /dev/sde. The first disk, /dev/sda, is split into two partitions for the RedHat AS 4 installation and the swap area. Each of the other four disks gets just one primary partition (/dev/sdb1, /dev/sdc1, /dev/sdd1, /dev/sde1), with the partition type set to fd so that the Linux kernel recognizes them as RAID partitions and auto-detects and starts the array on every boot. Partitions are created with the fdisk command.

# fdisk /dev/sdb
At the fdisk command line, use command n to create a partition, command t to change the partition type, command w to save the partition table and exit, and command m for help.
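Because the remaining disks need identical partition tables, a common shortcut, instead of repeating the interactive session, is to dump the first disk's partition table with sfdisk and replay it onto the others. A minimal sketch, assuming /dev/sdb has already been partitioned as described:

# sfdisk -d /dev/sdb | sfdisk /dev/sdc
# sfdisk -d /dev/sdb | sfdisk /dev/sdd
# sfdisk -d /dev/sdb | sfdisk /dev/sde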

2. Create RAID 5

Here, RAID 5 is created from four devices: /dev/sdb1, /dev/sdc1, /dev/sdd1, and /dev/sde1, with /dev/sde1 acting as the spare device and the others as active devices. The spare device waits in reserve: as soon as an active device fails, the spare immediately replaces it. You can also choose to build the array without a spare device. The commands are as follows:

# mdadm -Cv /dev/md0 -l5 -n3 -x1 -c128 /dev/sd[b,c,d,e]1
The equivalent long-option form is:
# mdadm --create --verbose /dev/md0 --level=raid5 --raid-devices=3 --spare-devices=1 \
--chunk=128 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
The --parity option selects RAID 5's parity algorithm (layout). The available choices are:
left-symmetric left-asymmetric right-symmetric right-asymmetric
The best performance comes from left-symmetric, which is also the default.
The options in the command have the following meanings: -C creates a new array; /dev/md0 is the array's device name; -l5 sets the array mode, with choices 0, 1, 4, 5, and 6 corresponding to RAID 0, RAID 1, RAID 4, RAID 5, and RAID 6 respectively (here the mode is RAID 5); -n3 is the number of active devices in the array, and the number of active devices plus the number of spare devices must equal the total number of devices; -x1 is the number of spare devices, so the array contains one spare; -c128 sets the chunk size to 128 KB (the default is 64 KB); /dev/sd[b,c,d,e]1 lists all the devices in the array (they can also be listed individually, separated by spaces), with the last one becoming the spare device.

3. View the array status

When an array is first created or rebuilt, the devices need to synchronize, which takes some time. You can check the /proc/mdstat file to see the array's current status, the synchronization progress, and the time remaining.

# more /proc/mdstat
Personalities : [raid5]
md0 : active raid5 sdd1[3] sde1[4] sdc1[1] sdb1[0]
      75469842 blocks level 5, 128k chunk, algorithm 2 [3/2] [UU_]
      [=>...................]  recovery =  4.3% (1622601/37734912) finish=1.0min speed=15146K/sec

unused devices: <none>
After the creation or rebuild completes, view the /proc/mdstat file again:
# more /proc/mdstat
Personalities : [raid5]
md0 : active raid5 sdd1[2] sde1[3] sdc1[1] sdb1[0]
      75469842 blocks level 5, 128k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
From this output we can clearly see the state of the array. The meaning of each part is as follows. In "[3/3]", the first number is the number of devices the array should contain and the second is the number of active devices; if a device fails, the second number drops by one. "[UUU]" marks the devices the array can currently use normally; if /dev/sdb1 failed, the mark would change to [_UU], and the array would then run in degraded mode, meaning it is still usable but has no redundancy. The number in square brackets after each device, as in "sdd1[2]", indicates its role: if the number is less than n (the number of active devices), the device is active; if it is greater than or equal to n, the device is a spare. When a device fails, it is flagged with (F) after its square brackets.

4. Generate the configuration file

mdadm's default configuration file is /etc/mdadm.conf, which exists mainly to make routine management of arrays easier. It is not required for the array to function, but to avoid unnecessary trouble in later management we should stick to this step.

The mdadm.conf file must contain two kinds of lines: DEVICE lines, which list the devices that make up arrays, and ARRAY lines, which record each array's name, mode, number of active devices, and UUID. The format is as follows:

DEVICE /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
ARRAY /dev/md0 level=raid5 num-devices=3 UUID=8f128343:715a42df:baece2a8:a5b878e0

The preceding information can be obtained by scanning the system's arrays:
# mdadm -Ds
ARRAY /dev/md0 level=raid5 num-devices=3 UUID=8f128343:715a42df:baece2a8:a5b878e0
   devices=/dev/sdb1,/dev/sdc1,/dev/sdd1,/dev/sde1
Then edit the /etc/mdadm.conf file with vi, following the format above.
# vi /etc/mdadm.conf
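Instead of retyping the scan output, you can redirect it into the file; a minimal sketch (note that the first command overwrites any existing /etc/mdadm.conf):

# echo "DEVICE /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1" > /etc/mdadm.conf
# mdadm -Ds >> /etc/mdadm.conf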

5. Create a file system and mount it.

RAID 5 is now up and running, and the next step is to create a file system on it using the mkfs command. The file system type is ext3:
# mkfs -t ext3 /dev/md0
After the new file system is created, /dev/md0 can be mounted at the chosen directory (create the mount point first, e.g. mkdir /mnt/raid, if it does not exist):
# mount /dev/md0 /mnt/raid
To have the system mount /dev/md0 to /mnt/raid automatically at startup, you also need to edit the /etc/fstab file and add the following line:
/dev/md0    /mnt/raid    ext3    defaults    0 0
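To confirm that the array is mounted with the expected capacity, a quick check:

# df -h /mnt/raid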

Fault Simulation

The example above has given us some familiarity with RedHat Linux AS 4's software RAID features and has shown, step by step, how to create a RAID 5 array. With RAID, the data on the computer looks safe, but we still cannot relax. Think about it: what should we do when a disk actually fails? Next we will simulate the complete process of replacing a faulty RAID 5 disk, hoping to build your experience in handling RAID 5 faults and raise your level of management and maintenance.

We continue with the RAID 5 configuration above. First we copy some data onto the array (one way to do this is sketched below), then we begin simulating a failure of the /dev/sdb1 device. Note that a RAID 5 array without a spare device goes through the same three steps; the difference is that the rebuild and data recovery happen after the new device is added to the array, rather than at the moment a device fails.
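The article does not spell out the copy step, so here is one illustrative way to put verifiable test data on the array; the file choices and paths are arbitrary examples:

# mkdir /mnt/raid/testdata
# cp /etc/*.conf /mnt/raid/testdata/
# md5sum /mnt/raid/testdata/*.conf > /tmp/raid-test.md5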

1. Mark /dev/sdb1 as a failed device.
# mdadm /dev/md0 -f /dev/sdb1
View the current array status:
# more /proc/mdstat
Personalities : [raid5]
md0 : active raid5 sdd1[2] sde1[3] sdc1[1] sdb1[4](F)
      75469842 blocks level 5, 128k chunk, algorithm 2 [3/2] [_UU]
      [=>...................]  recovery =  8.9% (3358407/37734912) finish=1.6min speed=9382K/sec

unused devices: <none>
Because the array has a spare device, it can be rebuilt and its data restored in a short time when a member device fails. The status above shows the array rebuilding and running in degraded mode: sdb1[4] is now flagged with (F), and the number of active devices has dropped to 2.

After several minutes, check the current array status again.
# more /proc/mdstat
Personalities : [raid5]
md0 : active raid5 sdd1[2] sde1[0] sdc1[1] sdb1[3](F)
      75469842 blocks level 5, 128k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
The rebuild is now complete and the data has been recovered: the former spare device sde1 has become an active device.

2. Remove the failed device.
# mdadm /dev/md0 -r /dev/sdb1
View the status of the current array:
# more /proc/mdstat
Personalities : [raid5]
md0 : active raid5 sdd1[2] sde1[0] sdc1[1]
      75469842 blocks level 5, 128k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
The failed sdb1 has been removed from the array.

3. Add a new device to the array.
Because this is a simulation, we can simply add /dev/sdb1 back to the array with the command below. In a real operation, pay attention to two points: first, partition the new disk correctly before adding it; second, replace /dev/sdb1 with the device name of the disk actually being added.

# mdadm /dev/md0 -a /dev/sdb1
View the status of the current array:
# more /proc/mdstat
Personalities : [raid5]
md0 : active raid5 sdb1[3] sdd1[2] sde1[0] sdc1[1]
      75469842 blocks level 5, 128k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
Sdb1 now appears in the array again, this time as the spare device.
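If you recorded checksums before the simulated failure, as in the earlier sketch, you can now confirm that no data was lost during the rebuild:

# md5sum -c /tmp/raid-test.md5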
Common array maintenance commands

1. Start an array
# mdadm -As /dev/md0
This command starts the /dev/md0 array: -A loads (assembles) an existing array, and -s tells mdadm to look up the configuration in the mdadm.conf file and start the array based on it.
# mdadm -As
This command starts all the arrays listed in the mdadm.conf file.
# mdadm -A /dev/md0 /dev/sd[b,c,d,e]1
If no mdadm.conf file has been created, use this form to start the array.

2. Stop an array
# mdadm -S /dev/md0

3. Display details of a specified array
# mdadm -D /dev/md0
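For orientation, an abbreviated, illustrative excerpt of what -D reports for the RAID 5 array built above (the exact fields and values depend on the mdadm version):

# mdadm -D /dev/md0
/dev/md0:
     Raid Level : raid5
   Raid Devices : 3
  Spare Devices : 1
     Chunk Size : 128K
          State : clean
         Layout : left-symmetric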
