Implement software RAID in Redhat Linux AS 4

I. System configuration information

● The operating system is RedHat Linux AS 4;

● Kernel version 2.6.9-5.EL;

● Support for RAID0, RAID1, RAID4, RAID5, and RAID6;

● Five 36 GB SCSI disks; RedHat AS 4 is installed on the first disk, and the other four make up a RAID 5 array that stores an Oracle database.

In RedHat AS 4, software RAID is implemented with the mdadm tool (version 1.6.0). It is a single program that makes creating and managing RAID arrays convenient and stable. The raidtools package used in earlier Linux distributions is no longer supported in RedHat AS 4 because it was hard to maintain and limited in functionality.

1. Create the partitions

The five SCSI disks correspond to /dev/sda, /dev/sdb, /dev/sdc, /dev/sdd, and /dev/sde. The first disk, /dev/sda, is split into two partitions for installing RedHat AS 4 and for swap. Each of the other four disks gets a single primary partition: /dev/sdb1, /dev/sdc1, /dev/sdd1, and /dev/sde1. Set the partition type to "fd" so that the Linux kernel recognizes them as RAID partitions and automatically detects and starts the array at every boot. Run the fdisk command to create the partitions.

# fdisk /dev/sdb

Inside the fdisk command interface, use command n to create a partition, command t to change the partition type, command w to save the partition table and exit, and command m for help.
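
The same steps have to be repeated for /dev/sdc, /dev/sdd, and /dev/sde. As a convenience, the interactive answers can be fed to fdisk from a here-document; the following small script is only a sketch (the two blank lines accept the default first and last cylinders), so double-check the disk names before running it:

# partition each data disk with a single primary partition of type "fd" (Linux raid autodetect)
for disk in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
fdisk "$disk" <<EOF
n
p
1


t
fd
w
EOF
done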

2. Create RAID 5

Here, RAID 5 is created across four devices: /dev/sdb1, /dev/sdc1, /dev/sdd1, and /dev/sde1. /dev/sde1 is used as the backup (spare) device; the other devices are active devices. The backup device sits idle until a device fails, at which point it immediately takes the failed device's place. Of course, you can also choose not to configure a backup device. The command format is as follows:

# mdadm -Cv /dev/md0 -l5 -n3 -x1 -c128 /dev/sd[bcde]1

The parameters in the command have the following functions:

● "-C" creates a new array; "-v" prints verbose output during creation.

● "/dev/md0" is the device name of the array.

● "-l5" sets the array level. The level can be 0, 1, 4, 5, or 6, corresponding to RAID 0, RAID 1, RAID 4, RAID 5, and RAID 6; here it is set to RAID 5.

● "-n3" is the number of active devices in the array. The number of active devices plus the number of backup devices should equal the total number of devices in the array.

● "-x1" is the number of backup devices in the array; the current array contains one backup device.

● "-c128" sets the chunk size to 128 KB; the default is 64 KB.

● "/dev/sd[bcde]1" lists all device identifiers included in the array; the devices can also be written out individually, separated by spaces. The last one listed becomes the backup device.
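
The same invocation can also be written with mdadm's long options, which some readers may find easier to read; this is simply the equivalent of the command above:

# mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 --spare-devices=1 --chunk=128 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1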

3. View the array status

When a new array is created or an array is rebuilt, the devices need to synchronize, which takes some time. You can view the /proc/mdstat file to see the array's current status, synchronization progress, and estimated time remaining.

# more /proc/mdstat
Personalities : [raid5]
md0 : active raid5 sdd1[3] sde1[4] sdc1[1] sdb1[0]
      75469842 blocks level 5, 128k chunk, algorithm 2 [3/2] [UU_]
      [>....................]  recovery =  4.3% (1622601/37734912) finish=1.0min speed=15146K/sec
unused devices: <none>
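
During a long synchronization it can be convenient to have this view refresh automatically instead of re-running more; the watch utility (part of the standard procps package) can do that. Press Ctrl+C to exit watch.

# watch -n 1 cat /proc/mdstat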

After the creation or reconstruction is complete, view the /proc/mdstat file again:

# more /proc/mdstat
Personalities : [raid5]
md0 : active raid5 sdd1[2] sde1[3] sdc1[1] sdb1[0]
      75469842 blocks level 5, 128k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>

From the above output we can clearly see the state of the array. The meaning of each part is as follows:

● In "[3/3]", the first digit is the number of devices the array should contain, and the second is the number of active devices. If a device fails, the second digit drops by one.

● "[UUU]" shows which devices the array can currently use. If /dev/sdb1 fails, the mark changes to [_UU]; the array then runs in degraded mode, meaning it is still usable but has no redundancy.

● In "sdd1[2]", the number in square brackets is the device's role in the array: if the array has n active devices, a value smaller than n means the device is active, while a value greater than or equal to n means the device is a backup device. When a device fails, it is marked with (F) after the square brackets.

4. Generate a configuration file

The default configuration file of mdadm is /etc/mdadm.conf. It exists mainly to make day-to-day management of arrays easier; it is not strictly required for the arrays to work, but creating it is recommended to avoid unnecessary trouble in later management.

The mdadm.conf file must contain two kinds of lines: lines starting with DEVICE, which list the devices that belong to the array, and lines starting with ARRAY, which record the array's name, level, number of active devices, and UUID. The format is as follows:

DEVICE /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
ARRAY /dev/md0 level=raid5 num-devices=3 UUID=8f128343:715a42df:baece2a8:a5b878e0

The above information can be obtained by scanning the arrays on the system. The command is:

# mdadm -Ds
ARRAY /dev/md0 level=raid5 num-devices=3 UUID=8f128343:715a42df:baece2a8:a5b878e0
   devices=/dev/sdb1,/dev/sdc1,/dev/sdd1,/dev/sde1

Use the vi command to edit the /etc/mdadm.conf file in the specified format.

# vi /etc/mdadm.conf
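
Instead of typing the entries by hand, the file can also be seeded from the scan output; this is just one convenient way to do it, and the resulting file should still be reviewed afterwards:

# echo "DEVICE /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1" > /etc/mdadm.conf
# mdadm -Ds >> /etc/mdadm.conf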

5. Create a file system and mount it.

The RAID 5 array is now created and running, so the next step is to create a file system on it. Here the mkfs command is used with the ext3 file system type. The command is as follows:

# mkfs -t ext3 /dev/md0

After the new file system is generated, you can mount /dev/md0 to the specified directory. The command is as follows:

# mount /dev/md0 /mnt/raid
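
Note that the /mnt/raid mount point must already exist; if it does not, create it before mounting:

# mkdir -p /mnt/raid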

To enable the system to automatically mount /dev/md0 to /mnt/raid at startup, you also need to modify the /etc/fstab file and add the following line:

/dev/md0    /mnt/raid    ext3    defaults    0 0
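
To check the new fstab entry without rebooting, one option is to unmount the array and let mount read it back from /etc/fstab (a quick sanity check):

# umount /mnt/raid
# mount -a
# df -h /mnt/raid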

II. Fault simulation

The example above gives us a basic understanding of the software RAID features in Redhat Linux AS 4 and shows, step by step, how to create a RAID 5 array. With RAID in place, the data looks safe, but we still cannot afford to relax: what should we do if a disk actually fails? Next we will walk through the complete process of replacing a failed RAID 5 disk, in the hope of enriching your experience in handling RAID 5 faults and improving your management and maintenance skills.

We keep the RAID 5 configuration described above. First we copy some data onto the array, and then we start simulating a failure of the /dev/sdb1 device. A RAID 5 array without a backup device goes through the same three steps below; the only difference is that the rebuild and data recovery happen after the new device is added to the array rather than at the moment the old device fails.

1. Mark /dev/sdb1 as a damaged device.

# mdadm /dev/md0 -f /dev/sdb1

View the current array status:

# more /proc/mdstat
Personalities : [raid5]
md0 : active raid5 sdd1[2] sde1[3] sdc1[1] sdb1[4](F)
      75469842 blocks level 5, 128k chunk, algorithm 2 [3/2] [_UU]
      [=>..................]  recovery =  8.9% (3358407/37734912) finish=1.6min speed=9382K/sec
unused devices: <none>

Because there is a backup device, the array can be rebuilt and its data restored in a short time when a device fails. From the current status we can see that the array is being rebuilt and is running in degraded mode: sdb1[4] has been marked with (F), and the number of active devices has dropped to 2.

After several minutes, check the current array status again.

# more /proc/mdstat
Personalities : [raid5]
md0 : active raid5 sdd1[2] sde1[0] sdc1[1] sdb1[3](F)
      75469842 blocks level 5, 128k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>

The array rebuild is now complete and the data has been recovered. The original backup device, sde1, has become an active device.

2. Remove the damaged device.

# mdadm /dev/md0 -r /dev/sdb1

View the status of the current array:

# more /proc/mdstat
Personalities : [raid5]
md0 : active raid5 sdd1[2] sde1[0] sdc1[1]
      75469842 blocks level 5, 128k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>

The damaged sdb1 has been removed from the array.

3. Add new devices to the array

Because this is a simulation, the following command simply adds /dev/sdb1 back into the array. In a real replacement, pay attention to two points: first, partition the new disk correctly before adding it; second, replace /dev/sdb1 with the device name of the disk actually being added.
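
In a real replacement, one common way to reproduce the partition layout of a healthy member on the new disk is sfdisk (shown here only as a sketch; be careful not to swap the source and the target device names). With the disk partitioned, it can then be added back with the command below.

# sfdisk -d /dev/sdc | sfdisk /dev/sdb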

# mdadm /dev/md0 -a /dev/sdb1

View the status of the current array:

# more /proc/mdstat
Personalities : [raid5]
md0 : active raid5 sdb1[3] sdd1[2] sde1[0] sdc1[1]
      75469842 blocks level 5, 128k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>

In this case, sdb1 appears in the array again, this time as a backup device.
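
For reference, the three fault-handling steps above can also be written with mdadm's long options, which are easier to read in scripts and documentation:

# mdadm /dev/md0 --fail /dev/sdb1
# mdadm /dev/md0 --remove /dev/sdb1
# mdadm /dev/md0 --add /dev/sdb1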

Common array maintenance commands

1. Start the Array

# mdadm -As /dev/md0

This command starts the /dev/md0 array. "-A" means assembling (loading) an existing array, and "-s" means looking up the configuration information in the mdadm.conf file and starting the array based on it.

# mdadm -As

This command starts all arrays listed in the mdadm.conf file.

# mdadm -A /dev/md0 /dev/sd[bcde]1

If the mdadm.conf file has not been created, start the array in the above way.

2. Stop the Array

# mdadm -S /dev/md0

3. Display details of the specified array

# mdadm -D /dev/md0
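
Besides -D, mdadm also has -E (--examine), which reads the RAID superblock of a single member device rather than querying the assembled array; it is not part of the steps above, but it can be handy when diagnosing an individual disk:

# mdadm -E /dev/sdb1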

III. Introduction to RAID

RAID is short for Redundant Array of Inexpensive Disks. It combines multiple disks into an array that is used as if it were a single disk. Data is distributed across the different disks in segments, and reading and writing several disks at the same time reduces data access time; in addition, different techniques can be used to provide data redundancy, so that even if one disk is damaged, all of the data can be recovered from the other disks. In short, the benefits are higher security, higher speed, and larger capacity.

Disk arrays are divided into RAID levels according to the technique they use; the commonly recognized levels are RAID 0 through RAID 5. The level number does not indicate how advanced the technique is: RAID 5 is not superior to RAID 4, and RAID 0 is not inferior to RAID 2. Which RAID level to choose depends on the user's needs. The following describes the commonly used RAID 0, RAID 1, and RAID 5.

1. RAID 0

Features: multiple disks are combined into one large logical disk. When data is accessed, it is split into segments according to the number of disks and written to all of them at the same time. Of all the levels, RAID 0 is the fastest, but it has no data redundancy: if any disk in the array fails, all data is lost.

Disk utilization: n (assuming there are n disks).

Configuration requirements: at least two disks, with partitions as close to the same size as possible.

Application areas: situations that need large disk capacity and high-speed disk access and can accept the higher failure rate. If you are running a cluster, RAID 0 is undoubtedly a good way to improve disk I/O performance, because in that case you do not need to worry about redundancy at this layer.

2. RAID 1

Features: disk mirroring is used, so the data stored on one disk is simultaneously written to another disk. Thanks to the mirror disk, RAID 1 offers the best data security of all RAID levels. Its write speed is relatively slow, but because reads can be served from either disk, its read performance is close to that of RAID 0.

Disk utilization: n/2.

Configuration requirements: at least two disks, with partitions as close to the same size as possible.

Application areas: databases, financial systems, and other areas that demand high data reliability. It is also a good choice when the system performs few writes and many reads.

3. RAID 5

Features: data security is provided by parity information, but instead of storing the parity on a single dedicated disk, the parity for each data stripe is distributed across all of the disks. Damaged data can therefore be rebuilt from the parity held on the other disks. Data is read and written in parallel, so performance is high.

Disk utilization: n-1.

Configuration requirements: at least three disks, with partitions as close to the same size as possible.

Application areas: suitable for transaction-processing environments, such as airline ticketing and sales systems.
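
As a quick worked example of the "n-1" rule using the array built earlier in this article: each member partition provides 37734912 one-kilobyte blocks (the denominator shown in the recovery progress line of /proc/mdstat), and with three active devices the usable space is (3 - 1) x 37734912 blocks, roughly 75.5 million blocks or about 72 GB, which is what /proc/mdstat reports for md0. The fourth disk is a backup device and contributes no capacity.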
