Implement Software RAID in Red Hat Linux AS 4

Redundant arrays of independent disks (RAID) are commonly used on servers to protect data. High-end servers usually ship with expensive hardware RAID controllers, but for small and medium-sized enterprises with limited budgets, implementing the same RAID function in software under Linux not only saves the investment but also delivers good results. Why not give it a try?
  
As a server-oriented network operating system, Linux attaches great importance to data security and access speed. Linux has supported software RAID since the 2.4 kernel (refer to the appendix for background on RAID), so we do not have to buy expensive hardware RAID devices to improve disk I/O performance and reliability, which further reduces the system's total cost of ownership. Let's look at a software RAID configuration example under Red Hat Linux AS 4.
  
   System Configuration
  
Assume that an organization has a new energy data collection system that uses an Oracle database. The system holds a large amount of data, reads and writes frequently, and has high real-time requirements; at peak times nearly 40 users are online, which places heavy demands on the database server's disk subsystem. With a tight budget, and after comparing several options, we finally chose a software RAID 5 solution on Linux.
  
The configuration is as follows:
  
● The operating system is Red Hat Linux AS 4;
  
● Kernel version 2.6.9-5.EL;
  
● Support for RAID0, RAID1, RAID4, RAID5, and RAID6;
  
● Five 36 GB SCSI disks; Red Hat AS 4 is installed on the first disk, and the other four make up a RAID 5 array that stores the Oracle database.
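Before starting, you can confirm that all five disks are visible to the system (a quick check, not part of the original steps; the device names should match those used below):
  
# fdisk -l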
  
In Red Hat AS 4, software RAID is implemented with the mdadm tool, version 1.6.0. It is a single program that makes creating and managing RAID arrays convenient and stable. The raidtools package used in earlier Linux releases is no longer supported in Red Hat AS 4 because it was difficult to maintain and limited in performance.
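To verify that mdadm is installed and see which version you have, you can run (a simple check, not part of the original steps):
  
# mdadm --version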
  
   Implementation Process
  
1. Create a partition
  
The five SCSI disks correspond to /dev/sda, /dev/sdb, /dev/sdc, /dev/sdd, and /dev/sde. The first disk, /dev/sda, is divided into two partitions for installing Red Hat AS 4 and for swap. Each of the other four disks gets a single primary partition: /dev/sdb1, /dev/sdc1, /dev/sdd1, and /dev/sde1. Set the partition type to "fd" (Linux raid autodetect) so that the Linux kernel recognizes them as RAID partitions and detects and starts the array automatically at every boot. Run the fdisk command to create the partitions.
  
# fdisk /dev/sdb
  
After entering the fdisk command line, use command n to create a partition, command t to change the partition type, command w to save the partition table and exit, and command m for help.
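As a rough illustration, the sequence inside fdisk for one of the data disks looks something like the following (a sketch only; prompts vary slightly between fdisk versions, and this assumes the whole disk becomes a single primary partition of type fd):
  
# fdisk /dev/sdb
Command (m for help): n           (create a new partition: primary, number 1, accept the default start and end)
Command (m for help): t           (change the partition type)
Hex code (type L to list codes): fd    (Linux raid autodetect)
Command (m for help): w           (write the partition table and exit)
  
Repeat the same steps for /dev/sdc, /dev/sdd, and /dev/sde.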
  
2. Create RAID 5
  
Here, RAID 5 is created on four devices: /dev/sdb1, /dev/sdc1, /dev/sdd1, and /dev/sde1, with /dev/sde1 serving as the spare device and the others as active devices. The spare device is kept in reserve: as soon as an active device fails, the spare immediately takes its place. Of course, you can also choose not to use a spare device. The command format is as follows:
  
# mdadm -Cv /dev/md0 -l5 -n3 -x1 -c128 /dev/sd[bcde]1
  
The parameters in the command have the following meanings: "-C" creates a new array; "/dev/md0" is the device name of the array; "-l5" sets the RAID level, which can be 0, 1, 4, 5, or 6, corresponding to RAID0, RAID1, RAID4, RAID5, and RAID6 (here it is RAID5); "-n3" is the number of active devices in the array (the number of active devices plus the number of spare devices should equal the total number of devices in the array); "-x1" is the number of spare devices, so this array contains one spare; "-c128" sets the chunk size to 128 KB (the default is 64 KB); "/dev/sd[bcde]1" lists all the devices that make up the array (they can also be written out individually, separated by spaces), and the last one becomes the spare device.
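The same command can also be written with mdadm's long options, which some find easier to read (equivalent to the short form above):
  
# mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 --spare-devices=1 --chunk=128 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1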
  
3. View the array status
  
When a new array is created or an array is rebuilt, the devices need to synchronize, which takes some time. You can view the /proc/mdstat file to see the array's current status, the synchronization progress, and the estimated time remaining.
  
# more /proc/mdstat
Personalities : [raid5]
md0 : active raid5 sdd1[3] sde1[4] sdc1[1] sdb1[0]
      75469842 blocks level 5, 128k chunk, algorithm 2 [3/2] [UU_]
      [>....................]  recovery =  4.3% (1622601/37734912) finish=1.0min speed=15146K/sec
unused devices: <none>
  
After the creation or reconstruction is complete, view the /proc/mdstat file again:
  
# more /proc/mdstat
Personalities : [raid5]
md0 : active raid5 sdd1[2] sde1[3] sdc1[1] sdb1[0]
      75469842 blocks level 5, 128k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
  
From the output above we can clearly see the state of the array. The meaning of each part is as follows: in "[3/3]", the first number is the number of devices the array contains and the second is the number of active devices; if one device fails, the second number drops by 1. "[UUU]" marks the devices that are currently usable; if /dev/sdb1 fails, the mark changes to [_UU] and the array runs in degraded mode, that is, it is still usable but has no redundancy. In an entry such as "sdd1[2]", the number in square brackets is the device's slot number: if it is smaller than the number of active devices, the device is active; if it is greater than or equal to that number, the device is a spare. When a device fails, it is marked with (F) after the square brackets.
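In addition to /proc/mdstat, mdadm itself can print a detailed report on the array, including its state, chunk size, and the role of each member device (an optional check, not part of the original steps):
  
# mdadm --detail /dev/md0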
  
4. Generate a configuration file
  
The default configuration file of mdadm is /etc/mdadm.conf. It mainly exists to make routine management of the array easier; it is not strictly required for the array to work, but to avoid unnecessary trouble in later management, you should not skip this step.
  
The mdadm.conf file must contain two types of lines: lines starting with DEVICE, which list the devices that make up the array, and lines starting with ARRAY, which describe the array's name, level, number of active devices, and UUID. The format is as follows:
  
DEVICE /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
  
ARRAY /dev/md0 level=raid5 num-devices=3 UUID=8f128343:715a42df:baece2a8:a5b878e0
  
The preceding information can be obtained by scanning the arrays on the system with the following command:
  
# mdadm -Ds
  
ARRAY /dev/md0 level=raid5 num-devices=3 UUID=8f128343:715a42df:baece2a8:a5b878e0
   devices=/dev/sdb1,/dev/sdc1,/dev/sdd1,/dev/sde1
  
Use the vi command to edit the /etc/mdadm.conf file in the format described above.
  
# vi /etc/mdadm.conf
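If you prefer not to type the lines by hand, one shortcut (not part of the original steps; review the resulting file afterwards) is to write the DEVICE line and then append the scan output directly:
  
# echo "DEVICE /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1" > /etc/mdadm.conf
# mdadm -Ds >> /etc/mdadm.conf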
  
5. Create a file system and mount it.
  
RAID 5 is now up and running, and the next step is to create a file system on it. Use the mkfs command with the ext3 file system type. The command is as follows:
  
# mkfs -t ext3 /dev/md0
  
After the new file system is generated, you can mount /dev/md0 to the specified directory. The command is as follows:
  
# mount /dev/md0 /mnt/raid
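Note that the mount point must already exist. If it does not, create it first (assuming the /mnt/raid directory used in this example):
  
# mkdir -p /mnt/raid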
  
To enable the system to automatically mount /dev/md0 to /mnt/raid at startup, you also need to modify the /etc/fstab file and add the following line:
  
/dev/md0 /mnt/raid ext3 defaults 0 0
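To check the new /etc/fstab entry without rebooting, you can ask the system to mount everything listed in the file and then look at the mounted file system (a quick sanity check, not part of the original steps):
  
# mount -a
# df -h /mnt/raid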
  
   Fault Simulation
  
The example above has given us some understanding of the software RAID function in Red Hat Linux AS 4 and shown, step by step, how to create a RAID 5 array. With RAID, the data on the machine may seem safe, but we still cannot sit back and relax. Think about it: what should we do if a disk actually fails? Next we will simulate the complete process of replacing a failed RAID 5 disk, hoping to enrich your experience in handling RAID 5 failures and to raise your level of management and maintenance.
  
We still use the RAID 5 configuration above. First copy some data onto the array, then start simulating a failure of the /dev/sdb1 device. Note that a RAID 5 array without a spare device goes through the same three steps; the only difference is that the rebuild and data recovery take place after the new device is added to the array, rather than at the moment a device fails.
  
1. Mark /dev/sdb1 as a damaged device.
  
# mdadm /dev/md0 -f /dev/sdb1
  
View the current array status
  
# more /proc/mdstat
Personalities : [raid5]
md0 : active raid5 sdd1[2] sde1[3] sdc1[1] sdb1[4](F)
      75469842 blocks level 5, 128k chunk, algorithm 2 [3/2] [_UU]
      [=>...................]  recovery =  8.9% (3358407/37734912) finish=1.6min speed=9382K/sec
unused devices: <none>
  
Because there is a spare device, when a device in the array fails, the array can be rebuilt and the data restored in a short time. From the status above we can see that the array is rebuilding and running in degraded mode: sdb1[4] is now marked with (F), and the number of active devices has dropped to 2.
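To follow the rebuild continuously instead of re-running more, one convenient option (not part of the original steps) is to refresh the status every second with watch:
  
# watch -n 1 cat /proc/mdstat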
  
After several minutes, check the current array status again.
  
# more /proc/mdstat
Personalities : [raid5]
md0 : active raid5 sdd1[2] sde1[0] sdc1[1] sdb1[3](F)
      75469842 blocks level 5, 128k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
  
The rebuild is now complete and the data has been recovered. The original spare device, sde1, has become an active device.
  
2. Remove the damaged device.
  
# mdadm /dev/md0 -r /dev/sdb1
  
View the status of the current array:
  
# more /proc/mdstat
Personalities : [raid5]
md0 : active raid5 sdd1[2] sde1[0] sdc1[1]
      75469842 blocks level 5, 128k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
  
The damaged sdb1 has been removed from the array.
  
3. Add new devices to the array
  
Because this is a simulation, we can simply use the following command to add /dev/sdb1 back into the array. In a real replacement, pay attention to two points: first, partition the new disk correctly before adding it; second, replace /dev/sdb1 with the device name of the partition you actually add.
  
# mdadm /dev/md0 -a /dev/sdb1
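In this simulation the array has already been rebuilt onto the original spare, so the re-added /dev/sdb1 simply becomes the new spare device. In the no-spare scenario described earlier, the rebuild would instead start as soon as the new device is added, and its progress can again be followed in the usual way:
  
# more /proc/mdstat
  
If the replacement disk ends up with a different device name, remember to update the DEVICE line in /etc/mdadm.conf as well.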