Linux Soft RAID configuration


RAID can be implemented in hardware or in software. Hardware RAID is provided by a dedicated controller, either an independent RAID card or a RAID chip integrated on the motherboard. Software RAID instead uses the host CPU to do the RAID work, so it consumes more CPU resources; for that reason most server equipment uses hardware RAID.


One: RAID levels

RAID0: The capacities of the member disks are added together, so the final capacity is the sum of all the disks. Advantage: disk space (and throughput) increases. Disadvantage: there is no data redundancy; if one disk fails, none of the data can be accessed correctly any more. Disk usage: N * capacity per disk.
RAID1: Disk utilization is 50%. For example, four 80 GB disks combined into RAID1 give 160 GB of usable space. Data written to the array is synchronized to another disk in real time, so the same data exists on two disks; this is called mirroring. It gives good data security and is generally used where data safety matters most. Disadvantage: the cost is high, and a failed disk must be replaced promptly. Disk usage: (N/2) * capacity per disk.
RAID5: The parity information is distributed across all disks. Reads are very efficient, writes are average. Because the parity blocks live on different disks, reliability is improved, but the parallelism of data transfers suffers and the controller design is difficult. For RAID5, most data transfers touch only one disk, so they can proceed in parallel. RAID5 also has a "write penalty": every write operation produces four actual I/O operations, two reads (the old data and the old parity) and two writes (the new data and the new parity). Disk usage: (N-1) * capacity per disk.
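To make those usage formulas concrete, here is a quick worked example, assuming four disks of 16 GB each (the same layout as the test environment below):

# Worked capacity example for four 16 GB disks (assumption, matching the test environment below)
# RAID0:  4     * 16 GB = 64 GB usable
# RAID1:  (4/2) * 16 GB = 32 GB usable
# RAID5:  (4-1) * 16 GB = 48 GB usable with all four disks active
#         (3-1) * 16 GB = 32 GB usable with 3 active disks + 1 hot spare (the array built below)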


Two: Building a Linux soft RAID

Practically all current operating systems, including Windows, macOS, Linux and others, have a software RAID implementation. On Linux, software RAID is implemented through the mdadm program. Some points to note about mdadm:
① The RAID levels supported by mdadm are RAID0, RAID1, RAID4, RAID5 and RAID6, so all of the commonly used levels are covered.
② mdadm can create a RAID from multiple hard disks, partitions or logical volumes, whereas a hardware RAID can only be built from whole hard drives.
③ A created software RAID corresponds to /dev/mdN, where N identifies the array: the first array created is /dev/md0, the second is /dev/md1, and so on (of course, you can choose the name yourself).
④ RAID information is stored in the /proc/mdstat file and can also be viewed with the mdadm command.
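Before building anything, the two commands used throughout the rest of this article already give a quick overview of any existing arrays; a minimal sketch:

cat /proc/mdstat      # summary of all software RAID arrays known to the kernel
mdadm -D /dev/md0     # detailed information about one specific array (if it exists)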


Three: Test environment configuration


Next, I'm going to create our software RAID on my CentOS system.

Before creating the software RAID, I first simulated four 16 GB virtual hard disks in the virtual machine; in a real environment you would of course use physical hard disks.

The disks in the virtual machine are /dev/sda, /dev/sdc, /dev/sdd and /dev/sde, 16 GB each.


1. Partition the disks

fdisk /dev/sda
fdisk /dev/sdc
fdisk /dev/sdd
fdisk /dev/sde

This yields the partitions:

/dev/sda1  /dev/sdc1  /dev/sdd1  /dev/sde1
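The fdisk runs above are interactive; roughly the same key sequence is repeated for each disk. A sketch (adapt as needed; setting the partition type to fd marks it as "Linux raid autodetect"):

fdisk /dev/sda        # repeat for /dev/sdc, /dev/sdd and /dev/sde
#   n        - new partition
#   p        - primary
#   1        - partition number 1
#   <Enter>  - accept the default first cylinder/sector
#   <Enter>  - accept the default last cylinder/sector (use the whole disk)
#   t        - change the partition type
#   fd       - Linux raid autodetect
#   w        - write the partition table and exit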


2. Create the RAID

[root@OS6---224 yujianglei]# mdadm -Cv /dev/md0 -l5 -n3 -x1 -c 128 /dev/sd[a,c,d,e]1
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: size set to 16763520K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
Parameter explanation: -C create the array, -v display the creation process, -l RAID level, -n number of active disks, -x number of spare (standby) disks, -c chunk size (the default is 64 KB). The command can also be written as:

mdadm --create -v /dev/md0 --level 5 -n3 -x1 --chunk 128 /dev/sda1 /dev/sdc1 /dev/sdd1 /dev/sde1


3. Check the RAID status:

[root@OS6---224 yujianglei]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Wed Dec  2 15:57:47 2015
     Raid Level : raid5
     Array Size : 33527040 (31.97 GiB 34.33 GB)
  Used Dev Size : 16763520 (15.99 GiB 17.17 GB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Wed Dec  2 15:59:12 2015
          State : clean
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 128K

           Name : OS6---224:0  (local to host OS6---224)
           UUID : ae905c03:6e4b3312:393d3d2e:0c7ec68d
         Events : 18

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       33        1      active sync   /dev/sdc1
       4       8       49        2      active sync   /dev/sdd1

       3       8       65        -      spare   /dev/sde1
[root@OS6---224 yujianglei]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd1[4] sde1[3](S) sdc1[1] sda1[0]
      33527040 blocks super 1.2 level 5, 128k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>
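A newly created RAID5 array performs an initial synchronization in the background. If you want to follow its progress live, one optional convenience is:

watch -n 2 cat /proc/mdstat     # refresh the RAID status every 2 seconds; press Ctrl+C to quit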


4. "Note:" After we create the raid, we need to save the raid information to the/etc/mdadm.conf file so that the system will automatically load this file to enable our raid when the next operating system restarts.

[root@OS6---224 yujianglei]# mdadm -D --scan > /etc/mdadm.conf
[root@OS6---224 yujianglei]# cat /etc/mdadm.conf
ARRAY /dev/md0 metadata=1.2 spares=1 name=OS6---224:0 UUID=ae905c03:6e4b3312:393d3d2e:0c7ec68d


5. Format the RAID device

[root@OS6---224 yujianglei]# mkfs.ext4 /dev/md0
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=32 blocks, Stripe width=64 blocks
2097152 inodes, 8381760 blocks
419088 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
256 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 37 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
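Note that mke2fs derived the stride and stripe width automatically from the RAID geometry: stride = chunk size / block size = 128 KB / 4 KB = 32, and stripe width = stride * data disks = 32 * 2 = 64 (a 3-disk RAID5 has 2 data disks per stripe). If you ever need to set them by hand, a sketch of the equivalent command is:

mkfs.ext4 -E stride=32,stripe-width=64 /dev/md0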

6. Set up the mount point and mount

mkdir /raid5
mount /dev/md0 /raid5


7. View the size of the md0 device: about 32 GB in total. This exactly matches (N-1) * 16 GB, where N is 3 because one of the four disks is a spare.

[root@OS6---224 yujianglei]# df -h
Filesystem                      Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root     30G  2.1G   27G   8% /
tmpfs                           497M     0  497M   0% /dev/shm
/dev/sdb1                       477M   52M  400M  12% /boot
/dev/mapper/VolGroup-lv_home     31G  2.4G   27G   9% /home
/dev/md0                         32G   48M   30G   1% /raid5


8. Configure mounting at boot

vi /etc/fstab
/dev/md0    /raid5    ext4    defaults    0 0
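Before rebooting, it is worth verifying that the new fstab entry is accepted; an optional quick check:

mount -a            # mounts everything in /etc/fstab that is not yet mounted; an error here means a bad fstab line
df -h /raid5        # confirm the array is mounted where expected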

9. Reboot the machine to test


10. Daily maintenance of RAID

Now simulate a disk failure and observe the disk rebuild process. A RAID member failure can be simulated with the mdadm command:

[root@OS6---224 raid5]# mdadm /dev/md0 -f /dev/sda1
mdadm: set /dev/sda1 faulty in /dev/md0
Check the status of /dev/md0 again:

[root@OS6---224 raid5]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Wed Dec  2 15:57:47 2015
     Raid Level : raid5
     Array Size : 33527040 (31.97 GiB 34.33 GB)
  Used Dev Size : 16763520 (15.99 GiB 17.17 GB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Wed Dec  2 17:48:31 2015
          State : clean, degraded, recovering
 Active Devices : 2
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 128K

 Rebuild Status : 42% complete        // the Rebuild Status line shows the reconstruction progress; once it reaches 100%, the array is OK again

           Name : OS6---224:0  (local to host OS6---224)
           UUID : ae905c03:6e4b3312:393d3d2e:0c7ec68d
         Events : 30

    Number   Major   Minor   RaidDevice State
       3       8       65        0      spare rebuilding   /dev/sde1
       1       8       33        1      active sync   /dev/sdc1
       4       8       49        2      active sync   /dev/sdd1

       0       8        1        -      faulty   /dev/sda1


Check if data is missing

[root@OS6---224 raid5]# ll /raid5/
total 245776
drwx------. 2 root root     16384 Dec  2 16:34 lost+found
-rw-r--r--. 1 root root 251658240 Dec  2 16:54 test.img
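In daily operation you usually don't want to discover a faulty member only by running mdadm -D by hand. mdadm also has a monitor mode that can send mail when a device fails; a minimal sketch (the mail address is just a placeholder, adjust for your environment):

mdadm --monitor --scan --daemonise --delay=60 --mail=root@localhost    # check all arrays every 60 seconds and mail on failure events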

11. Remove the damaged disk

mdadm /dev/md0 -r /dev/sda1

12. Add a new disk to the RAID5 array. By default, a disk added to the RAID is used as a hot spare.

mdadm /dev/md0 -a /dev/sda1

At this point the disk sda1 immediately enters the standby (spare) state.
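The short manage-mode options used in steps 10 to 12 also have long equivalents, which can read more clearly in scripts; the same three operations as a sketch:

mdadm /dev/md0 --fail   /dev/sda1    # same as -f: mark the member as faulty
mdadm /dev/md0 --remove /dev/sda1    # same as -r: remove the failed member from the array
mdadm /dev/md0 --add    /dev/sda1    # same as -a: add the replaced disk back as a hot spare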

13. We now need to promote the hot spare to an active disk of the RAID. The spare disk is converted to an active disk, and the array rebuilds again.

mdadm -G /dev/md0 -n4
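Growing from 3 to 4 active devices triggers a reshape, which can take a long time on large disks. Its progress can be followed the same way as a rebuild (optional):

cat /proc/mdstat                       # shows a "reshape = ...%" progress line while the grow is running
mdadm -D /dev/md0 | grep -i reshape    # or query just the reshape status line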


14. The RAID capacity has now increased, but the filesystem is still its original size, so the filesystem needs to be expanded:

[root@OS6---224 raid5]# resize2fs /dev/md0
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/md0 is mounted on /raid5; on-line resizing required
old desc_blocks = 2, new_desc_blocks = 3
Performing an on-line resize of /dev/md0 to 12572640 (4k) blocks.
The filesystem on /dev/md0 is now 12572640 blocks long.

[root@OS6---224 raid5]# df -h
Filesystem                      Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root     30G  2.1G   27G   8% /
tmpfs                           497M     0  497M   0% /dev/shm
/dev/sdb1                       477M   52M  400M  12% /boot
/dev/mapper/VolGroup-lv_home     31G  2.4G   27G   9% /home
/dev/md0                         48G  292M   45G   1% /raid5


15. Reboot the machine and test again; everything is OK.


This article comes from the "Do not ask for the best, only better" blog; please be sure to keep this source: http://yujianglei.blog.51cto.com/7215578/1727319
