Linux Fundamentals - Disk Array (RAID) Examples

Source: Internet
Author: User

    • Disk array (RAID) instances

    • RAID Technology classification

      • Soft RAID Technology

      • Hard RAID Technology

    • The difference between RAID and LVM

    • Why choose to use RAID

    • RAID in detail

      • RAID-0

      • RAID-1

      • RAID-5

      • RAID-10

    • Management of RAID

      • Case: Creating a RAID 10 with redundant disks

Disk array (RAID) instances

RAID (disk array) level introduction

RAID stands for "Redundant Array of Inexpensive Disks": several inexpensive hard disks are combined into a disk group, and data is divided and stored across those drives. This accelerates reads and writes, and can also provide data redundancy: when one hard disk is damaged, the remaining disks can reconstruct the damaged disk's data from the redundant information, improving the security of stored data. Because RAID offers higher speed and safety than an ordinary single disk, servers are usually set up with a RAID at installation time. There are two ways to create a RAID: soft RAID (implemented by operating-system software) and hard RAID (implemented with a hardware array card).

RAID Technology classification

Common RAID technology falls into two categories: hardware-based RAID and software-based RAID.

Soft RAID Technology

Under Linux, RAID can be implemented in software, either during system installation or afterwards. Using soft RAID avoids purchasing a hardware RAID controller and accessories while still greatly improving disk I/O performance and reliability. Because the RAID function is implemented in software, configuration is flexible and management is easy. With software RAID you can combine several physical disks into one larger virtual device, gaining both a performance improvement and data redundancy.

Hard RAID Technology

Hardware-based RAID solutions outperform software-based RAID in both performance and serviceability, specifically in the ability to detect and repair multiple errors, detect failed disks, and rebuild arrays; from a security standpoint they are also more robust. In real production work, hardware-based RAID should therefore be the first choice. The Dell servers commonly used by internet companies support RAID 0 and RAID 1 by default; for RAID 5 or RAID 10 you may need to buy a RAID card (some configurations include one, so check the specifications before purchase).

The difference between RAID and LVM

LVM (Logical Volume Manager) is a mechanism for managing disk partitions in a Linux environment. It allows dynamic partitioning and resizing across multiple hard disks and lets file systems span drives. It is often used in environments with many hard disks where drives are added or removed at any time, but it also works with only one or two drives. Its strength is flexible capacity management: partitions can be grown or shrunk at will, and the remaining capacity of each disk is easy to manage. If your priority is performance and redundancy, however, RAID (especially hardware RAID) is the better fit.

Why choose to use RAID

A disk array connects multiple disk drives so that they work together, greatly improving read/write speed while bringing the reliability of the disk system close to flawless.
The most immediate benefits of using RAID are:

Improved data security
Improved data read/write performance
A larger single logical volume than any individual disk provides

RAID in detail

RAID-0

D1        D2
Data 1    Data 2
Data 3    Data 4
Data 5    Data 6

RAID-0: striping (stripe mode). Features: reads and writes are performed concurrently across all members, so RAID 0 has the best read/write performance; each disk holds only part of the complete data, and the more disks there are, the faster reads and writes become. Because there is no redundancy, the failure of one hard disk loses all data. At least two hard disks are required to form a RAID 0 array.
Capacity: the sum of all drives; disk utilization is 100%.

Production application scenarios:
1. Multiple identical RS node servers behind a load-balancer cluster
2. Data (chunk) nodes under the master node in distributed file storage
3. Multiple slave servers in MySQL master-slave replication
4. Business with high performance requirements and low redundancy requirements
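The round-robin block placement in the table above can be sketched in plain shell (an illustration of how striping distributes consecutive blocks, not real RAID I/O):

```shell
# Illustrative only: RAID-0 assigns consecutive data blocks to its member
# disks round-robin, which is why I/O proceeds in parallel across members.
disks=2
layout=$(for block in 1 2 3 4 5 6; do
  echo "Data $block -> D$(( (block - 1) % disks + 1 ))"
done)
echo "$layout"
```

With two disks, odd-numbered blocks land on D1 and even-numbered blocks on D2, matching the table.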

RAID-1

D1        D2
Data 1    Data 1
Data 2    Data 2
Data 3    Data 3

RAID-1: mirroring (mirrored volume). It requires at least two hard disks; the array size equals the capacity of the smaller of the two RAID partitions (ideally the partitions are the same size). Data is redundant: each write is stored on both hard disks at the same time, providing a live backup.
Disk utilization is 50%: two 100G disks composing a RAID 1 provide only 100G of usable space.
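The mirroring idea can be illustrated with ordinary files standing in for disks (a toy sketch, not real RAID):

```shell
# Toy illustration: RAID-1 sends every write to both members, so either
# copy alone can serve reads after the other is lost.
tmp=$(mktemp -d)
echo "important payload" > "$tmp/disk1"
cp "$tmp/disk1" "$tmp/disk2"      # the mirrored write
rm "$tmp/disk1"                   # simulate losing one disk
survivor=$(cat "$tmp/disk2")      # data still readable from the mirror
echo "$survivor"
rm -rf "$tmp"
```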

RAID-5

D1          D2          D3
Data 1      Data 2      Checksum 1
Data 3      Checksum 2  Data 4
Checksum 3  Data 5      Data 6

Features: with parity data, reliability is high; the checksum blocks are distributed across the different disks, which also increases the read/write rate. Data is lost only if two disks fail at the same time. At least three hard disks, preferably of equal size, are required to form a RAID 5 array.

Capacity: the sum of all hard drives minus the capacity of one drive; the subtracted capacity is spread over different areas of the three drives to hold the parity (verification) information.
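The recovery mechanism behind RAID 5 is XOR parity, which a few lines of shell arithmetic can demonstrate (illustrative values, not a real rebuild):

```shell
# Sketch of the RAID-5 idea: the parity block is the XOR of the data
# blocks, so any one lost block can be rebuilt from the survivors.
d1=23; d2=57
parity=$((d1 ^ d2))        # stored on the parity disk
rebuilt=$((parity ^ d2))   # recover d1 after "losing" its disk
echo "$rebuilt"            # 23
```

XOR is its own inverse, which is why one missing member (and only one) can always be reconstructed.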

RAID-10

D1        D2        D3        D4
Data 1    Data 1    Data 2    Data 2
Data 3    Data 3    Data 4    Data 4

RAID 10 (RAID 1 + RAID 0) is one of the most commonly used disk array levels. It is fault-tolerant and reads and writes data efficiently, but its cost is relatively high.

Features: data is both mirrored and accessed concurrently, giving high reliability. D1 and D2 form one RAID 1 array, with D1 as the data disk and D2 as its mirror; D3 and D4 likewise form a RAID 1, with D3 as the data disk and D4 as its mirror. On this basis, the D1/D2 pair and the D3/D4 pair are striped together as a RAID 0 array. Reads are very fast, and concurrent write speed also grows as more disks are added. At least four hard disks, preferably of equal size, are required to form a RAID 10 array.

Capacity: half of the sum of all hard disk capacity (half stores data, half mirrors it).
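The usable-capacity rules for the four levels described above can be checked with simple shell arithmetic (assuming four 100G disks; RAID 1 is shown for a two-disk mirror, its minimum):

```shell
# Usable capacity per level, sizes in GB (four 100G disks assumed).
n=4; size=100
raid0=$((n * size))            # sum of all members
raid1=$size                    # size of one member of the mirrored pair
raid5=$(((n - 1) * size))      # one member's worth is consumed by parity
raid10=$((n * size / 2))       # half the total: each block stored twice
echo "RAID0=${raid0}G RAID1=${raid1}G RAID5=${raid5}G RAID10=${raid10}G"
```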

Management of RAID

The main command used is mdadm.
Parameters:
-C or --create          create a new array
-r                      remove a device from the array
-A                      assemble (activate) an existing array
-l or --level=          set the RAID level of the array
-D or --detail          print detailed information about an array device
-n or --raid-devices=   specify the number of active members (partitions/disks)
-s or --scan            scan the configuration file or /proc/mdstat for array information
-x or --spare-devices=  specify the number of spare disks in the array
-f                      mark a device as faulty
-c or --chunk=          set the chunk size of the array, in kilobytes
-a or --add             add a device to the array
-G or --grow            change the size or shape of the array
-v or --verbose         show details

On CentOS 7, mdadm is installed by default; if it is missing, you can install it online with yum. First make sure you are connected to the Internet, then search for mdadm on the yum server:

[root@localhost ~]# yum search mdadm
Case: Creating a RAID 10 with redundant disks

Seven disks in total: create a RAID 10 from four disks, with two as hot spares. When one disk fails, a hot spare takes over automatically; then remove the failed disk and add a new hot spare.

1. Create a RAID array

[root@localhost ~]# mdadm -C /dev/md0 -a yes -l 10 -n 4 -x 2 /dev/sd{b,c,d,e,f,g}

2. View RAID Array Information



[root@localhost ~]# cat /proc/mdstat
Personalities : [raid10]
md0 : active raid10 sdg[5](S) sdf[4](S) sde[3] sdd[2] sdc[1] sdb[0]
      41910272 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
      [=====>...............]  resync = 27.5% (11542784/41910272) finish=2.4min speed=206250K/sec

unused devices: <none>
// four disks are active; the two marked (S) are hot spares

3. Generate Configuration file

[root@localhost ~]# mdadm -D --scan > /etc/mdadm.conf

4. Format, mount, and use

[root@localhost ~]# mkfs.ext4 /dev/md0
[root@localhost ~]# mkdir /data
[root@localhost ~]# mount /dev/md0 /data/
[root@localhost ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg0-root   20G  333M   19G   2% /
tmpfs                 491M     0  491M   0% /dev/shm
/dev/sda1             190M   34M  147M  19% /boot
/dev/mapper/vg0-usr   9.8G  1.9G  7.4G  21% /usr
/dev/mapper/vg0-var    20G  113M   19G   1% /var
/dev/md0               40G   48M   38G   1% /data
// each disk is 20G; RAID 10 keeps half for data and half for mirror copies
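To make the array and its mount survive a reboot, a follow-up sketch (assumes the /dev/md0 array and /data mount point from the steps above; adjust paths and filesystem to your system):

```shell
# Hedged sketch: persist the mount across reboots. The mdadm.conf entry
# was already generated in step 3; this adds the matching fstab line.
echo '/dev/md0  /data  ext4  defaults  0 0' >> /etc/fstab
umount /data && mount -a    # re-mount via fstab to confirm the entry works
df -h /data
```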

5. Simulate the failure of one disk

First copy a few files into the data directory, so we can check later whether any data is lost.

[root@localhost ~]# du -sh /data/
1.4M    /data/
Enter the command below (or simply disconnect the hard drive):
[root@localhost ~]# mdadm /dev/md0 -f /dev/sdb
mdadm: set /dev/sdb faulty in /dev/md0
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid10]
md0 : active raid10 sdg[5] sdf[4](S) sde[3] sdd[2] sdc[1] sdb[0](F)
      41910272 blocks super 1.2 512K chunks 2 near-copies [4/3] [_UUU]
      [=>...................]  recovery =  6.6% (1400064/20955136) finish=1.6min speed=200009K/sec

unused devices: <none>
sdb is now in the faulty (F) state, and sdg no longer carries its spare (S) mark: the hot spare has taken over.
[root@localhost ~]# du -sh /data/
1.4M    /data/
No data has been lost.

6. Remove the failed disk and add a new hard drive

Once a hard drive has failed, the old one must be removed and a new hot spare added.

Remove the failed drive:
[root@localhost ~]# mdadm -r /dev/md0 /dev/sdb
mdadm: hot removed /dev/sdb from /dev/md0
Add a new hard disk:
[root@localhost ~]# mdadm -a /dev/md0 /dev/sdh
mdadm: added /dev/sdh

[root@localhost ~]# cat /proc/mdstat
Personalities : [raid10]
md0 : active raid10 sdh[6](S) sdg[5] sdf[4](S) sde[3] sdd[2] sdc[1]
      41910272 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
unused devices: <none>
Checking the status, the new hard drive is present and the failed one is gone.
[root@localhost ~]# mdadm -D /dev/md0
    Number   Major   Minor   RaidDevice State
       5       8       96        0      active sync set-A   /dev/sdg
       1       8       32        1      active sync set-B   /dev/sdc
       2       8       48        2      active sync set-A   /dev/sdd
       3       8       64        3      active sync set-B   /dev/sde

       4       8       80        -      spare   /dev/sdf
       6       8      112        -      spare   /dev/sdh
// the array is still intact, unchanged

