RAID Learning Notes



1. RAID

Early RAID: "A Case for Redundant Arrays of Inexpensive Disks", i.e. a Redundant Array of Inexpensive Disks.

RAID today: Redundant Array of Independent Disks, an array of independent disks.

RAID level: does not indicate that one level is better or worse than another; it only describes a different way of organizing the disks. There is no ranking from high to low.

RAID0: stripe or striping. Offers the highest storage performance of any RAID level.

It improves performance by spreading consecutive data across multiple disks, so that a data request can be serviced by several disks in parallel, each handling its own portion of the request. This parallelism makes full use of the bus bandwidth and significantly improves overall disk access performance.

Performance: both reads and writes improve

Redundancy capability: none

Space utilization: n/n (100%)

Disks required: 2 or more
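The striping described above can be sketched conceptually. This is an illustration only, not how the kernel md driver is implemented; the function name and chunk size are made up for the example:

```python
# Conceptual sketch of RAID 0 striping: consecutive chunks of data are
# dealt out round-robin to the member disks, so a large request is
# serviced by all disks in parallel.

def stripe(data: bytes, n_disks: int, chunk: int = 4):
    """Split data into fixed-size chunks and assign them round-robin."""
    disks = [[] for _ in range(n_disks)]
    pieces = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    for i, piece in enumerate(pieces):
        disks[i % n_disks].append(piece)
    return disks

disks = stripe(b"ABCDEFGHIJKLMNOP", n_disks=2)
print(disks[0])  # [b'ABCD', b'IJKL'] -- chunks 0 and 2
print(disks[1])  # [b'EFGH', b'MNOP'] -- chunks 1 and 3
```

Note that every chunk exists on exactly one disk, which is why RAID 0 has no redundancy: losing one disk loses the array.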

RAID1:

Redundancy is achieved through disk mirroring: data is duplicated onto pairs of independent disks so that each member holds a full backup. When the original disk is busy, data can be read directly from the mirror copy, so RAID 1 can improve read performance. RAID 1 has the highest cost per unit of capacity of any array level, but it provides high data security and availability: when a disk fails, the system automatically switches to reading and writing the mirror disk, with no need to rebuild the failed data.

Performance: reads improve; writes are slightly slower (every write goes to both disks)

Redundancy capability: yes

Space utilization: 1/2

Disks required: 2 or more
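The mirroring and automatic failover behaviour can be sketched with a toy model (hypothetical helper names, not the md driver):

```python
# Toy model of RAID 1 mirroring: every write lands on all mirrors, a
# read is served by any healthy copy, and a failed disk is transparent
# to the caller.

def mirror_write(disks, addr, value):
    for disk in disks:
        if disk is not None:        # skip failed members
            disk[addr] = value

def mirror_read(disks, addr):
    for disk in disks:
        if disk is not None:
            return disk[addr]       # first healthy copy wins
    raise IOError("all mirrors have failed")

disks = [{}, {}]                    # two mirrored disks
mirror_write(disks, 0, b"payload")
disks[0] = None                     # simulate a disk failure
print(mirror_read(disks, 0))        # b'payload' -- still readable
```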

RAID5:

RAID 5 balances storage performance, data security, and storage cost, and can be understood as a compromise between RAID 0 and RAID 1. It provides redundancy, though the protection is weaker than mirroring, while its disk-space utilization is higher than mirroring's. Read speed is close to RAID 0; writes are slower than writing a single disk, because each write must also update one parity chunk. Since one parity chunk protects several data chunks, space utilization is higher than RAID 1 and the storage cost is relatively low, which makes RAID 5 a widely used solution. RAID 5 does not use a dedicated parity disk: data and parity are distributed across all member disks, so reads and writes can operate on several devices at once, giving higher throughput. RAID 5 is well suited to small data blocks and random read/write workloads.

Performance: both reads and writes improve

Redundancy capability: yes (tolerates one failed disk)

Space utilization: (n-1)/n

Disks required: 3 or more
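The parity mechanism is plain XOR, which is why one lost chunk per stripe can always be rebuilt. A minimal sketch (real RAID 5 additionally rotates the parity chunk across the disks stripe by stripe):

```python
# Conceptual sketch of RAID 5 parity: the parity chunk is the XOR of
# the data chunks in the same stripe, so any single lost chunk can be
# rebuilt by XOR-ing the survivors.
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

d0, d1, d2 = b"ABCD", b"EFGH", b"IJKL"   # data chunks of one stripe
parity = reduce(xor, [d0, d1, d2])        # written to the parity chunk

# The disk holding d1 fails; rebuild it from the survivors plus parity:
rebuilt = reduce(xor, [d0, d2, parity])
print(rebuilt)  # b'EFGH'
```

This also shows the (n-1)/n utilization: for every n chunks stored, one is parity and n-1 are data.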


RAID10:

Also known as RAID 1+0, it combines the RAID 0 and RAID 1 standards: data is split into consecutive segments (in bits or bytes) and read/written in parallel across multiple disks, while each disk keeps a mirror copy for redundancy. It therefore has RAID 0's exceptional speed together with RAID 1's high data reliability, but CPU usage is also higher and disk utilization is low (1/2).
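The combination can be sketched as striping across mirrored pairs (a toy layout model under the assumption of four disks in two pairs, not the md implementation):

```python
# Toy model of RAID 10: stripe (RAID 0) across mirrored pairs (RAID 1).
# Each chunk goes round-robin to one pair, and both disks in that pair
# store an identical copy; usable capacity is half of the raw total.

def raid10_layout(chunks, n_pairs):
    pairs = [([], []) for _ in range(n_pairs)]
    for i, c in enumerate(chunks):
        a, b = pairs[i % n_pairs]
        a.append(c)                 # primary copy
        b.append(c)                 # mirror copy
    return pairs

pairs = raid10_layout([b"C0", b"C1", b"C2", b"C3"], n_pairs=2)
print(pairs[0])  # ([b'C0', b'C2'], [b'C0', b'C2'])
print(pairs[1])  # ([b'C1', b'C3'], [b'C1', b'C3'])
```

One disk per pair can fail without data loss, which is where the RAID 1 reliability comes from.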

Software RAID on Linux:

Device files: /dev/md# (e.g. /dev/md0)

Kernel module: md

Management tool: mdadm, the md manager; it can turn any block devices into a RAID array.

Create mode:

-C

Options specific to create mode:

-l # : RAID level

-n # : number of member devices

-a yes|no : whether to automatically create the device file

-c # : chunk size

-x # : number of spare disks

Partitioning requirement for software RAID: set the partition type to fd (Linux raid autodetect).

mdadm -C /dev/md0 -a yes -l 0 -n 2 /dev/sda5 /dev/sda6

cat /proc/mdstat

mke2fs -j /dev/md0 : format md0 (ext3)

To view details of an md device:

mdadm -D /dev/md0

mdadm --detail /dev/md0

Assemble mode:

-A

Monitor (follow) mode:

-F

Grow mode:

-G

Manage mode:

--add | -a : add a device

--remove | -r : remove a device

--fail | -f : mark a device as faulty (simulate damage)

--set-faulty : same as --fail

mdadm /dev/md# --fail /dev/sda5 : simulate a failure

mdadm /dev/md# -r /dev/sda5 : remove the device

mdadm /dev/md# -a /dev/sda8 : add a device

Stop an array:

mdadm -S /dev/md#

(--stop)

To start (assemble) an array:

mdadm -A /dev/md1 /dev/sda7 /dev/sda9

To add a hot-spare disk:

mdadm /dev/md# -a /dev/sda8

Save the current RAID configuration to the configuration file, so the array can be assembled automatically later:

mdadm -D --scan > /etc/mdadm.conf

LVM:

DM: Device Mapper, the kernel framework that builds logical devices on top of block devices.

Snapshot: snapshots (useful for data backup)

multipath: multipath I/O


This article is from the "Elder Don" blog; please keep the source link: http://zhanglaotang.blog.51cto.com/3196967/1586373

