1. What is RAID0?
RAID 0 stripes data across multiple disks: each write is split into segments (A1, A2, ...) that are written to the member disks in parallel, improving performance. At least two disks are required.
It merges multiple disks into one large disk with no redundancy and parallel I/O, making it the fastest level. RAID 0 is also called a stripe set. It combines multiple disks into one large logical disk; when data is stored, it is divided into segments according to the number of disks and written to all of them simultaneously, which is why RAID 0 is the fastest of all RAID levels. However, RAID 0 has no redundancy: if one physical disk fails, all data is lost, similar to JBOD.
Theoretically, performance equals "single-disk performance" × "number of disks", but in practice bus I/O bottlenecks and other factors reduce RAID efficiency as more disks are added. That is to say, if a single disk delivers 50 MB per second, a two-disk RAID 0 reaches about 96 MB per second rather than the theoretical 100 MB per second, and a three-disk RAID 0 falls further short of its theoretical 150 MB per second. A two-disk RAID 0 therefore already captures most of the efficiency gain.
However, if RAID is implemented in software (such as Linux software RAID), the usable disk space is not necessarily limited to the smallest member: with the right combinations, all disk space can be utilized.
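The efficiency claim above can be checked with quick arithmetic. This is a minimal sketch using the figures from the text (50 MB/s per disk, ~96 MB/s measured for two disks):

```shell
# Scaling arithmetic for the two-disk example above (figures from the text).
single=50      # MB/s for one disk
disks=2
measured=96    # MB/s observed for the two-disk RAID 0
theoretical=$((single * disks))              # ideal linear scaling
efficiency=$((100 * measured / theoretical)) # percent of the ideal achieved
echo "theoretical=${theoretical} MB/s, measured=${measured} MB/s, efficiency=${efficiency}%"
```

So the two-disk array runs at 96% of its theoretical 100 MB/s, and each additional disk tends to lower that percentage further.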
2. RAID0 demonstration
Step 1: partition the disk
[root@serv01 /]# ls /dev/sdb
/dev/sdb
[root@serv01 /]# ls /dev/sdc
/dev/sdc
[root@serv01 /]# ls /dev/sdb* /dev/sdc* -l
brw-rw----. 1 root disk 8, 16 Jul 31 23:20 /dev/sdb
brw-rw----. 1 root disk 8, 17 Jul 31 23:20 /dev/sdb1
brw-rw----. 1 root disk 8, 32 Jul 31 23:21 /dev/sdc
brw-rw----. 1 root disk 8, 33 Jul 31 23:21 /dev/sdc1
# Partition: create only one partition on /dev/sdb and change its type to fd (commands: t, then fd)
# fd = Linux raid autodetect
[root@serv01 /]# fdisk /dev/sdb
# Partition: create only one partition on /dev/sdc and change its type to fd (commands: t, then fd)
[root@serv01 /]# fdisk /dev/sdc
[root@serv01 /]# fdisk -l | grep -e sdb -e sdc
Disk /dev/sdb: 2147 MB, 2147483648 bytes
/dev/sdb1   1   261   2096451   fd   Linux raid autodetect
Disk /dev/sdc: 2147 MB, 2147483648 bytes
/dev/sdc1   1   261   2096451   fd   Linux raid autodetect
# Install mdadm, the software RAID implementation
[root@serv01 /]# yum install /sbin/mdadm -y
[root@serv01 /]# ls /dev/sdb*
/dev/sdb  /dev/sdb1
[root@serv01 /]# ls /dev/sdc*
/dev/sdc  /dev/sdc1
# Create the RAID array
[root@serv01 /]# mdadm --create /dev/md0 --level 0 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm: /dev/sdb1 appears to contain an ext2fs file system
    size=208812K  mtime=Wed Jul 31 22:17:43 2013
mdadm: /dev/sdb1 appears to be part of a raid array:
    level=raid0 devices=0 ctime=Thu Jan  1 07:00:00 1970
mdadm: partition table exists on /dev/sdb1 but will be lost or
       meaningless after creating array
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@serv01 /]# ls /dev/md0
/dev/md0
# View details of /dev/md0
[root@serv01 /]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Wed Jul 31 23:30:26 2013
     Raid Level : raid0
     Array Size : 4190208 (4.00 GiB 4.29 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent
    Update Time : Wed Jul 31 23:30:26 2013
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
     Chunk Size : 512K
           Name : serv01.host.com:0  (local to host serv01.host.com)
           UUID : 1f1a007f:7ed82aa0:49722d2f:1e664330
         Events : 0
    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
[root@serv01 /]# cat /proc/mdstat
Personalities : [raid0]
md0 : active raid0 sdc1[1] sdb1[0]
      4190208 blocks super 1.2 512k chunks
unused devices: <none>
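With /dev/md0 active, the typical next steps are to put a filesystem on it, mount it, and record the array so it reassembles at boot. These commands are not in the original transcript; they are a sketch assuming the same device names as above and the /etc/mdadm.conf path used on RHEL-family systems (the /raid0 mount point is a hypothetical choice):

```shell
# Typical follow-up once /dev/md0 is running (requires root and the
# real devices from this article; /raid0 is an illustrative mount point).
mkfs.ext4 /dev/md0                        # create a filesystem on the array
mkdir -p /raid0
mount /dev/md0 /raid0                     # mount it
mdadm --detail --scan >> /etc/mdadm.conf  # persist the array definition
df -h /raid0                              # confirm the usable capacity
```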