Today's operating systems, whether Windows or Linux, have RAID capabilities. RAID is divided into hardware RAID and software RAID: hardware RAID is implemented through a dedicated RAID card, while software RAID is implemented by the operating system in software.
Commonly used RAID levels are:
RAID 0: at least two hard drives;
RAID 1: at least two hard drives;
RAID 5: at least three hard drives;
RAID 6: at least four hard drives;
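As a quick sanity check on the levels above, the usable capacity for n equal-size member disks can be computed with shell arithmetic (a sketch: RAID 0 keeps all n disks, RAID 1 keeps one copy, RAID 5 loses one disk to parity, RAID 6 loses two):

```shell
# Usable capacity in GB for n member disks (assumption: equal 1 GB disks, as in this setup).
disk_gb=1
raid0() { echo $(( $1 * disk_gb )); }         # striping: no redundancy, full capacity
raid1() { echo $(( disk_gb )); }              # mirroring: one disk's worth, rest are copies
raid5() { echo $(( ($1 - 1) * disk_gb )); }   # one disk's worth of parity
raid6() { echo $(( ($1 - 2) * disk_gb )); }   # two disks' worth of parity
raid0 2   # prints 2: two 1 GB disks striped
raid5 3   # prints 2: three 1 GB disks minus one for parity
```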
1. Virtual machine environment preparation
In the virtual machine environment, add four 1 GB IDE disks; the plan is to build RAID 0 and RAID 5.
2. View hard disk information
# fdisk -l
RAID Device creation and management
1. Create software RAID 0
# mdadm -C -v /dev/md1 -l 0 -n 2 /dev/hdb /dev/hdd
2. Scan RAID information
# mdadm -Ds
3. Stop /dev/md1
# mdadm -Ss
4. Start RAID /dev/md1
# mdadm -A /dev/md1 /dev/hdb /dev/hdd
5. View hard disk RAID information
# mdadm --examine /dev/hdb
6. View array /dev/md1 information
# mdadm -D /dev/md1
7. Create the RAID configuration file /etc/mdadm.conf
The file /etc/mdadm.conf does not exist by default and needs to be created manually; having this file makes it easier to maintain RAID devices.
# mdadm -Ds
ARRAY /dev/md1 level=raid0 num-devices=2 metadata=0.90 UUID=4140c28c:ace28b95:93c51a55:8451fbc3
# mdadm -Ds >> /etc/mdadm.conf
Modify the file /etc/mdadm.conf and append the device list, so that the line reads as follows:
ARRAY /dev/md1 level=raid0 num-devices=2 metadata=0.90 UUID=4140c28c:ace28b95:93c51a55:8451fbc3 devices=/dev/hdb,/dev/hdd
Note: after the /etc/mdadm.conf file has been created, you no longer need to specify the RAID device and its member disks when starting the RAID.
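For reference, a complete /etc/mdadm.conf for this setup might look as follows (a sketch; the DEVICE line, which restricts which disks mdadm scans for array members, is an addition not shown in the steps above):

```
DEVICE /dev/hdb /dev/hdd
ARRAY /dev/md1 level=raid0 num-devices=2 metadata=0.90 UUID=4140c28c:ace28b95:93c51a55:8451fbc3 devices=/dev/hdb,/dev/hdd
```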
8. Restart array /dev/md1 test
# mdadm -Ss
mdadm: stopped /dev/md1
# mdadm -As
mdadm: /dev/md1 has been started with 2 drives.
# mdadm -Ds
ARRAY /dev/md1 level=raid0 num-devices=2 metadata=0.90 UUID=4140c28c:ace28b95:93c51a55:8451fbc3
9. mdadm common parameters
-A, --assemble
Assemble (activate) an existing RAID array
-C, --create
Create a new RAID array
-s, --scan
Scan the config file or /proc/mdstat for RAID device information
-S, --stop
Stop a running RAID device
RAID device usage (RAID device partitioning, file system formatting, directory mounting)
1. Check whether the RAID device exists
# fdisk -l /dev/md1
Disk /dev/md1: 2147 MB, 2147352576 bytes
2 heads, 4 sectors/track, 524256 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md1 doesn't contain a valid partition table
2. RAID file system formatting
# mkfs -t ext3 -c /dev/md1
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
262144 inodes, 524256 blocks
26212 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=536870912
16 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912
Checking for bad blocks (read-only test): done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every … mounts or … days, whichever comes first. Use tune2fs -c or -i to override.
3. Directory mount
# mkdir -p /database/pgdata1
# mount -t ext3 /dev/md1 /database/pgdata1
# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/hda1              14G  5.0G  8.0G  39% /
tmpfs                 217M     0  217M   0% /dev/shm
none                  217M  104K  217M   1% /var/lib/xenstored
/dev/md1              2.0G   36M  1.9G   2% /database/pgdata1
# chown -R postgres:postgres /database
Note: the device /dev/md1 was successfully mounted at the mount point /database/pgdata1, with a capacity of 2 GB.
4. Set up automatic mounting at boot
Add the following line to /etc/fstab:
/dev/md1 /database/pgdata1 ext3 defaults 0 0
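The six whitespace-separated fields of this fstab entry can be broken out as follows (a quick sketch; field meanings per fstab(5)):

```shell
# Split the fstab entry above into its six fields (per fstab(5)).
line='/dev/md1 /database/pgdata1 ext3 defaults 0 0'
set -- $line
# $1 device, $2 mount point, $3 filesystem type, $4 mount options,
# $5 dump flag, $6 fsck pass number; "0 0" means no dump backup and no boot-time fsck.
echo "device=$1 mountpoint=$2 fstype=$3 options=$4 dump=$5 pass=$6"
```

Setting the sixth field to a non-zero pass number would instead have the filesystem checked at boot, which may or may not be desirable for a database volume.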
5. Creating other RAID types follows essentially the same procedure as above.
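For example, the RAID 5 array planned in the environment-preparation step could be created like this (a sketch, not runnable without root and the member disks; /dev/md2 and the member disk names /dev/hdc, /dev/hde, /dev/hdf are assumptions — substitute your actual device names):

```
# Sketch: create a three-disk RAID 5 array (device names are assumptions; requires root).
mdadm -C -v /dev/md2 -l 5 -n 3 /dev/hdc /dev/hde /dev/hdf
# Verify the new array, then persist its configuration as in step 7.
mdadm -D /dev/md2
mdadm -Ds >> /etc/mdadm.conf
```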