CentOS 6.5 Software RAID 0 Creation



1. Install mdadm and parted


[root@mytest1 ~]# yum install -y mdadm parted


================================================================================
 Package              Arch          Version                 Repository     Size
================================================================================
Installing:
 mdadm                x86_64        3.2.6-7.el6_5.2         updates       337 k
 parted               x86_64        2.1-21.el6              base          606 k


Transaction Summary
================================================================================
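
A quick sanity check (optional, not part of the original transcript) confirms both packages landed:

rpm -q mdadm parted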




2. Partition the sdb and sdc disks and set the partition type (only /dev/sdc is shown; /dev/sdb is partitioned the same way, or see the scripted alternative after this transcript)


[root@mytest1 ~]# fdisk /dev/sdc


WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').


Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1305, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-1305, default 1305):
Using default value 1305


Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)


Command (m for help): p


Disk /dev/sdc: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000c7662

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1        1305    10482381   fd  Linux raid autodetect


Command (m for help): w
The partition table has been altered!


Calling ioctl() to re-read partition table.
Syncing disks.
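
Since only /dev/sdc is shown above, /dev/sdb needs the same treatment. A non-interactive alternative is to use parted, which was installed in step 1 (a sketch; adjust the device name as needed):

parted -s /dev/sdb mklabel msdos mkpart primary 0% 100% set 1 raid on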


3. Re-read the partition tables without rebooting
[root@mytest1 ~]# partprobe
Warning: WARNING: the kernel failed to re-read the partition table on /dev/sda (Device or resource busy). As a result, it may not reflect all of your changes until after reboot.
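
The warning concerns /dev/sda (the busy system disk) and can be ignored here. partprobe can also be pointed at just the disks that changed, which avoids touching /dev/sda at all (a minor variation on the command above):

partprobe /dev/sdb /dev/sdc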


4. View the status of the two disks
[root@mytest1 ~]# fdisk -l /dev/sdb /dev/sdc


Disk /dev/sdb: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0af7aca1

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        1305    10482381   fd  Linux raid autodetect


Disk /dev/sdc: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000c7662

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1        1305    10482381   fd  Linux raid autodetect


5. Create the RAID 0 array
[root@mytest1 ~]# mdadm -C /dev/md0 -a yes -l 0 -n 2 /dev/sd{b,c}1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
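
The same command written with mdadm's long options, which some readers may find easier to read (equivalent to the short form above):

mdadm --create /dev/md0 --auto=yes --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1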


6. View the array status
[root@mytest1 ~]# cat /proc/mdstat
Personalities : [raid0]
md0 : active raid0 sdc1[1] sdb1[0]
      20964352 blocks super 1.2 512k chunks

unused devices: <none>


[root@mytest1 ~]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sun Aug 10 14:15:50 2014
     Raid Level : raid0
     Array Size : 20964352 (19.99 GiB 21.47 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Sun Aug 10 14:15:50 2014
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 512K

           Name : mytest1:0  (local to host mytest1)
           UUID : c62a98dc:7f802f31:6e4435ae:168a6829
         Events : 0

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1


7. Create the mdadm configuration file
[root@mytest1 ~]# echo DEVICE /dev/sd{b,c}1 > /etc/mdadm.conf
[root@mytest1 ~]# mdadm -Ds >> /etc/mdadm.conf
[root@mytest1 ~]# vi /etc/mdadm.conf


DEVICE /dev/sdb1 /dev/sdc1
ARRAY /dev/md0 level=raid0 num-devices=2 UUID=c62a98dc:7f802f31:6e4435ae:168a6829
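
To confirm the configuration file works, the array can be stopped and reassembled from it (optional; a quick sketch that is not in the original walkthrough, best done before creating a filesystem or while the array is unmounted):

mdadm -S /dev/md0
mdadm -As
cat /proc/mdstat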


8. Create a file system
[root@mytest1 ~]# mkfs.ext4 /dev/md0
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
1310720 inodes, 5241088 blocks
262054 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
160 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000


Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done


This filesystem will be automatically checked every 37 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
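
Note how mke2fs derives the stripe parameters from the array geometry: stride = chunk size / block size = 512 KiB / 4 KiB = 128 blocks, and stripe width = stride x 2 data disks = 256 blocks. If they were not detected automatically, the same values could be passed explicitly (a sketch of the equivalent invocation):

mkfs.ext4 -E stride=128,stripe-width=256 /dev/md0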


9. Automatic mounting at startup
[root@mytest1 ~]# mkdir /raid0disk
[root@mytest1 ~]# vi /etc/fstab
#
# /etc/fstab
# Created by anaconda on Sun Aug 10 02:32:47 2014
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/vg_mytest1-root /                       ext4    defaults        1 1
UUID=af6ed5f7-9181-440e-9b49-fff643caba9d /boot     ext4    defaults        1 2
/dev/mapper/vg_mytest1-tmp  /tmp                    ext4    defaults        1 2
tmpfs                       /dev/shm                tmpfs   defaults        0 0
devpts                      /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                       /sys                    sysfs   defaults        0 0
proc                        /proc                   proc    defaults        0 0
/dev/md0                    /raid0disk              ext4    defaults        0 0
~
~
"/etc/fstab" 16L, 816C written


10. Confirm after mounting
[root@mytest1 ~]# mount -a
[root@mytest1 ~]# df -h
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/vg_mytest1-root  6.8G  781M  5.7G  12% /
tmpfs                        939M     0  939M   0% /dev/shm
/dev/sda1                     97M   28M   64M  31% /boot
/dev/mapper/vg_mytest1-tmp  1008M   34M  924M   4% /tmp
/dev/md0                      20G  172M   19G   1% /raid0disk
