Disk Management: RAID 10


1. What is RAID 10?

RAID 10/01 covers two layouts: RAID 1+0 and RAID 0+1. RAID 1+0 mirrors first and stripes second: the hard disks are paired into RAID 1 mirrors at the bottom level, and the mirrors are then combined into a RAID 0 stripe on top. RAID 0+1 is the opposite scheme: the data is striped first and mirrored second, so the disks are divided into two RAID 0 groups at the bottom level, and the two groups are then treated as a RAID 1 pair. In terms of performance, RAID 0+1 reads and writes somewhat faster than RAID 1+0. In terms of reliability, when one disk in a four-disk RAID 1+0 fails, the other three disks continue to operate; in RAID 0+1, as soon as one disk fails, the entire RAID 0 group it belongs to stops as well, leaving only the two disks of the other group running, so reliability is lower. For this reason RAID 10 is far more commonly used than RAID 01, and most retail motherboards support RAID 0/1/5/10 but not RAID 01.
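To make the two orderings concrete, here is a minimal mdadm sketch of both layouts on four hypothetical partitions (/dev/sdb1 through /dev/sde1); the walkthrough below builds the 1+0 variant for real:

[plain]
# RAID 1+0: mirror first, then stripe across the two mirrors
mdadm -C /dev/md0 -l 1 -n 2 /dev/sdb1 /dev/sdc1
mdadm -C /dev/md1 -l 1 -n 2 /dev/sdd1 /dev/sde1
mdadm -C /dev/md10 -l 0 -n 2 /dev/md0 /dev/md1

# RAID 0+1: stripe first, then mirror the two stripes
mdadm -C /dev/md0 -l 0 -n 2 /dev/sdb1 /dev/sdc1
mdadm -C /dev/md1 -l 0 -n 2 /dev/sdd1 /dev/sde1
mdadm -C /dev/md10 -l 1 -n 2 /dev/md0 /dev/md1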
2. Building RAID 10 step by step

Step 1: partition the disks

[plain]
# Create one partition on each of sdb, sdc, sdd and sde
[root@serv01 ~]# fdisk /dev/sdb
[root@serv01 ~]# fdisk /dev/sdc
[root@serv01 ~]# fdisk /dev/sdd
[root@serv01 ~]# fdisk /dev/sde
[root@serv01 ~]# ls /dev/sd*
sda  sda1  sda2  sda3  sda4  sda5  sdb  sdb1  sdc  sdc1  sdd  sdd1  sde  sde1  sdf  sdg
# The nested array cannot be created yet: /dev/md0 and /dev/md1 do not exist
[root@serv01 ~]# mdadm -C /dev/md0 -l 1 -n 2 /dev/md0 /dev/md1
mdadm: device /dev/md0 not suitable for any style of array

Step 2: create the RAID 10

[plain]
# Create /dev/md0, RAID 1
[root@serv01 ~]# mdadm -C /dev/md0 -l 1 -n 2 /dev/sdb1 /dev/sdc1
mdadm: Note: this array has metadata at the start and may not be suitable
       as a boot device. If you plan to store '/boot' on this device please
       ensure that your boot-loader understands md/v1.x metadata, or use
       --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
# Create /dev/md1, RAID 1
[root@serv01 ~]# mdadm -C /dev/md1 -l 1 -n 2 /dev/sdd1 /dev/sde1
mdadm: Note: this array has metadata at the start and may not be suitable
       as a boot device. If you plan to store '/boot' on this device please
       ensure that your boot-loader understands md/v1.x metadata, or use
       --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
# Create /dev/md10, RAID 0 across the two mirrors
[root@serv01 ~]# mdadm -C /dev/md10 -l 0 -n 2 /dev/md0 /dev/md1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md10 started.
[root@serv01 ~]# cat /proc/mdstat
Personalities : [raid1] [raid0]
md10 : active raid0 md1[1] md0[0]
      4188160 blocks super 1.2 512k chunks
md1 : active raid1 sde1[1] sdd1[0]
      2095415 blocks super 1.2 [2/2] [UU]
md0 : active raid1 sdc1[1] sdb1[0]
      2095415 blocks super 1.2 [2/2] [UU]
unused devices: <none>
# Create an ext4 filesystem on the striped array
[root@serv01 ~]# mkfs.ext4 /dev/md10
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
262144 inodes, 1047040 blocks
52352 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1073741824
32 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736

Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 39 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.

Note that mke2fs derives the stride and stripe width from the array geometry: the 512 KB chunk divided by the 4 KB block size gives a stride of 128 blocks, and with two data members the stripe width is 2 x 128 = 256 blocks.

[plain]
[root@serv01 ~]# mkdir /web
[root@serv01 ~]# mount /dev/md10 /web
[root@serv01 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2             9.7G  1.1G  8.1G  12% /
tmpfs                 385M     0  385M   0% /dev/shm
/dev/sda1             194M   25M  160M  14% /boot
/dev/sda5             4.0G  137M  3.7G   4% /opt
/dev/sr0              3.4G  3.4G     0 100% /iso
/dev/md10             4.0G   72M  3.7G   2% /web
[root@serv01 ~]# mdadm --detail --scan
ARRAY /dev/md0 metadata=1.2 name=serv01.host.com:0 UUID=78656148:...:a758c84e:f8927ae0
ARRAY /dev/md1 metadata=1.2 name=serv01.host.com:1 UUID=...:a6451cd4:...:847b51b7
ARRAY /dev/md10 metadata=1.2 name=serv01.host.com:10 UUID=0428d240:...:80bfe439:2802ff3e
[root@serv01 ~]# mdadm --detail --scan > /etc/mdadm.conf
[root@serv01 ~]# vim /etc/fstab
[root@serv01 ~]# echo "/dev/md10 /web ext4 defaults 1 2" >> /etc/fstab

Step 3: reboot; the system is damaged, so clearly this soft RAID 10 cannot be mounted from fstab like a normal device

[plain]
[root@serv01 ~]# reboot
# After the restart the system cannot start normally.
# Delete the RAID 10 line that was added to /etc/fstab.
# Hardware RAID can implement this properly; soft RAID here is only suitable for demonstration.
# Note: at this point the root filesystem is mounted read-only and must be remounted read-write:
[root@serv01 ~]# mount -o remount,rw /

Step 4: the arrays cannot be assembled top-down by name alone; reassemble them bottom-up

[plain]
[root@serv01 ~]# mdadm --assemble /dev/md10 /dev/md0 /dev/md1
mdadm: no correct container type: /dev/md0
mdadm: /dev/md0 has no superblock - assembly aborted
[root@serv01 ~]# mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1
mdadm: cannot open device /dev/sdb1: Device or resource busy
mdadm: /dev/sdb1 has no superblock - assembly aborted
[root@serv01 ~]# mdadm --manage /dev/md0 --stop
mdadm: stopped /dev/md0
[root@serv01 ~]# mdadm --manage /dev/md1 --stop
mdadm: stopped /dev/md1
[root@serv01 ~]# mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1
mdadm: /dev/md0 has been started with 2 drives.
[root@serv01 ~]# mdadm --assemble /dev/md1 /dev/sdd1 /dev/sde1
mdadm: /dev/md1 has been started with 2 drives.
[root@serv01 ~]# mdadm --assemble /dev/md10 /dev/md0 /dev/md1
mdadm: /dev/md10 has been started with 2 drives.
[root@serv01 ~]# cat /proc/mdstat
Personalities : [raid1] [raid0]
md10 : active raid0 md0[0] md1[1]
      4188160 blocks super 1.2 512k chunks
md1 : active raid1 sdd1[0] sde1[1]
      2095415 blocks super 1.2 [2/2] [UU]
md0 : active raid1 sdb1[0] sdc1[1]
      2095415 blocks super 1.2 [2/2] [UU]
unused devices: <none>

Step 4 is now complete.
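The recovery in step 4 comes down to ordering: the component mirrors must be running before the stripe on top of them can be assembled, and mdadm will not work that out from the top-level name alone. For reference, here is the same bottom-up order collected into a minimal script, assuming the device names used throughout this article:

[plain]
#!/bin/bash
# Reassemble the nested RAID 10 bottom-up after a reboot.
set -e
mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1   # lower mirror 1
mdadm --assemble /dev/md1 /dev/sdd1 /dev/sde1   # lower mirror 2
mdadm --assemble /dev/md10 /dev/md0 /dev/md1    # RAID 0 over the two mirrors
mount /dev/md10 /web                            # bring the data back online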
Step 5: stop the arrays and clear the disks

[plain]
# /dev/md0 cannot be stopped while /dev/md10 is still running on top of it
[root@serv01 ~]# mdadm --manage /dev/md0 --stop
mdadm: Cannot get exclusive access to /dev/md0: Perhaps a running process, mounted filesystem or active volume group?
[root@serv01 ~]# mdadm --manage /dev/md10 --stop
mdadm: stopped /dev/md10
[root@serv01 ~]# mdadm --manage /dev/md0 --stop
mdadm: stopped /dev/md0
[root@serv01 ~]# mdadm --manage /dev/md1 --stop
mdadm: stopped /dev/md1
[root@serv01 ~]# mdadm --misc --zero-superblock /dev/sdb1
[root@serv01 ~]# mdadm --misc --zero-superblock /dev/sdc1
[root@serv01 ~]# mdadm --misc --zero-superblock /dev/sdd1
[root@serv01 ~]# mdadm --misc --zero-superblock /dev/sde1
[root@serv01 ~]# mdadm -E /dev/sdb1
mdadm: No md superblock detected on /dev/sdb1.
[root@serv01 ~]# mdadm -E /dev/sdd1
mdadm: No md superblock detected on /dev/sdd1.
[root@serv01 ~]# mdadm -E /dev/sdc1
mdadm: No md superblock detected on /dev/sdc1.
[root@serv01 ~]# mdadm -E /dev/sde1
mdadm: No md superblock detected on /dev/sde1.
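For completeness, the teardown in step 5 can also be collected into one sketch. The order is the reverse of assembly: as the first failed --stop above shows, a lower-level mirror cannot be stopped while the stripe still holds it. This assumes the same device names and that /web may still be mounted:

[plain]
#!/bin/bash
# Tear down the nested RAID 10 top-down, then wipe the md metadata.
umount /web 2>/dev/null                      # ignore the error if /web is not mounted
mdadm --manage /dev/md10 --stop              # stop the stripe first
mdadm --manage /dev/md0 --stop               # then the two mirrors
mdadm --manage /dev/md1 --stop
for part in /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1; do
    mdadm --misc --zero-superblock "$part"   # remove the md superblock
done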
