Disk management-RAID 5

Source: Internet
Author: User
1. What is RAID 5?

RAID level 5 is a storage solution that balances performance, data security, and cost. It uses disk striping and requires at least three hard disks. Instead of keeping a full copy of the stored data, RAID 5 distributes the data and the corresponding parity information across the member disks, with the parity for a given stripe always stored on a different disk than the data it protects. If one disk in the array fails, the lost data can be rebuilt from the remaining data and the parity information.

RAID 5 can be seen as a compromise between RAID 0 and RAID 1. It protects the data, but its level of protection is lower than mirroring, while its disk space utilization is higher than mirroring. RAID 5 offers read speeds close to RAID 0, but writes are noticeably slower because the parity information must be computed and written as well; a write-back cache can improve write performance significantly. Because several data blocks share one parity block, the space utilization of RAID 5 is higher than that of RAID 1, and the storage cost is correspondingly lower.
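To make the parity idea concrete, here is a minimal shell sketch (not part of the original article) that uses XOR, the same operation RAID 5 applies per stripe, to rebuild a lost block from the surviving block and the parity. The byte values are hypothetical stand-ins for data blocks:

data1=$(( 0xA5 )); data2=$(( 0x3C ))   # two data blocks on different disks
parity=$(( data1 ^ data2 ))            # parity block stored on a third disk
recovered=$(( data1 ^ parity ))        # rebuild data2 from the survivors after a failure
printf 'parity=0x%02X  recovered data2=0x%02X\n' "$parity" "$recovered"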


2. RAID 5 demonstration

Step 1 Prepare the disks

[root@serv01 ~]# ls /dev/sdb1 /dev/sdc1 /dev/sdd1
/dev/sdb1  /dev/sdc1  /dev/sdd1
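If the three partitions do not exist yet, they have to be created first. A non-interactive sketch using parted (the device names are the ones assumed above; the original article partitions disks interactively with fdisk, as shown in step 5):

for d in /dev/sdb /dev/sdc /dev/sdd; do
    parted -s "$d" mklabel msdos mkpart primary 1MiB 100%
done
partprobe    # ask the kernel to re-read the new partition tables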

Step 2 Create the RAID 5 array

# The short-form and long-form commands below are equivalent; only one needs to be run
[root@serv01 ~]# mdadm -C /dev/md5 -l 5 -n 3 /dev/sdb1 /dev/sdc1 /dev/sdd1
[root@serv01 ~]# mdadm --create /dev/md5 --level 5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md5 started.
[root@serv01 ~]# cat /proc/mdstat
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4]
md5 : active raid5 sdd1[3] sdc1[1] sdb1[0]
      4190208 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>
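RAID 5 runs an initial synchronization right after creation, which /proc/mdstat reports as a progress bar. A small sketch for watching it, or simply blocking until it finishes, assuming the array name used above:

watch -n 2 cat /proc/mdstat    # live view of the sync progress (Ctrl+C to exit)
mdadm --wait /dev/md5          # or block until the initial sync has completed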

Step 3 Format

[root@serv01 ~]# mkfs.ext4 /dev/md5
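The next step records /web in /etc/fstab, so the new filesystem needs a mount point and can be mounted right away. A short sketch, assuming /web does not exist yet:

mkdir -p /web
mount /dev/md5 /web
df -h /web    # confirm the roughly 4 GB array is mounted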

Step 4 Modify the configuration files

# Append an entry to /etc/fstab
[root@serv01 ~]# echo "/dev/md5  /web  ext4  defaults  1 2" >> /etc/fstab
# Create the mdadm configuration file
[root@serv01 ~]# mdadm --detail --scan > /etc/mdadm.conf
[root@serv01 ~]# cd /web
[root@serv01 web]# ls
config-2.6.32-131.0.15.el6.x86_64     System.map-2.6.32-131.0.15.el6.x86_64
efi                                   lost+found
vmlinuz-2.6.32-131.0.15.el6.x86_64    grub
symvers-2.6.32-131.0.15.el6.x86_64.gz
# View the array details
[root@serv01 web]# mdadm -D /dev/md5
/dev/md5:
        Version : 1.2
  Creation Time : Thu Aug  1 19:49:56 2013
     Raid Level : raid5
     Array Size : 4190208 (4.00 GiB 4.29 GB)
  Used Dev Size : 2095104 (2046.34 MiB 2145.39 MB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Thu Aug  1 20:24:48 2013
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : serv01.host.com:5  (local to host serv01.host.com)
           UUID : a738b211:987ef2b2:e6ce9eb3:58724db1
         Events : 20

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       3       8       49        2      active sync   /dev/sdd1
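To check the new /etc/fstab entry without rebooting, mount can be asked to process the whole file. A small sketch (run from outside /web so it can be unmounted):

cd /
umount /web
mount -a      # mounts everything listed in /etc/fstab, including /web
df -h /web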

Step 5 Simulate a disk failure

# Wipe the partition table on /dev/sdb with a new DOS disklabel (fdisk command 'o')
[root@serv01 web]# fdisk /dev/sdb
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): o
Building a new DOS disklabel with disk identifier 0xc785ce7b.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.

# View the array; the kernel still uses the old partition table, so the array still looks clean
[root@serv01 web]# mdadm -D /dev/md5
/dev/md5:
        Version : 1.2
  Creation Time : Thu Aug  1 19:49:56 2013
     Raid Level : raid5
     Array Size : 4190208 (4.00 GiB 4.29 GB)
  Used Dev Size : 2095104 (2046.34 MiB 2145.39 MB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Thu Aug  1 20:25:16 2013
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : serv01.host.com:5  (local to host serv01.host.com)
           UUID : a738b211:987ef2b2:e6ce9eb3:58724db1
         Events : 20

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       3       8       49        2      active sync   /dev/sdd1

# Reboot so the wiped partition table takes effect
[root@serv01 web]# reboot

# Partition /dev/sde as the replacement disk
[root@serv01 web]# fdisk /dev/sde
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): p

Disk /dev/sde: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x26eb36f1

   Device Boot      Start         End      Blocks   Id  System

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-261, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-261, default 261):
Using default value 261

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
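Wiping the partition table and rebooting is one way to provoke a degraded array. A more direct way to simulate and then remove a failed member is mdadm itself; a sketch that is not part of the original demo:

mdadm --manage /dev/md5 --fail /dev/sdb1      # mark the member as faulty
mdadm --manage /dev/md5 --remove /dev/sdb1    # remove it from the array
cat /proc/mdstat                              # the array now shows [_UU] (degraded)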

Step 6 Add a disk

[root@serv01 web]# mdadm --manage /dev/md5 --add /dev/sde1
mdadm: added /dev/sde1
[root@serv01 web]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md5 : active raid5 sde1[4] sdc1[1] sdd1[3]
      4190208 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [_UU]
      [=================>...]  recovery = 85.8% (1800064/2095104) finish=0.0min speed=200007K/sec

unused devices: <none>

# View the details again (note: the option is -D, not --d)
[root@serv01 web]# mdadm --d /dev/md5
mdadm: unrecognized option '--d'
Usage: mdadm --help for help
[root@serv01 web]# mdadm -D /dev/md5
/dev/md5:
        Version : 1.2
  Creation Time : Thu Aug  1 19:49:56 2013
     Raid Level : raid5
     Array Size : 4190208 (4.00 GiB 4.29 GB)
  Used Dev Size : 2095104 (2046.34 MiB 2145.39 MB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Thu Aug  1 20:28:29 2013
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : serv01.host.com:5  (local to host serv01.host.com)
           UUID : a738b211:987ef2b2:e6ce9eb3:58724db1
         Events : 45

    Number   Major   Minor   RaidDevice State
       4       8       65        0      active sync   /dev/sde1
       1       8       33        1      active sync   /dev/sdc1
       3       8       49        2      active sync   /dev/sdd1

# Log on again after a reboot
[root@serv01 web]# reboot
[root@larrywen disk]# ssh 192.168.1.11
root@192.168.1.11's password:
Last login: Thu Aug  1 20:26:07 2013 from 192.168.1.1
[root@serv01 ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md5 : active raid5 sdc1[1] sde1[4] sdd1[3]
      4190208 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>

3. References

http://zh.wikipedia.org/wiki/RAID

4. Related Articles

  • Disk management-RAID 0
  • Disk management-RAID 1
  • Disk management-RAID 10
