1. What is RAID 1?
RAID 1, also known as mirroring, writes the same data to multiple disks: a block of data A is stored as A on one disk and A on another. Data safety is therefore high, but disk utilization is poor.
Two or more disks act as mirrors of one another. On some multi-threaded operating systems read performance improves, while write speed drops slightly. Unless the primary disk and the mirror holding the same data fail at the same time, the array keeps running as long as one disk is healthy, giving the highest reliability of any RAID level. RAID 1 works by storing data on a primary hard disk and writing an identical copy to a mirror hard disk; when the primary disk is (physically) damaged, the mirror disk takes its place. Because the mirror disk backs up every block, RAID 1 offers the best data safety of all RAID levels. However, no matter how many disks make up a RAID 1 set, only the capacity of a single disk is usable, which is the lowest disk utilization of any RAID level.
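The capacity trade-off described above is easy to put in numbers. A minimal sketch (the 2 GB disk size matches the demo disks used below; RAID 0 appears only for contrast):

```shell
# RAID1 usable capacity equals one member disk, regardless of member count;
# RAID0 over the same disks would give n times that. Sizes are in GB.
disk_gb=2
n=2
raid1_usable=$disk_gb            # mirror: every disk holds the same copy
raid0_usable=$(( disk_gb * n ))  # stripe: member capacities add up
utilization=$(( 100 * raid1_usable / (disk_gb * n) ))
echo "RAID1 usable: ${raid1_usable} GB (${utilization}% of raw capacity)"
```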
2. RAID 1 demonstration
Step 1: partition the disk
# Partition /dev/sdb
[root@serv01 ~]# fdisk /dev/sdb
# Partition /dev/sdc
[root@serv01 ~]# fdisk /dev/sdc
[root@serv01 ~]# ls /dev/sdb*
/dev/sdb  /dev/sdb1
[root@serv01 ~]# ls /dev/sdc*
/dev/sdc  /dev/sdc1
Step 2: Create raid1
[root@serv01 ~]# mdadm -C /dev/md1 -l 1 -n 2 /dev/sdb1 /dev/sdc1
mdadm: /dev/sdb1 appears to contain an ext2fs file system
    size=208812K  mtime=Wed Jul 31 22:17:43 2013
mdadm: /dev/sdb1 appears to be part of a raid array:
    level=raid0 devices=0 ctime=Thu Jan  1 07:00:00 1970
mdadm: partition table exists on /dev/sdb1 but will be lost or
       meaningless after creating array
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
# Format the new array
[root@serv01 ~]# mkfs.ext4 /dev/md1
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
131072 inodes, 523853 blocks
26192 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=536870912
16 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 36 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
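The short options used above have long equivalents that read more clearly in scripts: `-C` is `--create`, `-l` is `--level`, `-n` is `--raid-devices`, and `--run` skips the "Continue creating array?" prompt. A dry-run sketch that only prints the commands, since creating a real array needs root and spare disks:

```shell
# Print each command instead of executing it.
run() { echo "+ $*"; }

run mdadm --create /dev/md1 --level=1 --raid-devices=2 --run /dev/sdb1 /dev/sdc1
run mkfs.ext4 /dev/md1
```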
Step 3: mount the array
[root@serv01 ~]# mount /dev/md1 /web
[root@serv01 ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Tue Jul 23 00:54:37 2013
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=110fab7c-85c4-4bae-9114-98bc2ada24d8 /      ext4    defaults        1 1
UUID=ab434325-bf02-48e9-8ce7-78494a8ac71e /boot  ext4    defaults        1 2
UUID=02ed2b3b-b7e1-493d-9a43-8e1dcac8aa6f /opt   ext4    defaults        1 2
UUID=a088a35a-16d8-456a-a177-95c769c16e41 swap   swap    defaults        0 0
tmpfs                   /dev/shm        tmpfs   defaults        0 0
devpts                  /dev/pts        devpts  gid=5,mode=620  0 0
sysfs                   /sys            sysfs   defaults        0 0
proc                    /proc           proc    defaults        0 0
Step 4: modify the configuration files
# Append an entry to /etc/fstab (append with >>, not >, or the file is overwritten)
[root@serv01 ~]# echo "/dev/md1 /web ext4 defaults 1 2" >> /etc/fstab
# View the file content again
[root@serv01 ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Tue Jul 23 00:54:37 2013
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=110fab7c-85c4-4bae-9114-98bc2ada24d8 /      ext4    defaults        1 1
UUID=ab434325-bf02-48e9-8ce7-78494a8ac71e /boot  ext4    defaults        1 2
UUID=02ed2b3b-b7e1-493d-9a43-8e1dcac8aa6f /opt   ext4    defaults        1 2
UUID=a088a35a-16d8-456a-a177-95c769c16e41 swap   swap    defaults        0 0
tmpfs                   /dev/shm        tmpfs   defaults        0 0
devpts                  /dev/pts        devpts  gid=5,mode=620  0 0
sysfs                   /sys            sysfs   defaults        0 0
proc                    /proc           proc    defaults        0 0
/dev/md1                /web            ext4    defaults        1 2
# Create the mdadm.conf file
[root@serv01 ~]# mdadm --detail --scan > /etc/mdadm.conf
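The line written to /etc/mdadm.conf identifies the array by UUID. A sketch of extracting that UUID from an ARRAY line in the format `mdadm --detail --scan` emits, run here against a captured sample from this demo so it works without a live array:

```shell
# Sample ARRAY line in the format produced by `mdadm --detail --scan`.
scan='ARRAY /dev/md1 metadata=1.2 name=serv01.host.com:1 UUID=a8930aef:a5ddcdde:789a11bf:40f7eed6'

# Strip everything up to and including "UUID=" to isolate the identifier.
uuid=${scan##*UUID=}
echo "$uuid"
```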
Step 5: simulate a hard disk failure
# Restart
[root@serv01 ~]# reboot
# Copy some content onto the array
[root@serv01 ~]# cp /boot/* /web/ -rvf
# Wipe /dev/sdb: delete its partition in fdisk, press o, then save
[root@serv01 ~]# fdisk /dev/sdb
# Copy more content
[root@serv01 ~]# cp /etc/* /web/ -rvf
[root@larrywen desktop]# ssh 192.168.1.11
root@192.168.1.11's password:
Last login: Thu Aug  1 17:55:43 2013 from 192.168.1.1
[root@serv01 ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdc1[1] sdb1[0]
      2095415 blocks super 1.2 [2/2] [UU]
unused devices: <none>
# Restart, then log in again
[root@larrywen desktop]# ssh 192.168.1.11
root@192.168.1.11's password:
Last login: Thu Aug  1 18:05:58 2013 from 192.168.1.1
# View the status: the array is now degraded
[root@serv01 ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdc1[1]
      2095415 blocks super 1.2 [2/1] [_U]
unused devices: <none>
[root@serv01 ~]# mdadm --detail /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Thu Aug  1 17:58:09 2013
     Raid Level : raid1
     Array Size : 2095415 (2046.65 MiB 2145.70 MB)
  Used Dev Size : 2095415 (2046.65 MiB 2145.70 MB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Thu Aug  1 ... 2013
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : serv01.host.com:1  (local to host serv01.host.com)
           UUID : a8930aef:a5ddcdde:789a11bf:40f7eed6
         Events : 23

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       33        1      active sync   /dev/sdc1
# Copy content into the mounted array
[root@serv01 web]# cp /boot/* . -rvf
[root@serv01 web]# mount
/dev/sda2 on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
/dev/sda1 on /boot type ext4 (rw)
/dev/sda5 on /opt type ext4 (rw)
/dev/md1 on /web type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
/dev/sr0 on /iso type iso9660 (ro)
[root@serv01 web]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       9.7G  1.1G  8.1G  12% /
tmpfs           188M     0  188M   0% /dev/shm
/dev/sda1       194M   25M  160M  14% /boot
/dev/sda5       4.0G  137M  3.7G   4% /opt
/dev/md1        2.0G   54M  1.9G   3% /web
/dev/sr0        3.4G  3.4G     0 100% /iso
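The `[2/1] [_U]` marker in /proc/mdstat is what signals the degraded state above. A small check that could be dropped into a monitoring script, run here against a captured sample instead of the live file:

```shell
# Sample /proc/mdstat content for a mirror running on one of two members.
mdstat='Personalities : [raid1]
md1 : active raid1 sdc1[1]
      2095415 blocks super 1.2 [2/1] [_U]
unused devices: <none>'

# An underscore inside the status brackets means a missing member.
if printf '%s\n' "$mdstat" | grep -q '\[[U_]*_[U_]*\]'; then
    state=degraded
else
    state=healthy
fi
echo "md1 is $state"
```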
Step 6: add a disk
# Add another disk
[root@serv01 web]# fdisk /dev/sdd
[root@serv01 web]# ls /dev/sdd*
/dev/sdd  /dev/sdd1
[root@serv01 web]# mdadm --manage /dev/md1 --add /dev/sdd1
mdadm: added /dev/sdd1
[root@serv01 web]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdd1[2] sdc1[1]
      2095415 blocks super 1.2 [2/1] [_U]
      [==========>..........]  recovery = 47.7% (1000064/2095415) finish=0.0min speed=250016K/sec
unused devices: <none>
[root@serv01 web]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdd1[2] sdc1[1]
      2095415 blocks super 1.2 [2/2] [UU]
unused devices: <none>
[root@serv01 web]# reboot
[root@larrywen desktop]# ssh 192.168.1.11
root@192.168.1.11's password:
Last login: Thu Aug  1 18:08:05 2013 from 192.168.1.1
[root@serv01 ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdd1[2] sdc1[1]
      2095415 blocks super 1.2 [2/2] [UU]
unused devices: <none>
[root@serv01 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       9.7G  1.1G  8.1G  12% /
tmpfs           188M     0  188M   0% /dev/shm
/dev/sda1       194M   25M  160M  14% /boot
/dev/sda5       4.0G  137M  3.7G   4% /opt
/dev/md1        2.0G   54M  1.9G   3% /web
/dev/sr0        3.4G  3.4G     0 100% /iso
[root@serv01 ~]# mdadm --detail /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Thu Aug  1 17:58:09 2013
     Raid Level : raid1
     Array Size : 2095415 (2046.65 MiB 2145.70 MB)
  Used Dev Size : 2095415 (2046.65 MiB 2145.70 MB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Thu Aug  1 18:18:48 2013
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : serv01.host.com:1  (local to host serv01.host.com)
           UUID : a8930aef:a5ddcdde:789a11bf:40f7eed6
         Events : 60

    Number   Major   Minor   RaidDevice State
       2       8       49        0      active sync   /dev/sdd1
       1       8       33        1      active sync   /dev/sdc1
# Replace a disk without restarting
# Destroy /dev/sdc: delete its partition in fdisk, press o, then save
[root@serv01 ~]# fdisk /dev/sdc
# Check the details: the array has not yet noticed the damage
[root@serv01 ~]# mdadm -D /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Thu Aug  1 17:58:09 2013
     Raid Level : raid1
     Array Size : 2095415 (2046.65 MiB 2145.70 MB)
  Used Dev Size : 2095415 (2046.65 MiB 2145.70 MB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Thu Aug  1 18:24:24 2013
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : serv01.host.com:1  (local to host serv01.host.com)
           UUID : a8930aef:a5ddcdde:789a11bf:40f7eed6
         Events : 60

    Number   Major   Minor   RaidDevice State
       2       8       49        0      active sync   /dev/sdd1
       1       8       33        1      active sync   /dev/sdc1
# Copy a file: the damage still does not show
[root@serv01 ~]# cp /etc/inittab /web
# View the details again
[root@serv01 ~]# mdadm -D /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Thu Aug  1 17:58:09 2013
     Raid Level : raid1
     Array Size : 2095415 (2046.65 MiB 2145.70 MB)
  Used Dev Size : 2095415 (2046.65 MiB 2145.70 MB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Thu Aug  1 18:27:31 2013
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : serv01.host.com:1  (local to host serv01.host.com)
           UUID : a8930aef:a5ddcdde:789a11bf:40f7eed6
         Events : 60

    Number   Major   Minor   RaidDevice State
       2       8       49        0      active sync   /dev/sdd1
       1       8       33        1      active sync   /dev/sdc1
# Mark /dev/sdc1 as failed
[root@serv01 ~]# mdadm --manage /dev/md1 --fail /dev/sdc1
mdadm: set /dev/sdc1 faulty in /dev/md1
# View the status
[root@serv01 ~]# mdadm -D /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Thu Aug  1 17:58:09 2013
     Raid Level : raid1
     Array Size : 2095415 (2046.65 MiB 2145.70 MB)
  Used Dev Size : 2095415 (2046.65 MiB 2145.70 MB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Thu Aug  1 18:28:31 2013
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 1
  Spare Devices : 0

           Name : serv01.host.com:1  (local to host serv01.host.com)
           UUID : a8930aef:a5ddcdde:789a11bf:40f7eed6
         Events : 61

    Number   Major   Minor   RaidDevice State
       2       8       49        0      active sync   /dev/sdd1
       1       0        0        1      removed

       1       8       33        -      faulty spare   /dev/sdc1
# Partition the replacement disk
[root@serv01 ~]# fdisk /dev/sde
# Add the /dev/sde1 partition to the array
[root@serv01 ~]# mdadm --manage /dev/md1 --add /dev/sde1
mdadm: added /dev/sde1
# View the details
[root@serv01 ~]# mdadm -D /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Thu Aug  1 17:58:09 2013
     Raid Level : raid1
     Array Size : 2095415 (2046.65 MiB 2145.70 MB)
  Used Dev Size : 2095415 (2046.65 MiB 2145.70 MB)
   Raid Devices : 2
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Thu Aug  1 18:29:56 2013
          State : clean, degraded, recovering
 Active Devices : 1
Working Devices : 2
 Failed Devices : 1
  Spare Devices : 1

 Rebuild Status : 85% complete

           Name : serv01.host.com:1  (local to host serv01.host.com)
           UUID : a8930aef:a5ddcdde:789a11bf:40f7eed6
         Events : 80

    Number   Major   Minor   RaidDevice State
       2       8       49        0      active sync   /dev/sdd1
       3       8       65        1      spare rebuilding   /dev/sde1

       1       8       33        -      faulty spare   /dev/sdc1
# Remove the failed disk
[root@serv01 ~]# mdadm --manage /dev/md1 --remove /dev/sdc1
mdadm: hot removed /dev/sdc1 from /dev/md1
[root@serv01 ~]# mdadm -D /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Thu Aug  1 17:58:09 2013
     Raid Level : raid1
     Array Size : 2095415 (2046.65 MiB 2145.70 MB)
  Used Dev Size : 2095415 (2046.65 MiB 2145.70 MB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Thu Aug  1 18:30:09 2013
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : serv01.host.com:1  (local to host serv01.host.com)
           UUID : a8930aef:a5ddcdde:789a11bf:40f7eed6
         Events : 87

    Number   Major   Minor   RaidDevice State
       2       8       49        0      active sync   /dev/sdd1
       3       8       65        1      active sync   /dev/sde1
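The fail/remove/add cycle above is the standard sequence for swapping out a mirror member, so it is worth capturing as one unit. A sketch that prints the commands rather than running them (the helper name `replace_member` is mine, not an mdadm feature; the real commands need root and live devices):

```shell
# Print the three mdadm steps for replacing a failed mirror member.
replace_member() {
    md=$1 bad=$2 new=$3
    echo "mdadm --manage $md --fail $bad"
    echo "mdadm --manage $md --remove $bad"
    echo "mdadm --manage $md --add $new"
}

replace_member /dev/md1 /dev/sdc1 /dev/sde1
```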
Step 7: complete the experiment and clear the disks
[root@serv01 ~]# umount /dev/md1
# Stop the array
[root@serv01 ~]# mdadm --manage /dev/md1 --stop
mdadm: stopped /dev/md1
# Delete the configuration file
[root@serv01 ~]# rm -rf /etc/mdadm.conf
[root@serv01 ~]# fdisk -l | grep -e sdb -e sdc -e sdd -e sde
Disk /dev/sdf doesn't contain a valid partition table
Disk /dev/sdg doesn't contain a valid partition table
Disk /dev/sdb: 2147 MB, 2147483648 bytes
Disk /dev/sdc: 2147 MB, 2147483648 bytes
Disk /dev/sdd: 2147 MB, 2147483648 bytes
Disk /dev/sde: 2147 MB, 2147483648 bytes
# Clear one disk's RAID superblock
[root@serv01 ~]# mdadm --misc --zero-superblock /dev/sdb1
[root@serv01 ~]# fdisk /dev/sdb
# Clear another disk
[root@serv01 ~]# mdadm --misc --zero-superblock /dev/sdc1
[root@serv01 ~]# fdisk /dev/sdc
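Zeroing superblocks one disk at a time is easy to script as a loop. A dry-run sketch over this demo's former members (printed only, since the real command needs root):

```shell
# Print the zero-superblock command for every former RAID member partition.
for dev in /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1; do
    echo "mdadm --misc --zero-superblock $dev"
done
```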
3. References
http://zh.wikipedia.org/wiki/RAID
4. Related Articles
My mailbox: wgbno27@163.com | Sina Weibo: @wentasy27 | Public platform: justoracle (No.: justoracle) | Database technology exchange group: 336882565 (when joining, note "from CSDN XXX") | Oracle discussion group: https://groups.google.com/d/forum/justoracle
By Larry Wen