Linux Performance Tuning 5: soft RAID plays an important role in storage tuning.

RAID (Redundant Array of Independent Disks) comes in two kinds: hardware RAID built on a RAID card (implemented in hardware, fast, suited to large-scale deployments) and software RAID provided by the kernel (generally slower than hardware RAID, but tunable and a good fit for small servers). What RAID buys you: data integrity, fault tolerance, capacity beyond a single disk, and better performance. A quick tour of the levels:

RAID 0: no parity; data is written to the member disks in chunks (striping), so throughput goes up; no fault tolerance; 100% of the capacity is usable; at least two disks.
test:
mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=64 /dev/sd{a,b}1
mke2fs -j -b 4096 -E stride=16 /dev/md0
notice: --chunk is given at creation time and stride at format time; stride = chunk size / filesystem block size (here 64 KB / 4 KB = 16).

RAID 1: mirroring; fault tolerant; better read performance; at least 2 disks, in multiples of 2; utilization is (100/n)%.
test: mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda{4,5}

RAID 5: striping with distributed parity; fault tolerant; good performance; if a disk fails the array runs degraded until a replacement is inserted and rebuilt (the parity has to be recalculated); at least three disks; hot spares are supported; utilization is 100 * (1 - 1/n)%.
test: mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sd{a,b,c}2
When data is updated frequently, RAID 5 pays a large parity overhead.

RAID 6: two copies of parity; two disks can fail at the same time; at least four disks; utilization is 100 * (1 - 2/n)%.
test: mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sd{a,b,c,d}1

RAID 10: RAID 1 first, then RAID 0 on top; at least four disks.
test:
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd{a,b,c,d}1
mdadm -C /dev/md0 -l10 -n4 /dev/sdb{5,6,7,8}

Email notification: in /etc/mdadm.conf set
MAILADDR root@example.com
MAILFROM root@node1.example.com
PROGRAM /usr/bin/myscripts
test: mdadm -Fs1t (that is, --monitor --scan --oneshot --test, which sends one test alert per array).

Summary: from their characteristics, RAID 0 gives large capacity and high read/write performance but no safety. RAID 1 is safe but has low utilization. RAID 5 uses distributed parity and offers good safety and good read/write performance, but it is not recommended when data is modified frequently. RAID 6 is similar to RAID 5 but survives two failed disks. RAID 10 combines RAID 1 and RAID 0; compared with RAID 5 its overhead for modifying data is low. If you use RAID, picking the right level matters!
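Whatever level you pick, it is worth sanity-checking the array and the alert path before putting data on it. A minimal sketch, assuming the array is /dev/md0 and /etc/mdadm.conf is already populated as above:

cat /proc/mdstat                           # all md arrays and their sync progress
mdadm --detail /dev/md0                    # level, chunk size, state of each member
mdadm --monitor --scan --oneshot --test    # long form of -Fs1t: one test alert per array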
Optimization suggestions:
1. If you run two or more arrays, configure a hot spare for the soft RAID; a shared spare saves the cost of extra redundant disks, and the email notification mechanism above protects the data (see the sketch after this list).
2. For RAID 0, 5 and 6 there are two important concepts: chunk size and stride. The chunk size is the amount of data written to one member disk before the array moves on to the next disk; it should be a multiple of the page size (4 KB), i.e. chunk size = 4 KB * N. A practical estimate is chunk size = avgrq-sz * 512 / 1024 / (number of data disks), where avgrq-sz is taken from iostat -x /dev/sda and is reported in 512-byte sectors. Stride = chunk size / filesystem block size; the chunk is set with --chunk at creation time and the stride with -E stride at format time. For example, an array created with --chunk=8 on a filesystem with 4 KB blocks is formatted with -E stride=2 (a worked example follows this list).
3. Write-intent bitmap: for RAID 1, when a member fails or a resync is interrupted, the array can catch up from where it stopped instead of starting from scratch (think of resuming the copy of a large file). Related options: --bitmap=internal, --write-behind, --write-mostly (see the sketch after this list).
4. Further parameters can be tuned under /sys/block/mdX/md (see the example after this list).
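A sketch for suggestion 1; the device names, UUID placeholders and addresses below are examples, not taken from the article:

mdadm /dev/md0 --add /dev/sde1             # attach a hot spare to md0
mdadm --detail --scan >> /etc/mdadm.conf   # record ARRAY lines with their UUIDs
# putting both arrays in the same spare-group in /etc/mdadm.conf lets the
# monitor move the spare to whichever array loses a disk:
#   ARRAY /dev/md0 UUID=<uuid-of-md0> spare-group=shared
#   ARRAY /dev/md1 UUID=<uuid-of-md1> spare-group=shared
#   MAILADDR root@example.com
mdadm --monitor --scan --delay=60 --daemonise   # watch the arrays, send mail, migrate spares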
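A worked example of suggestion 2, assuming a two-disk RAID 0 whose iostat output shows avgrq-sz of about 128 (disk names are illustrative):

iostat -x /dev/sda                      # avgrq-sz is reported in 512-byte sectors
# chunk size = 128 * 512 / 1024 / 2 disks = 32 KB
mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=32 /dev/sd{a,b}1
# stride = chunk size / block size = 32 KB / 4 KB = 8
mke2fs -j -b 4096 -E stride=8 /dev/md0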
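For suggestion 3, a minimal sketch with example devices: a bitmap can be added to an existing mirror, and --write-mostly/--write-behind are set per device at creation time:

mdadm --grow /dev/md0 --bitmap=internal   # add a write-intent bitmap to an existing array
# mirror where the second (e.g. slower) disk is written behind the first:
mdadm --create /dev/md1 --level=1 --raid-devices=2 --bitmap=internal \
      --write-behind=256 /dev/sdc1 --write-mostly /dev/sdd1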
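For suggestion 4, a few of the knobs under /sys/block/mdX/md; the values are only illustrative starting points:

echo 4096   > /sys/block/md0/md/stripe_cache_size   # RAID 5/6 stripe cache, in pages (uses RAM)
echo 10000  > /sys/block/md0/md/sync_speed_min      # keep resync/rebuild above 10 MB/s
echo 200000 > /sys/block/md0/md/sync_speed_max      # but cap it so normal I/O is not starved
cat /proc/mdstat                                    # watch the resync progress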