RAID disk array
RAID is an abbreviation of "Redundant Array of Independent Disks". Simply put, RAID combines multiple separate hard disks (physical disks) in different ways to form a hard disk group (a logical disk), providing higher storage performance than a single drive along with data redundancy. The different ways of composing a disk array are called RAID levels. To the user, the group of disks looks like a single hard disk that can be partitioned, formatted, and so on; operating a disk array is identical to operating a single drive. The difference is that a disk array can be much faster than a single drive and can provide automatic data backup: when user data is damaged, it can be recovered from the redundant information, guaranteeing the security of the user's data.
Specification
RAID technology mainly comprises a number of levels, RAID 0 through RAID 7, each with a different emphasis. The common ones are as follows:
RAID 0: Data is split sequentially in bits or bytes and read/written in parallel across multiple disks, so it offers a high data transfer rate, but it has no data redundancy and is not considered a true RAID structure. RAID 0 simply improves performance and does not guarantee data reliability: the failure of any one disk affects all of the data. RAID 0 therefore cannot be used where data security requirements are high.
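To make the striping idea concrete, here is a toy model in bash: a 16-byte "file" split into 4-byte chunks and distributed round-robin across two "disks". The chunk size, disk count, and data are illustrative, not from the article.

```shell
# Toy RAID 0 striping: chunks alternate between two "disks"
# (bash substring expansion: ${var:offset:length}).
data=ABCDEFGHIJKLMNOP
disk0="${data:0:4}${data:8:4}"     # chunks 0 and 2 land on disk 0
disk1="${data:4:4}${data:12:4}"    # chunks 1 and 3 land on disk 1
echo "disk0=$disk0 disk1=$disk1"

# Reading interleaves the chunks back into the original order:
read_back="${disk0:0:4}${disk1:0:4}${disk0:4:4}${disk1:4:4}"
[ "$read_back" = "$data" ] && echo "round-trip ok"

# Lose either "disk" and every other chunk is gone -- RAID 0 has no
# redundancy, so one failed disk destroys all of the data.
```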
RAID 1: It achieves data redundancy through disk mirroring, producing mutually backed-up data on paired independent disks. When the original data is busy, data can be read directly from the mirrored copy, so RAID 1 can improve read performance. RAID 1 has the highest unit cost of any level in a disk array, but it provides high data security and availability. When a disk fails, the system automatically switches to reading and writing the mirrored disk without having to reorganize the failed data.
RAID 0+1: Also known as the RAID 10 standard, it is the product of combining RAID 0 and RAID 1: each disk is mirrored for redundancy while data is continuously split in bits or bytes and read/written across multiple disks in parallel. It offers both the extraordinary speed of RAID 0 and the high data reliability of RAID 1, but CPU usage is correspondingly higher and disk utilization is low.
RAID 2: Data is striped across different hard disks, in bits or bytes, and an error-correcting code (a Hamming code) is used to provide error checking and recovery. This coding technique requires multiple disks to store the check and recovery information, making RAID 2 complex to implement, so it is rarely used in commercial environments.
RAID 3: Very similar to RAID 2, with data striped across different hard disks; the difference is that RAID 3 uses simple parity and stores the parity information on a single dedicated disk. If a data disk fails, the parity disk and the remaining data disks can regenerate the data; if the parity disk fails, data access is unaffected. RAID 3 provides a good transfer rate for large amounts of sequential data, but for random data the parity disk becomes a bottleneck for write operations.
RAID 4: Data is likewise striped and distributed across different disks, but the stripes are in units of blocks or records. RAID 4 uses a single disk as the parity disk, and every write operation must access it, so the parity disk becomes the write bottleneck; RAID 4 is therefore rarely used in commercial environments.
RAID 5: RAID 5 does not use a dedicated parity disk; instead, data and parity information are distributed across all of the disks. On RAID 5, reads and writes can operate on multiple devices at the same time, providing higher throughput. RAID 5 is better suited to small data blocks and random reads and writes.
The main difference between RAID 3 and RAID 5 is that RAID 3 involves all of the array's disks in every data transfer, whereas in RAID 5 most transfers operate on only one data disk, so transfers can proceed in parallel. RAID 5 does suffer a "write loss" (write penalty): each write operation produces four actual I/O operations, two reads of the old data and old parity, and two writes of the new data and new parity.
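The XOR parity behind RAID 5, and the parity update behind the "write loss", can be sketched with one byte per disk (the byte values here are illustrative):

```shell
# Toy RAID 5 parity: parity = d0 ^ d1 ^ d2.
d0=$(( 0x01 )); d1=$(( 0x10 )); d2=$(( 0xA0 ))
parity=$(( d0 ^ d1 ^ d2 ))

# Recovery: if the disk holding d1 fails, XOR of the survivors and the
# parity rebuilds it.
rebuilt=$(( d0 ^ d2 ^ parity ))
[ "$rebuilt" -eq "$d1" ] && echo "rebuild ok"

# Write loss: updating d0 costs two reads (old d0, old parity) and two
# writes (new d0, new parity), because new_parity = parity ^ old ^ new.
new_d0=$(( 0x55 ))
new_parity=$(( parity ^ d0 ^ new_d0 ))
[ "$new_parity" -eq $(( new_d0 ^ d1 ^ d2 )) ] && echo "parity update ok"
```

The update rule avoids reading the other data disks: XORing the old data out of the parity and the new data in gives the same result as recomputing parity from scratch.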
RAID 6: Compared with RAID 5, RAID 6 adds a second independent block of parity information. The two independent parity schemes use different algorithms, making the data very reliable: even if two disks fail at the same time, the data remains usable. However, RAID 6 requires more disk space for parity information and suffers a greater write loss than RAID 5, so its write performance is poor. The poor performance and complex implementation mean RAID 6 is rarely used in practice.
RAID 7: This is a newer RAID standard with its own intelligent real-time operating system and software tools for storage management. It can run completely independently of the host and does not consume host CPU resources, so RAID 7 can be regarded as a storage computer, which sets it apart significantly from the other RAID standards. Beyond the standards above, the various RAID levels can be combined, as RAID 0+1 was, to build the required array; RAID 5+3 (RAID 53), for example, is a fairly widely used form. Users can usually configure disk arrays flexibly to obtain a disk storage system that meets their requirements.
Initially, RAID schemes were aimed mainly at SCSI hard disk systems, which were expensive. In 1993, HighPoint introduced the first IDE-RAID controller chip, making it possible to build a RAID system from relatively inexpensive IDE hard drives and greatly lowering the "threshold" of RAID.
Since then, individual users have also begun to pay attention to this technology, since hard drives are the slowest and least reliable devices in a modern personal computer, and the data stored on them is often worth far more than the computer itself. At relatively little cost, RAID lets individual users enjoy the combined speed of multiple disks and higher data security. The IDE-RAID controller chips in the PC market now come mostly from HighPoint and Promise, with some from AMI.
IDE-RAID chips aimed at individual users generally support RAID levels such as RAID 0, RAID 1, and RAID 0+1 (RAID 10). Although they are technically not comparable to commercial systems, they are sufficient to give ordinary users a speed improvement and security assurance.
As hard disk interface transfer rates have continued to improve, IDE-RAID chips have been updated constantly. The mainstream chips on the market all support the ATA/100 standard, and HighPoint's new HPT372 chip and Promise's latest PDC20276 chip even support ATA/133 IDE hard drives. With competition among motherboard manufacturers intensifying and personal computer users' requirements gradually rising, quite a few manufacturers now put a RAID chip on the motherboard itself, so users can set up their own disk array and experience the speed of RAID without buying a separate RAID card.
Example
Let's walk through creating a RAID 5 array as an example of how to implement RAID in software on a Linux system.
The first step is to create disk partitions (you can, of course, use whole disks directly).
When partitioning, remember to change the partition's system ID to fd (Linux raid autodetect).
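A sketch of this step with fdisk; the device name /dev/sdb is illustrative, so substitute your own disk:

```shell
# Partition the disk and mark the partition for RAID use.
fdisk /dev/sdb
#   inside fdisk:
#   n  -> create a new partition
#   t  -> change the partition type; enter "fd" (Linux raid autodetect)
#   w  -> write the table and exit

fdisk -l /dev/sdb    # verify the partition and its type
```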
Quickly create the three additional partitions with the dd command.
Re-read the partition tables and verify.
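The two steps above might look like the following. This assumes MBR-partitioned disks (the 64-byte partition table starts at offset 446) and illustrative device names:

```shell
# Copy the MBR partition table from /dev/sdb to the other member disks,
# giving each one an identical "fd"-typed partition.
for d in sdc sdd sde; do
    dd if=/dev/sdb of=/dev/$d bs=1 count=64 skip=446 seek=446
done

partprobe                 # ask the kernel to re-read the partition tables
cat /proc/partitions      # verify that all partitions are visible
```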
Create RAID5
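A minimal mdadm invocation for this step, assuming three active members plus one hot spare on the partitions created above (names illustrative):

```shell
# Create the RAID 5 array /dev/md0 from three partitions with one spare.
mdadm --create /dev/md0 --level=5 --raid-devices=3 --spare-devices=1 \
      /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

cat /proc/mdstat          # watch the initial build/resync progress
```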
View the creation process.
Created successfully
Use blkid to view the partition information.
Build configuration file
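This step typically captures the array definition so it is assembled consistently on boot:

```shell
# Record the running array's definition in mdadm's configuration file.
mdadm --detail --scan >> /etc/mdadm.conf
```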
Formatting
Modify /etc/fstab so the array is mounted automatically at boot.
Mount and view
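The formatting, fstab, and mount steps above can be sketched as follows; the filesystem type, mount point, and UUID placeholder are illustrative:

```shell
# Format the array and mount it persistently.
mkfs.ext4 /dev/md0
blkid /dev/md0                       # note the UUID it reports

mkdir -p /mnt/raid5
# Substitute the UUID that blkid printed for the placeholder:
echo 'UUID=<uuid-from-blkid>  /mnt/raid5  ext4  defaults  0 0' >> /etc/fstab

mount -a                             # mount everything listed in fstab
df -h /mnt/raid5                     # confirm the new filesystem is up
```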
Check the current array status at any time.
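Two standard ways to inspect a running md array:

```shell
cat /proc/mdstat            # kernel's summary of all md arrays
mdadm --detail /dev/md0     # per-array state, member disks, and spares
```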
Simulated damage
The spare partition takes over as the replacement.
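The failure simulation and spare takeover can be sketched like this (member name illustrative); with a hot spare configured, md begins rebuilding onto it automatically:

```shell
# Mark one member as failed to simulate a broken disk.
mdadm /dev/md0 --fail /dev/sdb1

# The spare is pulled in and the array rebuilds onto it;
# follow the recovery percentage as it progresses:
watch -n 1 cat /proc/mdstat
```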
View the replacement process
Replacement completed
Remove a damaged disk
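Removing the failed member is one mdadm call (member name illustrative); a fresh disk could later be added back with `--add`:

```shell
# Drop the failed member from the array and confirm the result.
mdadm /dev/md0 --remove /dev/sdb1
mdadm --detail /dev/md0
```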
This article is from the "11798474" blog; please keep this source: http://11808474.blog.51cto.com/11798474/1844611