RAID learning and basic knowledge v0.1b


Author: Small P
From: LinuxSir.org
Abstract: Data damage and loss caused by various disasters and errors bring people losses and inconvenience. Data on personal machines and website servers must be backed up, and RAID is a feasible and effective way to do so. RAID is divided into soft RAID and hard RAID ......

Directory
    1. What is RAID;
    2. Introduction to RAID levels;

      2.1 features and applications of RAID 0;
      2.2 features and applications of RAID 1;
      2.3 features and applications of RAID 3;
      2.4 features and applications of RAID 4;
      2.5 features and applications of RAID 5;
      2.6 features and applications of RAID 0+1 / RAID 10;

    3. Select a RAID level;

      3.1 RAID "striped" access modes;
      3.2 parallel access mode;

          3.2.1 basic working principle of parallel access;
          3.2.2 optimal applications for parallel-access RAID;

      3.3 independent access mode;

          3.3.1 optimal applications for independent-access RAID;

    4. Create and maintain RAID;

      4.1 mdadm;

          4.1.1 create a partition;
          4.1.2 create RAID 5;
          4.1.3 create a RAID configuration file;
          4.1.4 create a file system;

      4.2 maintain soft RAID;

          4.2.1 simulate faulty disks;
          4.2.2 remove a faulty disk;
          4.2.3 add a new hard disk;

    5. About this article;
    6. Update log;
    7. References;
    8. Related documents;


++ ++
Body
++ ++


1. What is RAID;

RAID (Redundant Array of Inexpensive Disks) is, as the name says, a redundant array of cheap disks. The basic idea of RAID is to combine multiple small, inexpensive disks into one disk group, so that performance reaches or exceeds that of a single large, expensive disk.
Currently RAID comes in two forms: hardware-based RAID and software-based RAID. In Linux, RAID can be implemented with software built into the kernel, which greatly enhances disk IO performance and reliability without buying expensive hardware RAID controllers and accessories. Because the RAID function is implemented in software, configuration is flexible and management is convenient. With software RAID you can also combine several physical disks into one larger virtual device, achieving both a performance improvement and data redundancy. Of course, hardware-based RAID solutions still beat software RAID in performance and serviceability; this shows in their ability to detect and repair multi-bit errors, to detect failed disks automatically, and to rebuild arrays.
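
As a quick sanity check (a minimal sketch; the exact output varies by kernel, and the prompt follows the examples later in this article), you can confirm that the kernel's software-RAID (md) driver is available before going further:

root@xiaop-laptop:/# cat /proc/mdstat
Personalities :
unused devices: <none>

If the file exists, the md driver is present, and the arrays you create later will show up here.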


2. Introduction to RAID levels;

The commonly used RAID levels are RAID 0, RAID 1, RAID 3, RAID 4, and RAID 5, plus the combined RAID 0+1 (also called RAID 10). Let us first compare the advantages and disadvantages of these RAID levels:

RAID level           Advantages                            Disadvantages
RAID 0               fastest access speed                  no fault tolerance
RAID 1               complete fault tolerance              high cost
RAID 3               good write performance                poor multi-tasking
RAID 4               multi-tasking and fault tolerance     parity disk is a performance bottleneck
RAID 5               multi-tasking and fault tolerance     overhead when writing data
RAID 0+1 / RAID 10   fast and fully fault tolerant         high cost


2.1 features and applications of RAID 0;

RAID 0 is also known as stripe mode (striped): consecutive data is distributed across multiple disks. When the system receives a data request, it can be served by multiple disks in parallel, each disk executing its own portion of the request. Such parallel operation on the data makes full use of the bus bandwidth and significantly improves overall disk access performance. Because reads and writes are done in parallel on the devices, read and write performance both increase, which is usually the main reason for running RAID 0. However, RAID 0 has no data redundancy: if a drive fails, no data can be recovered.


2.2 features and applications of RAID 1;

RAID 1, also known as mirroring, is a fully redundant mode. RAID 1 can be used on two or 2xN disks, with zero or more spare disks. Each time data is written, it is simultaneously written to the mirror disk. This array is highly reliable, but its effective capacity is reduced to half of the total capacity. The disks should also be of equal size; otherwise the total capacity is limited by the smallest disk.


2.3 features and applications of RAID 3;

RAID 3 performs XOR operations on the data to produce parity, then writes the data and the parity to the member disk drives in parallel access mode, so RAID 3 has the advantages and disadvantages of the parallel access mode. Moreover, each data transfer in RAID 3 updates the whole stripe (that is, the data at the corresponding positions of all member disk drives is updated together), so there is no need to first read existing data and parity from the disk drives, XOR them with the new data, and write the result back, the so-called "read, modify, write" cycle that occurs in RAID 4 and RAID 5. Therefore, the write performance of RAID 3 is the best among all RAID levels.

The parity data of RAID 3 is generally stored on a dedicated parity disk. However, since each write updates the whole stripe, RAID 3's parity disk does not become an access bottleneck the way RAID 4's parity disk can.

The parallel access mode of RAID 3 requires special support from the RAID controller to keep the spindles of the disk drives synchronized. In addition, the write-performance advantage described above has been eroded by modern caching technology, so RAID 3 applications are gradually fading out of the market.

With its superior write performance, RAID 3 is particularly suitable for applications that write large, continuous files, such as plotting, imaging, video editing, multimedia, data warehousing, and high-speed data retrieval.


2.4 features and applications of RAID 4;

Creating RAID 4 requires three or more disks. It stores parity information on one drive and writes data to the other disks in RAID 0 fashion. Because one disk is reserved for parity information, the size of the array is (N-1) * S, where N is the number of disks and S is the size of the smallest drive in the array; for example, four 100 GB drives yield a (4-1) * 100 GB = 300 GB array. As with RAID 1, the disks should be of equal size.

If one drive fails, the parity information can be used to rebuild all the data. If two drives fail, all the data is lost. The reason this level is not used more often is that the parity information is stored on one drive, and it must be updated every time data is written to the other disks; when a large amount of data is written, the parity disk therefore easily becomes a bottleneck, so RAID 4 is rarely used.

RAID 4 adopts the independent access mode and stores its parity data on a single dedicated parity disk. Each data segment that RAID 4 transfers is long, and it can execute overlapped I/O, so RAID 4 has good read performance.

However, because the parity data sits on a single dedicated parity disk, writing causes a serious bottleneck, and therefore RAID 4 is not widely used.


2.5 features and applications of RAID 5;

RAID 5 is probably the most useful RAID mode when you want to combine a larger number of physical disks and still retain some redundancy. RAID 5 can be used on three or more disks, with zero or more spare disks. Like RAID 4, the size of a RAID 5 device is (N-1) * S.

The biggest difference between RAID 5 and RAID 4 is that the parity information is distributed evenly across all the drives, which avoids the bottleneck of RAID 4. If one of the disks fails, all the data remains intact thanks to the parity information. If spare disks are available, synchronization starts immediately after a device fails. If two disks fail at the same time, however, all the data is lost: RAID 5 can withstand the failure of one disk, but not of two or more.

RAID 5 also adopts the independent access mode, but its parity data is distributed and written across the member disk drives. Besides the overlapped I/O multi-tasking performance, it therefore also escapes the write bottleneck of RAID 4's single dedicated parity disk. Still, when writing data, RAID 5 is slightly dragged down by the "read, modify, write" cycle.

RAID 5 can execute overlapped I/O multi-tasking, so the more disk drives a RAID 5 contains, the higher its performance: a disk drive can only execute one thread at a time, so more disk drives allow more threads to overlap. However, the more disk drives there are, the higher the probability that some drive in the array fails, and the lower the reliability of the whole array, i.e. its MTTDL (mean time to data loss).

Because RAID 5 spreads the parity data across the member disk drives, it works well with the XOR technique. For example, when several write requests arrive at the same time, the data to be written and the parity data are likely spread over different member disk drives, so the RAID controller can take full advantage of overlapped I/O and access several disk drives simultaneously, improving the overall performance of the array considerably.

Basically, the RAID 5 architecture suits multi-user, multi-tasking environments with frequent access but small amounts of data per transfer; enterprise file servers, web servers, online transaction systems, and e-commerce applications are all applications of this kind.


2.6 features and applications of RAID 0+1 / RAID 10;

RAID 0+1 / RAID 10 combines the advantages of RAID 0 and RAID 1 and is suitable for applications that demand both high speed and full fault tolerance, and that have the budget for it. The principles of RAID 0 and RAID 1 are simple and need no further description; instead, let us discuss whether RAID 0+1 should be RAID 0 over RAID 1 or RAID 1 over RAID 0, that is, whether to combine several RAID 1 arrays into a RAID 0, or several RAID 0 arrays into a RAID 1.

RAID 0 over RAID 1

Assume we have four disk drives, pair them into two RAID 1 mirrors first, and then stripe the mirrors into a RAID 0; this is RAID 0 over RAID 1:

(RAID 1) A = drive A1 + drive A2 (mirrored)
(RAID 1) B = drive B1 + drive B2 (mirrored)
RAID 0 = (RAID 1) A + (RAID 1) B (striped)

In this architecture, if drive A1 fails, (RAID 1) A keeps working from its mirror A2 and the RAID 0 on top is unaffected; data is lost only when both drives of the same mirrored pair fail.

RAID 1 over RAID 0

Assume we have four disk drives, pair them into two RAID 0 stripes first, and then mirror the two stripes into a RAID 1; this is RAID 1 over RAID 0:

(RAID 0) A = drive A1 + drive A2 (striped)
(RAID 0) B = drive B1 + drive B2 (striped)
RAID 1 = (RAID 0) A + (RAID 0) B (mirrored)

In this architecture, if one disk drive in (RAID 0) A fails, the whole of (RAID 0) A is destroyed, but the RAID 1 can still work normally from (RAID 0) B. If a disk drive in (RAID 0) B then fails as well, (RAID 0) B is also destroyed; at that point both halves of the RAID 1 are gone, and all the data is lost.

Therefore, RAID 0 over RAID 1 offers higher reliability than RAID 1 over RAID 0. When using the RAID 0+1 / RAID 10 architecture, we therefore recommend building the RAID 1 mirrors first and then combining those mirrors into a RAID 0, as sketched below.
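
As an illustration with Linux software RAID (a minimal sketch under assumed device names /dev/sdb1 through /dev/sde1; the mdadm tool itself is introduced in section 4.1), the recommended RAID 0 over RAID 1 layout can be built like this:

root@xiaop-laptop:/# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1   # first mirror
root@xiaop-laptop:/# mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdd1 /dev/sde1   # second mirror
root@xiaop-laptop:/# mdadm --create /dev/md3 --level=0 --raid-devices=2 /dev/md1 /dev/md2     # stripe over the two mirrors

Newer kernels also offer a native raid10 level ("--level=10") that achieves the same layout in a single array.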


3. Select a RAID level;

Which of RAID 0, 1, 2, 3, 4, 5 suits you? The choice involves not only cost, but also fault tolerance and transfer performance, as well as future scalability, and it must meet the needs of the application.
RAID is no longer new in the market. Many people have a general understanding of the basic concept of RAID and the distinctions between the RAID levels. In practice, however, we find that many users still cannot decide on the appropriate RAID level, especially when choosing among RAID 0+1 (10), RAID 3, and RAID 5.


3.1 RAID "striped" access modes;

In RAID systems that stripe data across the member disk drives (data striping), access to the member disk drives falls into one of two modes:

Parallel access
Independent access

RAID 2 and RAID 3 adopt the parallel access mode.

RAID 0, RAID 4, RAID 5, and RAID 6 adopt the independent access mode.


3.2 parallel access mode;

In the parallel access mode, the spindle motors of all the disk drives are precisely controlled so that the position of every disk stays synchronized with the others, and a short I/O data transfer is then performed on each disk drive in turn, so that every I/O command from the host is spread evenly across the disk drives.

To achieve parallel access, every disk drive in the RAID must have almost identical specifications: the same rotational speed, the same head seek speed, the same buffer or cache capacity and access speed, the same command-processing speed, and the same I/O channel speed. In short, all the member disk drives of a parallel-access RAID should be disk drives of the same manufacturer and model.


3.2.1 basic working principle of parallel access;

Assume the RAID contains four disk drives of identical specification: drive A, drive B, drive C, and drive D. Divide the timeline into T0, T1, T2, T3, and T4:

T0: the RAID controller transfers the first block of data to drive A's buffer. The buffers of drives B, C, and D are empty, waiting.
T1: the RAID controller transfers the second block of data to drive B's buffer. Drive A starts writing the data in its buffer to its sectors. The buffers of drives C and D are empty, waiting.
T2: the RAID controller transfers the third block of data to drive C's buffer. Drive B starts writing the data in its buffer to its sectors. Drive A has finished writing; the buffers of drives D and A are empty, waiting.
T3: the RAID controller transfers the fourth block of data to drive D's buffer. Drive C starts writing the data in its buffer to its sectors. Drive B has finished writing; the buffers of drives A and B are empty, waiting.
T4: the RAID controller transfers the fifth block of data to drive A's buffer. Drive D starts writing the data in its buffer to its sectors. Drive C has finished writing; the buffers of drives B and C are empty, waiting.

The RAID controller repeats this cycle until the host's I/O command has been fully processed, and only then can it handle the next I/O command. The key point is that whenever a disk drive is ready to write data to its sectors, the target sector must have just rotated under the head. At the same time, the length of the data the RAID controller transfers to each disk drive in turn must match the drives' rotational speed exactly; otherwise, once the timing is missed, RAID performance suffers badly.

3.2.2 optimal applications for parallel-access RAID;

With its precise motor control and distributed data transfers, the parallel-access RAID architecture draws the maximum performance out of every disk drive in the array and makes full use of the storage bus bandwidth. It is therefore particularly suitable for applications that access large, continuous files, such as:

Video and audio file servers
Data warehousing systems
Multimedia databases
Electronic libraries
Pre-press or film output file servers
Other servers for large, continuous files

Because of the characteristics of the parallel-access RAID architecture, the RAID controller can handle only one I/O request at a time and cannot overlap multiple tasks, so it is unsuitable for environments with frequent I/O, random data access, and small data transfers. Moreover, since parallel access cannot overlap tasks, there is no way to "hide" the seek time of the disk drives, and the first data transfer of every I/O must wait out the rotational latency of the first disk drive, on average the time of half a revolution: a 10,000 RPM disk drive takes 60 s / 10,000 = 6 ms per revolution, so the average wait is 3 ms. This mechanical delay is the biggest problem of the parallel access architecture.


3.3 independent access mode;

Compared with the parallel access mode, the independent access mode does not synchronize the rotation of the member disk drives. Its accesses to the disk drives are independent, with no constraints on their order or timing, and the amount of data in each transfer is relatively large. The independent access mode can therefore use overlapping multi-tasking, tagged command queuing, and other advanced functions to "hide" the mechanical delays of the disk drives (seek time and rotational latency).

Because the independent access mode can overlap multiple tasks and process different I/O requests from several hosts at the same time, it achieves maximum performance in multi-host environments such as clusters.


3.3.1 optimal applications for independent-access RAID;

Because the independent access mode can accept several I/O requests at the same time, it is particularly suitable for systems with frequent data access but a small amount of data per transfer, for example:

Online transaction systems and e-commerce applications
Multi-user databases
ERP and MRP systems
File servers for small files


4. Create and maintain RAID;


4.1 mdadm;

On Linux servers, soft RAID is created and maintained with the mdadm tool. mdadm is convenient and flexible for creating and managing soft RAID. Commonly used mdadm parameters include the following (a short usage example follows the list):

* --create or -C: create a new soft-RAID array, followed by the name of the RAID device, for example /dev/md0 or /dev/md1.
* --assemble or -A: assemble an existing array, followed by the array and the device names.
* --detail or -D: output detailed information about the specified RAID device.
* --stop or -S: stop the specified RAID device.
* --level or -l: set the RAID level; for example, "--level=5" creates a RAID 5 array.
* --raid-devices or -n: specify the number of active disks in the array.
* --scan or -s: scan the configuration file or the /proc/mdstat file for soft-RAID configuration information; this parameter is not used on its own, only in combination with the other modes.
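
For example (a minimal sketch using the array built in [instance 1] below; the output line is what mdadm typically prints), an array can be inspected and then stopped like this:

root@xiaop-laptop:/# mdadm --detail /dev/md0
root@xiaop-laptop:/# mdadm --stop /dev/md0
mdadm: stopped /dev/md0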

 

The following shows, through an instance, how to implement soft RAID with mdadm.


4.1.1 create a partition;

[Instance 1]

A machine has four idle hard disks: /dev/sdb, /dev/sdc, /dev/sdd, and /dev/sde. Use these four hard disks to create a RAID 5. The procedure is as follows:

First, use the "fdisk" command to create a partition on each hard disk, as follows:

 

root@xiaop-laptop:/# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help): n # enter n to create a new partition
Command action
   e   extended
   p   primary partition (1-4)
p # enter p to create a primary partition
Partition number (1-4): 1 # enter 1 to create the first primary partition
First cylinder (1-102, default 1): # press Enter to accept the default starting cylinder, 1
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-102, default 102):
Using default value 102
Command (m for help): w # finally, enter w to write the partition table to disk
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
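
Note that the fdisk -l listing below shows the partitions with Id "fd" (Linux raid autodetect), a step the transcript above does not show. A minimal sketch of the extra fdisk commands that set this type (an assumed continuation of the same session, before the final w):

Command (m for help): t # change the partition type
Selected partition 1
Hex code (type L to list codes): fd # fd = Linux raid autodetect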

 

Perform the same operations on the remaining hard disks, /dev/sdc, /dev/sdd, and /dev/sde.
When all of them are done, run fdisk -l and you should see information like the following:

 

Disk /dev/sdb: 214 MB, 214748160 bytes
64 heads, 32 sectors/track, 204 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         204      208880   fd  Linux raid autodetect

Disk /dev/sdc: 214 MB, 214748160 bytes
64 heads, 32 sectors/track, 204 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1         204      208880   fd  Linux raid autodetect

Disk /dev/sdd: 214 MB, 214748160 bytes
64 heads, 32 sectors/track, 204 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1         204      208880   fd  Linux raid autodetect

 

As shown above, a partition of the same size has been created on each of these disks (the listing for /dev/sde is similar).


4.1.2 create RAID 5;

After the four partitions /dev/sdb1, /dev/sdc1, /dev/sdd1, and /dev/sde1 have been created, you can create the RAID 5. Set /dev/sde1 as the spare device and the others as active devices, so that the spare can take over immediately if an active device fails. The command is as follows:

root@xiaop-laptop:/# mdadm --create /dev/md0 --level=5 --raid-devices=3 --spare-devices=1 /dev/sd[b-e]1
mdadm: array /dev/md0 started.
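
Right after creation, the array performs its initial synchronization. You can watch the progress (a minimal check, not required for the array to work) through the /proc/mdstat file, which shows a recovery indicator like the one in section 4.2.1:

root@xiaop-laptop:/# cat /proc/mdstat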

"-- Spare-devices = 1" indicates that there is only one backup device in the current array, that is, "/dev/sde1" as the backup device. If there are multiple backup devices, set the value of "-- spare-devices" to the corresponding number. After the raid device is successfully created, run the following command to view the raid details:root@xiaop-laptop:/# mdadm --detail /dev/md0
/dev/md0:
Version : 00.90.01
Creation Time : Mon Jan 22 10:55:49 2007
Raid Level : raid5
Array Size : 208640 (203.75 MiB 213.65 MB)
Device Size : 104320 (101.88 MiB 106.82 MB)
Raid Devices : 3
Total Devices : 4
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Mon Jan 22 10:55:52 2007
State : clean
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
2 8 49 2 active sync /dev/sdd1
3 8 65 -1 spare /dev/sde1
UUID : b372436a:6ba09b3d:2c80612c:efe19d75
Events : 0.6

 


4.1.3 create a RAID configuration file;

The RAID configuration file is named "mdadm.conf" and does not exist by default, so it must be created manually. The main purpose of this configuration file is to load the soft RAID automatically at system startup and to simplify future management. The "mdadm.conf" file contains: all devices used for soft RAID, specified with the DEVICE option, and for each array its device name, RAID level, number of active devices, and device UUID. Generate the RAID configuration file with the following command:

root@xiaop-laptop:/# mdadm --detail --scan > /etc/mdadm.conf

However, the content of the "mdadm.conf" file generated this way does not match the required format, so it does not take effect. You need to edit the file by hand into the following format:

root@xiaop-laptop:/# vi /etc/mdadm.conf
DEVICE /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
ARRAY /dev/md0 level=raid5 num-devices=3 UUID=b372436a:6ba09b3d:2c80612c:efe19d75

If you have not created the RAID configuration file, then after every system startup you must assemble the soft RAID manually before you can use it, with the following command:

root@xiaop-laptop:/# mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
mdadm: /dev/md0 has been started with 3 drives and 1 spare.
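
To check that the configuration file really works (a minimal sketch; "--assemble --scan" makes mdadm read /etc/mdadm.conf instead of taking the devices from the command line), you can stop the array and let mdadm reassemble it from the file:

root@xiaop-laptop:/# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
root@xiaop-laptop:/# mdadm --assemble --scan
mdadm: /dev/md0 has been started with 3 drives and 1 spare.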

 


4.1.4 create a file system;

Next, you only need to create a file system on the RAID device. This is done the same way as on an ordinary partition or disk. Create an ext3 file system on the device "/dev/md0" with the following command:

root@xiaop-laptop:/# mkfs.ext3 /dev/md0

After creating the file system, you can mount the device and use it normally; a minimal mounting sketch follows. If you want to create a RAID of another level, the steps are basically the same as for RAID 5, the difference being the value given to "--level".
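
A minimal sketch of mounting the new file system (the mount point /mnt/raid is a hypothetical name chosen for this example):

root@xiaop-laptop:/# mkdir /mnt/raid
root@xiaop-laptop:/# mount /dev/md0 /mnt/raid

To mount it automatically at boot, you could additionally add a line such as "/dev/md0 /mnt/raid ext3 defaults 0 0" to /etc/fstab.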

 


4.2 maintain soft RAID;

Although soft RAID can guarantee data reliability to a large extent, in daily work you still need to adjust the RAID from time to time and deal with problems such as damaged physical media in the RAID's member devices.

In these cases, the "mdadm" command is again what you use. The following shows, through an instance, how to replace a faulty RAID disk.
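
As a related aside (a minimal sketch; the mail address is a placeholder, and the system must be able to deliver mail), mdadm can also watch the arrays from the configuration file in the background and report failures as they happen:

root@xiaop-laptop:/# mdadm --monitor --scan --daemonise --mail=root@localhost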


4.2.1 simulate faulty disks;

[Instance 2]

This builds on [instance 1] above. Suppose the "/dev/sdc1" device fails and is to be replaced with a new disk; the whole process is as follows:
In practice, when soft RAID detects that a disk is faulty, it automatically marks it as a faulty disk and stops reading from and writing to it. So here we mark /dev/sdc1 as faulty ourselves, with the following command:

root@xiaop-laptop:/# mdadm /dev/md0 --fail /dev/sdc1
mdadm: set /dev/sdc1 faulty in /dev/md0

 

Because the RAID 5 in [instance 1] has a spare device, as soon as a disk is marked faulty the spare automatically takes its place, and the array is rebuilt within a short time. You can view the current state of the array through the "/proc/mdstat" file, as follows:

root@xiaop-laptop:/# cat /proc/mdstat
Personalities : [raid5]
md0 : active raid5 sde1[3] sdb1[0] sdd1[2] sdc1[4](F)
208640 blocks level 5, 64k chunk, algorithm 2 [3/2] [U_U]
[=====>...............] recovery = 26.4% (28416/104320) finish=0.0min speed=28416K/sec
unused devices: <none>

The information above indicates that the array is being rebuilt. When a device fails or is marked faulty, "(F)" is appended after that device, for example "sdc1[4](F)". In "[3/2]", the first number is the number of devices the array contains and the second is the number of active devices; since there is currently one faulty device, the second number is 2. The array now runs in degraded mode: it is still usable, but has no data redundancy. "[U_U]" means the devices currently working normally are /dev/sdb1 and /dev/sdd1; if the device "/dev/sdb1" had failed instead, it would read "[_UU]".

 

After the data has been rebuilt, view the array status again; the RAID device has returned to normal, as follows:

root@xiaop-laptop:/# cat /proc/mdstat
Personalities : [raid5]
md0 : active raid5 sde1[1] sdb1[0] sdd1[2] sdc1[3](F)
208640 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>

 


4.2.2 remove a faulty disk;

Since "/dev/sdc1" is faulty, remove the device as follows:

root@xiaop-laptop:/# mdadm /dev/md0 --remove /dev/sdc1
mdadm: hot removed /dev/sdc1

"-Remove" indicates removing a disk from a specified RAID device. You can also use "-R" to replace this parameter.

 


4.2.3 add a new hard disk;

Before adding a new hard disk, you must first partition it, as was done in 4.1.1 (a quick alternative is sketched after the command below). For example, if the new partition's device name is "/dev/sdc1", perform the following operation:

root@xiaop-laptop:/# mdadm /dev/md0 --add /dev/sdc1
mdadm: hot added /dev/sdc1
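
If the replacement disk is brand new, it must carry the same partition layout before the "--add" step. A minimal sketch using sfdisk (an assumption: sfdisk is available, and /dev/sdb still holds the layout you want to copy onto the new /dev/sdc):

root@xiaop-laptop:/# sfdisk -d /dev/sdb | sfdisk /dev/sdc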

"-- Add" is the opposite of "-- remove". It is used to add a disk to a specified device and can be replaced by "-.

 

Because the RAID 5 in [instance 1] was configured with a spare device, it returned to normal operation without any further action. However, if another disk were to fail now, the RAID 5 would be left without data redundancy, which is too risky for devices storing important data. The "/dev/sdc1" just added therefore appears in the array as the new spare device, as follows:

root@xiaop-laptop:/# mdadm --detail /dev/md0
/dev/md0:
...
...
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 65 1 active sync /dev/sde1
2 8 49 2 active sync /dev/sdd1
3 8 33 -1 spare /dev/sdc1
UUID : b372436a:6ba09b3d:2c80612c:efe19d75
Events : 0.133

 


5. About this article;

This article is only a brief introduction to RAID, covering the most basic knowledge and no advanced applications. For more information, see other relevant documents;


6. Update log;

07.7.25 v0.1b


7. References;


8. Related documents;
 
