Implementation of RAID and LVM in Linux

Source: Internet
Author: User

I. Overview

This document describes how to create and test RAID volumes (RAID 0, RAID 1, RAID 01, RAID 10, and RAID 5) and LVM volumes in Linux.

II. Disk arrays

A disk array combines several hard disk drives into a single unit according to certain requirements; the array as a whole is managed by an array controller. Redundant disk array (RAID) technology was proposed by UC Berkeley in 1987.

III. What RAID is

RAID stands for Redundant Array of Inexpensive Disks. RAID combines two or more physical disks into a single logical disk, and data is written to the set in chunks. Another feature is data verification (parity), available at RAID levels 2, 3, 4, and 5: when a disk fails, the verification information is combined with the data on the intact disks to reconstruct the data on the failed disk.

Advantages of RAID: it increases disk storage capacity; multiple disks work in parallel, which improves the data transfer rate; and, thanks to data verification, it improves data reliability.

IV. Introduction to RAID levels

1. RAID 0: striping, also known as a stripe set.
Disks required: at least 2.
Data handling: data is written across the RAID disks in blocks, which improves the I/O rate.
Fault tolerance: none; if one disk is damaged, all data is lost.
Disk utilization: 100%. Capacity: n (the sum of all member disks).

2. RAID 1: mirroring, also known as disk mirroring.
Disks required: at least 2.
Data handling: data is written to one disk and then copied to its mirror disk as a backup. This slows writes, but reads are fast.
Fault tolerance: provides redundancy; the data in the volume remains usable as long as one of the two disks is intact.
Disk utilization: 1/2. Capacity: n/2.

3. RAID 10: mirroring plus striping.
Disks required: at least 4.
Data handling: data is first combined into RAID 1 mirrors, and the resulting RAID 1 volumes are then used as the members of a RAID 0 stripe. This improves both read and write performance.
Fault tolerance: the RAID 1 layer provides redundancy for the data.
Disk utilization: 1/2. Capacity: n/2.

4. RAID 01: striping plus mirroring.
Disks required: at least 4.
Data handling: data is first combined into RAID 0 stripes, and the resulting RAID 0 volumes are then used as the members of a RAID 1 mirror. This improves both read and write performance.
Fault tolerance: the RAID 1 layer provides redundancy for the data.
Disk utilization: 1/2. Capacity: n/2.

5. RAID 5: parity (verification code) technology.
Disks required: at least 3.
Data handling: in each stripe, data is written across n-1 disks and the parity is stored on the remaining disk (the parity role rotates across all disks), which improves read/write performance.
Fault tolerance: redundancy based on parity; one disk may fail.
Disk utilization: (n-1)/n. Capacity: n-1.

6. RAID 50: parity plus striping.
Disks required: at least 6.
Data handling: disks are first grouped into RAID 5 arrays for data storage, and the RAID 5 arrays are then striped with RAID 0. This provides both read/write performance and data redundancy.
Fault tolerance: redundancy provided by the RAID 5 groups.
Disk utilization: (n-2)/n. Capacity: n-2.

V. RAID implementation in Linux

1. RAID can be implemented in two ways: hardware RAID and software RAID. Hardware RAID builds the array in hardware: the host needs a RAID card or RAID controller, and the array is then configured in its BIOS. It is not covered in detail here.
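The capacity and utilization figures for the RAID levels above reduce to simple arithmetic. The sketch below is illustrative only (the function name and the example disk counts and sizes are invented for this example); it computes the usable capacity, in GB, of n equal disks at each level discussed:

```shell
#!/bin/bash
# Usable capacity for n disks of `size` GB each, per RAID level.
# Illustrative helper; not part of any RAID tool.
raid_capacity() {
  local level=$1 n=$2 size=$3
  case $level in
    0)       echo $(( n * size ));;        # striping: all disks usable
    1|10|01) echo $(( n * size / 2 ));;    # mirroring: half usable
    5)       echo $(( (n - 1) * size ));;  # one disk's worth of parity
    50)      echo $(( (n - 2) * size ));;  # two RAID 5 groups, one parity each
    *)       echo "unknown level" >&2; return 1;;
  esac
}

raid_capacity 0 2 500    # two 500 GB disks in RAID 0  -> prints 1000
raid_capacity 5 3 500    # three 500 GB disks in RAID 5 -> prints 1000
raid_capacity 10 4 500   # four 500 GB disks in RAID 10 -> prints 1000
```

Note how RAID 5 reaches the same usable capacity as RAID 10 with one disk fewer, at the cost of parity computation on writes.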
Software RAID: the Linux kernel provides the md module, which supports setting up RAID disk arrays within Linux.

2. mdadm is the command for creating and managing RAID volumes on Linux. It is a modal command.

Create mode: -C. Dedicated options for creating a RAID volume:
-l: select the RAID level
-n: number of member disk devices
-a {yes|no}: automatically create the device file for the array
-c: specify the chunk (data block) size, an integer power of 2; the default is 64 KB
-x: specify the number of spare (idle) disks, which provide redundancy: after a member disk fails, a spare is brought in directly

e.g. create a RAID 0 array:
mdadm -C /dev/md0 -a yes -l 0 -n 2 /dev/sda{6,7}
mke2fs -j /dev/md0
mount /dev/md0 /mnt

Management mode:
-f | --fail: simulate a disk failure, e.g. mdadm /dev/md0 --fail /dev/sda7
-a | --add: add a new disk, e.g. mdadm /dev/md0 -a /dev/sda8
-r | --remove: remove a damaged disk

Monitor mode: -F
Grow mode: -G
Assemble mode: -A, e.g. mdadm -A /dev/md0 /dev/sda8 /dev/sda9

3. View the RAID information of the current system: -D | --detail displays detailed array information.

4. Stop a RAID array: -S | --stop.

5. Save the current RAID information to the configuration file, then assemble from it:
mdadm -D --scan > /etc/mdadm.conf
Assemble: mdadm -A /dev/md#

6. RAID practice: create a 10 GB RAID 5 device with a chunk size of 32 KB; the device must be automatically mounted to the /backup directory at startup.
1) Create three 5 GB partitions, sda{5,6,7}, with partition type fd (fdisk /dev/sda: n, +5G, t, fd, w), then run partprobe and verify the partitions.
2) Create the RAID 5 array with a 32 KB chunk size:
mdadm -C /dev/md0 -a yes -l 5 -n 3 -c 32 /dev/sda{5,6,7}
3) Wait for synchronization to complete and check the array status.
4) Format md0, mount it to the /backup directory, and make the mount persistent at boot:
mke2fs -j /dev/md0
mount /dev/md0 /backup
vim /etc/fstab  (add the mount entry)
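The RAID 5 practice above can be collected into one script. The sketch below is a dry run: the run helper only prints each command, so it is safe to read and test; to actually apply it, you would change run to execute its arguments and run the script as root against real spare partitions. The device names /dev/sda{5,6,7} and the ext3 fstab line follow the article's example and are assumptions about your system.

```shell
#!/bin/bash
# Dry-run sketch of the RAID 5 exercise: prints the commands instead of
# executing them. Partitions /dev/sda{5,6,7} (type fd) are assumed to exist.
run() { echo "$@"; }   # replace 'echo "$@"' with '"$@"' to really execute

raid5_setup() {
  # Create the array: level 5, 3 members, 32 KB chunk, auto device file
  run mdadm -C /dev/md0 -a yes -l 5 -n 3 -c 32 /dev/sda{5,6,7}
  run mke2fs -j /dev/md0          # ext3 file system
  run mkdir -p /backup
  run mount /dev/md0 /backup
  # Save the array layout so it can be assembled after a reboot
  run sh -c 'mdadm -D --scan > /etc/mdadm.conf'
  # Line to append to /etc/fstab so the mount survives a reboot
  echo '/dev/md0  /backup  ext3  defaults  0 0'
}

raid5_setup
```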
VI. LVM

LVM is implemented through the Linux kernel module DM (Device Mapper), which maps physical block devices to logical devices.

1. DM provides RAID, LVM2, snapshots, multipath, and other functions. In Linux, MD is usually used for RAID, while LVM2 is built on DM.

2. Logical volume management is divided into three layers:
Layer 1: Logical Volume (LV)
Layer 2: Volume Group (VG)
Layer 3: Physical Volume (PV)
An LVM stack is created from the third layer up to the first. In Linux, the partition type for a physical volume is 8e.

3. Create the physical layer (PV):
pvcreate: create a PV; pvmove: move data off a PV; pvremove: remove a PV; pvdisplay: display physical volume information; pvscan: scan for and display the PVs on the system; pvs: view brief PV information.
e.g. create physical volumes: pvcreate /dev/sda{10,11}

4. Create a volume group:
vgcreate: create a VG; vgreduce: remove a PV from a VG; vgremove: delete a VG; vgdisplay: display VG information; vgs: view brief VG information.
vgcreate -s #: specify the PE size; the default is 4 MB (the default unit is MB).
e.g. create a VG: vgcreate myvg /dev/sda{10,11}

5. Create a logical volume: lvcreate, lvreduce, lvremove (remove an LV), lvextend, lvdisplay (view LV information), lvs (view brief LV information).
lvcreate -L #: specify the LV size
lvcreate -n NAME: specify the LV name
e.g. create a 50 MB LV:
lvcreate -L 50M -n mylv myvg
mke2fs -j /dev/myvg/mylv

6. Extend a logical volume: extend the physical boundary first, then the logical boundary.
lvextend -L [+]# /PATH/TO/LV: with a plus sign, extend by the given amount; without it, extend to the given size.
resize2fs /PATH/TO/LV 5G: extend the logical boundary to 5 GB
resize2fs -p /PATH/TO/LV: extend the logical boundary to the full size of the LV

7. Shrink a logical volume: the logical boundary is reduced before the physical boundary.
1) Online shrinking is not allowed; unmount the file system first.
2) Ensure that all files fit in the reduced space.
3) Force a file system check before shrinking to ensure the file system is in a consistent state.
Commands used when shrinking:
e2fsck -f: force a file system check
df -lh: check utilization
umount: the volume must be unmounted before shrinking

Procedure: umount the volume; e2fsck -f; resize2fs /PATH/TO/LV 5G (shrink the logical boundary to 5 GB); lvreduce -L [-]# /PATH/TO/LV (shrink the physical boundary); then re-mount.

8. Snapshot volume:
1) Its lifecycle should last for the whole time the data is needed, and during this period the amount of changed data cannot exceed the snapshot volume's size.
2) The snapshot volume should be read-only.
3) It is created with lvcreate -s, in the same volume group as the original volume.
-s: create a snapshot volume; -p r|w: specify the permission
e.g. lvcreate -s -L # -n SLV_NAME -p r /PATH/TO/LV
Back up from the snapshot mount point: tar jcf /tmp/users.tar.bz2 <contents of the mount point>
Restore by extracting: tar xf /tmp/users.tar.bz2

9. LVM practice: create a volume group myvg consisting of two physical volumes totaling 15 GB, with a PE size of 16 MB; then create a 5 GB logical volume mylv in this volume group. The logical volume must be automatically mounted to the /mnt directory at startup; then extend mylv by 3 GB.
1) Create two partitions of 7 GB and 8 GB, sda{8,9}, and set the partition type to 8e (LVM format) with fdisk /dev/sda: n, +7G, t, 8e, w; then:
pvcreate /dev/sda{8,9}
2) Create the volume group myvg:
vgcreate -s 16M myvg /dev/sda{8,9}
3) Create the logical volume mylv:
lvcreate -L 5G -n mylv myvg
4) Mount it and make the mount persistent at startup:
mke2fs -j /dev/myvg/mylv
mount /dev/myvg/mylv /mnt
vim /etc/fstab  (add the mount entry)
5) Extend mylv by 3 GB:
lvextend -L +3G /dev/myvg/mylv
resize2fs -p /dev/myvg/mylv
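The LVM lifecycle described in this section (create, extend, shrink, snapshot) can likewise be sketched as a dry run. As in the RAID sketch, the run helper only prints each command; the partition, volume, and mount-point names (/dev/sda{8,9}, myvg, mylv, /mnt) follow the article's examples, while the snapshot name mysnap, its 100M size, and the /mnt/snap mount point are invented for this example.

```shell
#!/bin/bash
# Dry-run sketch of the LVM workflow: prints commands instead of executing.
run() { echo "$@"; }   # replace 'echo "$@"' with '"$@"' to really execute

lvm_lifecycle() {
  # Create: PV (layer 3) -> VG (layer 2) -> LV (layer 1), then a file system
  run pvcreate /dev/sda{8,9}
  run vgcreate -s 16M myvg /dev/sda{8,9}   # 16 MB physical extents
  run lvcreate -L 5G -n mylv myvg
  run mke2fs -j /dev/myvg/mylv
  run mount /dev/myvg/mylv /mnt

  # Extend: physical boundary first, then the logical (file system) boundary
  run lvextend -L +3G /dev/myvg/mylv
  run resize2fs -p /dev/myvg/mylv

  # Shrink: unmount, force-check, shrink the logical boundary, then the
  # physical boundary, and re-mount
  run umount /mnt
  run e2fsck -f /dev/myvg/mylv
  run resize2fs /dev/myvg/mylv 5G
  run lvreduce -L 5G /dev/myvg/mylv
  run mount /dev/myvg/mylv /mnt

  # Snapshot: read-only, in the same VG as the origin; back up from it
  run lvcreate -s -L 100M -n mysnap -p r /dev/myvg/mylv
  run mount -o ro /dev/myvg/mysnap /mnt/snap
  run tar jcf /tmp/users.tar.bz2 -C /mnt/snap .
  run umount /mnt/snap
  run lvremove -f /dev/myvg/mysnap
}

lvm_lifecycle
```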