Linux Storage Management (2)


RAID, or Redundant Array of Independent Disks, combines multiple hard disks in different ways into a single disk group, providing higher storage performance than any single disk as well as data redundancy. Multiple disks are read and written in parallel, and fault tolerance can be provided. Users can format, mount, and otherwise operate on the disk group exactly as they would a single hard disk, but its transfer speed is far higher than that of a single disk.

RAID arrays are divided into levels, and different levels offer different trade-offs between performance and redundancy. The commonly used levels are introduced below.


RAID0:

RAID0 requires at least two disks. Data is striped across all member disks, so the full capacity of every disk is put to use and no storage space is wasted; the trade-off is that there is no redundancy or fault tolerance, and the failure of any single disk destroys the entire array.

RAID1:

RAID1 achieves data redundancy through disk mirroring: a backup of the data is kept on two independent disks, i.e., data written to the primary disk is also written to the mirror disk. When one disk fails, the system can automatically switch reads and writes to the mirror disk. Overall disk utilization is only 50%: two disks totaling 60G provide just 30G of usable space, but the array is fault-tolerant.

RAID4:

A RAID4 disk array requires at least three hard disks, one of which is fixed as a dedicated parity disk. When one of the data disks fails, the lost data can be reconstructed by XOR-ing the parity disk with the remaining data disks. However, because a single fixed disk holds all the parity, every write hits that disk; its I/O pressure is enormous and it easily becomes a performance bottleneck.
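The XOR parity idea can be illustrated with two hypothetical data bytes in a shell one-liner; this is only an illustration of the arithmetic, not an mdadm command:

#echo $(( 0x5A ^ 0x3C ))    parity of data bytes 0x5A and 0x3C; prints 102 (0x66)

#echo $(( 0x66 ^ 0x3C ))    XOR the parity with the surviving byte to recover the lost one; prints 90 (0x5A)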

RAID5:

A RAID5 disk array also requires at least three disks. Unlike RAID4, it does not fix one disk as the parity disk; instead, the parity role rotates among the member disks, with each stripe's parity block placed on a different disk. Read/write I/O performance improves significantly and there is no single-disk parity bottleneck.

RAID6:

A RAID6 disk array requires at least four disks and performs two rounds of redundancy calculation per stripe, with the two parity values placed on two different disks of the stripe. Read/write I/O performance remains good with no single-disk bottleneck, and the array tolerates up to two disk failures while still guaranteeing data availability. The cost is the extra time needed to compute the second parity value.

RAID2 and RAID3 are rarely used in front-line operations, so they are not described further here.


Command for setting up a RAID disk array:

mdadm: a mode-based tool for managing Linux software RAID arrays.

The command works in three modes: create mode, assemble mode, and manage mode.

Create mode: -C

Common options:

-n #: the number of disks used to create the array; it must meet the minimum disk count required by the RAID level being created;

-l #: RAID level of the array being created;

-a {yes|no}: whether the system is allowed to automatically create the md device file;

-c CHUNK_SIZE: specifies the chunk (stripe unit) size;

-x #: specifies the number of spare (idle) disks in the array;

Manage mode:

-f: mark the specified disk or partition as faulty;

-a: add a disk or partition to the md device;

-r: remove a disk or partition from the md device;

-S: stop the array;

-D: display detailed information about a RAID device; combined with --scan, it prints the configuration of all active arrays:

#mdadm -D --scan > /etc/mdadm.conf    save the assembly information

Assemble mode: -A

Assembles the RAID disk array by reading the assembly information in the /etc/mdadm.conf file;
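For example, assuming the assembly information was saved to /etc/mdadm.conf as shown above:

#mdadm -A -s    assemble every array recorded in /etc/mdadm.conf

#mdadm -A /dev/md0    assemble only the named array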

Examples:

#mdadm -S /dev/md0    stop the array

#mdadm -C /dev/md0 -n 4 -l 0 -a yes /dev/sd{b,c,d,e}    create a RAID0 array consisting of four disks

#mke2fs -t ext4 /dev/md0    create a file system on the RAID array

Then mount it to a directory under the root file system;
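As a fuller sketch that also exercises the -x and -c options described above (the device names and the /mnt/raid mount point are assumptions for illustration):

#mdadm -C /dev/md0 -n 3 -l 5 -x 1 -c 64 -a yes /dev/sd{b,c,d,e}    create a RAID5 array from three disks plus one spare, with a 64K chunk

#mke2fs -t ext4 /dev/md0    create the file system

#mkdir -p /mnt/raid

#mount /dev/md0 /mnt/raid    mount the array

#mdadm -D --scan > /etc/mdadm.conf    record the array so it can be reassembled later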


RAID arrays give us various ways to combine disks into groups, but those multi-disk groups still have to be managed effectively, so that storage can be expanded or shrunk when necessary. When a partition fills up, an administrator might otherwise have to back up the entire system, wipe the hard disk, re-partition it, and then restore the data onto the new partitions. That approach is far too inefficient, may even require restarting the whole system, and is hard for operations staff to carry out. This is why LVM, the Logical Volume Manager, appeared.

LVM is a mechanism Linux systems use to manage disk partitions. It is a logical layer built on top of hard disks and partitions that presents abstract disk volumes to the file system, masking the underlying partition layout; file systems are then created on those volumes. A physical volume (PV) is a hard disk partition, or a device that functions like one (such as a RAID array); it is the basic storage building block of LVM, but unlike raw physical media (partitions, disks, and so on) it also contains LVM-related management parameters. One or more physical volumes can be combined into a volume group.

Note: If the device used to create a physical volume is an ordinary partition, be sure to change the partition's ID to 8e first;
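A minimal sketch of doing that with fdisk, assuming the partition is the first one on /dev/sdb:

#fdisk /dev/sdb    at the fdisk prompt: t to change the partition type, 8e for Linux LVM, w to write the table

#partprobe /dev/sdb    have the kernel re-read the partition table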


LVM workflow (a command sketch follows the five steps):

1. Create and initialize the physical volumes;

2. Create a volume group on top of the physical volumes; when creating the volume group you need to specify the PE (physical extent) size, which defaults to 4MB and cannot be changed once set;

3. Create a logical volume in the volume group you created;

4. Create a file system on the logical volume (high-level formatting);

5. Mount it;
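A minimal end-to-end sketch of the five steps (the device names, volume names, sizes, and mount point are assumptions for illustration):

#pvcreate /dev/sdb1 /dev/sdc1    step 1: initialize the physical volumes

#vgcreate -s 8M myvg /dev/sdb1 /dev/sdc1    step 2: build a volume group with an 8MB PE size

#lvcreate -L 10G -n mylv myvg    step 3: carve out a 10G logical volume

#mke2fs -t ext4 /dev/myvg/mylv    step 4: create a file system on it

#mkdir -p /mydata

#mount /dev/myvg/mylv /mydata    step 5: mount it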


Management operations for physical volumes:

pvcreate: create physical volumes, turning the disks or partitions to be added to a volume group into physical volumes;

pvdisplay: display detailed information about physical volumes;

pvs: display brief information about physical volumes;

pvremove: delete a physical volume;

pvmove: move all the PEs on one physical volume to other physical volumes, ensuring that no PEs remain in use on the physical volume about to be removed; otherwise the data in the logical volume would be affected;

Examples:

#pvcreate /dev/sd{b,c,d,e}    create four physical volumes

#pvremove /dev/sde    a physical volume holding no data can be deleted directly

#pvmove /dev/sdc    move the PEs on the sdc physical volume to other physical volumes

#pvremove /dev/sdc    only then delete it

Volume group management operations:

vgcreate: the volume group creation command; it combines several physical volumes into one volume group, and the PE size can be specified at creation;

-s #{kK|mM|gG}: specify the PE size; if this option is omitted, the PE defaults to 4MB; the size must be a power of 2;

Example:

#vgcreate myvg /dev/sd{b,c,d,e}
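A variant with an explicit PE size (the 16MB value is only an illustration):

#vgcreate -s 16M myvg /dev/sd{b,c,d,e}    create the volume group with a 16MB PE size

#vgdisplay myvg    verify the PE size and capacity of the new volume group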


vgremove: delete a volume group;

Example:

#vgremove myvg    delete the volume group myvg


vgextend: expand a volume group's capacity by adding new physical volumes to it;

Example:

#vgextend myvg /dev/sdf    add the new physical volume sdf to the volume group


vgreduce: reduce a volume group's capacity by removing a PV from it; before doing this, use pvmove to move the data on that physical volume to other physical volumes and avoid data loss;

Example:

#pvmove /dev/sdb

#vgreduce myvg /dev/sdb


Logical volume management operations:

lvcreate: create a logical volume; the size must be specified. If a logical volume's capacity runs short, it can be enlarged as long as the volume group has room; if the volume group's capacity runs short, physical volumes can be added to enlarge the volume group;

-L LV_SIZE (#{kK|mM|gG}): specifies the logical volume size; it cannot exceed the size of the volume group;

#lvcreate -L 20G -n mylv myvg


-l #%{FREE|VG|ORIGIN|PVS}: specifies the logical volume size as a percentage of the corresponding storage unit (a sketch follows these options);

FREE: a percentage of the free space remaining in the volume group;

VG: a percentage of the total size of the volume group;

-n: specifies the name of the logical volume;
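A sketch of the percentage form, assuming the myvg volume group from the earlier examples:

#lvcreate -l 100%FREE -n mylv2 myvg    give the new logical volume all of the volume group's remaining free space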

lvremove: remove a logical volume; if the logical volume is mounted, unmount it before removal;

Example:

#umount /dev/myvg/mylv

#lvremove /dev/myvg/mylv

lvdisplay: display detailed information about logical volumes;

lvs: display brief information about logical volumes;

lvchange: modify the state of a logical volume; an example follows the options below;

-ay: activate the logical volume;

-an: deactivate the logical volume;
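For example, to take the mylv volume from the earlier examples offline and bring it back:

#lvchange -an /dev/myvg/mylv    deactivate the logical volume

#lvchange -ay /dev/myvg/mylv    activate it again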

lvextend: extend a logical volume's space;

When adding space to a logical volume, first grow the physical boundary (the logical volume itself), then grow the logical boundary (the file system);

Example:

#lvextend -L +5G /dev/myvg/mylv    grow the physical boundary by 5G

#resize2fs /dev/myvg/mylv    grow the logical boundary (the file system) to match


lvreduce: reduce a logical volume's space;

Unmount the mounted file system first, then shrink in the proper order:

#umount /dev/raid_vg/raid_lv

#e2fsck -f /dev/raid_vg/raid_lv    check the file system before resizing

#resize2fs /dev/raid_vg/raid_lv 15G    shrink the logical boundary (the file system) first

#lvreduce -L 15G /dev/raid_vg/raid_lv    then shrink the physical boundary

#mount /dev/raid_vg/raid_lv /userhome/    remount



Snapshot volume: a snapshot volume is also a logical volume, but its attributes differ from those of an ordinary logical volume; it is used to obtain a consistent backup of the file system's state. After a snapshot volume is created for a logical volume, no physical copy of the data takes place at first; only when the original logical volume changes does the snapshot volume automatically copy the pre-modification data from the original logical volume (copy-on-write).

Creating a snapshot:

-s: create a snapshot volume;

-n: name of the snapshot volume;

#lvcreate -s -p r -L 15G -n mylv-snapshot /dev/myvg/mylv

This creates a read-only snapshot volume of size 15G; the snapshot volume's size should preferably match the amount of data being backed up. The path given is the logical volume to be backed up, and the snapshot volume is named mylv-snapshot.

After the snapshot volume is created, mount it, back up the data from it, and then remove the snapshot volume, as sketched below;
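A minimal sketch of that workflow (the /mnt/snap mount point and the archive path are assumptions for illustration):

#mkdir -p /mnt/snap

#mount -o ro /dev/myvg/mylv-snapshot /mnt/snap    mount the snapshot read-only

#tar -czf /tmp/mylv-backup.tar.gz -C /mnt/snap .    archive the frozen state of the file system

#umount /mnt/snap

#lvremove /dev/myvg/mylv-snapshot    remove the snapshot once the backup is complete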




















