Storage Management:
Here we are going to learn about two storage management technologies: disk arrays (RAID) and logical volumes (LVM):
A disk array is built from many inexpensive disks combined into one large-capacity disk group; the combined effect of the individual disks supplying data in parallel improves the performance of the whole disk subsystem. With this technique, data is split into many segments that are stored across the individual drives.
First, RAID
1. What RAID is:
RAID: Redundant Arrays of Inexpensive Disks, i.e. a redundant array of inexpensive disks
Proposed in 1988 by the University of California, Berkeley, in the paper "A Case for Redundant Arrays of Inexpensive Disks". The goal is to combine multiple disks with relatively inexpensive IDE interfaces into an "array" that provides better I/O performance, disk redundancy, or both.
2. Features of RAID:
Improved I/O capability:
disks are read and written in parallel
Increased durability:
achieved through disk redundancy
Levels: different ways of organizing multiple disks to work together
How RAID is implemented:
External disk array: adapter capability provided by an expansion card
Onboard RAID: RAID controller integrated on the motherboard
Configure it in the BIOS before installing the OS
Software RAID:
Several RAID levels:
RAID-0: striped volume
Improved read and write performance;
Usable space: N*min(S1,S2,...)
No fault tolerance
Minimum number of disks: 2, 2+
RAID-1: Mirrored volumes
Improved read performance, slightly decreased write performance;
Usable space: 1*min(S1,S2,...)
Provides redundancy
Minimum number of disks: 2, 2+
RAID-4:
The XOR of the data on the multiple data disks is stored on a dedicated parity disk
If one disk fails, the array can keep working; this is called degraded mode
The parity disk bears the heaviest load and easily becomes a performance bottleneck;
RAID-5:
Improved read and write performance
Usable space: (N-1)*min(S1,S2,...)
Fault tolerance: allows up to 1 disk to fail
Minimum number of disks: 3, 3+
RAID-6:
Improved read and write performance
Usable space: (N-2)*min(S1,S2,...)
Fault tolerance: allows up to 2 disks to fail
Minimum number of disks: 4, 4+
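As a quick worked example of the usable-space formulas above: with four 1TB disks, RAID-0 gives 4*1TB = 4TB, RAID-1 gives 1TB, RAID-5 gives (4-1)*1TB = 3TB, and RAID-6 gives (4-2)*1TB = 2TB.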
Mixed (nested) RAID levels
RAID-10:
Improved read and write performance
Usable space: N*min(S1,S2,...)/2
Fault tolerance: each mirrored group can tolerate only one failed disk
Minimum number of disks: 4, 4+
RAID-01, RAID-50
Note: a mixed-level RAID array is simply a combination of arrays at two different levels, which makes up for each other's shortcomings. RAID-10, for example, is built by first creating several RAID-1 arrays and then combining those RAID-1 arrays into a single RAID-0 array.
RAID-7: can be understood as an independent storage computer with its own operating system and management tools; it can run on its own and is, in theory, the highest-performing RAID mode.
JBOD: Just a Bunch Of Disks
Function: Combines the space of multiple disks into one large contiguous space for use
Usable space: sum(S1,S2,...)
Common levels: RAID-0, RAID-1, RAID-5, RAID-10, RAID-50, JBOD
Implementation methods:
Hardware implementation
Software implementation
RAID disk array creation tool:
mdadm: a mode-based (modal) tool
Syntax format for the command:
mdadm [mode] <raiddevice> [options] <component-devices>
Supported RAID levels: LINEAR, RAID0, RAID1, RAID4, RAID5, RAID6, RAID10,...
Mode:
Create mode: -C
Assemble mode: -A
Monitor mode: -F
Management mode: -f, -r, -a
Attention:
<raiddevice>:/dev/md#
<component-devices>: any block device; note that if it is a partition, its partition type should be changed to fd
-C: create mode
-n #: create this RAID with # block devices
-l #: indicates the RAID level to create
-a {yes|no}: automatically create the device file for the target RAID device
-c CHUNK_SIZE: indicates the chunk size; the default is 512K
-x #: indicates the number of spare disks
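A minimal creation sketch that puts these options together (the array name /dev/md0 and the member partitions /dev/sdb1 through /dev/sde1 are assumed names, chosen only for illustration):
mdadm -C /dev/md0 -a yes -l 5 -n 3 -x 1 -c 512 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
mdadm -D /dev/md0
This would build a RAID-5 array from three member partitions plus one spare, using a 512K chunk, and then display its details.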
Miscellaneous mode:
1) Display RAID details: mdadm -D /dev/md#
2) Stop an MD device: mdadm -S /dev/md0
Management mode:
-f: mark the specified disk or partition as faulty
-a: add a disk or partition to the MD device
-r: remove a disk or partition from the MD device
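For example, to simulate replacing a failed member (assuming an array /dev/md0 with member /dev/sdd1 and a replacement /dev/sde1; these names are only illustrative):
mdadm /dev/md0 -f /dev/sdd1
mdadm /dev/md0 -r /dev/sdd1
mdadm /dev/md0 -a /dev/sde1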
Assembly mode: Assemble
-A
Used to reassemble a stopped RAID device so that it works properly again.
When assembling a RAID device, mdadm relies on the /etc/mdadm.conf file, for example:
mdadm -D --scan >> /etc/mdadm.conf
mdadm -S /dev/md2
mdadm -S /dev/md0
mdadm -C /dev/md0 -n 2 -l 0 /dev/sdc /dev/sdf
mdadm -A /dev/md2
mdadm /dev/md2 -a /dev/md0
Note: use the cat /proc/mdstat command to view the status of MD devices
The watch command:
-n #: refresh interval, in seconds; the default is 2 seconds;
watch -n # 'COMMAND'
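For example, to watch an array rebuild in real time (a simple illustration):
watch -n 1 'cat /proc/mdstat'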
Second, LVM2
LVM: Logical Volume Manager, version 2
IBM
A solution that uses pure software to organize one or more underlying block devices and redefine them as a logical block device;
Implemented using the kernel's DM module;
DM: Device Mapper, the device mapping table
The DM module can organize one or more underlying block devices into a logical block device;
User-space commands issue system calls to the DM module, which then carries out the management of the logical block devices;
Logical block device nodes are stored uniformly as /dev/dm-#
Focus: steps for LVM management using the DM mechanism (a worked example follows this list):
1. Create physical volumes (PV)
Note: If the device used to create a physical volume is an ordinary partition, be sure to change the partition's ID to 8e;
2. Create a volume group (VG, a logical block device) on top of the PVs; the PE size can be specified when creating the volume group;
Note: Once the PE size is specified, it cannot be changed;
3. Create a logical volume in the volume group that was just created
4. Create a file system on the logical volume (high-level formatting)
5. Mount it
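A minimal end-to-end sketch of these five steps (the partition /dev/sdb1, volume group myvg, logical volume mylv, and mount point /mnt/mylv are assumed names used only for illustration):
pvcreate /dev/sdb1
vgcreate -s 8M myvg /dev/sdb1
lvcreate -L 10G -n mylv myvg
mkfs.ext4 /dev/myvg/mylv
mkdir -p /mnt/mylv
mount /dev/myvg/mylv /mnt/mylv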
Management operations for physical volumes (PV):
pvcreate: create a physical volume
pvdisplay: display detailed information about physical volumes
pvs: display brief information about physical volumes
pvremove: delete a physical volume
pvmove: move all the PEs on a physical volume to another physical volume;
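For instance (assuming partitions /dev/sdb1 and /dev/sdc1 whose partition type has already been set to 8e):
pvcreate /dev/sdb1 /dev/sdc1
pvdisplay /dev/sdb1
pvs
Before removing a PV that already belongs to a volume group, run pvmove on it first so its PEs are relocated to other PVs, then remove it from the VG and pvremove it.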
Management operations for volume groups (VG):
vgcreate: create a volume group
-s #{kK|mM|gG}: specify the PE size; if this option is omitted, the default PE size is 4M;
vgremove: delete a volume group
vgextend: expand a volume group's capacity by adding new PVs to it
vgextend VolumeGroupName PhysicalDevicePath [PhysicalDevicePath...]
vgreduce: reduce a volume group's capacity by removing PVs from it; before doing this, use pvmove to make sure no PEs on the PV being removed are still in use;
vgreduce VolumeGroupName PhysicalDevicePath [PhysicalDevicePath...]
vgdisplay: display detailed information about volume groups
vgs: display brief information about volume groups
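A short sketch of growing and then shrinking a volume group (myvg, /dev/sdb1, /dev/sdc1, and /dev/sdd1 are assumed names):
vgcreate -s 8M myvg /dev/sdb1 /dev/sdc1
vgextend myvg /dev/sdd1     # add a new PV to the group
pvmove /dev/sdb1            # relocate its PEs to the other PVs in myvg
vgreduce myvg /dev/sdb1     # then remove the now-empty PV
vgs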
Management operations for logical volumes (LV):
lvcreate: create a logical volume
-L LV_SIZE (#{kK|mM|gG}): specifies the size of the logical volume, which cannot exceed the capacity of the volume group;
-l #%{FREE|VG|ORIGIN|PVS}: specifies the size of the logical volume as a percentage of the corresponding storage unit;
-n LV_NAME: specifies the name of the logical volume
-i #: create the logical volume striped, with # stripes on the logical volume
-I #: create the logical volume striped, specifying the stripe (chunk) size;
lvremove: remove a logical volume
lvdisplay: display detailed information about logical volumes
lvs: display brief information about logical volumes
lvchange: change the state of a logical volume
-ay: activate the logical volume
-an: deactivate the logical volume
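For example (reusing the assumed volume group myvg):
lvcreate -L 10G -n mylv myvg
lvs
lvchange -an /dev/myvg/mylv    # deactivate the volume
lvchange -ay /dev/myvg/mylv    # activate it again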
lvextend: extend the space of a logical volume
Note: always extend the physical boundary of the logical volume before extending its logical boundary; with the ext family of file systems, the resize2fs command extends the logical boundary;
1) Extend the physical boundary of the logical volume:
lvextend -L [+]SIZE /PATH/TO/LVM
If SIZE has a +: the logical volume grows by SIZE on top of its original capacity
If SIZE has no +: the logical volume is expanded to a total capacity of SIZE
2) Extend the logical boundary of the logical volume:
e2fsck /PATH/TO/LVM
resize2fs [-f] /PATH/TO/LVM
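A concrete sketch of growing an ext4 logical volume by 2G (the device and mount-point names are the assumed ones from above):
umount /mnt/mylv                 # optional; an ext file system can also be grown while mounted
lvextend -L +2G /dev/myvg/mylv   # extend the physical boundary first
e2fsck -f /dev/myvg/mylv
resize2fs /dev/myvg/mylv         # extend the logical boundary to fill the new space
mount /dev/myvg/mylv /mnt/mylv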
lvreduce: reduce the space of a logical volume
Note: reduce the logical boundary of the logical volume before reducing its physical boundary; with the ext family of file systems, the resize2fs command reduces the logical boundary
lvreduce -L [-]SIZE /PATH/TO/LVM
If SIZE has a -: the logical volume shrinks by SIZE from its original capacity
If SIZE has no -: the capacity of the logical volume is reduced directly to SIZE
umount /PATH/TO/LVM
e2fsck -f /PATH/TO/LVM    # force a check of data consistency
resize2fs -f /PATH/TO/LVM LV_SIZE    # shrink the logical boundary; LV_SIZE is the new, smaller size
lvchange -an /PATH/TO/LVM    # deactivate the logical volume
lvreduce -L [-]SIZE /PATH/TO/LVM    # shrink the physical boundary to match the logical boundary
lvchange -ay /PATH/TO/LVM    # reactivate the logical volume
mount /PATH/TO/LVM /PATH/TO/LVM_MOUNT_POINT
For more convenient use of logical volumes, the system creates two symbolic link files for each /dev/dm-# device:
/dev/mapper/VG_NAME-LV_NAME --> ../dm-#
/dev/VG_NAME/LV_NAME --> ../dm-#
Third, logical volume snapshots:
1. What is a snapshot?
The snapshot itself is also a logical volume: it is another access path to the target logical volume. It is a special logical volume that is an exact copy of the original logical volume as it existed at the moment the snapshot was created. Snapshots are the most appropriate choice when you need a temporary copy of an existing dataset, or for other operations that require backup or replication. A snapshot consumes space only where it differs from the original logical volume. A certain amount of space is allocated to the snapshot when it is created, but that space is used only when the original logical volume or the snapshot changes: when the original logical volume changes, the old data is copied into the snapshot. As a result, the snapshot contains only the data that has changed on the original logical volume, or that has been changed in the snapshot itself, since the snapshot was created. If needed, lvextend can be used to grow the snapshot volume.
A snapshot records the state of the system at that point in time; if any data changes later, the original data is moved into the snapshot area, while unchanged areas are shared between the snapshot area and the file system.
Because the snapshot area and the original LV share many PE chunks, the snapshot must be in the same VG as the LV being snapshotted. When restoring, the amount of changed data cannot exceed the actual capacity of the snapshot area.
Snapshot logical volumes:
lvcreate -L SNAPSHOT_SIZE -s -p r -n SNAPSHOT_NAME /PATH/TO/ORIGIN_LVM
-L SIZE: specifies the size of the snapshot logical volume
-s: create a snapshot logical volume
-p r: create the logical volume with read-only permission
-n SNAPSHOT_NAME: specifies the name of the snapshot logical volume
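A small sketch of taking a read-only snapshot and using it for a backup (myvg, mylv, mylv-snap, and the paths are assumed names):
lvcreate -L 1G -s -p r -n mylv-snap /dev/myvg/mylv
mkdir -p /mnt/snap
mount -o ro /dev/myvg/mylv-snap /mnt/snap
tar -czf /tmp/mylv-backup.tar.gz -C /mnt/snap .
umount /mnt/snap
lvremove /dev/myvg/mylv-snap    # drop the snapshot once the backup is done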
Linux system administration: storage management