1. Set File system quotas
1.1. Requirements
Enforce, in the kernel and on a per-file-system basis, soft and hard limits on blocks or inodes, so that different policies can apply to different users or groups.
1.2. Initialization
Partition mount options: usrquota, grpquota
Initialize the quota database: quotacheck
1.3. Set quotas for users
Perform:
Enable or disable quotas: quotaon, quotaoff
Edit a quota interactively: edquota username
Set a quota directly from the shell: setquota username 4096 5120 0 0 /mountpoint
Copy a quota from a template user: edquota -p user1 user2
1.4. Reporting quota status
Check a single user: quota username
Overview of all quotas: repquota /mountpoint
Other tools: warnquota
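The reporting commands above produce tabular output. As a minimal sketch of processing it (the report lines below are a hypothetical hand-written sample; on a real system they would come from repquota /mountpoint), this finds users already over their soft block limit:

```shell
#!/bin/sh
# Hypothetical repquota-style report lines (columns: user, flags,
# blocks used, soft limit, hard limit). Real data: repquota /mountpoint
report='alice -- 3000 4096 5120
bob +- 4500 4096 5120
carol -- 100 4096 5120'

# warnquota mails users over quota; the same check can be done by hand:
over=$(printf '%s\n' "$report" | awk '$3 > $4 { print $1 }')
echo "$over"   # prints: bob
```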
2. RAID array
RAID:
Redundant Array of Inexpensive (or Independent) Disks
Multiple disks are combined into an "array" that provides better performance, redundancy, or both.
Advantages:
Improved I/O capability: disks are read and written in parallel;
Improved durability: achieved through disk redundancy;
Level: the different ways in which multiple disks are organized to work together;
How RAID is implemented: External disk array: provides RAID capability through an expansion card. Internal RAID: a RAID controller integrated on the motherboard, configured in the BIOS before the OS is installed. Software RAID: implemented by the operating system.
RAID levels:
RAID-0: striped volume (strip)
RAID-1: mirrored volume (mirror)
RAID-2, RAID-5, RAID-6, RAID-10, RAID-01, RAID-50
JBOD (Just a Bunch Of Disks): combines the space of multiple disks into one large contiguous space; available capacity: sum(S1, S2, ...)
Common levels: RAID-0, RAID-1, RAID-5, RAID-10, RAID-50, JBOD
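The usable capacity of these common levels follows simple arithmetic. A small sketch, assuming n equal-sized disks of s GB each (raid_capacity is a name invented here for illustration):

```shell
#!/bin/sh
# Rough usable-capacity rules of thumb, n disks of s GB each:
#   RAID-0 / JBOD = n*s   RAID-1 = s   RAID-5 = (n-1)*s
#   RAID-6 = (n-2)*s      RAID-10 = (n/2)*s
raid_capacity() {
  level=$1 n=$2 s=$3
  case $level in
    0|jbod) echo $(( n * s )) ;;
    1)      echo "$s" ;;
    5)      echo $(( (n - 1) * s )) ;;
    6)      echo $(( (n - 2) * s )) ;;
    10)     echo $(( n / 2 * s )) ;;
  esac
}

raid_capacity 5 4 500   # four 500 GB disks in RAID-5: prints 1500
```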
2.1. Software RAID implementation
mdadm: provides the management interface for software RAID; spare disks add redundancy. The kernel's md (multiple devices) RAID devices can be named /dev/md0, /dev/md1, /dev/md2, /dev/md3, and so on.
Syntax:
mdadm [mode] <raiddevice> [options] <component-devices>
Supported RAID levels:
LINEAR, RAID0, RAID1, RAID4, RAID5, RAID6, RAID10
Modes:
Create: -C
Assemble: -A
Monitor: -F
Manage: -f, -r, -a
<raiddevice>: /dev/md#
<component-devices>: any block device
-C: create mode
-n #: use # block devices to create this RAID
-l #: the RAID level to create
-a {yes|no}: automatically create the device file for the target RAID device
-c CHUNK_SIZE: the chunk (stripe unit) size
-x #: the number of spare disks
-D: display RAID details: mdadm -D /dev/md#
Manage mode:
-f: mark a disk as faulty
-a: add a disk
-r: remove a disk
Observe md status: cat /proc/mdstat
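In the /proc/mdstat output mentioned above, the kernel tags failed members with (F) and spares with (S). A minimal sketch of pulling the failed device names out (the mdstat text below is a hand-made sample, not taken from a live array):

```shell
#!/bin/sh
# Hypothetical /proc/mdstat contents; on a live system: cat /proc/mdstat
mdstat='Personalities : [raid5]
md0 : active raid5 sde1[3](S) sdd1[2] sdc1[1] sdb1[0](F)
      2097152 blocks level 5, 64k chunk, algorithm 2 [3/2] [_UU]'

# Members marked "(F)" are the ones to replace with mdadm -r / -a.
failed=$(printf '%s\n' "$mdstat" \
  | grep -o '[a-z0-9]*\[[0-9]*\](F)' | cut -d'[' -f1)
echo "$failed"   # prints: sdb1
```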
2.2. Soft RAID Configuration Example
Create and define a RAID device with mdadm: mdadm -C /dev/md0 -a yes -l 5 -n 3 -x 1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
Format the RAID device with a file system: mke2fs -j /dev/md0
Test the RAID device; check its status with mdadm: mdadm -D /dev/md0 (long form: mdadm --detail)
Add a new member: mdadm -G /dev/md0 -n 4 -a /dev/sdf1
2.3. Soft RAID Test and repair
Simulate a disk failure: mdadm /dev/md0 -f /dev/sda1
Remove the disk: mdadm /dev/md0 -r /dev/sda1
Repair the failure in the software RAID: replace the failed disk and power on, rebuild the partition, then re-add it: mdadm /dev/md0 -a /dev/sda1
Watch progress via mdadm, /proc/mdstat, and the system log.
2.4. Soft RAID Management
Build the configuration file: mdadm -D -s >> /etc/mdadm.conf
Stop a device: mdadm -S /dev/md0
Activate a device: mdadm -A -s /dev/md0
Force-start a device: mdadm -R /dev/md0
Delete RAID information from a member: mdadm --zero-superblock /dev/sdb1
3. Logical Volume Manager (LVM)
Role:
An abstraction layer that allows for easy operation of the volume, including the resizing of the file system;
Allows the file system to be re-organized between multiple physical devices;
Designate devices as physical volumes; create a volume group from one or more physical volumes. A physical volume is divided into fixed-size physical extents (physical extent, PE). Logical volumes are then created from the physical extents of a volume group, and a file system is created on a logical volume.
dm (device mapper): a kernel module that organizes one or more underlying block devices into one logical device;
Device name: /dev/dm-#
Soft links:
/dev/mapper/VG_NAME-LV_NAME, e.g. /dev/mapper/vol0-root
/dev/VG_NAME/LV_NAME, e.g. /dev/vol0/root
To dismantle LVM storage, remove things in order: first the LV, then the VG, and finally the PV.
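Because logical volumes are allocated in whole physical extents, a requested LV size is rounded up to a multiple of the PE size. A small sketch of that arithmetic (extents_needed is a name invented here; 4 MB is a common default PE size, settable with vgcreate -s):

```shell
#!/bin/sh
# Number of physical extents needed for an LV of lv_mb megabytes when
# the VG uses pe_mb-sized extents, rounding up to whole extents.
extents_needed() {
  lv_mb=$1 pe_mb=$2
  echo $(( (lv_mb + pe_mb - 1) / pe_mb ))
}

extents_needed 250 4   # a 250 MB request with 4 MB extents: prints 63
```

So a 250 MB request actually occupies 63 x 4 MB = 252 MB of the volume group.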
3.1. Management tools
PV management tools: display PV information: pvs (brief), pvdisplay (detailed); create a PV: pvcreate /dev/DEVICE
VG management tools: display volume groups: vgs, vgdisplay; create a volume group: vgcreate [-s #[kKmMgGtTpPeE]] VolumeGroupName PhysicalDevicePath [PhysicalDevicePath...]; manage volume groups: vgextend VolumeGroupName PhysicalDevicePath [PhysicalDevicePath...], vgreduce VolumeGroupName PhysicalDevicePath [PhysicalDevicePath...]; delete a volume group: run pvmove first, then vgremove.
LV management tools: display logical volumes: lvs, lvdisplay; create a logical volume: lvcreate -L #[mMgGtT] -n NAME VolumeGroup, lvcreate -l 60%VG -n mylv testvg, lvcreate -l 100%FREE -n yourlv testvg; delete a logical volume: lvremove /dev/VG_NAME/LV_NAME; resize a file system: fsadm [options] resize device [new_size[BKMGTEP]], resize2fs [-f] [-F] [-M] [-p] [-P] device [new_size]
3.2. Extending and reducing the logical volume implementation
To extend a logical volume:
lvextend -L [+]#[mMgGtT] /dev/VG_NAME/LV_NAME
resize2fs /dev/VG_NAME/LV_NAME
or, in one step: lvresize -r -l +100%FREE /dev/VG_NAME/LV_NAME
To shrink a logical volume:
umount /dev/VG_NAME/LV_NAME
e2fsck -f /dev/VG_NAME/LV_NAME
resize2fs /dev/VG_NAME/LV_NAME #[mMgGtT]
lvreduce -L [-]#[mMgGtT] /dev/VG_NAME/LV_NAME
mount the file system again
Migrating volume groups across hosts:
On the source machine:
1. In the old system, umount all logical volumes in the volume group.
2. Deactivate the volume group: vgchange -a n vg0 (verify with lvdisplay).
3. Export it: vgexport vg0 (check with pvscan and vgdisplay), then remove the old hard drive.
On the target machine:
4. Install the old hard drive in the new system and import the volume group: vgimport vg0.
5. Activate it: vgchange -ay vg0.
6. Mount the logical volumes in the volume group.
Example of creating a logical Volume:
Create a physical volume: pvcreate /dev/sda3
Assign the physical volume to a volume group: vgcreate vg0 /dev/sda3
Create a logical volume from the volume group: lvcreate -L 256M -n data vg0
Create a file system: mke2fs -j /dev/vg0/data
Mount it: mount /dev/vg0/data /mnt/data
3.3. Logical Volume Manager Snapshot
A snapshot is a special logical volume: an exact copy of another logical volume as it existed at the moment the snapshot was taken. Snapshots are the most suitable choice for temporary copies of an existing data set and for other operations that need a backup or a copy. A snapshot consumes space only where it differs from the original logical volume.
Space is reserved for the snapshot when it is created, but it is used only when the original logical volume or the snapshot changes: when the original logical volume changes, the old data is copied into the snapshot. The snapshot therefore contains only the data that has changed on the original logical volume, or that has been changed in the snapshot itself, since the snapshot was created. The snapshot volume typically needs only 15%-20% of the size of the original logical volume, and it can be enlarged later with lvextend.
A snapshot records the state of the system at a point in time: if data changes afterwards, the original data is moved into the snapshot area, while unchanged regions are shared between the snapshot area and the file system. Because the snapshot area shares so many PE chunks with the original LV, a snapshot must be in the same VG as the LV it tracks. When restoring, the amount of changed data must not exceed the actual capacity of the snapshot area.
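The 15%-20% sizing rule above is plain arithmetic; a small sketch (snapshot_size_mb is a helper name invented here for illustration):

```shell
#!/bin/sh
# Snapshot size per the 15%-20% rule of thumb: the snapshot only needs
# room for the blocks expected to change while it exists.
snapshot_size_mb() {
  origin_mb=$1 percent=$2
  echo $(( origin_mb * percent / 100 ))
}

snapshot_size_mb 10240 20   # 20% of a 10 GB origin: prints 2048
```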
Using LVM snapshots:
Create a snapshot of an existing logical volume: lvcreate -l 64 -s -n data-snapshot -p r /dev/vg0/data
Mount the snapshot: mkdir -p /mnt/snap ; mount -o ro /dev/vg0/data-snapshot /mnt/snap
Restore from the snapshot: umount /dev/vg0/data-snapshot ; umount /dev/vg0/data ; lvconvert --merge /dev/vg0/data-snapshot
Delete the snapshot: umount /mnt/snap ; lvremove /dev/vg0/data-snapshot
Linux System Management - Advanced File System Management