Configuring XFS on CentOS 7 / RHEL 7
XFS is a highly scalable, high-performance file system and the default file system on RHEL 7/CentOS 7.
XFS supports metadata journaling, which enables faster recovery after a crash.
It also supports defragmentation and growing while mounted and active.
Through delayed allocation, XFS gains many opportunities to optimize write performance.
The xfsdump and xfsrestore utilities can be used to back up and restore XFS file systems.
xfsdump supports dump levels for incremental backups, and files can be excluded by size, by subtree, or by inode flags.
User, group, and project quotas are also supported.
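For example, a full dump followed by an incremental one might look like this (a sketch; the /backup paths and labels are hypothetical, and all three commands need root on a mounted XFS file system):

```shell
# Level-0 (full) dump of the XFS file system mounted at /xfsdata
# -l dump level, -L session label, -M media label, -f destination file
xfsdump -l 0 -L full -M media0 -f /backup/xfsdata.dump0 /xfsdata

# Level-1 dump: captures only changes since the last level-0 dump
xfsdump -l 1 -L incr -M media0 -f /backup/xfsdata.dump1 /xfsdata

# Restore the full dump, then apply the incremental the same way
xfsrestore -f /backup/xfsdata.dump0 /xfsdata
xfsrestore -f /backup/xfsdata.dump1 /xfsdata
```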
The following walks through creating an XFS file system, assigning quotas, and resizing it:
################################################################################
Partition /dev/sdb (2 GB) and set the LVM flag
[root@localhost zhongq]# parted /dev/sdb
GNU Parted 3.1
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mkpart primary 4 2048
(parted) set 1 lvm on
(parted) p
Model: VMware, VMware Virtual S (scsi)
Disk /dev/sdb: 2147MB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name     Flags
 1      4194kB  2048MB  2044MB               primary  lvm
################################################################################
Create PV
[root@localhost zhongq]# pvcreate /dev/sdb1
  Physical volume "/dev/sdb1" successfully created
[root@localhost zhongq]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               centos
  PV Size               24.51 GiB / not usable 3.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              6274
  Free PE               0
  Allocated PE          6274
  PV UUID

  "/dev/sdb1" is a new physical volume of "1.90 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdb1
  VG Name
  PV Size               1.90 GiB
  Allocatable           NO
  PE Size               0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               bu7yIH-1440-BPy1-APG2-FpvX-ejLS-2MIlA8
################################################################################
Allocate /dev/sdb1 to a VG named xfsgroup00
[root@localhost zhongq]# vgcreate xfsgroup00 /dev/sdb1
  Volume group "xfsgroup00" successfully created
[root@localhost zhongq]# vgdisplay
  --- Volume group ---
  VG Name               centos
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               24.51 GiB
  PE Size               4.00 MiB
  Total PE              6274
  Alloc PE / Size       6274 / 24.51 GiB
  Free  PE / Size       0 / 0
  VG UUID               t3Ryyg-R0rn-2i5r-7L5o-AZKG-yFkh-CDzhKm

  --- Volume group ---
  VG Name               xfsgroup00
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1.90 GiB
  PE Size               4.00 MiB
  Total PE              487
  Alloc PE / Size       0 / 0
  Free  PE / Size       487 / 1.90 GiB
  VG UUID               ejuwcc-sVES-MWWB-3Mup-n1wB-Kd0g-u7jm0H
################################################################################
Run the lvcreate command to create an LV named xfsdata with a size of 1 GB in the xfsgroup00 VG.
[root@localhost zhongq]# lvcreate -L 1024M -n xfsdata xfsgroup00
WARNING: xfs signature detected on /dev/xfsgroup00/xfsdata at offset 0. Wipe it? [y/n] y
  Wiping xfs signature on /dev/xfsgroup00/xfsdata.
  Logical volume "xfsdata" created
[root@localhost zhongq]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/centos/swap
  LV Name                swap
  VG Name                centos
  LV UUID                EnW3at-KlFG-XGaQ-DOoH-cGPP-8pSf-teSVbh
  LV Write Access        read/write
  LV Creation host, time localhost, 20:15:25 +0800
  LV Status              available
  # open                 2
  LV Size                2.03 GiB
  Current LE             520
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:0

  --- Logical volume ---
  LV Path                /dev/centos/root
  LV Name                root
  VG Name                centos
  LV UUID                zmZGkv-Ln4W-B8AY-oDnD-BEk2-6VWL-L0cZOv
  LV Write Access        read/write
  LV Creation host, time localhost, 20:15:26 +0800
  LV Status              available
  # open                 1
  LV Size                22.48 GiB
  Current LE             5754
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:1

  --- Logical volume ---
  LV Path                /dev/xfsgroup00/xfsdata
  LV Name                xfsdata
  VG Name                xfsgroup00
  LV UUID                O4yvoY-XGcD-0zPm-eilR-3JJP-updU-rRCSlJ
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 15:50:19 +0800
  LV Status              available
  # open                 0
  LV Size                1.00 GiB
  Current LE             256
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:3
################################################################################
Format the partition as an XFS file system.
Note: after an XFS file system is created, it cannot be shrunk, but it can be grown with xfs_growfs.
[root@localhost zhongq]# mkfs.xfs /dev/xfsgroup00/xfsdata
meta-data=/dev/xfsgroup00/xfsdata isize=256    agcount=4, agsize=65536 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0
data     =                       bsize=4096   blocks=262144, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
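A quick sanity check on the figures above: 262144 blocks at bsize=4096 is exactly the 1 GiB of the LV (shell arithmetic only, using numbers from the mkfs.xfs output):

```shell
# 262144 file-system blocks x 4096 bytes per block
bytes=$((262144 * 4096))
echo $((bytes / 1024 / 1024 / 1024))   # size in GiB -> 1
```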
################################################################################
Mount the XFS partition on a directory, enabling user and group quotas with the uquota and gquota mount options.
[root@localhost zhongq]# mkdir /xfsdata
[root@localhost zhongq]# mount -o uquota,gquota /dev/xfsgroup00/xfsdata /xfsdata
[root@localhost zhongq]# chmod 777 /xfsdata
[root@localhost zhongq]# mount | grep xfsdata
/dev/mapper/xfsgroup00-xfsdata on /xfsdata type xfs (rw,relatime,attr2,inode64,usrquota,grpquota)
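To keep the quota options across reboots, the same mount can be declared in /etc/fstab (a sketch matching the device and mount point used above; field order is device, mount point, type, options, dump, fsck):

```
/dev/xfsgroup00/xfsdata  /xfsdata  xfs  defaults,uquota,gquota  0 0
```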
################################################################################
Run the xfs_quota command to view quota information, assign a quota to a user, and verify that the limit is enforced.
[root@localhost zhongq]# xfs_quota -x -c 'report' /xfsdata
User quota on /xfsdata (/dev/mapper/xfsgroup00-xfsdata)
                               Blocks
User ID          Used       Soft       Hard    Warn/Grace
---------- --------------------------------------------------
root                0          0          0     00 [--------]

Group quota on /xfsdata (/dev/mapper/xfsgroup00-xfsdata)
                               Blocks
Group ID         Used       Soft       Hard    Warn/Grace
---------- --------------------------------------------------
root                0          0          0     00 [--------]

[root@localhost zhongq]# xfs_quota -x -c 'limit bsoft=100M bhard=120M zhongq' /xfsdata
[root@localhost zhongq]# xfs_quota -x -c 'report' /xfsdata
User quota on /xfsdata (/dev/mapper/xfsgroup00-xfsdata)
                               Blocks
User ID          Used       Soft       Hard    Warn/Grace
---------- --------------------------------------------------
root                0          0          0     00 [--------]
zhongq              0     102400     122880     00 [--------]

Group quota on /xfsdata (/dev/mapper/xfsgroup00-xfsdata)
                               Blocks
Group ID         Used       Soft       Hard    Warn/Grace
---------- --------------------------------------------------
root                0          0          0     00 [--------]

[root@localhost zhongq]# su zhongq
[zhongq@localhost ~]$ dd if=/dev/zero of=/xfsdata/zq00 bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 28.9833 s, 3.6 MB/s
[zhongq@localhost ~]$ dd if=/dev/zero of=/xfsdata/zq01 bs=1M count=100
dd: error writing '/xfsdata/zq01': Disk quota exceeded
21+0 records in
20+0 records out
20971520 bytes (21 MB) copied, 4.18921 s, 5.0 MB/s
[zhongq@localhost ~]$ exit
[root@localhost zhongq]# xfs_quota
xfs_quota> help
df [-bir] [-hn] [-f file] -- show free and used counts for blocks and inodes
help [command] -- help for one or all commands
print -- list known mount points and projects
quit -- exit the program
quota [-bir] [-gpu] [-hnNv] [-f file] [id|name]... -- show usage and limits
Use 'help commandname' for extended help.
xfs_quota> print
Filesystem          Pathname
/                   /dev/mapper/centos-root
/boot               /dev/sda1
/var/lib/docker     /dev/mapper/centos-root
/xfsdata            /dev/mapper/xfsgroup00-xfsdata (uquota, gquota)
xfs_quota> quota -u zhongq
Disk quotas for User zhongq (1000)
Filesystem                       Blocks   Quota   Limit Warn/Time     Mounted on
/dev/mapper/xfsgroup00-xfsdata   122880  102400  122880  00 [6 days]  /xfsdata
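The Soft and Hard columns are reported in 1 KiB blocks, which is why bsoft=100M and bhard=120M show up as 102400 and 122880 (shell arithmetic only):

```shell
# xfs_quota reports block limits in 1 KiB units
soft_blocks=$((100 * 1024))   # bsoft=100M -> 102400
hard_blocks=$((120 * 1024))   # bhard=120M -> 122880
echo "$soft_blocks $hard_blocks"
```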
################################################################################
Use the lvextend command to extend the LV to 1.5 GB (the initial size is 1 GB), then use xfs_growfs to grow the XFS file system (here the new size is given as a block count).
[root@localhost zhongq]# lvextend -L 1.5G /dev/xfsgroup00/xfsdata
  Extending logical volume xfsdata to 1.50 GiB
  Logical volume xfsdata successfully resized
[root@localhost zhongq]# xfs_growfs /dev/xfsgroup00/xfsdata -D 393216
meta-data=/dev/mapper/xfsgroup00-xfsdata isize=256    agcount=4, agsize=65536 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0
data     =                       bsize=4096   blocks=262144, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 262144 to 393216
[root@localhost zhongq]# df -h | grep xfsdata
/dev/mapper/xfsgroup00-xfsdata  1.5G  153M  1.4G  10% /xfsdata
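The -D argument to xfs_growfs is the new size in file-system blocks; with bsize=4096, 1.5 GiB works out to the 393216 used above (shell arithmetic only):

```shell
# 1.5 GiB expressed in 4096-byte file-system blocks -> value for xfs_growfs -D
target_bytes=$((3 * 1024 * 1024 * 1024 / 2))   # 1.5 GiB
echo $((target_bytes / 4096))   # -> 393216
```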