Reference: XFS File System Usage Summary, http://blog.chinaunix.net/uid-522675-id-4665059.html
1.3 Common XFS commands
xfs_admin: tune parameters of an XFS filesystem
xfs_copy: copy the contents of an XFS filesystem to one or more targets in parallel
xfs_db: debug or examine an XFS filesystem (view fragmentation, etc.)
xfs_check: check the consistency of an XFS filesystem
xfs_bmap: print the block mapping for a file
xfs_repair: attempt to repair a damaged XFS filesystem
xfs_fsr: defragment files on an XFS filesystem
xfs_quota: manage disk quotas on XFS filesystems
xfs_metadump: copy XFS filesystem metadata to a file
xfs_mdrestore: restore XFS metadata from a file back to a filesystem image
xfs_growfs: resize an XFS filesystem (grow only)
xfs_freeze: suspend (-f) and resume (-u) access to an XFS filesystem
xfs_logprint: print the log of an XFS filesystem
xfs_mkfile: create a preallocated file on an XFS filesystem
xfs_info: display XFS filesystem geometry details
xfs_ncheck: generate pathnames from inode numbers on XFS
xfs_rtcp: XFS real-time copy command
xfs_io: debug the XFS I/O path
2.2 Calculating block usage
We want to put MySQL on /dev/sda3, but how can we ensure it is aligned with the RAID stripes? It takes a small amount of math:
Start with your RAID stripe size. Let's use 64KB, a common default: 64KB = 2^16 = 65536 bytes.
Get your sector size from fdisk. In this case it is 512 bytes.
Calculate how many sectors fit in a RAID stripe: 65536 / 512 = 128 sectors per stripe.
Get the start sector of our MySQL partition from fdisk: 27344896.
See whether the start of the partition falls on a stripe boundary by dividing the start sector by the sectors per stripe: 27344896 / 128 = 213632. This is a whole number, so we are good. If there had been a remainder, the partition would not start on a RAID stripe boundary.
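The arithmetic above can be sketched as a small shell check; the values below are the ones from this example, so substitute your own stripe size and fdisk output:

```shell
# RAID stripe alignment check for a partition (example values from the text).
STRIPE_BYTES=65536      # 64KB RAID stripe size
SECTOR_BYTES=512        # sector size reported by fdisk
START_SECTOR=27344896   # partition start sector reported by fdisk

SECTORS_PER_STRIPE=$(( STRIPE_BYTES / SECTOR_BYTES ))
REMAINDER=$(( START_SECTOR % SECTORS_PER_STRIPE ))

if [ "$REMAINDER" -eq 0 ]; then
    echo "aligned: sector $START_SECTOR is on a stripe boundary"
else
    echo "NOT aligned: off by $REMAINDER sectors"
fi
```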
Create the Filesystem
XFS requires a little massaging (or a lot). For a standard server, it's fairly simple. We need to know two things:
RAID stripe size
Number of unique, utilized disks in the RAID. This turns out to follow the same formulas given above:
RAID 1+0: a set of mirrored drives, so the number here is num_drives / 2.
RAID 5: striped drives plus one drive's worth of parity, so the number here is num_drives - 1.
In our case, it's RAID 1+0 with a 64KB stripe and 8 drives. Since each drive has a mirror, there are really 4 sets of unique drives striped over the top. Using these numbers, we set the 'su' and 'sw' options of mkfs.xfs to those values respectively.
2.3 Format File System
Summarizing the example above, run: mkfs.xfs -d su=64k,sw=4 /dev/sda3
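Note that mkfs.xfs can also take the same alignment as sunit/swidth, expressed in 512-byte units rather than bytes. A quick sketch of the conversion from the su/sw values used above:

```shell
# Convert su (stripe unit in bytes) and sw (number of striped members)
# to sunit/swidth, which mkfs.xfs expresses in 512-byte units.
SU_BYTES=65536   # su=64k
SW=4             # sw=4 unique striped members
SUNIT=$(( SU_BYTES / 512 ))
SWIDTH=$(( SUNIT * SW ))
echo "mkfs.xfs -d sunit=$SUNIT,swidth=$SWIDTH /dev/sda3"
```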
3. Creating an XFS file system
3.1 Default method
# mkfs.xfs /dev/sdc1
meta-data=/dev/sdc1    isize=256    agcount=18, agsize=1048576 blks
data     =             bsize=4096   blocks=17921788, imaxpct=25
         =             sunit=0      swidth=0 blks, unwritten=0
naming   =version 2    bsize=4096
log      =internal log bsize=4096   blocks=2187, version=1
         =             sunit=0 blks
realtime =none         extsz=65536  blocks=0, rtextents=0
3.2 Specifying block and internal log size
# mkfs.xfs -b size=1k -l size=10m /dev/sdc1
meta-data=/dev/sdc1    isize=256    agcount=18, agsize=4194304 blks
data     =             bsize=1024   blocks=71687152, imaxpct=25
         =             sunit=0      swidth=0 blks, unwritten=0
naming   =version 2    bsize=4096
log      =internal log bsize=1024   blocks=10240, version=1
         =             sunit=0 blks
realtime =none         extsz=65536  blocks=0, rtextents=0
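The log size in the output follows directly from the options: a 10MB log divided by the 1KB block size gives the blocks=10240 shown above. A quick check of the arithmetic:

```shell
# -l size=10m with -b size=1k => internal log of 10240 one-kilobyte blocks
LOG_BYTES=$(( 10 * 1024 * 1024 ))
BSIZE=1024
LOG_BLOCKS=$(( LOG_BYTES / BSIZE ))
echo "log blocks: $LOG_BLOCKS"
```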
3.3 Using a logical volume as an external log
# mkfs.xfs -l logdev=/dev/sdh,size=65536b /dev/sdc1
meta-data=/dev/sdc1    isize=256    agcount=4, agsize=76433916 blks
         =             sectsz=512   attr=2
data     =             bsize=4096   blocks=305735663, imaxpct=5
         =             sunit=0      swidth=0 blks
naming   =version 2    bsize=4096   ascii-ci=0
log      =/dev/sdh     bsize=4096   blocks=65536, version=2
         =             sectsz=512   sunit=0 blks, lazy-count=1
realtime =none         extsz=4096   blocks=0, rtextents=0
3.4 Directory blocks
# mkfs.xfs -b size=2k -n size=4k /dev/sdc1
meta-data=/dev/sdc1    isize=256    agcount=4, agsize=152867832 blks
         =             sectsz=512   attr=2
data     =             bsize=2048   blocks=611471327, imaxpct=5
         =             sunit=0      swidth=0 blks
naming   =version 2    bsize=4096   ascii-ci=0
log      =internal log bsize=2048   blocks=298569, version=2
         =             sectsz=512   sunit=0 blks, lazy-count=1
realtime =none         extsz=4096   blocks=0, rtextents=0
3.5 Extending the file system
Growing the filesystem leaves the existing files unchanged; the added space simply becomes available for additional file storage.
XVM supports growing XFS filesystems.
# xfs_growfs /mnt
meta-data=/mnt         isize=256    agcount=30, agsize=262144 blks
data     =             bsize=4096   blocks=7680000, imaxpct=25
         =             sunit=0      swidth=0 blks, unwritten=0
naming   =version 2    bsize=4096
log      =internal     bsize=4096   blocks=1200 version=1
         =             sunit=0 blks
realtime =none         extsz=65536  blocks=0, rtextents=0
data blocks changed from 7680000 to 17921788
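The block counts reported by xfs_growfs can be converted into a human-readable size using the bsize=4096 shown in the output; a minimal sketch using this example's numbers:

```shell
# Convert xfs_growfs block counts (bsize=4096) to approximate GiB.
OLD_BLOCKS=7680000
NEW_BLOCKS=17921788
BSIZE=4096
OLD_GIB=$(( OLD_BLOCKS * BSIZE / 1024 / 1024 / 1024 ))
NEW_GIB=$(( NEW_BLOCKS * BSIZE / 1024 / 1024 / 1024 ))
echo "grown from about ${OLD_GIB} GiB to about ${NEW_GIB} GiB"
```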
4. File system Maintenance
4.1 Defragmentation
View a file's extent map: xfs_bmap -v file.tar.bz2
View filesystem fragmentation: xfs_db -c frag -r /dev/sda1
Defragment: xfs_fsr /dev/sda1
Note the distinction between a mount point and a device name.
Using a mount point:
# xfs_info /root
meta-data=/dev/mapper/centos-root isize=256    agcount=4, agsize=3110656 blks
         =                        sectsz=512   attr=2, projid32bit=1
         =                        crc=0        finobt=0
data     =                        bsize=4096   blocks=12442624, imaxpct=25
         =                        sunit=0      swidth=0 blks
naming   =version 2               bsize=4096   ascii-ci=0 ftype=0
log      =internal                bsize=4096   blocks=6075, version=2
         =                        sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                    extsz=4096   blocks=0, rtextents=0
Using a device name (further output omitted):
# xfs_logprint /dev/mapper/centos-root | more
# xfs_bmap /var/log/messages
/var/log/messages:
0: [0..119]: 6304..6423
1: [120..127]: 6440..6447
2: [128..135]: 6464..6471
# xfs_bmap /var/log/secure
/var/log/secure:
0: [0..7]: 6424..6431
1: [8..15]: 6456..6463
2: [16..23]: 6592..6599
# xfs_bmap -v /var/log/messages
/var/log/messages:
EXT: FILE-OFFSET BLOCK-RANGE AG AG-OFFSET TOTAL
0: [0..119]: 6304..6423 0 (6304..6423) 120
1: [120..127]: 6440..6447 0 (6440..6447) 8
2: [128..135]: 6464..6471 0 (6464..6471) 8
# xfs_db -c frag -r /dev/xvda1
Actual 326, ideal 324, fragmentation factor 0.61%
# xfs_db -c frag -r /dev/xvda2
xfs_db: /dev/xvda2 is not a valid XFS filesystem (unexpected SB magic number 0x00000000)
Use -f to force a read attempt.
Because /dev/xvda2 is an LVM physical volume (PV), it does not directly contain a filesystem.
# xfs_db -c frag -r /dev/mapper/centos-root
Actual 20226, ideal 20092, fragmentation factor 0.66%
# xfs_db -c frag -r /dev/centos/root
Actual 20239, ideal 20103, fragmentation factor 0.67%
# xfs_db -c frag -r /dev/dm-0
Actual 20239, ideal 20103, fragmentation factor 0.67%
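The fragmentation factor reported by xfs_db is (actual - ideal) / actual extents, expressed as a percentage; recomputing it from the last output above reproduces the 0.67% figure:

```shell
# fragmentation factor = (actual - ideal) / actual * 100
ACTUAL=20239   # actual extent count reported by xfs_db
IDEAL=20103    # ideal extent count reported by xfs_db
FACTOR=$(awk -v a="$ACTUAL" -v i="$IDEAL" 'BEGIN { printf "%.2f", (a - i) / a * 100 }')
echo "fragmentation factor ${FACTOR}%"
```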