To create a file of a specified size, I normally use dd, which makes it easy. For example:
dd if=/dev/zero of=test bs=1M count=1000
This generates a 1000 MB test file filled with zeros (/dev/zero is an endless source of zero bytes).
But dd actually writes the data to disk, so the creation speed is limited by disk write throughput; producing a very large file this way is slow.
In some scenarios we only want the file system to believe a very large file exists, without actually writing it to disk. That can be done with:

dd if=/dev/zero of=test bs=1M count=0 seek=100000
The file created this way has a displayed size of 100000 MB in the file system, but it occupies no data blocks, so it is created almost instantly.
The seek option skips over the specified amount of space in the output file; combined with count=0, this extends the file to the target size without actually writing anything.
Of course, since nothing is actually written to disk, you can even create a 100 GB file on a drive with only 10 GB of capacity.
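You can confirm that no blocks are allocated by comparing the apparent size (ls -l) with the actual disk usage (du). A quick sketch (the filename sparse.img is just an example):

```shell
# Create a 1 GB file without writing any data blocks.
dd if=/dev/zero of=sparse.img bs=1M count=0 seek=1024

# Apparent size recorded in the file system: 1.0G
ls -lh sparse.img

# Actual disk usage: should be 0, since no blocks are allocated.
du -h sparse.img
```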
I remember from my Windows development days an API called SetEndOfFile, which sets the end of a file to the current file-pointer position and can be used to truncate or extend a file. It essentially operates directly on the file system's metadata, so extending a file with it does not require writing fill data. Linux has corresponding functions: truncate() and ftruncate().
Searching for tools that exploit this feature to create large files directly turns up two commands, fallocate and truncate, plus the seek extension of GNU dd:
# fallocate -l 10G bigfile
# truncate -s 10G bigfile
# dd of=bigfile bs=1 seek=10G count=0
The file system gives such files special treatment: they are called sparse files. truncate and dd's seek extend a file's size without allocating blocks; fallocate instead preallocates the space through the fallocate(2) system call, which is also nearly instant on file systems that support it. Either way, creating a huge file is now lightning fast, with no more waiting around.
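The difference is easy to see with du: a sparse file made by truncate occupies no blocks, while an equally sized file whose contents were actually written consumes real space. A small sketch (filenames are illustrative):

```shell
# Sparse: extends the file's size without allocating blocks.
truncate -s 100M sparse_file

# Dense: actually writes 100 MB of zeros to disk.
dd if=/dev/zero of=dense_file bs=1M count=100

ls -lh sparse_file dense_file   # both report an apparent size of 100M
du -h  sparse_file dense_file   # sparse_file should show 0; dense_file ~100M
```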