Yesterday I backed up the data on one machine to another, and transferring roughly 100 GB took almost the whole night.
It feels like the hard drive in the source machine is wearing out; its read/write performance has degraded badly.
In recent years I have rarely needed to think about hard-disk read/write optimization when writing software, so today I searched online for some material, read through it, and learned a few things. The topics I looked into:
How do I quickly reserve a large chunk of space on disk for a file?
When writing a file to disk, how do I ensure it is physically contiguous, i.e. free of fragmentation?
Disk read/write optimization for monitoring-type systems
Preventing disk fragmentation to improve storage performance
How to avoid disk fragmentation while reading and writing files
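On the first question (reserving a large chunk of space up front), one common approach is to ask the filesystem to allocate all the blocks before writing any data, which gives the allocator a chance to pick a contiguous extent. A minimal sketch in Python, assuming a POSIX system with `os.posix_fallocate` available (on platforms without it, `truncate` extends the file size but may leave it sparse, so no blocks are actually reserved):

```python
import os
import tempfile

size = 100 * 1024 * 1024  # reserve 100 MB up front

path = os.path.join(tempfile.gettempdir(), "prealloc_demo.bin")
with open(path, "wb") as f:
    try:
        # Ask the filesystem to allocate real blocks now,
        # ideally as one contiguous extent.
        os.posix_fallocate(f.fileno(), 0, size)
    except (AttributeError, OSError):
        # Portable fallback: extends the logical file size,
        # but the file may be sparse (no blocks reserved yet).
        f.truncate(size)

print(os.path.getsize(path) == size)  # → True
```

On Windows, the analogous technique is `SetFilePointerEx` followed by `SetEndOfFile` (and `SetFileValidData` to avoid zero-filling), but that requires the Win32 API rather than portable Python.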
I once wrote a piece of software responsible for storing and displaying images. When a user uploads a picture, saving it also generates a small thumbnail of about 4 KB.
I used to keep the originals and the thumbnails in the same hard-drive partition, and uploaded pictures were frequently deleted.
Uploaded originals are generally 1 KB to 2 MB, while thumbnails are usually under 4 KB. If originals and thumbnails live together, then after a period of writes and deletes a lot of fragmentation builds up.
The large images should be stored in one partition, say D:, and the thumbnails, which are all roughly the same size, in another, say E:.
That arrangement should work noticeably better.
As for the case where multiple threads are each writing their own file, how do you avoid generating lots of file fragments? Can anyone tell me?
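One partial answer I have seen suggested: have each thread preallocate the full file size before writing, so concurrent writers do not grow their files in small interleaved chunks that the allocator scatters across the disk. A hedged sketch (the file names, sizes, and directory here are made up for illustration; `os.posix_fallocate` is POSIX-only, with a `truncate` fallback):

```python
import os
import tempfile
import threading

DATA = b"\xab" * (256 * 1024)  # 256 KB of dummy image data

def write_file(path: str, data: bytes) -> None:
    with open(path, "wb") as f:
        try:
            # Reserve the whole file first so the allocator can
            # choose one extent, instead of extending the file a
            # block at a time while other threads do the same.
            os.posix_fallocate(f.fileno(), 0, len(data))
        except (AttributeError, OSError):
            f.truncate(len(data))
        f.seek(0)
        f.write(data)

tmp = tempfile.mkdtemp()
threads = [
    threading.Thread(target=write_file,
                     args=(os.path.join(tmp, f"img{i}.bin"), DATA))
    for i in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(os.listdir(tmp)))  # → ['img0.bin', 'img1.bin', 'img2.bin', 'img3.bin']
```

Whether this actually yields contiguous files depends on the filesystem; the preallocation is a hint, not a guarantee. The other common approach is to funnel all writes through a single writer thread, trading concurrency for sequential allocation.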
2014-03-06
Some Thoughts on HDD Performance & File Fragmentation