Linux does not require disk defragmentation!

Source: Internet
Author: User

The following content reproduced from: http://forum.ubuntu.org.cn/viewtopic.php?t=27451


The idea of defragmentation comes mainly from two kinds of friends: those influenced by Windows, and those who have some understanding of operating-system internals.

Let me briefly explain some of the issues here.

All operating systems produce disk fragmentation, which is why some friends have doubts. The fragmentation meant here is what the official documentation above calls internal fragmentation. It works like this: suppose a disk has 20 KB of space, its basic allocation unit is the cluster, and it holds two files, one 7 KB and one 1 KB. With 4 KB clusters, the disk is divided into 5 clusters; the two files occupy 3 clusters (the 7 KB file needs 2, the 1 KB file needs 1), that is, 12 KB, of which 4 KB is wasted. That 4 KB is internal fragmentation. So internal fragmentation is primarily wasted disk space. Note that Windows disk defragmentation does not deal with this kind of fragmentation and cannot operate on it; the fragmentation it addresses is external fragmentation.

So, can internal fragmentation be reduced? The answer is yes. In the example above, if each cluster is 2 KB instead, the 20 KB disk is divided into 10 clusters, and the 7 KB and 1 KB files occupy 5 clusters, that is, 10 KB; the wasted space, the internal fragmentation, shrinks to 2 KB.
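The arithmetic above can be checked with a minimal sketch. The function name and interface are my own for illustration; the numbers reproduce the article's 20 KB disk example:

```python
import math

def internal_fragmentation(file_sizes_k, cluster_k, disk_k):
    """Compute (total clusters, clusters used, KB used, KB wasted).

    Each file occupies a whole number of clusters, so the unused tail
    of a file's last cluster is wasted: internal fragmentation.
    """
    clusters_used = sum(math.ceil(size / cluster_k) for size in file_sizes_k)
    space_used_k = clusters_used * cluster_k
    wasted_k = space_used_k - sum(file_sizes_k)
    total_clusters = disk_k // cluster_k
    return total_clusters, clusters_used, space_used_k, wasted_k

# The article's example: a 20 KB disk holding a 7 KB file and a 1 KB file.
print(internal_fragmentation([7, 1], cluster_k=4, disk_k=20))  # (5, 3, 12, 4)
print(internal_fragmentation([7, 1], cluster_k=2, disk_k=20))  # (10, 5, 10, 2)
```

Halving the cluster size halves the waste in this example, exactly as the text describes.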

This shows that the smaller the cluster, the less space is wasted, and this is one place where NTFS beats FAT32. With the FAT32 file system of Windows 2000, the default cluster size is 4 KB for partitions of 2 GB to 8 GB, 8 KB for 8 GB to 16 GB, and 16 KB for 16 GB to 32 GB. With the NTFS file system of Windows 2000, partitions below 2 GB get clusters smaller than the corresponding FAT32 clusters, and partitions above 2 GB (2 GB to 2 TB) get 4 KB clusters. By contrast with FAT32, NTFS can therefore manage disk space more efficiently and minimize waste.
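The size brackets above can be written down as a small lookup, purely as an illustration of the article's figures (the real Windows formatter has more brackets than are listed here; the function names are my own):

```python
def fat32_default_cluster_kb(partition_gb):
    """Default FAT32 cluster size on Windows 2000, per the figures above."""
    if 2 <= partition_gb <= 8:
        return 4
    if 8 < partition_gb <= 16:
        return 8
    if 16 < partition_gb <= 32:
        return 16
    raise ValueError("outside the range discussed in the article")

def ntfs_default_cluster_kb(partition_gb):
    """Default NTFS cluster size for 2 GB to 2 TB partitions, per the text."""
    if 2 <= partition_gb <= 2048:
        return 4
    raise ValueError("outside the range discussed in the article")

# A 10 GB partition: FAT32 wastes up to 8 KB per file, NTFS up to 4 KB.
print(fat32_default_cluster_kb(10), ntfs_default_cluster_kb(10))  # 8 4
```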

Some friends will think one step further: then why doesn't the file system just make the clusters very small? This leads to another problem: file lookup. When we look for a file, we go through the file allocation table. Think of the cluster holding a file as the place where you live: to find you, one must first look up your address, and accessing a file likewise means looking up the file allocation table. If a great many people live in an area, there are a great many addresses, and finding yours takes longer. In the same way, the smaller the cluster, the more cluster addresses must be recorded, and the more time it takes to locate the cluster where a file resides. With 4 KB clusters our example disk has 5 cluster addresses; with 2 KB clusters it has 10. So the cluster size is the result of a trade-off between space and time.
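The trade-off scales up quickly on real disks. As a rough sketch (using the fact that a FAT32 table entry is 32 bits; the function name is my own), the allocation table a driver must search grows in proportion to the number of clusters:

```python
FAT32_ENTRY_BYTES = 4  # each FAT32 table entry is 32 bits

def fat_table_bytes(disk_bytes, cluster_bytes):
    """Size of the allocation table needed to track every cluster."""
    entries = disk_bytes // cluster_bytes
    return entries * FAT32_ENTRY_BYTES

gib = 1024 ** 3
# A 32 GiB disk: quartering the cluster size quadruples the table.
print(fat_table_bytes(32 * gib, 16 * 1024))  # 8 MiB of table
print(fat_table_bytes(32 * gib, 4 * 1024))   # 32 MiB of table
```

Smaller clusters waste less space per file but cost more metadata and more lookup work, which is exactly the balance the text describes.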

This also hints at another issue: some third-party partitioning software lets you customize the cluster size. Stick with the recommended default; custom values can cause problems in some situations.

Some friends ask a further question: if NTFS clusters are generally smaller than FAT32 clusters, why is the access speed about the same? This involves the file access mechanism, among other things. I won't go into it here; in fact I cannot explain it completely myself. Interested friends can read some operating-system material, which should answer the question to some extent.

OK, let's get to our main point: Linux does not need defragmentation.

The Windows notion of fragmentation, which the Linux official documentation above calls external fragmentation, is the kind of fragmentation that affects performance. (This is called "external fragmentation" or simply "fragmentation" and is a common MS-DOS file system problem.) Linux does not typically produce this kind of fragmentation. External disk fragmentation would more accurately be called file fragmentation, because the pieces of a file are dispersed to different parts of the disk instead of being stored in a contiguous run of clusters.

When applications need more physical memory than is available, a typical operating system creates a temporary swap file on the hard disk and uses the space that file occupies as virtual memory. The virtual memory manager reads and writes the hard disk frequently, producing a large amount of fragmentation; this is the main cause of hard disk fragmentation.

Other things, such as the temporary files and temporary file directories generated while browsing in the IE browser, can also leave large amounts of fragmentation on the system. A little file fragmentation generally causes no problems, but too much forces the disk head to seek back and forth while reading files, degrading system performance and shortening the hard disk's life. In severe cases, excessive disk fragmentation can even lead to the loss of stored files.

What this describes is how Windows produces external fragmentation, and it is actually tied to the data structure the file system uses. FAT uses a chain-style (linked-list) structure to record the clusters a file occupies. The benefit of this approach is that it lets files grow dynamically; the drawback is fragmentation, which makes the head move frequently when reading and writing files. A CD-ROM, being read-only, has no problem of data growth, so it records data contiguously and has no fragmentation at all; the Linux ext family of file systems has some similarities with CD-ROM storage in this respect.
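The chain structure can be sketched with a toy model. The table below is entirely hypothetical (a real FAT uses numeric end-of-chain markers, not `None`), but it shows why a fragmented file forces the head to jump around while a contiguous one reads straight through:

```python
def read_chain(fat, start):
    """Follow a FAT-style cluster chain and return the clusters in order."""
    clusters = []
    cluster = start
    while cluster is not None:
        clusters.append(cluster)
        cluster = fat[cluster]  # each entry points at the file's next cluster
    return clusters

# Hypothetical layout: file A is contiguous, file B is fragmented.
fat = {2: 3, 3: 4, 4: None,   # file A: clusters 2 -> 3 -> 4
       5: 9, 9: 7, 7: None}   # file B: clusters 5 -> 9 -> 7
print(read_chain(fat, 2))  # [2, 3, 4]  adjacent clusters, no extra seeking
print(read_chain(fat, 5))  # [5, 9, 7]  the head jumps back and forth
```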

The following article explains in a simple way why Linux does not need defragmentation and why Windows does:
From http://geekblog.oneandoneis2.org/index.php/2006/08/17/why_doesn_t_linux_need_defragmenting

Please note what the official information says: Linux file systems do not need defragmentation as long as the disk has 5% free space. (Linux native file systems do not need defragmentation under normal use, and this includes any condition with at least 5% free space on a disk.) In practice you will get a disk-space warning while roughly 8% of the disk is still free, so defragmentation need not be considered at all.

For friends who do defragment their disks under Windows, here are a couple of friendly tips.

1. When defragmenting a disk, close all other applications, including the screen saver; it is best to set the virtual memory size to a fixed value, and do not read from or write to the disk during the process.

2. Keep the frequency of defragmentation moderate; defragmenting too often will shorten the disk's life. Partitions that are read and written frequently can be defragmented about once a week.

Finally, I want to leave you with some food for thought.

For those of you who want to defragment your disks under Linux, have you considered two facts?

First, Unix-like systems have been in production use for decades, yet no one has written defragmentation software for them, and even now no one in this forum has mentioned one. Compare this with viruses: Linux viruses are rare, yet we can still find plenty of Unix-like antivirus software; I can list at least three free antivirus programs off the top of my head.

Second, many Unix-like operating systems, in banking, telecommunications, the military and elsewhere, run for years without shutting down. Can you imagine them halting disk reads and writes for a few hours of defragmentation? And these machines do far more disk I/O than home machines.

