In Windows, disk fragmentation is a common problem; if you do not pay attention to it, system performance degrades over time. Linux uses the second extended file system (ext2), which handles file storage in a completely different way. Because Linux does not show the fragmentation problems familiar from Windows systems, many people assume disk fragmentation is not an issue at all. However, this is incorrect.
All file systems tend to become fragmented over time. Linux file systems reduce fragmentation, but they do not eliminate it. Because fragmentation builds up slowly, it may never be a problem on a single-user workstation, but on a busy server it gradually reduces hard disk performance. The effect is only noticeable when data is actually read from or written to the disk. The following are some specific measures for optimizing hard disk performance under Linux.
1. Clean Up Disks
This method sounds simple: clean up the disk drives, delete unneeded files, and move files that must be kept but are not actively used off the system, for example to archive storage. If possible, remove unnecessary directories and reduce the number of subdirectories. These suggestions seem obvious, but you would be surprised how much junk accumulates on every disk. Freeing disk space helps the system work better.
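As a rough illustration (the paths are only examples, and the +100M size syntax assumes GNU find), commands like the following can help locate the largest directories and long-unused files:
# Show how much space each home directory consumes, largest last
du -sk /home/* | sort -n
# Find files larger than about 100 MB anywhere on the system
find / -type f -size +100M 2>/dev/null
# Find files under /tmp that have not been accessed for 90 days
find /tmp -type f -atime +90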
2. Defragment Disks
Defragmentation on Linux is different from defragmentation on Windows 98 or Windows NT. Windows 98 introduced the FAT32 file system, although a Windows 98 system does not have to be converted to FAT32; Windows NT can use either FAT or its enhanced NTFS file system. All of these file systems handle file storage in essentially the same way.
In Linux, the best way to eliminate disk fragmentation is to make a full backup, reformat the partition, and restore the files from the backup. When the files are restored, they are written to contiguous blocks and are no longer fragmented. This is a big job. It is probably unnecessary for partitions that rarely change, such as /usr, but it can work wonders on the /home partition of a multi-user system. It takes about as long as defragmenting a disk on a Windows NT server.
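A minimal sketch of this procedure for a /home partition follows; the device name /dev/hda6 and the backup location are assumptions that must be adapted to your system, and the partition must not be in use:
# Back up the partition to an archive on another disk
tar -czf /backup/home.tar.gz /home
umount /home
# Recreate the ext2 file system on the (assumed) device
mke2fs /dev/hda6
mount /dev/hda6 /home
# Restore the files; they are now written to contiguous blocks
cd / && tar -xzpf /backup/home.tar.gz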
If disk performance is still unsatisfactory, there are many other steps to consider, but any solution that involves upgrading or purchasing new hardware can be expensive.
3. Upgrade from IDE to SCSI
If your hard disk is an IDE drive, you can upgrade to a SCSI drive for better overall performance. Because the IDE controller relies on the CPU to manage transfers, CPU- and disk-intensive operations can become very slow; a SCSI controller handles reads and writes without going through the CPU. While an IDE drive is reading or writing, users may complain that the system is slow, because CPU cycles are being consumed by the IDE drive.
Get faster controllers and disk drives
Standard SCSI controllers cannot read and write data faster than standard IDE controllers, but some very fast "UltraWide" SCSI controllers can make a real leap in read and write speed.
EIDE and UDMA controllers are very fast IDE controllers, and newer UDMA controllers can approach SCSI controller speeds. The headline speed of a UDMA controller is its burst speed; sustained transfer rates are considerably slower. IDE controllers, including UDMA, are embedded in the drive itself, so you do not need to buy a separate controller: buying a UDMA drive gives you both the controller and UDMA performance.
The rotational speed of the disk itself is often overlooked. Disk speed is measured in rpm (revolutions per minute): the higher the rpm, the faster the disk. If your budget allows, most server vendors can supply 7500 rpm or even 10000 rpm SCSI disks; standard SCSI and IDE disks run at 5400 rpm.
4. Use multiple controllers
IDE and SCSI disks can be chained. An IDE chain holds at most two devices, and a standard SCSI chain holds up to seven. If two or more SCSI disks are present in the system, they are probably attached to the same controller. This is sufficient for most uses, especially when the computer is a single-user workstation. On a server, however, you can improve performance by giving each SCSI drive its own controller. Of course, good controllers are expensive.
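To see how SCSI devices are currently distributed across controllers, the kernel's proc interface can be consulted (on older kernels; the exact output format varies):
# Lists each attached SCSI device with its host (controller), channel, and ID
cat /proc/scsi/scsi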
5. Adjust Hard Disk Parameters
The hdparm tool can be used to tune IDE hard disk performance, and it is particularly useful with UDMA drives. By default, Linux accesses IDE drives in the safest but slowest mode; the default mode does not use UDMA and does not deliver the fastest possible performance.
Using hdparm can significantly improve performance by activating the following features:
◆ 32-bit I/O support (the default is 16-bit);
◆ Multiple sector access (the default is a single sector transfer per interrupt).
Note: Before using hdparm, make sure the system has a complete backup. If something goes wrong, changing IDE parameters with hdparm can cause the loss of all data on the drive.
hdparm can provide a large amount of information about hard disks. Open a terminal window and enter the following command to obtain information about the first IDE drive in the system (change the device name to obtain information about other IDE drives):
hdparm -v /dev/hda
The above command shows the settings read from the drive when the system starts, including whether the drive uses multiple sector access (multcount) and whether it operates in 16-bit or 32-bit mode (I/O support). You can use the -i parameter to display more detailed information about the disk drive.
hdparm can also test the drive transfer rate. Enter the following command to test the first IDE drive in the system:
hdparm -Tt /dev/hda
This test measures the speed of reads directly from the drive and of reads from the buffer cache; the result is an optimistic "best case" number. To change the drive settings and activate 32-bit transfers, enter the following command:
hdparm -c3 /dev/hda
The -c3 parameter activates 32-bit support; you can cancel it with -c0. The -c1 parameter also activates 32-bit support and uses less memory overhead, but it does not work with many drives.
Most newer IDE drives support multiple sector transfers, but Linux defaults to single sector transfers. Note: on some drives, activating multiple sector transfers can cause complete file system corruption; this problem occurs mostly on older drives. Enter the following command to activate multiple sector transfers:
hdparm -m16 /dev/hda
The -m16 parameter activates 16-sector transfers. Apart from Western Digital drives, most drives perform best with 16 or 32 sectors. Western Digital drives have small buffers, and performance drops significantly with values greater than 8; for Western Digital drives, a setting of 4 is most appropriate.
Activating multiple sector access can reduce CPU load by 30% to 50% and increase the data transfer rate by up to 50%. You can cancel multiple sector transfers with the -m0 parameter. hdparm has many other options for configuring the drive that are not covered here.
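These settings are lost at reboot, so once you have found values that are safe on your drive, it is common to reapply them from a startup script. A minimal sketch, assuming /dev/hda and an rc.local-style script (the script's path varies by distribution):
# In /etc/rc.d/rc.local (assumed location): reapply tuned IDE settings at boot
hdparm -c3 -m16 /dev/hda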
6. Use software RAID
RAID (Redundant Array of Inexpensive Disks) can also improve disk performance and capacity. Linux supports both software RAID and hardware RAID. Software RAID is built into the Linux kernel and costs far less than hardware RAID: its only cost is the disks you add to the system. However, software RAID cannot match the performance of hardware RAID, which uses specially designed hardware to control multiple disks. Hardware RAID can be expensive, but the performance improvement matches the price. The basic idea of RAID is to combine several small, inexpensive disk drives into an array that provides the same performance level as a single large drive in a large computer. To the computer, a RAID array looks like a single drive, but it can work in parallel: reads and writes proceed simultaneously along the array's parallel data paths.
IBM initiated a study at the University of California that produced the initial definition of the RAID levels. Six RAID levels are now defined, as described below.
RAID 0: Level 0 is simple data striping. Data is split across more than one drive, which yields higher throughput. This is the fastest and most efficient form of RAID. However, there is no mirroring at this level, so the failure of any disk in the array causes the loss of all data.
RAID 1: Level 1 is full disk mirroring. Two complete copies of the data are created and maintained on independent disks. A level 1 array is faster to read from than a single drive, and no data is lost if either drive fails. This is the most expensive RAID level, because every disk requires a second disk as its mirror. It provides the best data security.
RAID 2: Level 2 is intended for drives without built-in error detection. Because all SCSI drives support embedded error detection, this level is obsolete and essentially useless. Linux does not use it.
RAID 3: Level 3 is striping with a dedicated parity disk. Parity information is stored on a separate drive, which allows recovery from the failure of any single drive. Linux does not support this level.
RAID 4: Level 4 is striping with large blocks and a dedicated parity disk. The parity information means that data on any single failed disk can be recovered. Read performance of a level 4 array is very good, but writes are relatively slow because the parity data must be updated on every write.
RAID 5: Level 5 is similar to level 4, but the parity information is distributed across all the drives, which improves write speed. The cost per megabyte is the same as level 4, it combines good random-access performance with a high level of data protection, and it is the most widely used RAID level.
The software RAID discussed here is level 0 striping, which makes multiple hard disks appear as one disk but is much faster than any single disk because the drives are accessed in parallel. Software RAID can use IDE or SCSI controllers, or any combination of disks.
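A hedged sketch of a level 0 software RAID setup on a current kernel using the mdadm tool follows; the device names and mount point are assumptions, and older systems used the raidtools package with /etc/raidtab instead:
# Combine two partitions into one striped (level 0) array
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/hda1 /dev/hdc1
# Create a file system on the array and mount it
mke2fs /dev/md0
mkdir -p /mnt/raid
mount /dev/md0 /mnt/raid
# Check the status of the array
cat /proc/mdstat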
7. Configure Kernel Parameters
Adjusting system kernel parameters can sometimes yield obvious performance improvements. If you decide to do this, be careful: changes to kernel parameters may optimize the system, or they may cause it to crash.
Note: Do not change kernel parameters on a system that is in use, because of the risk of a crash. Perform tests on a machine that nobody else is using, and verify that everything works normally before applying the changes elsewhere.
Tweak memory performance
In Linux, system memory settings can be tuned. If you encounter out-of-memory errors, or if the system is used for network services, you can adjust the memory allocation settings.
Memory is generally allocated in 4 KB pages. Adjusting the "free pages" settings can significantly improve performance. Open a terminal window and enter the following command to view the system's current settings:
cat /proc/sys/vm/freepages
This returns three numbers, for example:
128 256 384
These are the minimum, low, and high free-page settings. The values are determined at boot time: the minimum setting is twice the amount of system memory in megabytes, the low setting is four times, and the high setting is six times (the sample values above correspond to a 64 MB system). Free memory may never fall below the minimum number of free pages.
If the number of free pages falls below the high setting, swapping begins (disk space is used as swap); when it falls below the low setting, intensive swapping begins.
Increasing the high free-page setting can sometimes improve overall performance. You can adjust it with the echo command. Using the sample values above, enter this command to raise the high setting to 1024 pages:
echo "128 256 1024" > /proc/sys/vm/freepages
Note: Test this setting while the system is not otherwise in use, and monitor system performance during any adjustment. That way you can determine which setting works best for the system.
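One simple way to monitor the effect, as a sketch using standard tools, is to watch memory and swap activity while the system runs its normal workload:
# Report current free memory and swap usage
free
# Sample virtual memory statistics every 5 seconds; watch the si/so (swap in/out) columns
vmstat 5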