Improve Linux operating system performance


This article explains in detail how to tune Linux system performance in the areas of disk, file and file system, memory, and compilation. The tuning methods described here can improve the performance of a Linux system to varying degrees, whether it is a server or a desktop client.


Linux is a high-performance, stable, and flexible operating system. In recent years, many large software companies around the world have launched a variety of Linux servers and Linux applications. Linux can now compete with traditional commercial operating systems and occupies a significant share of the server market. Linux server systems are diverse: a system can serve as a Web server, mail server, FTP server, file server, database server, and so on. For different systems and specific application environments, Linux performance can be tuned accordingly. The following sections improve Linux system performance from the angles of disk tuning, the file system, memory management, and compilation optimization.

I. Optimized partitioning
When you first install a Linux system, you should already consider how to get the best performance out of it. In Linux, we are free to organize disk partitions, and an optimized partitioning strategy can improve system performance, reduce disk fragmentation, and improve disk I/O capability.
From the characteristics of a disk we know that the outer cylinders pass under the read/write head at a higher linear velocity and hold more sectors per track, so with each rotation the head covers more data; the outer cylinders therefore deliver better performance. Consequently, when partitioning, partitions that are accessed frequently and have a large impact on system performance should be placed toward the outer part of the disk. Also, to reduce disk fragmentation, directories whose contents change constantly should be placed on separate partitions. From the standpoint of convenient backups, since many backup tools are more efficient at backing up an entire partition, each of the main directories of the Linux system should get its own file system. Why not partition the whole disk when the system is installed? Hard disks today are large, and the files you install plus the space they grow into later may never need the entire disk. By keeping a portion of the disk unpartitioned, you can repartition it later with fdisk when new needs arise. When partitioning, estimate the size of each partition from the system's expected future workload or from past experience, so that you do not run out of space later.
If your system has multiple disk drives, consider using multiple swap partitions, one on each disk. By setting the pri option in the /etc/fstab file, the swap partitions can all be given the same priority, and the Linux system will then use them in parallel, improving swap performance.
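For example, with one swap partition on each of two disks, the /etc/fstab entries might look like this (the device names here are illustrative); giving both the same pri value lets the kernel stripe swap activity across the two disks:

```
/dev/hda2  swap  swap  defaults,pri=1  0 0
/dev/hdc2  swap  swap  defaults,pri=1  0 0
```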
Of course, if your system has enough memory and your workload never comes close to using it all, virtual memory will hardly be touched, and you could consider doing without a swap partition. As a Linux server, however, you should set up a swap partition even if memory is large.

II. Using hdparm to improve Linux system performance
If your Linux system runs on IDE hard disks, you can use the hdparm tool to improve disk I/O performance. Be careful with hdparm, however, because the data on your hard disk can be corrupted. Before using hdparm, read your hard drive's manual carefully and use the hdparm switches that match your drive's specifications. For an UltraATA/66 EIDE hard disk whose controller chip supports multiple PIO modes and DMA, we use the following command to tune disk performance:

# /sbin/hdparm -c3 -m16 -d1 -X66 -u1 /dev/hda

Option description:
-c3: switches the drive's I/O support to 32-bit mode with sync (32-bit w/sync), controlling how data is passed from the PCI bus to the controller.
-m16: changes the drive's multi-sector read count; -m16 lets the disk transfer 16 sectors of data per I/O interrupt (the supported value depends on the specific drive).
-d1: turns on DMA mode.
-X66: on drives with UDMA support, selects the UDMA transfer mode (the -X value is 64 plus the UDMA mode number, so 66 selects UDMA mode 2).
-u1: allows Linux to unmask other interrupts while handling a disk interrupt, so tasks tied to other interrupts can still be serviced.

To view the above changes, use: # /sbin/hdparm /dev/hda
To test disk I/O performance, use: # /sbin/hdparm -tT /dev/hda
If disk performance has improved, save the settings with: # /sbin/hdparm -k1 /dev/hda

III. Using software RAID under Linux
RAID (Redundant Array of Independent Disks) is a technology that enhances disk performance and reliability by distributing data across multiple disks. If your system has no hardware RAID controller, RAID can be implemented in software under Linux. There are many RAID levels; different levels place different requirements on the hardware and deliver different performance and reliability. RAID0 stripes data: blocks are written to the disks alternately, which gives the best read and write performance but provides no data redundancy. RAID1 mirrors disks: everything written to disk 1 is also written to disk 2, and data can be read from either disk. RAID3 stripes data as well, but dedicates one disk drive to storing parity information.
Implementing RAID in software under Linux requires RAID support in the kernel, which you can add by compiling a new kernel. You then need to compile and install the raidtools package; raidtools is a set of user-level tools that can initialize, start, stop, and control RAID arrays. Here is how we implement RAID0 on Linux kernel 2.4 with two IDE drives.

1. Create a partition

RAID0 requires at least two partitions located on different disks, and it is best if the two RAID0 partitions are the same size. When creating the partitions, set the partition type to "fd" so that the Linux kernel can recognize them as RAID partitions and automatically detect and start them at every boot. If you do not mark the RAID partitions this way, you must run "raidstart --all" after every boot before the RAID array can be mounted. We make our RAID0 array from the two partitions hda5 and hdc5.

2. Edit the /etc/raidtab file
Create the /etc/raidtab file to describe the configuration of the RAID array. The Linux kernel uses this information to detect and start the RAID array automatically at boot, so this configuration must be written for every RAID array you create. The configuration that makes partitions hda5 and hdc5 into the RAID0 array md0 is as follows:

raiddev /dev/md0
raid-level 0
nr-raid-disks 2
persistent-superblock 1
chunk-size 32
device /dev/hda5
raid-disk 0
device /dev/hdc5
raid-disk 1

In the raidtab file, the "raiddev" entry names the RAID array to be created; "nr-raid-disks" specifies the number of disks in the array; setting "persistent-superblock" to 1 has the RAID tools write a superblock describing the array onto each member partition, so the kernel can autodetect it; "chunk-size" specifies the stripe size used by RAID0, in kilobytes; finally, each "device"/"raid-disk" pair names a partition that makes up the array.

3. Run mkraid and create a file system
Use the command "# mkraid /dev/md0" to initialize /dev/md0; the md0 RAID0 array is then started. Next, you can create the file system you want on md0. We use the ReiserFS journaling file system on our Linux server; the creation command is "# mkreiserfs /dev/md0".
You can then mount the newly created RAID0-based file system like any other file system.
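The steps above can be collected into one sketch. Only the raidtab generation actually runs here (written to ./raidtab for illustration; on a real system the file belongs at /etc/raidtab), while the steps that require the physical disks are shown as comments:

```shell
# Generate the RAID0 configuration described above (illustrative path).
cat > ./raidtab <<'EOF'
raiddev /dev/md0
raid-level 0
nr-raid-disks 2
persistent-superblock 1
chunk-size 32
device /dev/hda5
raid-disk 0
device /dev/hdc5
raid-disk 1
EOF

# On the real system, as root, with the file at /etc/raidtab:
#   mkraid /dev/md0            # initialize and start the array
#   mkreiserfs /dev/md0        # create a ReiserFS file system on it
#   mount /dev/md0 /mnt/raid   # mount it like any other file system

grep -c '^device' ./raidtab    # -> 2 member devices
```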


IV. Tuning disk I/O with elvtune
In later versions of Linux kernel 2.4, the disk I/O elevator can be tuned to trade disk I/O response time against throughput. By adjusting the maximum time I/O requests may wait in the queue, you tune between latency and throughput: demanding shorter response times reduces throughput, while allowing longer response times achieves greater throughput. You can change the maximum latency values with the tool /sbin/elvtune. Usage is as follows:

To view the current settings: # /sbin/elvtune /dev/hda1
To modify the current configuration: # /sbin/elvtune -r 2000 -w 4000 /dev/hda1
(The -r parameter applies to read operations, the -w parameter to write operations.)
In addition, the command "iostat -d -x /dev/hda1" reports averages (including average request size and average queue length) that you can use to monitor the effect of the I/O settings above and adjust them for optimal performance. Generally speaking, for a Linux server that reads and writes frequently but in small amounts and has strict real-time requirements, decrease the parameters; conversely, for a Linux server that reads and writes infrequently but needs high throughput, increase the parameters to obtain greater throughput.

V. File and file system tuning
1. Block size
When you create a file system, you can specify the block size. If your file system will mostly hold relatively large files, a larger block size gives better performance. Raising the block size of an ext2 file system to 4096 bytes instead of the default 1024 bytes reduces file fragmentation and speeds up fsck scans as well as file deletion and read operations. In addition, ext2 reserves 5% of the space for root; on a large file system, unless it is used for log files, that 5% is somewhat excessive. The command "# mke2fs -b 4096 -m 1 /dev/hda6" creates a file system with a 4096-byte block size and reduces the reserved space to 1%.
Which block size to use depends on your system. If the system serves as a mail or news server, a large block size improves performance but wastes a great deal of disk space. For example, if the files on a file system average 2145 bytes, a 4096-byte block size wastes 1951 bytes per file, while a 1024-byte block size wastes an average of 927 bytes per file. How to balance performance against disk cost depends on the needs of the specific application.
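The waste figures above can be reproduced with a short calculation (a sketch; the 2145-byte average file size is the example value from the text, and the helper name is ours):

```shell
# Wasted space per file = (blocks needed * block size) - file size,
# where the block count is rounded up to whole blocks.
waste() {
  awk -v f="$1" -v b="$2" 'BEGIN {
    blocks = int((f + b - 1) / b)   # blocks needed, rounded up
    print blocks * b - f            # wasted bytes per file
  }'
}

waste 2145 1024   # -> 927
waste 2145 4096   # -> 1951
```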

2. Do not use the atime property
When files are created, modified, and accessed, the Linux system records timestamps for them. Recording the time a file was last read imposes a lot of overhead when the system reads files frequently. Therefore, to improve system performance, we can avoid updating a file's atime property when reading the file. This is done with the noatime option when mounting the file system: with noatime, reading a file does not update the atime information in the file's properties. The significance of noatime is that it eliminates the file system's write operations for files the system merely reads; since writes consume more system resources than reads, this setting can significantly improve server performance. Note that the mtime information remains valid: whenever a file is written, that information is still updated.

For example, to set the noatime option for the /home file system on your system, modify the corresponding line of /etc/fstab as follows:

LABEL=/home  /home  ext2  noatime  1 2

To make this setting take effect immediately, run "# mount -o remount /home". The system will then no longer modify the atime property when reading files under /home.

3. Adjust the buffer refresh parameters

The Linux kernel contains parameters that can be set to adjust the system's operating state. The buffer refresh parameters are tuned through the /proc/sys/vm/bdflush file, whose format is:

# cat /proc/sys/vm/bdflush
30 64 64 256 500 3000 60 0 0

Each column is a parameter, the most important being the first few. The first number is the percentage of dirty buffers at which the bdflush process is forcibly woken to flush to disk; the second number is how many dirty blocks bdflush writes back each time it runs (a dirty block is a cache block that must be written to disk); the next parameter is the number of memory blocks bdflush is allowed to add to the free buffer list each time. The values above are the defaults in Red Hat Linux 7.1. They can be modified in either of two ways:
(1) # echo "100 128 128 512 5000 3000 60 0 0" > /proc/sys/vm/bdflush, and add this command to the /etc/rc.d/rc.local file.
(2) Add the following line to the /etc/sysctl.conf file: vm.bdflush = 100 128 128 512 5000 3000 60 0 0

These settings increase the buffer size and reduce how often bdflush is started, which also increases the risk of data loss if the system crashes. The VFS cache is one of the important reasons Linux file systems are efficient; if performance really matters to you, consider tuning this parameter.


4. Adjust the number of file handles and i-nodes

On a large Web server, the default Linux limit on the number of files open at the same time may not meet the system's needs. We can raise the system's default limits by adjusting the number of file handles and i-nodes. Different Linux kernel versions are tuned in different ways.

In Linux kernel 2.2.x this can be modified with the following commands:
# echo "8192" > /proc/sys/fs/file-max
# echo "32768" > /proc/sys/fs/inode-max

Add these commands to the /etc/rc.d/rc.local file so that the values are configured each time the system restarts.

In Linux kernel 2.4.x the source code must be modified and the kernel recompiled before the change takes effect. Edit include/linux/fs.h in the Linux kernel source, changing NR_FILE from 8192 to 65536 and NR_RESERVED_FILES from 10 to 128; then edit fs/inode.c, changing MAX_INODE from 16384 to 262144.

In general, a reasonable maximum number of open files is 256 for every 4 MB of physical memory; for example, with 256 MB of memory it can be set to 16384. The maximum number of i-nodes should be 3 to 4 times the maximum number of open files.
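This rule of thumb can be written as a small helper (a sketch; the function name is ours, not part of any tool):

```shell
# file-max: 256 handles per 4 MB of physical RAM;
# inode-max should then be 3-4x the file-max value.
filemax() { awk -v mb="$1" 'BEGIN { print mb / 4 * 256 }'; }

filemax 256    # -> 16384, matching the 256 MB example above
filemax 512    # -> 32768
```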

5. Using a memory file system

In Linux, a portion of memory can be used as a disk partition, which we call a ramdisk. For files that are frequently accessed but never changed, placing them in memory via a ramdisk can significantly improve system performance; of course, your memory must be big enough. There are two kinds of ramdisk. One, supported on Linux kernels 2.0/2.2, can be formatted and mounted, but has the disadvantage of a fixed size. The other, supported by kernel 2.4 and implemented through ramfs or tmpfs, cannot be formatted but is flexible: its size grows or shrinks with the space required. Here we mainly introduce ramfs and tmpfs.
Ramfs, as the name implies, is a memory file system; it works at the virtual file system (VFS) layer. It cannot be formatted, multiple instances can be created, and the maximum amount of memory each may use can be specified at creation time. If ramfs is compiled into your kernel, using it is easy: create a directory and mount ramfs on it.
# mkdir -p /ram1
# mount -t ramfs none /ram1

By default, ramfs is limited to using half of total memory. This can be changed with the maxsize option (in KB):
# mkdir -p /ram1
# mount -t ramfs none /ram1 -o maxsize=10000

This creates a ramdisk whose memory use is limited to roughly 10 MB.

Tmpfs is a virtual memory file system. It differs from a traditional ramdisk, which is implemented as a block device, and also from ramfs, which targets physical memory only: tmpfs can use both physical memory and swap. In the Linux kernel, virtual memory resources consist of physical memory (RAM) and swap partitions, allocated and managed by the kernel's virtual memory subsystem. Tmpfs "deals" with the virtual memory subsystem: it requests pages from it to store files, just as other parts of Linux do, and it does not know whether a page allocated to it lives in memory or in swap. Like ramfs, tmpfs is not fixed in size but grows and shrinks dynamically with the space required. To use tmpfs, first select "Virtual memory file system support" when compiling the kernel; then you can mount the tmpfs file system:
# mkdir -p /mnt/tmpfs
# mount tmpfs /mnt/tmpfs -t tmpfs

To prevent tmpfs from using so much memory that the system slows down or crashes, you can specify a maximum size for the tmpfs file system at mount time:

# mount tmpfs /mnt/tmpfs -t tmpfs -o size=32m


The tmpfs file system created above has a specified maximum size of 32 MB. Whether you use ramfs or tmpfs, keep in mind that their contents are lost once the system restarts, so decide what to place in a memory file system according to the system's specific circumstances.
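To make such a tmpfs mount persist across reboots, an /etc/fstab line can be used instead of the manual mount command (the mount point and size here are the illustrative values from the example above):

```
tmpfs  /mnt/tmpfs  tmpfs  size=32m  0 0
```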

6. Using a journaling file system

If a Linux system shuts down uncleanly due to an unexpected event, the metadata of some files in the file system (information about the file, such as permissions, owner, and times of creation and access) may be damaged. The file system must maintain each file's metadata to guarantee that files stay organized and accessible; if the metadata is in an unreasonable or inconsistent state, files cannot be located or accessed. When the system restarts, fsck scans all of the file systems listed in /etc/fstab to ensure their metadata is in a usable state; if it finds inconsistencies, it examines the metadata and corrects the errors. For a large file system, this process can take a long time. A journaling file system solves this problem by tracking changes to disk content in a separate journal, recording the file's metadata as well as writing the file's contents. Each time a file's metadata is modified, a corresponding entry must first be registered in the data structure called the journal. The journaling file system thus maintains a record of the most recently changed metadata, and when it is mounted after an error, it does not scan the metadata of the entire file system but only checks the recently changed metadata against the journal. Compared with a traditional file system such as ext2, a journaling file system therefore greatly reduces scan and check time.

There are many journaling file systems available under Linux, such as XFS, JFS, ReiserFS, and ext3. They are designed primarily to provide excellent performance and high availability in server environments, though Linux workstations and home machines can also benefit from a reliable, high-performance journaling file system. Installing one typically requires downloading the appropriate archive, patching the kernel, and reconfiguring and recompiling it; detailed installation procedures can be found on each file system's official website.

VI. Other tuning
1. Tuning buffermem

The file buffer cache is closely tied to the kernel virtual memory subsystem. The file /proc/sys/vm/buffermem controls how much memory is used for buffers (expressed as percentages). The default in kernel 2.4 is "2 10 60". You can modify it as follows:

# echo "70 10 60" > /proc/sys/vm/buffermem

and add the command to the script file /etc/rc.d/rc.local. Alternatively, add the line "vm.buffermem = 70 10 60" to the /etc/sysctl.conf file.

The first parameter, 70, means that at least 70% of memory is allocated as buffer memory; the latter two parameters keep the system's default values. How large to set the first parameter depends on the system's memory size and its memory use under high load (which can be monitored with free).

2. Process limits

For each user, Linux limits the maximum number of processes. To improve performance, you can make the maximum number of processes for the superuser root unlimited: edit the .bashrc file (vi /root/.bashrc) and add the line "ulimit -u unlimited" to remove the superuser's process limit.
Other limits that the kernel and system impose on user processes can also be viewed and changed with the ulimit command. "ulimit -a" displays all current limits on user processes. Some examples of changing user limits:

ulimit -n 4096  raises the number of files each process may open to 4096 (the default is 1024).

ulimit -m 4096  limits the amount of memory each process may use.

3. Optimizing GCC Compilation

Place the optimization flags in the /etc/profile file. On Pentium III processors, the following optimization flags produce the best applications:

CFLAGS="-O9 -funroll-loops -ffast-math -malign-double -mcpu=pentiumpro -march=pentiumpro -fomit-frame-pointer -fno-exceptions"

Then add CFLAGS to the export line in /etc/profile:

export PATH PS1 HOSTNAME HISTSIZE HISTFILESIZE USER LOGNAME MAIL INPUTRC CFLAGS LANG LESSCHARSET


With the above optimizations, programs compiled with gcc or egcs will achieve the best performance.

4. Optimizing kernel compilation

Edit the /usr/src/linux/Makefile to compile a kernel optimized for your specific CPU. The following parameter settings optimize kernel performance.

① vi +18 /usr/src/linux/Makefile: change HOSTCC = gcc to HOSTCC = egcs.
② vi +25 /usr/src/linux/Makefile: change
CC = $(CROSS_COMPILE)gcc -D__KERNEL__ -I$(HPATH)
to
CC = $(CROSS_COMPILE)egcs -D__KERNEL__ -I$(HPATH)
③ vi +90 /usr/src/linux/Makefile: change
CFLAGS = -Wall -Wstrict-prototypes -O2 -fomit-frame-pointer
to
CFLAGS = -Wall -Wstrict-prototypes -O9 -funroll-loops -ffast-math -malign-double -mcpu=pentiumpro -march=pentiumpro -fomit-frame-pointer -fno-exceptions
④ vi +19 /usr/src/linux/Makefile: change
HOSTCFLAGS = -Wall -Wstrict-prototypes -O2 -fomit-frame-pointer
to
HOSTCFLAGS = -Wall -Wstrict-prototypes -O9 -funroll-loops -ffast-math -malign-double -mcpu=pentiumpro -march=pentiumpro -fomit-frame-pointer -fno-exceptions

Recompiling the kernel with the Makefile modified as above yields better performance.

VII. Concluding remarks
Linux is a flexible and open system. Users can tune it for a specific application environment, from the system's periphery down to its kernel. Peripheral tuning covers the system's hardware configuration, system installation, and the optimization of system services; kernel tuning covers parameter changes and improvements to the system's source code. In tuning Linux systems used as DB2 database servers, we applied the tuning described in this article according to the characteristics of the DB2 database, along with network tuning, and in integrated testing the performance of the tuned systems improved greatly.
