Post-read notes for tuning I/O performance

Source: Internet
Author: User

Tag: I/O

Tuning I/O performance, article: http://doc.opensuse.org/products/draft/SLES/SLES-tuning_sd_draft/cha.tuning.io.html


If you have better insights after reading the original article, please do not hesitate to give me some advice. Thank you!


This article uses SUSE Linux Enterprise Server as an example to explain Linux I/O scheduling policies and their tuning; the content also applies to other Linux distributions, such as CentOS and Ubuntu.

1. View the current I/O scheduling policy (CFQ is the default policy on most Linux distributions)

Command: cat /sys/block/sda/queue/scheduler

Result: noop anticipatory deadline [cfq] # on a CentOS system, the output lists the available scheduling policies; the policy currently in use is shown in square brackets
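As a sketch of this step, the following POSIX shell function (the function name and the optional sysfs-root argument are my own, added for illustration and testability) prints the scheduler line for every block device:

```shell
# show_schedulers [sysfs_root]
# Print "device: available schedulers" for every block device; the
# kernel marks the active policy with square brackets. The optional
# sysfs root (default /sys) exists only to make the function testable.
show_schedulers() {
  sysfs="${1:-/sys}"
  for f in "$sysfs"/block/*/queue/scheduler; do
    [ -r "$f" ] || continue
    dev=${f#"$sysfs"/block/}
    dev=${dev%/queue/scheduler}
    printf '%s: %s\n' "$dev" "$(cat "$f")"
  done
}
```

Running `show_schedulers` on a typical system prints lines such as `sda: noop anticipatory deadline [cfq]`.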

2. Change the current scheduling policy

A. Add the elevator=<scheduler> kernel parameter when the system starts.

B. Modify it directly at runtime: echo <scheduler> > /sys/block/<device>/queue/scheduler
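A minimal sketch of option B (the function name and sysfs-root argument are mine; writing to the real /sys requires root):

```shell
# set_scheduler device scheduler [sysfs_root]
# Write the scheduler name into the device's sysfs file and read it
# back. On a real kernel, the read-back shows the active policy in
# square brackets.
set_scheduler() {
  dev="$1"; sched="$2"; sysfs="${3:-/sys}"
  echo "$sched" > "$sysfs/block/$dev/queue/scheduler"
  cat "$sysfs/block/$dev/queue/scheduler"
}

# Example (as root): set_scheduler sda deadline
```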

3. Policy Optimization

Each policy has parameters that can be tuned. The parameter paths are largely the same, under the /sys/block/<device>/queue/iosched/ directory.

Command: echo <value> > /sys/block/<device>/queue/iosched/<tunable>
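Before echoing values, it is useful to see which tunables the active scheduler actually exposes. A small helper (my own naming, with an optional sysfs root for testability) can dump the iosched directory:

```shell
# list_iosched_tunables device [sysfs_root]
# Print "tunable = value" for each file under .../queue/iosched/.
list_iosched_tunables() {
  dev="$1"; sysfs="${2:-/sys}"
  for t in "$sysfs"/block/"$dev"/queue/iosched/*; do
    [ -f "$t" ] || continue
    printf '%s = %s\n' "$(basename "$t")" "$(cat "$t")"
  done
}
```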

4. Key Policy Tuning

A. CFQ (Completely Fair Queuing)

I. CFQ is a fairness-oriented algorithm: each thread gets a time slice in which to submit I/O requests, and different tasks can be assigned different I/O priorities (man 1 ionice).

II. Tunable Parameters

/sys/block/<device>/queue/iosched/slice_idle

# Even if a thread has no pending I/O requests, CFQ still idles for this long before switching to the next thread.

# On devices where head position does not matter, such as SSDs and multi-disk SANs (which incur no extra seek time), setting this parameter to 0 can significantly increase throughput.

/sys/block/<device>/queue/iosched/quantum
# Limits the number of requests the device processes simultaneously. The default value is 4.
# Increasing this value can improve performance, but the higher concurrency may also increase the latency of some I/O.

# To compensate, you can adjust /sys/block/<device>/queue/iosched/slice_async_rq (the default value is 2), which limits the number of asynchronous write requests per time slice.


/sys/block/<device>/queue/iosched/low_latency
# Setting this value to 1 helps in workloads with strict I/O latency requirements.
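Putting the CFQ tunables above together, here is an illustrative sketch. The device name, function name, and sysfs-root argument are placeholders, and the values other than slice_idle=0 and low_latency=1 are examples of my own, not recommendations from the article:

```shell
# tune_cfq device [sysfs_root]
# Apply example CFQ settings; writing to the real /sys requires root.
tune_cfq() {
  dev="$1"; sysfs="${2:-/sys}"
  q="$sysfs/block/$dev/queue/iosched"
  echo 0 > "$q/slice_idle"      # no idling between slices: good for SSDs/SANs
  echo 8 > "$q/quantum"         # more requests in flight (default 4)
  echo 1 > "$q/slice_async_rq"  # fewer async writes per slice (default 2)
  echo 1 > "$q/low_latency"     # favor latency-sensitive workloads
}
```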


B. NOOP

I. NOOP is the simplest scheduling policy: it dispatches I/O requests in the order they arrive. In complex I/O environments, it can be used to check whether the decisions of other scheduling algorithms are themselves hurting I/O performance.

II. It suits devices that schedule I/O themselves, such as intelligent storage arrays and SSDs; although deadline is generally more suitable for such devices, NOOP may perform better under low load.

C. Deadline

I. Deadline is designed to reduce latency. Each I/O request is assigned an expiry time and placed in a deadline queue (with separate queues for reads and writes). Expired requests are served first; when no request has expired, requests are served in sorted order. The algorithm favors reads over writes.

II. When concurrent reads/writes and per-task priorities are not important, this policy performs noticeably better than CFQ.

III. Tunable Parameters

/sys/block/<device>/queue/iosched/writes_starved

# Controls how strongly reads are preferred over writes. The default value is 3, meaning three batches of read requests can be dispatched before one write request is processed.


/sys/block/<device>/queue/iosched/read_expire
# In milliseconds; the default value is 500. Sets the read timeout (the expiry point is the current time plus read_expire).

/sys/block/<device>/queue/iosched/write_expire
# Same as above, but controls the write request timeout.
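The three deadline tunables can likewise be set in one go. The function name, device, and chosen values are illustrative only (the article gives defaults of 3 for writes_starved and 500 ms for read_expire):

```shell
# tune_deadline device [sysfs_root]
# Apply example deadline settings; run as root against the real /sys.
tune_deadline() {
  dev="$1"; sysfs="${2:-/sys}"
  q="$sysfs/block/$dev/queue/iosched"
  echo 2 > "$q/writes_starved"  # read batches allowed before a write (default 3)
  echo 250 > "$q/read_expire"   # read deadline in ms (default 500)
  echo 3000 > "$q/write_expire" # write deadline in ms
}
```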


5. I/O barrier Optimization

Write barriers are a kernel mechanism that ensures file system metadata is written to persistent storage correctly and in the right order, even if the storage loses power. Most file systems (XFS, ext3, ext4, and reiserfs) issue write barriers during fsync or transaction commit. Write barriers can be disabled on disks with battery-backed write caches to improve performance.

You can add the barrier=0 option when mounting the ext3, ext4, and reiserfs file systems, and use the nobarrier option when mounting XFS.
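For example, barriers could be disabled permanently via mount options in /etc/fstab (the device names and mount points below are placeholders; only do this when the storage has a battery-backed write cache):

```
/dev/sdb1   /data      ext4   defaults,barrier=0   0  2
/dev/sdc1   /scratch   xfs    defaults,nobarrier   0  2
```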

