Linux HDD Performance detection

Source: Internet
Author: User
Tags: cpu usage

On a modern computer, overall performance is mostly limited by disk I/O speed; memory, the CPU, and the motherboard bus have all become very fast.

Basic detection methods

1. dd command

What dd does is very simple: it reads data from a source and writes it, block by block, to a destination. This lets us measure the disk's actual read/write performance under Linux without any dedicated benchmarking software, and dd results are generally considered closest to real-world behavior.

Usage: dd if=<source> of=<destination> bs=<block size> count=<number of blocks> conv=fdatasync

if: where to read data from; typically /dev/zero, a device that endlessly returns zero bytes, which makes a cheap data source.

of: the file to write the data to.

bs: block size, the basic unit of each read and write.

count: how many blocks of size bs to read and write in total.

conv=fdatasync: Linux caches disk reads and writes aggressively in memory to improve system performance; this option ensures the data is actually written to the disk before dd reports its timing.
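The total amount written is simply bs × count; a quick shell check of the arithmetic (the 1 MB / 512-block values are illustrative, chosen to match the 536870912-byte total seen in the output further down):

```shell
# Total bytes written by dd = bs * count.
# With bs=1M (1048576 bytes) and count=512 (illustrative values):
bs=1048576
count=512
echo $(( bs * count ))   # 536870912 bytes, i.e. 512 MB
```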

Example:

dd if=/dev/zero of=testfile bs=1M count=512 conv=fdatasync

The results on my virtual machine are as follows:

dd if=/dev/zero of=testfile bs=1M count=512 conv=fdatasync
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 19.6677 s, 27.3 MB/s

It is generally recommended to run this command several times and average the results, clearing the cache before each run with the following command:

echo 3 > /proc/sys/vm/drop_caches
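The repeat-and-average procedure can be scripted; a minimal sketch (the pass count, file name, and sizes are illustrative; dropping caches requires root, so that step is allowed to fail silently when run unprivileged):

```shell
#!/bin/sh
# Run dd several times, dropping the page cache before each pass;
# each pass prints its own summary line with the measured speed.
for i in 1 2 3; do
    sync
    echo 3 > /proc/sys/vm/drop_caches 2>/dev/null || true   # needs root
    dd if=/dev/zero of=testfile bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
done
rm -f testfile
```

Averaging the printed MB/s figures by hand (or with awk) then gives a more stable number than a single run.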

Testing with dd is not especially rigorous or scientific, since the result can be affected by CPU load and background services.

2. hdparm command

The hdparm command is designed specifically to view and modify disk parameters and to test disks. hdparm must be run with administrator (root) privileges.

Usage: hdparm -t <device to test>

Example:

# hdparm -t /dev/sda

Results:

[root@localhost ~]# hdparm -t /dev/sda

/dev/sda:
 Timing buffered disk reads: 444 MB in 3.01 seconds = 147.35 MB/sec
[root@localhost ~]# hdparm -t /dev/sda

/dev/sda:
 Timing buffered disk reads: 808 MB in 3.00 seconds = 269.21 MB/sec

You can see that the two runs differ quite a lot, so again it is recommended to run the test several times and average the results.
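The averaging can be automated by extracting the MB/sec figures with awk. A sketch, using the two numbers from the runs above as stand-in data (to collect real data, run something like `for i in 1 2 3; do hdparm -t /dev/sda; done > hdparm_runs.txt` as root; the device name is illustrative):

```shell
# Average the "= N MB/sec" figures from saved hdparm -t output.
# The two sample lines below reuse the numbers from the runs above,
# standing in for a real hdparm_runs.txt collected as root.
printf '%s\n' \
  ' Timing buffered disk reads: 444 MB in 3.01 seconds = 147.35 MB/sec' \
  ' Timing buffered disk reads: 808 MB in 3.00 seconds = 269.21 MB/sec' \
  > hdparm_runs.txt

awk '/MB\/sec/ { sum += $(NF-1); n++ }
     END { if (n) printf "avg: %.2f MB/sec\n", sum / n }' hdparm_runs.txt
```

For these two sample lines it prints `avg: 208.28 MB/sec`.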

Both of these methods give only very simple results. Professional disk testing needs more than raw read/write throughput: it should distinguish request sizes (4k/16k/32k), test sequential versus random access, and, for mechanical disks, also measure the speed gap between inner and outer tracks, and so on.

Advanced detection methods

1. bonnie++

bonnie++ can be installed via yum (it is not in the default yum repositories; installing the RepoForge repository is recommended):

yum install -y bonnie++

Usage: bonnie++ -u <user name> -s <size of the test file>

Example:

bonnie++ -u root -s 2g

By default a 4 GB file is written, split into 4 parts, and the system's I/O is exercised thoroughly with sequential and random read/write operations. Because a large file is written, the test takes a while.

Results:

[root@localhost ~]# bonnie++ -u root -s 2g
Using uid:0, gid:0.
Writing a byte at a time...done
Writing intelligently...done
Rewriting...done
Reading a byte at a time...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
localhost.locald 2G   287  99 31885     59035      2795  99 514292      9491
Latency             42230us    2804ms     284ms    8198us    5820us    4819us
Version  1.96       ------Sequential Create------ --------Random Create--------
localhost.localdoma -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                      20946    +++++ +++ +++++ +++ 23169  94 +++++ +++ +++++ +++
Latency              2539us     853us     993us    1675us     284us    1234us
1.96,1.96,localhost.localdomain,1,1414376948,2G,,287,99,31885,,59035,,2795,99,514292,,9491,,,,,,20946,,+++++,+++,+++++,+++,23169,94,+++++,+++,+++++,+++,42230us,2804ms,284ms,8198us,5820us,4819us,2539us,853us,993us,1675us,284us,1234us

This format is a bit messy, but fortunately the software also provides a tool to convert the result into an HTML table (feed it the last, comma-separated line of the output):

echo "1.96,1.96,localhost.localdomain,1,1414376948,2G,,287,99,31885,,59035,,2795,99,514292,,9491,,,,,,20946,,+++++,+++,+++++,+++,23169,94,+++++,+++,+++++,+++,42230us,2804ms,284ms,8198us,5820us,4819us,2539us,853us,993us,1675us,284us,1234us" | bon_csv2html >> bon_result.html

The resulting bon_result.html renders as an HTML table (screenshot not reproduced here), which is much easier to read. A brief explanation of the fields:

Under Sequential Output (i.e. writes), Per Chr is the speed of writing with putc(), one byte at a time. Not surprisingly, since every single byte costs a putc() call that keeps pestering the CPU, CPU utilization is 99%, and the write speed is only 0.3 MB/s, very slow.

Sequential Output / Block writes in whole blocks: CPU utilization drops noticeably and the speed climbs to 31 MB/s, almost the same as the dd result above.

Sequential Input (i.e. reads) / Per Chr reads the file with getc(); the speed is 2.5 MB/s with 99% CPU utilization.

Sequential Input / Block reads the file in whole blocks; the speed is 50 MB/s with 64% CPU utilization.

Random Seeks is random addressing: here more than 9,000 seeks per second.

Sequential Create (sequential file creation)

Random Create (random file creation)

Some results are just runs of + signs, which means bonnie++ considers the value unreliable and refuses to print it. This usually happens because the operation completed very quickly; it is generally not a system bottleneck, so don't worry about it.

2. IOzone

IOzone provides more comprehensive and accurate information, which makes it one of the most widely used tools in system performance testing.

IOzone is slightly more complex to use; only the most commonly used parameters are covered here:

-l: minimum number of processes for the concurrency test; set it to 1 if you do not want to test multiple processes.

-u: maximum number of processes for the concurrency test; likewise set it to 1 for a single-process test.

-r: basic record size for each read and write, e.g. 16k. This value should generally match the application being modeled; to test for a database, use the database's block size.

-s: size of the test file. Make it large (typically twice the size of RAM), because IOzone does not bypass the OS cache, and with a small file the test may be served entirely from memory.

-f: the temporary file to test with (put it on the disk you want to measure).

Example:

iozone -l 1 -u 1 -s 2g -f tempfile

Results:

Children see throughput for  1 initial writers  =   31884.46 kB/sec
Parent sees throughput for  1 initial writers   =   30305.05 kB/sec
Min throughput per process                      =   31884.46 kB/sec
Max throughput per process                      =   31884.46 kB/sec
Avg throughput per process                      =   31884.46 kB/sec
Min xfer                                        = 2097152.00 kB

Children see throughput for  1 rewriters        =  102810.49 kB/sec
Parent sees throughput for  1 rewriters         =   95660.98 kB/sec
Min throughput per process                      =  102810.49 kB/sec
Max throughput per process                      =  102810.49 kB/sec
Avg throughput per process                      =  102810.49 kB/sec
Min xfer                                        = 2097152.00 kB

Children see throughput for  1 readers          =  450193.59 kB/sec
Parent sees throughput for  1 readers           =  450076.28 kB/sec
Min throughput per process                      =  450193.59 kB/sec
Max throughput per process                      =  450193.59 kB/sec
Avg throughput per process                      =  450193.59 kB/sec
Min xfer                                        = 2097152.00 kB

Children see throughput for  1 re-readers       =  451833.53 kB/sec
Parent sees throughput for  1 re-readers        =  451756.47 kB/sec
Min throughput per process                      =  451833.53 kB/sec
Max throughput per process                      =  451833.53 kB/sec
Avg throughput per process                      =  451833.53 kB/sec
Min xfer                                        = 2097152.00 kB

Children see throughput for  1 reverse readers  =   61854.02 kB/sec
Parent sees throughput for  1 reverse readers   =   61851.88 kB/sec
Min throughput per process                      =   61854.02 kB/sec
Max throughput per process                      =   61854.02 kB/sec
Avg throughput per process                      =   61854.02 kB/sec
Min xfer                                        = 2097152.00 kB

Children see throughput for  1 stride readers   =   43441.66 kB/sec
Parent sees throughput for  1 stride readers    =   43439.83 kB/sec
Min throughput per process                      =   43441.66 kB/sec
Max throughput per process                      =   43441.66 kB/sec
Avg throughput per process                      =   43441.66 kB/sec
Min xfer                                        = 2097152.00 kB

Children see throughput for  1 random readers   =   47707.72 kB/sec
Parent sees throughput for  1 random readers    =   47705.00 kB/sec
Min throughput per process                      =   47707.72 kB/sec
Max throughput per process                      =   47707.72 kB/sec
Avg throughput per process                      =   47707.72 kB/sec
Min xfer                                        = 2097152.00 kB

Children see throughput for  1 mixed workload   =   50807.69 kB/sec
Parent sees throughput for  1 mixed workload    =   50806.24 kB/sec
Min throughput per process                      =   50807.69 kB/sec
Max throughput per process                      =   50807.69 kB/sec
Avg throughput per process                      =   50807.69 kB/sec
Min xfer                                        = 2097152.00 kB

Children see throughput for  1 random writers   =   45131.93 kB/sec
Parent sees throughput for  1 random writers    =   43955.32 kB/sec
Min throughput per process                      =   45131.93 kB/sec
Max throughput per process                      =   45131.93 kB/sec
Avg throughput per process                      =   45131.93 kB/sec
Min xfer                                        = 2097152.00 kB

From these results you can see the disk's read and write performance under each access pattern.
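If the run is saved to a file, the headline numbers are easy to pull out with grep; a sketch assuming the output was saved as iozone.out (the two sample lines here stand in for a real saved run):

```shell
# Extract the per-workload "Children see throughput" lines from a saved
# iozone run; these give one headline number per access pattern.
printf '%s\n' \
  'Children see throughput for  1 initial writers = 31884.46 kB/sec' \
  'Children see throughput for  1 random writers  = 45131.93 kB/sec' \
  > iozone.out   # stand-in for a real saved run

grep 'Children see throughput' iozone.out
```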

PS: There would normally be more output after this point, but my virtual machine's disk filled up and the test crashed.


