Use bonnie++ to Test System IO Speed


Original link:

http://www.eygle.com/unix/Use.Bonnie++.To.Test.IO.speed.htm



The original Bonnie has some well-known limitations, such as the lack of support for files larger than 2 GB.
Russell Coker (russell@coker.com.au) wrote a new code base that supports files larger than 2 GB.
After getting permission from Bonnie's author Tim Bray (tbray@textuality.com), Russell named his software bonnie++ and published it online, where it became popular.

At the time of writing the current version is 1.03a, which can be downloaded from:
http://www.coker.com.au/bonnie++/
This version has to be compiled from source. If you do not have a build environment, you can instead use the precompiled binary I provide, which is suitable for the Sun Solaris environment (tested on Solaris 8).

Russell Coker's personal homepage is:
http://www.coker.com.au/

The main differences between bonnie++ and Bonnie are described at:
http://www.coker.com.au/bonnie++/diff.html

Let me briefly introduce the compilation and use of bonnie++:

1. Compile

To use bonnie++, download the source code from the address above and build it. If you do not have a build environment, the precompiled Solaris binary mentioned above can be used instead.

You will of course need make, GCC, and the other necessary build tools installed. If you encounter the following error while running configure, it is probably because the environment variables are not set correctly:

$ ./configure
grep: illegal option -- q
Usage: grep -hblcnsviw pattern file . . .
grep: illegal option -- q
Usage: grep -hblcnsviw pattern file . . .
checking for g++... g++
checking for C++ compiler default output... a.out
checking whether the C++ compiler works... configure: error: cannot run C++ compiled programs.
If you meant to cross compile, use `--host'.
See `config.log' for more details.

After setting the correct environment variable, re-run configure and it will generally succeed. (On Solaris the test programs that configure compiles typically need LD_LIBRARY_PATH to include the GCC runtime library directory, e.g. /usr/local/lib, before they can run.)



# export LD_LIBRARY_PATH=/usr/lib:/usr/local/lib
# ./configure
grep: illegal option -- q
Usage: grep -hblcnsviw pattern file . . .
grep: illegal option -- q
Usage: grep -hblcnsviw pattern file . . .
checking for g++... g++
checking for C++ compiler default output... a.out
checking whether the C++ compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables...
checking for suffix of object files... o
checking whether we are using the GNU C++ compiler... yes
checking whether g++ accepts -g... yes
checking how to run the C++ preprocessor... g++ -E
checking for a BSD-compatible install... /usr/bin/install -c
checking for an ANSI C-conforming const... yes
checking for egrep... egrep
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... no
checking for unistd.h... yes
checking for size_t... yes
checking vector.h usability... yes
checking vector.h presence... yes
checking for vector.h... yes
checking vector usability... yes
checking vector presence... yes
checking for vector... yes
checking algorithm usability... yes
checking algorithm presence... yes
checking for algorithm... yes
checking algo.h usability... yes
checking algo.h presence... yes
checking for algo.h... yes
checking algo usability... no
checking algo presence... no
checking for algo... no
configure: creating ./config.status
config.status: creating Makefile
config.status: creating bonnie.h
config.status: creating port.h
config.status: creating bonnie++.spec
config.status: creating bon_csv2html
config.status: creating bon_csv2txt
config.status: creating sun/pkginfo
config.status: creating conf.h
config.status: conf.h is unchanged



Once compilation completes, the bonnie++ binary is generated and can be used for testing.
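For completeness, the full build sequence looks roughly like this (a minimal sketch, assuming GNU make and g++ are available; whether make install exists and where it puts the files depends on the Makefile of your bonnie++ version):

# export LD_LIBRARY_PATH=/usr/lib:/usr/local/lib
# ./configure
# make
# make install

make builds bonnie++ (together with helper programs such as bon_csv2html and bon_csv2txt) in the source directory, so you can also run ./bonnie++ directly from there without installing.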

2. Here are some test results

A. T3 large-file read/write test

# ./bonnie++ -d /data1 -u root -s 4096 -m billing
Using uid:0, gid:1.
Writing with putc()...done
Writing intelligently...done
Rewriting...done
Reading with getc()...done
Reading intelligently...done
start 'em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
billing          4G  9915  87 30319  56 11685  38  9999  99 47326  66 177.6   3
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16   639  19 +++++ +++  1258  22   679  16 +++++ +++  1197  27
billing,4G,9915,87,30319,56,11685,38,9999,99,47326,66,177.6,3,16,639,19,+++++,+++,1258,22,679,16,+++++,+++,1197,27
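For readers unfamiliar with the options, the command used above breaks down as follows (the directory and machine name are simply those of this example):

./bonnie++ -d /data1 -u root -s 4096 -m billing
    -d /data1    directory (filesystem) in which the test files are created
    -u root      user to run the test as; required when bonnie++ is started by root
    -s 4096      size of the test data in MB; bonnie++ recommends at least double the machine's RAM
    -m billing   machine name printed in the report header and in the CSV line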

B. EMC CLARiiON CX500 test data

These are the test results I obtained with the write cache disabled.

RAID 1+0 test on 4 disks:

# ./bonnie++ -d /eygle -u root -s 4096 -m jump
Using uid:0, gid:1.
File size should be double RAM for good results, RAM is 4096M.
# ./bonnie++ -d /eygle -u root -s 8192 -m jump
Using uid:0, gid:1.
Writing with putc()...done
Writing intelligently...done
Rewriting...done
Reading with getc()...done
Reading intelligently...done
start 'em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
jump             8G 12647  36 13414   8  7952  13 33636  97 146503  71 465.7   5
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16    86   1 +++++ +++   161   1    81   1 +++++ +++   163   1
jump,8G,12647,36,13414,8,7952,13,33636,97,146503,71,465.7,5,16,86,1,+++++,+++,161,1,81,1,+++++,+++,163,1

RAID 5 on 4 disks, write cache disabled:

# ./bonnie++ -d /eygle -u root -s 8192 -m jump
Using uid:0, gid:1.
Writing with putc()...done
Writing intelligently...done
Rewriting...done
Reading with getc()...done
Reading intelligently...done
start 'em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
jump             8G 10956  30 10771   6  3388   5 34169  98 158861  75 431.1   5
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16    81   1 +++++ +++   160   1    82   1 +++++ +++   109   1
jump,8G,10956,30,10771,6,3388,5,34169,98,158861,75,431.1,5,16,81,1,+++++,+++,160,1,82,1,+++++,+++,109,1
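The last line of each run is a machine-readable CSV record. If you want a formatted report from such a record, it can be piped into the bon_csv2txt or bon_csv2html scripts that configure generated above; a sketch, assuming it is run from the bonnie++ source directory:

echo "jump,8G,10956,30,10771,6,3388,5,34169,98,158861,75,431.1,5,16,81,1,+++++,+++,160,1,82,1,+++++,+++,109,1" | ./bon_csv2txt

bon_csv2txt prints a plain-text table on standard output; bon_csv2html emits an HTML table instead, which is handy for collecting the results of several runs on one page.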

Comparing the two results (unit: K/sec):

             Char write   Block write   Char read   Block read
RAID10           12,647        13,414       33,636      146,503
RAID5            10,956        10,771       34,169      158,861
Diff              1,691         2,643         -533      -12,358
We can see that RAID10 is slightly faster than RAID5 for sequential writes, while RAID5 is slightly faster than RAID10 for sequential reads, which is consistent with the usual view of these RAID levels.

It is worth pointing out that this write advantage is why we normally recommend placing redo log files on RAID10 volumes.

