(I) dd: reads data from a source and writes it to a destination, block by block.
[root@linwaterbin ~]# dd if=/dev/zero of=/home/oracle/disktest bs=1M count=512 conv=fdatasync
Note
if: input source (where to read from)
of: output destination (where to write to)
bs: size of the data block read and written on each operation
count: number of blocks (of size bs) to copy
conv=fdatasync: physically write the data to disk before dd finishes, which eliminates the effect of the Linux memory cache and ensures the data actually reaches the disk.
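The command above measures write speed. A read-speed counterpart can be built the same way; the following is a sketch using a small self-contained temporary file (in a real test you would read the large file written earlier and drop the page cache first, which requires root):

```shell
# Sketch: measure sequential read speed with dd.
# In a real test, clear the page cache before reading (requires root):
#   sync; echo 3 > /proc/sys/vm/drop_caches
# Here a small temporary file keeps the example self-contained.
testfile=$(mktemp)
dd if=/dev/zero of="$testfile" bs=1M count=16 conv=fdatasync 2>/dev/null
# Read it back; dd reports the elapsed time and throughput on stderr.
dd if="$testfile" of=/dev/null bs=1M 2>&1 | tail -n 1
rm -f "$testfile"
```

Without the cache drop, the read will largely be served from memory, so the figure is an upper bound rather than true disk speed.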
Test procedure:
We recommend running the command several times and averaging the results, clearing the Linux cache before each run.
[root@linwaterbin ~]# echo 3 > /proc/sys/vm/drop_caches
[root@linwaterbin ~]# dd if=/dev/zero of=/home/oracle/disktest bs=1M count=512 conv=fdatasync
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 27.4893 seconds, 19.5 MB/s
[root@linwaterbin ~]# echo 3 > /proc/sys/vm/drop_caches
[root@linwaterbin ~]# dd if=/dev/zero of=/home/oracle/disktest bs=1M count=512 conv=fdatasync
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 17.3697 seconds, 30.9 MB/s
[root@linwaterbin ~]# echo 3 > /proc/sys/vm/drop_caches
[root@linwaterbin ~]# dd if=/dev/zero of=/home/oracle/disktest bs=1M count=512 conv=fdatasync
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 14.9991 seconds, 35.8 MB/s
[root@linwaterbin ~]# echo 3 > /proc/sys/vm/drop_caches
[root@linwaterbin ~]# dd if=/dev/zero of=/home/oracle/disktest bs=1M count=512 conv=fdatasync
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 15.2154 seconds, 35.3 MB/s
Average the four runs: (19.5 MB/s + 30.9 MB/s + 35.8 MB/s + 35.3 MB/s) / 4 ≈ 30.4 MB/s.
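The averaging is easily done in one line, for example with awk:

```shell
# Average the four throughput figures from the runs above.
echo "19.5 30.9 35.8 35.3" | awk '{ printf "%.1f MB/s\n", ($1+$2+$3+$4)/NF }'
# → 30.4 MB/s
```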
(II) hdparm
There are two relevant parameters (note the case):
-t: tests buffered disk read performance
-T: tests cached (memory) read performance
Again, we recommend running the test several times and averaging the results.
[root@linwaterbin ~]# hdparm -t /dev/sda
/dev/sda:
 Timing buffered disk reads:  226 MB in  3.02 seconds =  74.82 MB/sec
[root@linwaterbin ~]# hdparm -t /dev/sda
/dev/sda:
 Timing buffered disk reads:  244 MB in  3.10 seconds =  78.59 MB/sec
[root@linwaterbin ~]# hdparm -t /dev/sda
/dev/sda:
 Timing buffered disk reads:  246 MB in  3.12 seconds =  78.87 MB/sec
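Averaging hdparm's results can also be scripted. This sketch parses the MB/sec figure out of saved output; in practice you would pipe several `hdparm -t /dev/sda` runs (which require root) into the same awk filter:

```shell
# Extract the MB/sec figure (second-to-last field) from each result
# line and print the mean. Input here is the sample output from above.
printf '%s\n' \
  ' Timing buffered disk reads:  226 MB in  3.02 seconds =  74.82 MB/sec' \
  ' Timing buffered disk reads:  244 MB in  3.10 seconds =  78.59 MB/sec' \
  ' Timing buffered disk reads:  246 MB in  3.12 seconds =  78.87 MB/sec' |
awk '/MB\/sec/ { sum += $(NF-1); n++ } END { printf "%.2f MB/sec\n", sum/n }'
# → 77.43 MB/sec
```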
The two tools above only measure raw read/write throughput, and the tests are fairly crude.
For a detailed disk report, use bonnie++ or iozone.
Before installing either, set up the package repository:
download the matching release package from the repoforge website.
[root@linwaterbin Desktop]# rpm -ivh --nodeps rpmforge-release-0.5.2-2.el5.rf.i386.rpm
warning: rpmforge-release-0.5.2-2.el5.rf.i386.rpm: Header V3 DSA signature: NOKEY, key ID 6b8d79e6
Preparing...                ########################################### [100%]
   1:rpmforge-release       ########################################### [100%]
[root@linwaterbin Desktop]# cd /etc/yum.repos.d
[root@linwaterbin yum.repos.d]# ls
base.repo         mirrors-rpmforge-extras   redhat.repo          rpmforge.repo
mirrors-rpmforge  mirrors-rpmforge-testing  rhel-debuginfo.repo
(III) bonnie++
Installation:
[root@linwaterbin yum.repos.d]# yum install -y bonnie++
First, the commonly used command-line options:
-d: directory in which the test files are generated
-s: size of the test file in MB (without the -r option, this must be at least twice the machine's physical memory)
-m: machine name, effectively a label for this test run; defaults to the local hostname
-r: memory size in MB; bonnie++ then pairs it with an -s file of at least r * 2, which is usually used to shorten the test time.
However, note that the results may then be skewed by the memory cache.
-x: number of test runs
-u: owner (user) of the test files; defaults to the user running bonnie++
-g: group of the test files; defaults to the group running bonnie++
-b: no write buffering; fsync() is called after every write. This suits mail servers or database servers, which typically perform synchronous writes.
Without this option, the test better reflects workloads such as copying files or compiling.
[root@linwaterbin ~]# bonnie++ -s 512 -r 256 -u root
Main output:
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
linwaterbin   512M   217  99 32403  14 13798   7   378  99 49235  10 347.0   3
Latency              154ms    2218ms    2099ms     125ms   63304us    2672ms
Version  1.96       ------Sequential Create------ --------Random Create--------
linwaterbin         -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 15508  52 +++++ +++ +++++ +++ 27215  91 +++++ +++ +++++ +++
Latency             43819us   20118us   19580us   19834us   19699us   20435us
(1) Sequential Output: write tests
① Per Chr: per-character (putc) I/O
② Block: block I/O
(2) Sequential Input: read tests
(3) K/sec: throughput, in KB per second
(4) %CP: CPU usage during the operation
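bonnie++ can also emit its results as a single machine-readable CSV line, which the bon_csv2html tool shipped in the bonnie++ package turns into an HTML report. A sketch (the -q quiet flag and bon_csv2html are documented with bonnie++, but check your version's man page):

```shell
# -q (quiet) sends only the CSV result line to stdout;
# bon_csv2html converts that line into an HTML table.
bonnie++ -q -s 512 -r 256 -u root | tail -n 1 | bon_csv2html > bonnie-report.html
```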
(IV) iozone
iozone supports concurrent multi-process testing
and can export its results to Excel for graphing.
It is also one of the most widely used tools for routine stress testing.
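The Excel export mentioned above is driven by iozone's -R and -b options (a sketch; both appear in iozone's documentation: -R produces an Excel-compatible report and -b writes it to a spreadsheet file, but verify against your version's man page):

```shell
# -R: generate an Excel-compatible report
# -b: also write the report to a binary spreadsheet file
iozone -l 1 -u 1 -r 8K -s 128M -R -b /tmp/iozone-report.xls
```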
Installation:
[root@linwaterbin ~]# yum install -y iozone
Parameter description:
-l: minimum number of processes
-u: maximum number of processes
-r: record size for reads and writes; match it to the block size of the system under test.
For example, if the Oracle block size is 8K, set it to 8K.
-s: size of the test file, with the same meaning as bonnie++'s -s option.
If this value is too small, the results will be skewed,
because the file fits entirely in memory.
-f: path of the temporary test file
[root@linwaterbin ~]# iozone -l 1 -u 1 -r 8K -s 128M
        Record Size 8 KB
        File size set to 131072 KB
        Command line used: iozone -l 1 -u 1 -r 8K -s 128M
        Output is in Kbytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 Kbytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
        Min process = 1
        Max process = 1
        Throughput test with 1 process
        Each process writes a 131072 Kbyte file in 8 Kbyte records

        Children see throughput for 1 initial writers = 125057.24 KB/sec
        Parent sees throughput for 1 initial writers  =  30640.70 KB/sec
        Min throughput per process                    = 125057.24 KB/sec
        Max throughput per process                    = 125057.24 KB/sec
        Avg throughput per process                    = 125057.24 KB/sec
        Min xfer                                      = 131072.00 KB

        Children see throughput for 1 rewriters       = 513780.34 KB/sec
        Parent sees throughput for 1 rewriters        =  31989.50 KB/sec
        Min throughput per process                    = 513780.34 KB/sec
        Max throughput per process                    = 513780.34 KB/sec
        Avg throughput per process                    = 513780.34 KB/sec
        Min xfer                                      = 131072.00 KB

        Children see throughput for 1 readers         = 889758.12 KB/sec
        Parent sees throughput for 1 readers          = 849615.75 KB/sec
        Min throughput per process                    = 889758.12 KB/sec
        Max throughput per process                    = 889758.12 KB/sec
        Avg throughput per process                    = 889758.12 KB/sec
        Min xfer                                      = 131072.00 KB
The throughput figures here are inflated because the -s value was set too small: the 128 MB file fits in the memory cache, so much of the I/O never reaches the disk.
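One way to keep the cache from inflating results without enlarging -s is direct I/O. A sketch (iozone documents an -I flag that requests O_DIRECT, though not every filesystem supports it; check your version's man page):

```shell
# -I: use direct I/O (O_DIRECT) so reads and writes bypass the page cache
iozone -l 1 -u 1 -r 8K -s 128M -I
```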