First, check I/O wait with the top command
top - 16:15:05 up 6 days, 6:25, 2 users, load average: 1.45, 1.77, 2.14
Tasks: 147 total, 1 running, 146 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.2% us, 0.2% sy, 0.0% ni, 86.9% id, 12.6% wa, 0.0% hi, 0.0% si
Mem: 4037872k total, 4003648k used, 34224k free, 5512k buffers
Swap: 7164948k total, 629192k used, 6535756k free, 3511184k cached
Look at the 12.6% wa figure: it is the percentage of CPU time spent waiting on I/O. Values above 30% indicate heavy I/O pressure.
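If you want to capture this figure non-interactively, a minimal sketch using procps top in batch mode (the exact layout of the summary lines varies slightly between top versions):

# Print one snapshot of the summary area; the wa value appears on the Cpu(s) line
top -b -n 1 | head -n 5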
Second, check with iostat -x 1 10
If iostat is not available, install it with yum install sysstat.
avg-cpu:  %user  %nice  %sys  %iowait  %idle
           0.00   0.00  0.25    33.46  66.29

Device:  rrqm/s  wrqm/s    r/s   w/s  rsec/s   wsec/s  rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda        0.00    0.00   0.00  0.00    0.00     0.00   0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdb        0.00    1122  17.00  9.00  192.00  9216.00  96.00  4608.00   123.79   137.23 1033.43  13.17 100.10
sdc        0.00    0.00   0.00  0.00    0.00     0.00   0.00     0.00     0.00     0.00    0.00   0.00   0.00
Here %util is 100.10 and %idle is 66.29.
If %util is close to 100%, too many I/O requests are being issued, the I/O system is saturated, and the disk is likely the bottleneck.
If idle drops below 70%, I/O pressure is high; in general this means reads are spending a lot of time waiting.
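To watch just the busy disk instead of every device, a hedged variant of the same command (assuming the device of interest is sdb; -d limits the report to device statistics and -k reports throughput in kilobytes):

iostat -dxk sdb 1 10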
You can also check with vmstat and watch the b column (the number of processes blocked waiting for resources, typically I/O):
vmstat 1
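For example, a short sampling run (a usage sketch; the b column counts blocked processes and the wa column under cpu shows the I/O-wait percentage):

# Sample once per second, five times
vmstat 1 5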
If you want to put an I/O load on the disk, a stress test can be run with the following command:
time dd if=/dev/zero bs=1M count=2048 of=direct_2g
This command creates a 2 GB file in the current directory.
We can then observe the I/O load while the file is being written.
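A variant worth knowing (a sketch, not part of the original text): adding oflag=direct asks GNU dd to bypass the page cache so the writes actually hit the disk rather than memory, which gives a more realistic picture of disk throughput; support depends on the filesystem and device.

time dd if=/dev/zero of=direct_2g bs=1M count=2048 oflag=direct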
While the dd is running, you can watch which processes are generating the peak I/O with a script such as the one below.
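One minimal sketch (not necessarily the script the author had in mind) uses pidstat from the same sysstat package to report per-process disk reads and writes once per second:

# -d selects the disk I/O report; each line shows kB_rd/s and kB_wr/s per process
pidstat -d 1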
Using iostat to view Linux hard disk I/O performance: field reference
rrqm/s: Number of read requests merged per second. delta(rmerge)/s
wrqm/s: Number of write requests merged per second. delta(wmerge)/s
r/s: Number of read I/O requests completed per second. delta(rio)/s
w/s: Number of write I/O requests completed per second. delta(wio)/s
rsec/s: Number of sectors read per second. delta(rsect)/s
wsec/s: Number of sectors written per second. delta(wsect)/s
rkB/s: Number of kilobytes read per second. Half of rsec/s, since each sector is 512 bytes. (calculated)
wkB/s: Number of kilobytes written per second. Half of wsec/s. (calculated)
avgrq-sz: Average size (in sectors) of each device I/O request. delta(rsect+wsect)/delta(rio+wio)
avgqu-sz: Average I/O queue length. delta(aveq)/s/1000 (because aveq is reported in milliseconds)
await: Average wait time (in milliseconds) for each device I/O request. delta(ruse+wuse)/delta(rio+wio)
svctm: Average service time (in milliseconds) for each device I/O request. delta(use)/delta(rio+wio)
%util: Percentage of the second spent doing I/O, i.e. the fraction of time the device's I/O queue was non-empty. delta(use)/s/1000 (because use is reported in milliseconds)
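These delta(...) values come from /proc/diskstats sampled over an interval. A minimal shell sketch (assuming the device of interest is named sdb) that computes two of them, reads and writes completed per second:

# Field 4 of /proc/diskstats is reads completed, field 8 is writes completed
r1=$(awk '$3=="sdb"{print $4}' /proc/diskstats); w1=$(awk '$3=="sdb"{print $8}' /proc/diskstats)
sleep 1
r2=$(awk '$3=="sdb"{print $4}' /proc/diskstats); w2=$(awk '$3=="sdb"{print $8}' /proc/diskstats)
echo "r/s=$((r2-r1)) w/s=$((w2-w1))"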
If %util is close to 100%, too many I/O requests are being issued, the I/O system is running at full load, and the disk may be the bottleneck.
If idle drops below 70%, I/O pressure is high; in general this means reads are spending a lot of time waiting.
You can also check with vmstat, looking at the b column (number of processes blocked waiting for resources) and the wa column (percentage of CPU time spent waiting for I/O).
The following analysis of iostat output was written by someone else:
# iostat -x 1
avg-cpu:  %user  %nice  %sys  %idle
          16.24   0.00  4.31  79.44
Device:            rrqm/s  wrqm/s   r/s    w/s  rsec/s  wsec/s  rkB/s   wkB/s avgrq-sz avgqu-sz  await  svctm  %util
/dev/cciss/c0d0      0.00   44.90  1.02  27.55    8.16  579.59   4.08  289.80    20.57    22.35  78.21   5.00  14.29
/dev/cciss/c0d0p1    0.00   44.90  1.02  27.55    8.16  579.59   4.08  289.80    20.57    22.35  78.21   5.00  14.29
/dev/cciss/c0d0p2    0.00    0.00  0.00   0.00    0.00    0.00   0.00    0.00     0.00     0.00   0.00   0.00   0.00
The iostat output above shows 28.57 device I/O operations per second: total I/O per second = r/s (reads) + w/s (writes) = 1.02 + 27.55 = 28.57 (operations/second), with writes dominating (w:r = 27:1).
Each device I/O operation takes only 5 ms on average to complete (svctm), yet each I/O request waits 78 ms (await). Why? Because so many I/O requests are issued (about 29 per second), if they were all issued at the same moment the average wait would be:
average wait time = single I/O service time * (1 + 2 + ... + (total requests - 1)) / total requests
Applied to the example above: average wait time = 5 ms * (1 + 2 + ... + 28) / 29 = 70 ms, which is very close to the 78 ms average wait reported by iostat. This in turn suggests that the I/O requests are issued almost concurrently, in bursts.
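A quick arithmetic check of that estimate (1 + 2 + ... + 28 = 28 * 29 / 2 = 406):

echo "scale=1; 5 * (28 * 29 / 2) / 29" | bc
# prints 70.0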
There are many I/O requests per second (about 29), but the average queue is not long (only about 2), which indicates that these 29 requests arrive unevenly; most of the time the I/O system is idle.
The I/O queue holds requests during only 14.29% of each second; in other words, the I/O system has nothing to do 85.71% of the time, and all 29 I/O requests are processed within about 142 milliseconds.
delta(ruse+wuse)/delta(io) = await = 78.21 => delta(ruse+wuse)/s = 78.21 * delta(io)/s = 78.21 * 28.57 = 2232.8, meaning that the I/O requests issued each second wait a total of 2232.8 ms. The average queue length should therefore be 2232.8 ms / 1000 ms = 2.23, yet iostat reports an average queue length (avgqu-sz) of 22.35. Why? Because of a bug in iostat: the avgqu-sz value should be 2.23, not 22.35.
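The same queue-length estimate can be read as Little's law (average queue length is roughly arrival rate times average wait); a quick check:

echo "scale=2; 28.57 * 78.21 / 1000" | bc
# prints 2.23, matching the corrected avgqu-sz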