Optimization case study -- iostat (view disk I/O)
Introduction
iostat is mainly used to monitor the I/O load of system devices. The first report iostat prints covers the statistics accumulated since system boot; each subsequent report covers the interval since the previous one. You can specify a report interval and count to obtain the statistics you need.
Syntax
iostat [ -c ] [ -d ] [ -h ] [ -N ] [ -k | -m ] [ -t ] [ -V ] [ -x ] [ -z ] [ device [...] | ALL ] [ -p [ device [,...] | ALL ] ] [ interval [ count ] ]
Getting started
iostat -d -k 2
The -d option displays device (disk) usage statistics; -k reports throughput in kilobytes instead of blocks; 2 means the statistics are refreshed every 2 seconds.
Output:
[oracle@rh6 ~]$ iostat -d -k 1 1
Linux 2.6.32-71.el6.i686 (rh6.cuug.net)  09/03/2014  _i686_  (1 CPU)

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda               7.12       118.75        92.50     359143     279757
sdb               4.80        21.57        36.84      65225     111408
sdc               1.05         1.96       186.00       5928     562546
dm-0             31.31       116.83        92.49     353325     279736
dm-1              0.13         0.50         0.00       1516          0
dm-2              7.14        11.74        19.82      35501      59940
dm-3              4.11         5.58         9.11      16881      27540
dm-4              0.12         0.41         0.07       1249        216
dm-5              2.32         3.51         7.84      10605      23712
dm-6             24.76         0.48        98.55       1457     298052
dm-7             21.98         0.48        87.45       1445     264480
Meaning of the output fields
tps: the number of transfers per second issued to the device. A "transfer" is one I/O request; multiple logical requests may be merged into a single I/O request, so the amount of data moved per transfer is indeterminate.
kB_read/s: kilobytes read from the device per second.
kB_wrtn/s: kilobytes written to the device per second.
kB_read: total kilobytes read.
kB_wrtn: total kilobytes written.
In the example above we can see statistics for the disks sda, sdb, and sdc, and for the device-mapper volumes built on them. (Because these are instantaneous values, a disk's total tps is not exactly equal to the sum of the tps of its partitions.)
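These per-device counters are easy to post-process with standard tools. As a minimal sketch, the following sums kB_read/s and kB_wrtn/s across the physical disks; the sample file name is an assumption, and the values are the ones captured in the report above.

```shell
#!/bin/sh
# Sum the per-second read/write rates of the sd* disks from an
# "iostat -d -k" report. In live use you would pipe the command itself:
#   iostat -d -k 1 1 | awk '/^sd/ { r += $3; w += $4 } END { ... }'
# Sample values below are taken from the report shown above.
cat > /tmp/iostat_sample.txt <<'EOF'
Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sda 7.12 118.75 92.50 359143 279757
sdb 4.80 21.57 36.84 65225 111408
sdc 1.05 1.96 186.00 5928 562546
EOF
awk '/^sd/ { r += $3; w += $4 }
     END { printf "total read %.2f kB/s, write %.2f kB/s\n", r, w }' /tmp/iostat_sample.txt
# -> total read 142.28 kB/s, write 315.34 kB/s
```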
By default, all disk devices are monitored. To monitor a single device, name it on the command line; the output format is the same as in the preceding command, but only the specified device is reported:

iostat -d sda 2
[oracle@rh6 ~]$ iostat -d sda
Linux 2.6.32-71.el6.i686 (rh6.cuug.net)  09/03/2014  _i686_  (1 CPU)

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda               5.95       186.83       148.89     718302     572434
The -x option
iostat also has a commonly used option, -x, which displays extended I/O statistics.
[oracle@rh6 ~]$ iostat -d -x
Linux 2.6.32-71.el6.i686 (rh6.cuug.net)  09/03/2014  _i686_  (1 CPU)

Device:  rrqm/s  wrqm/s   r/s    w/s   rsec/s   wsec/s  avgrq-sz  avgqu-sz   await  svctm  %util
sda        3.31   16.34  2.74   3.15   184.16   146.98     56.21      0.14   23.31  11.92   7.02
sdb        2.09    5.85  0.60   3.79    33.46    63.70     22.14      0.10   23.22  20.64   9.05
sdc        0.51   35.65  0.41   0.41     3.04   288.45    356.75      0.04   52.55   1.49   0.12
dm-0       0.00    0.00  5.65  19.16   181.17   146.97     13.23      6.47  260.78   2.80   6.95
dm-1       0.00    0.00  0.10   0.00     0.78     0.00      8.00      0.00    4.15   3.08   0.03
dm-2       0.00    0.00  1.17   4.85    18.22    34.03      8.68      0.07   10.98   9.06   5.46
dm-3       0.00    0.00  1.06   2.59     8.66    17.28      7.10      0.04   12.06  10.83   3.95
dm-4       0.00    0.00  0.07   0.02     0.64     0.11      8.42      0.00    6.56   5.20   0.05
dm-5       0.00    0.00  0.26   1.56     5.44    12.28      9.76      0.12   65.55   2.49   0.45
dm-6       0.00    0.00  0.09  19.10     0.75   152.83      8.00      2.55  132.86   0.04   0.07
dm-7       0.00    0.00  0.09  16.95     0.74   135.61      8.00      2.54  149.24   0.02   0.03
rrqm/s: the number of read requests per second that were merged for this device (when the VFS passes read requests down to the filesystem, requests that read the same block are merged into one).
wrqm/s: the number of write requests per second that were merged for this device.
rsec/s: sectors read per second; wsec/s: sectors written per second (rkB/s and wkB/s, kilobytes read and written per second, are shown instead when -k is used).
avgrq-sz: the average size (in sectors) of the requests issued to the device.
avgqu-sz: the average length of the request queue; the shorter the queue, the better.
await: the average time (in milliseconds) each I/O request takes to complete, i.e. the I/O response time. As a rule of thumb, a response time under 5 ms is good, while over 10 ms is considered high. Because await includes both queue time and service time, it is normally larger than svctm; the smaller the difference, the shorter the queueing, while a large difference means long queueing and indicates a problem with the system.
svctm: the average service time (in milliseconds) of the device per I/O operation. If svctm is close to await, there is almost no I/O wait and disk performance is good; if await is much higher than svctm, the I/O queue is too long and applications running on the system will slow down.
%util: the fraction of the statistics interval during which the device was handling I/O. For example, with a 1-second interval, if the device spends 0.8 s processing I/O and is idle for 0.2 s, then %util = 0.8/1 = 80%. It therefore indicates how busy the device is.
Generally, a %util close to 100% means the device is running near full capacity (although on a multi-disk array, even 100% %util does not necessarily make the disks the bottleneck, because the disks can serve requests concurrently).
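The await/svctm relationship described above can be checked mechanically. A minimal sketch, assuming the column order of the extended report shown above; the sample file name and the 50 ms threshold are illustrative assumptions, not iostat conventions.

```shell
#!/bin/sh
# Flag devices whose queue wait (await - svctm) is large, which the text
# above describes as a sign of an overlong I/O queue. Column 10 is await
# and column 11 is svctm in this "iostat -d -x" layout.
cat > /tmp/iostat_x_sample.txt <<'EOF'
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
sda 3.31 16.34 2.74 3.15 184.16 146.98 56.21 0.14 23.31 11.92 7.02
dm-0 0.00 0.00 5.65 19.16 181.17 146.97 13.23 6.47 260.78 2.80 6.95
EOF
awk 'NR > 1 { wait = $10 - $11               # queue time = await - svctm
              if (wait > 50) print $1, "queue wait", wait, "ms" }' /tmp/iostat_x_sample.txt
# -> dm-0 queue wait 257.98 ms
```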
The -c option
iostat can also report CPU statistics:
[oracle@rh6 ~]$ iostat -c 1 1
Linux 2.6.32-71.el6.i686 (rh6.cuug.net)  09/03/2014  _i686_  (1 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.34    0.00    0.78   13.60    0.00   85.28
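A high %iowait in this report means the CPU is sitting idle waiting on disk, which is worth alerting on. A minimal sketch, assuming the avg-cpu layout shown above; the sample file name and the 10% threshold are illustrative assumptions.

```shell
#!/bin/sh
# Pull %iowait (the 4th value on the line after "avg-cpu:") out of an
# "iostat -c" report and warn when it is high.
cat > /tmp/iostat_c_sample.txt <<'EOF'
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.34    0.00    0.78   13.60    0.00   85.28
EOF
iowait=$(awk '/avg-cpu/ { getline; print $4 }' /tmp/iostat_c_sample.txt)
echo "iowait: ${iowait}%"
# -> iowait: 13.60%
awk -v v="$iowait" 'BEGIN { if (v > 10) print "WARNING: high iowait, disks may be the bottleneck" }'
```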
Common usage
iostat -d -k 1 10     # TPS and throughput (read/write rates in kB)
iostat -d -m 2        # TPS and throughput (read/write rates in MB)
iostat -d -x -k 1 10  # device utilization (%util) and response time (await)
iostat -c 1 10        # CPU status
Instance analysis
[oracle@rh6 ~]$ iostat -d -k 1 3 | grep sdb
Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sdb               4.32        15.57        31.50      65309     132152
sdb               4.00         0.00        32.00          0         32
sdb               0.00         0.00         0.00          0          0
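Rather than eyeballing the grep output, the samples can be averaged. A minimal sketch using the three sdb rows above (the sample file name is an assumption; note that iostat's first sample covers the time since boot, so in practice you may want to drop it before averaging):

```shell
#!/bin/sh
# Average kB_wrtn/s (the 4th column) over several iostat samples of one
# device. Live use: iostat -d -k 1 3 | grep sdb | awk '{ ... }'
cat > /tmp/sdb_samples.txt <<'EOF'
sdb 4.32 15.57 31.50 65309 132152
sdb 4.00 0.00 32.00 0 32
sdb 0.00 0.00 0.00 0 0
EOF
awk '{ sum += $4; n++ }
     END { printf "avg kB_wrtn/s over %d samples: %.2f\n", n, sum / n }' /tmp/sdb_samples.txt
# -> avg kB_wrtn/s over 3 samples: 21.17
```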
[oracle@rh6 ~]$ iostat -d -k -x 1 2
Linux 2.6.32-71.el6.i686 (rh6.cuug.net)  09/03/2014  _i686_  (1 CPU)

Device:  rrqm/s  wrqm/s   r/s    w/s    rkB/s    wkB/s  avgrq-sz  avgqu-sz   await  svctm  %util
sda        3.00   14.90  2.51   3.02    84.39    67.36     54.92      0.13   23.14  12.12   6.70
sdb        1.89    5.67  0.54   3.74    15.16    31.05     21.59      0.10   23.20  20.75   8.88
sdc        0.46   32.29  0.37   0.37     1.38   130.61    356.75      0.04   52.55   1.49   0.11
dm-0       0.00    0.00  5.14  17.59    83.04    67.35     13.23      5.86  257.93   2.92   6.63
dm-1       0.00    0.00  0.09   0.00     0.35     0.00      8.00      0.00    4.15   3.08   0.03
dm-2       0.00    0.00  1.06   4.66     8.26    16.34      8.59      0.06   11.22   9.24   5.29
dm-3       0.00    0.00  0.96   2.55     3.92     8.52      7.08      0.04   12.28  11.12   3.91
dm-4       0.00    0.00  0.07   0.01     0.29     0.05      8.42      0.00    6.56   5.20   0.04
dm-5       0.00    0.00  0.23   1.56     2.46     6.14      9.59      0.11   61.25   2.47   0.44
dm-6       0.00    0.00  0.08  17.30     0.34    69.20      8.00      2.31  132.86   0.04   0.06
dm-7       0.00    0.00  0.08  15.35     0.34    61.41      8.00      2.30  149.24   0.02   0.03
Case Analysis:
1) At 18:09:46, run a stress test against the database:

18:09:46 SCOTT@test1> begin
  2    for i in 1 .. 1000000 loop
  3      insert into tb1 values (i);
  4    end loop;
  5* end;

2) Use Oracle to monitor data file I/O:

18:14:02 SYS@test1> select d.tablespace_name tbs, d.file_name, f.phyrds, f.phyblkrd, f.readtim, f.phywrts, f.phyblkwrt,
  2  f.writetim
  3  from v$filestat f, dba_data_files d
  4  where f.file# = d.file_id
  5* order by tablespace_name, file_name;

TBS        FILE_NAME                                     PHYRDS  PHYBLKRD  READTIM  PHYWRTS  PHYBLKWRT  WRITETIM
---------- -------------------------------------------- ------- --------- -------- -------- ---------- ---------
DICT1      /u01/app/oracle/oradata/test1/                     0         0        0        0          0         0
           /u01/app/oracle/oradata/test1/index01.dbf          1         1        0        0          0         0
PERFS      /u01/app/oracle/oradata/test1/perfs.dbf            1         1        0        0          0         0
SYSAUX     /u01/app/oracle/oradata/test1/sysaux01.dbf       958      1321      267      257        337      1387
SYSTEM     /u01/app/oracle/oradata/test1/system01.dbf      4229      8177        8      121        146       722
TBS_16     /u01/app/oracle/oradata/test1/tbs_16.dbf           1         1        0        0          0         0
UNDOTBS2   /u01/app/oracle/oradata/test1/                 12929     12939       15     9037      12335
USERS      /u01/app/oracle/oradata/test1/users01.dbf         72        72        0     1175       1593      1783

8 rows selected.

The pressure on the UNDOTBS2 tablespace rises (the DML generates a large amount of undo), and the pressure on the USERS tablespace rises as well; the data of table tb1 is stored in the USERS tablespace.

3) View disk I/O:

[oracle@rh6 ~]$ iostat -d -k -c 1 1
Linux 2.6.32-71.el6.i686 (rh6.cuug.net)  09/03/2014  _i686_  (1 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.78    0.00    1.03   14.15

Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sda 5.46 76.27 83.33 363511 4.65 2017137
sdb 14.07 87.64 67081 0.82 2017684
sdc 1.24 180.29 5932 26.32 859266
dm-0 75.05 83.32 357693 0.08 0.32 0.00 1516 1 345256
dm-3 3.46 3.54 8.64 16881 41156
dm-4 0.07 0.26 0.05 1249 216
dm-5 1.87 2.23 6.52 10621 31056
dm-6 31.28 0.31 124.79 1461 594772
dm-7 13.95 0.30 55.49 1445

[oracle@rh6 ~]$ iostat -d -k -x 1
Linux 2.6.32-71.el6.i686 (rh6.cuug.net)  09/03/2014  _i686_  (1 CPU)

Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util
sda 2.70 18.76 2.26 3.19 76.09 83.14 58.44 0.13 23.72 13.63 7.43
sdb 1.70 19.57 0.58 4.06 14.04 87.46 43.74 0.12 25.54 22.23 10.32
sdc 0.42 44.49 0.33 0.48 1.24 179.86 445.40 0.55 681.36 0.73
dm-0 8.93 0.00 0.00 4.63 21.63 74.87 83.14 12.04 7.37
dm-1 5.32 202.57 2.81 0.00 0.32 0.00 8.00 0.00 4.15 3.08 0.02
dm-2 0.00 0.00 1.05 18.74 7.82 72.28 8.09 0.31 15.64 3.40 6.73
dm-3 0.00 0.00 0.87 2.58 3.53 8.63 7.05 0.05 4.09
dm-4 13.06 11.85 0.00 0.00 0.06 0.01 0.05 8.42 0.00 6.56 5.20 0.04
dm-5 0.00 0.00 0.21 1.66 2.22 6.50 9.33 0.10 54.91 3.38 0.63
dm-6 0.00 0.00 0.08 31.13 0.31 124.50 8.00 67.75 0.68
dm-7 2171.18 0.22 0.00 0.00 0.08 13.84 0.30 55.36 8.00 2.08 149.24 0.02 0.03
The /u01 file system (data files) is on sda, the redo log files are stored on sdb, and the archive log files are stored on sdc; the I/O pressure on sda and sdb is high.
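Conclusions like this depend on knowing which disk backs each file system. df can map a path to its device before you narrow iostat to it. A minimal sketch; the path "/" is used here only for portability (the article's case would use /u01), and the commented iostat line is an assumed follow-up command, not output of this script:

```shell
#!/bin/sh
# Map a mount point to its backing block device, then monitor just that
# device. "df -P" guarantees one line of output per filesystem.
dev=$(df -P / | awk 'NR == 2 { print $1 }')
echo "/ is backed by: $dev"
# Follow-up (commented out so the sketch does not require iostat):
#   iostat -d -k -x 2 "${dev##*/}"
```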