Rationale: monitor disk I/O performance by parsing the /proc/diskstats file. The fields are explained as follows:
+++++++++++++++++++++++++++ Explanation of /proc/diskstats +++++++++++++++++++++++++++
# cat /proc/diskstats | grep sda | head -1
8 0 sda 73840 10263 3178156 91219 1110085 4192562 42423152 1275861 0 447798 1366379
Fields 1-3: major device number, minor device number, device name.
Field 4: reads completed. The total number of reads completed successfully.
Field 5: reads merged (see also field 9, writes merged). Reads and writes which are adjacent to each other may be merged for efficiency; thus two 4K reads may become one 8K read before it is ultimately handed to the disk, and so it will be counted (and queued) as only one I/O. This field lets you know how often that was done.
Field 6: sectors read. The total number of sectors read successfully.
Field 7: milliseconds spent reading. The total number of milliseconds spent by all reads (as measured from __make_request() to end_that_request_last()).
Field 8: writes completed. The total number of writes completed successfully.
Field 9: writes merged. The number of writes merged; see the explanation under field 5.
Field 10: sectors written. The total number of sectors written successfully.
Field 11: milliseconds spent writing. The total number of milliseconds spent by all writes (as measured from __make_request() to end_that_request_last()).
Field 12: I/Os currently in progress. The only field that should go to zero. It is incremented as requests are handed to the appropriate request_queue_t and decremented as they finish.
Field 13: milliseconds spent doing I/Os. This field increases as long as field 12 (I/Os in progress) is nonzero. (The kernel documentation numbers only the eleven statistics fields, so it refers to this one as "field 9".)
Field 14: weighted number of milliseconds spent doing I/Os. This field is incremented at each I/O start, I/O completion, I/O merge, or read of these statistics, by the number of I/Os in progress (field 12) times the number of milliseconds spent doing I/O since the last update of this field. This can provide an easy measure of both I/O completion time and the backlog that may be accumulating.
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
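For a quick sanity check of the field numbering above, a small sketch like the following can print each statistics field with its label (assuming bash; the device defaults to sda and can be passed as the first argument):

#!/bin/bash
# Print the 11 statistics fields of /proc/diskstats for one device,
# labeled to match the field numbering above.
DEV=${1:-sda}
awk -v dev="$DEV" '$3 == dev {
    print "field 4,  reads completed:       " $4
    print "field 5,  reads merged:          " $5
    print "field 6,  sectors read:          " $6
    print "field 7,  ms spent reading:      " $7
    print "field 8,  writes completed:      " $8
    print "field 9,  writes merged:         " $9
    print "field 10, sectors written:       " $10
    print "field 11, ms spent writing:      " $11
    print "field 12, I/Os in progress:      " $12
    print "field 13, ms spent doing I/O:    " $13
    print "field 14, weighted ms doing I/O: " $14
}' /proc/diskstats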
Next, add the following UserParameter entries to the Zabbix agent configuration file:
vi /usr/local/zabbix/etc/zabbix_agentd.conf
# disk read operations completed
UserParameter=custom.vfs.dev.read.ops[*],cat /proc/diskstats | grep $1 | head -1 | awk '{print $$4}'
# milliseconds spent reading
UserParameter=custom.vfs.dev.read.ms[*],cat /proc/diskstats | grep $1 | head -1 | awk '{print $$7}'
# disk write operations completed
UserParameter=custom.vfs.dev.write.ops[*],cat /proc/diskstats | grep $1 | head -1 | awk '{print $$8}'
# milliseconds spent writing
UserParameter=custom.vfs.dev.write.ms[*],cat /proc/diskstats | grep $1 | head -1 | awk '{print $$11}'
# I/Os currently in progress
UserParameter=custom.vfs.dev.io.active[*],cat /proc/diskstats | grep $1 | head -1 | awk '{print $$12}'
# milliseconds spent doing I/O
UserParameter=custom.vfs.dev.io.ms[*],cat /proc/diskstats | grep $1 | head -1 | awk '{print $$13}'
# sectors read (one sector is 512 B)
UserParameter=custom.vfs.dev.read.sectors[*],cat /proc/diskstats | grep $1 | head -1 | awk '{print $$6}'
# sectors written (one sector is 512 B)
UserParameter=custom.vfs.dev.write.sectors[*],cat /proc/diskstats | grep $1 | head -1 | awk '{print $$10}'
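After saving the file, restart the agent so the new keys are loaded. A minimal way to do this, assuming the /usr/local/zabbix install prefix used above (adjust if your agent is managed by an init script or systemd):

pkill zabbix_agentd
/usr/local/zabbix/sbin/zabbix_agentd -c /usr/local/zabbix/etc/zabbix_agentd.conf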
The test commands are as follows:
# ./zabbix_get -s 10.2.11.11 -p 10050 -k custom.vfs.dev.write.ops[sda]
111153
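To verify all eight keys in one pass, a quick loop over zabbix_get can be used (a sketch, assuming zabbix_get is on the PATH and reusing the agent IP and device name from above):

for key in read.ops read.ms write.ops write.ms io.active io.ms read.sectors write.sectors; do
    echo -n "custom.vfs.dev.$key[sda]: "
    zabbix_get -s 10.2.11.11 -p 10050 -k "custom.vfs.dev.$key[sda]"
done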
Adding items: the approach is to first create a template, then add the items to that template.
Item details:
First item: Name: disk:$1:read:bytes/sec ($1 in the name expands to the first key parameter, here sda)
Key: custom.vfs.dev.read.sectors[sda]
Units: B/sec
Store value: Delta (speed per second) (performs a per-second difference calculation)
Use custom multiplier: 512 (multiplies the value by 512, since the raw value is in sectors and one sector is 512 B; a worked example of these two settings follows the screenshot below)
[Screenshot of the item configuration: http://blog.chinaunix.net/attachment/201504/16/26446098_1429168712iR9g.png]
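As a worked example of how Delta and the multiplier combine (the sample values here are illustrative): if custom.vfs.dev.read.sectors[sda] returns 3178156 on one poll and 3180204 sixty seconds later, the per-second delta is (3180204 - 3178156) / 60 = 2048 / 60, about 34.1 sectors/sec, and the 512 multiplier turns that into roughly 17477 B/sec of read throughput.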
The remaining items are added in the same way:
Second item: Name: disk:$1:write:bytes/sec
Key: custom.vfs.dev.write.sectors[sda]
Units: B/sec
Store value: Delta (speed per second)
Use custom multiplier: 512
Third item: Name: disk:$1:read:ops per second
Key: custom.vfs.dev.read.ops[sda]
Units: ops/second
Store value: Delta (speed per second)
Fourth item: Name: disk:$1:write:ops per second
Key: custom.vfs.dev.write.ops[sda]
Units: ops/second
Store value: Delta (speed per second)
Fifth item: Name: disk:$1:read:ms
Key: custom.vfs.dev.read.ms[sda]
Units: ms
Store value: Delta (speed per second)
Sixth item: Name: disk:$1:write:ms
Key: custom.vfs.dev.write.ms[sda]
Units: ms
Store value: Delta (speed per second)
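Field 13 (the custom.vfs.dev.io.ms key) can also approximate disk utilization, the same way iostat computes %util: the increase in milliseconds spent doing I/O, divided by the elapsed wall-clock milliseconds, is the fraction of time the disk was busy. A minimal two-sample sketch, assuming the sda device used above:

# Approximate %util for sda over a 10-second window from field 13.
t1=$(awk '$3 == "sda" {print $13}' /proc/diskstats)
sleep 10
t2=$(awk '$3 == "sda" {print $13}' /proc/diskstats)
# (ms busy) / (10000 ms elapsed) * 100 = percent of time with I/O in flight
echo "util: $(( (t2 - t1) * 100 / 10000 ))%"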
Translated from: http://blog.chinaunix.net/uid-26446098-id-4964263.html
GitHub: https://github.com/grundic/zabbix-disk-performance
Configuration for pushing disk monitoring via Ansible: https://github.com/meissnerIT/mit.zabbix-agent.disk-performance
This article is from the "Zengestudy" blog; please keep the source: http://zengestudy.blog.51cto.com/1702365/1874951
Zabbix monitoring disk I/O for Linux servers