# IO.sh
# iostat is used to view disk activity statistics

# display the load of all devices
# r/s: number of read I/O requests completed per second (rio/s)
# w/s: number of write I/O requests completed per second (wio/s)
iostat

# refresh disk IO information every 2 seconds, for 3 reports in total
iostat 2 3

# display the IO information of a single disk
iostat -d sda1

# display tty and cpu information
iostat -t

# display disk IO information in MB
iostat -m

# view TPS and throughput information
# kB_read/s: amount of data read from the drive per second
# kB_wrtn/s: amount of data written to the drive per second
# kB_read:   total amount of data read
# kB_wrtn:   total amount of data written
iostat -d -k 1 1

# view device utilization (%util) and response time (await)
# (see the logging example at the end of this file)
iostat -d -x -k 1 1

# view CPU status
iostat -c 1 3

# per-process (pid) statistics; a process's stats naturally include its IO status
pidstat

# only display IO information, sampled once per second
pidstat -d 1

# -d: IO information
# -r: page faults and memory information
# -u: CPU usage
# -t: use the thread as the statistical unit
# sample every 1 second
# (see the /proc/<pid>/io example at the end of this file)
pidstat -u -r -d -t 1

# file-level IO analysis: view which processes currently have a file open
lsof
ls /proc/pid/fd

# use sar to report disk I/O information
# DEV:      the block device being monitored
# tps:      total transfers per second
# rd_sec/s: sectors read from the device per second
# wr_sec/s: sectors written to the device per second
# avgrq-sz: average size (in sectors) of the I/O requests
# avgqu-sz: average queue length of the I/O requests
# await:    average time (ms) an I/O request spends queued plus being served
# svctm:    average service time (ms) of an I/O request
# %util:    percentage of time the device spent handling I/O requests,
#           i.e. the device utilization rate
sar -pd 10 3

# iotop: a top-like view of per-process I/O
iotop

# view page cache information
# Cached is the memory used for the page cache (disk cache minus SwapCache).
# As cache pages are written to, the Dirty value increases; once a cache page
# is written back to the hard disk, the Writeback value increases until the
# write ends.
# (see the Dirty/Writeback watch example at the end of this file)
cat /proc/meminfo

# check the number of pdflush threads
# Linux uses the pdflush threads to write data from cache pages to the hard
# disk. pdflush behaviour is controlled by parameters under /proc/sys/vm:
# /proc/sys/vm/dirty_writeback_centisecs (default 500, unit 1/100 s) sets how
# often pdflush wakes up to write cached page data to the hard disk; by
# default, two (or more) threads are woken every 5 seconds. If a writeback
# pass takes longer than dirty_writeback_centisecs, additional threads are
# started.
cat /proc/sys/vm/nr_pdflush_threads

# view the I/O scheduler; the available algorithms are listed with the active
# one in brackets, e.g.: noop anticipatory deadline [cfq]
# deadline:     guarantees minimal latency for submitted IO requests by giving
#               each request a deadline
# anticipatory: waits briefly after a read in case an adjacent read follows;
#               this causes large latency for random reads, so it is poor for
#               database applications but performs well for web servers
# cfq:          maintains an I/O queue per process and serves the queues in a
#               round-robin manner, which is fair to every I/O request;
#               suitable for workloads with many discrete reads
# noop:         pushes all IO requests through a single FIFO queue; appropriate
#               when the device (e.g. a RAID controller or SSD) does its own
#               scheduling
# (see the per-device scheduler example at the end of this file)
cat /sys/block/[disk]/queue/scheduler

# change the IO scheduler
echo deadline > /sys/block/sdX/queue/scheduler

# increase the scheduler's request queue depth
echo 4096 > /sys/block/sdX/queue/nr_requests
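# Example: a minimal sketch of periodic iostat logging, so that %util and
# await spikes can be inspected after the fact. The log path and 60-second
# interval are illustrative assumptions, not part of the original script;
# writing to /var/log typically requires root. Bounded to 3 samples here;
# in real use, loop forever and run it in the background.
LOG=/var/log/iostat.log              # hypothetical log location
for i in 1 2 3; do
    date >> "$LOG"                   # timestamp each sample
    iostat -d -x -k 1 1 >> "$LOG"    # one extended sample, KB units
    sleep 60
done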
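# Example: a sketch of reading per-process IO counters directly from
# /proc/<pid>/io (the same data pidstat -d reports); assumes a kernel built
# with task IO accounting, and usually needs root to read other users'
# processes. Prints the top writers by write_bytes, i.e. bytes actually
# submitted to the block layer.
for p in /proc/[0-9]*; do
    [ -r "$p/io" ] || continue                       # skip unreadable entries
    wb=$(awk '/^write_bytes:/ {print $2}' "$p/io")   # bytes written to disk
    printf '%s %s %s\n' "$wb" "${p#/proc/}" "$(cat "$p/comm" 2>/dev/null)"
done | sort -rn | head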
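# Example: a sketch for watching the page-cache counters described above.
# Dirty grows while written data sits in the cache, and Writeback rises while
# it is being flushed to disk, e.g. during a large dd followed by sync in
# another terminal.
watch -n 1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'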
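# Example: a sketch that prints the active scheduler (the bracketed entry)
# for every block device, instead of filling in [disk] by hand.
for f in /sys/block/*/queue/scheduler; do
    dev=${f#/sys/block/}
    echo "${dev%/queue/scheduler}: $(cat "$f")"
done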
# Source: https://github.com/zhwj184/shell-work/blob/master/IO.sh