vmstat command
Usage: vmstat 1 -> print statistics every 1 second;
Column meanings:
- r: length of the run queue; if this value stays large, the CPU is busy and utilization is high;
- b: number of processes blocked waiting for IO;
- swpd: virtual memory (swap) usage;
- free: free memory;
- buff: memory used as buffers/cache;
- si (pages swapped in from disk to memory) and so (pages swapped out from memory to disk): these two columns indicate swap activity; if the values stay large for a long time, memory is insufficient;
- bi: blocks read from disk;
- bo: blocks written to disk;
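The columns above can be pulled out of vmstat output with a short awk filter. A minimal sketch, using a hard-coded sample line in place of live vmstat output (on a real system you would pipe `vmstat 1 5` in instead; field positions follow the standard header order):

```shell
# Sample vmstat output (hard-coded stand-in data, not a live measurement)
sample='procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 2  0      0 812345  40320 512000    0    0    12    48  210  350  5  2 90  3  0'

# Field positions per the header line: r=1 b=2 si=7 so=8 bi=9 bo=10 wa=16
echo "$sample" | awk 'NR == 3 {
    printf "r=%s b=%s si=%s so=%s bi=%s bo=%s wa=%s\n", $1, $2, $7, $8, $9, $10, $16
}'
```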
Linux memory management mechanism
- Linux has its own memory management mechanism: it uses as much memory as possible (for caching) to improve IO efficiency;
- If programs need memory, the system automatically releases cache and buffer memory for them once the trigger threshold is reached (cache and buffer are dynamically managed by the kernel);
- If used memory is large while cache and buffer are small, memory may really be short, but you cannot judge by the size of free alone. Cache and buffers can be treated as part of free memory (available memory = free + buffers + cached);
- Swap is virtual memory on disk, so changes in swap usage come with increased IO;
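The available-memory formula above can be checked with simple shell arithmetic. A minimal sketch with made-up sample values in kB (on a real system the inputs would come from /proc/meminfo or the free command):

```shell
# Sample values in kB (assumptions, not live measurements)
free_kb=102400
buffers_kb=204800
cached_kb=1048576

# available memory = free + buffers + cached
available_kb=$((free_kb + buffers_kb + cached_kb))
echo "available: ${available_kb} kB"   # prints: available: 1355776 kB
```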
Symptoms of low memory:
Free memory drops sharply, reclaiming buffer and cache no longer helps, the swap partition is heavily used (swpd grows), disk reads and writes increase (IO), and a large share of CPU time is spent waiting for IO (wa).
Recommendation: during testing, keep a sufficient amount of memory available, no less than 20%;
Good state: si and so tend toward 0.
IO bottlenecks:
IO bottlenecks are now a common problem in systems; there is no good general solution, because disk performance has not advanced as fast as CPU and memory.
Good: iowait% < 20%
Acceptable: iowait% = 35%
Bad: iowait% >= 50%
The above values are for reference only.
cpu: wa too large (see the reference values above);
system: bi and bo too large (reference value: over 2000).
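The reference thresholds above can be turned into a quick check. A hedged sketch using made-up sample values for one vmstat interval (the thresholds, wa over 20 and bi+bo over 2000, are the reference values quoted above, not hard rules):

```shell
# Made-up sample values for one vmstat interval (not live data)
bi=1800; bo=900; wa=25

# Flag high iowait (reference value from above: 20%)
if [ "$wa" -gt 20 ]; then
    echo "warning: high iowait (wa=${wa}%)"
fi

# Flag heavy disk traffic (reference value from above: bi+bo over 2000)
if [ $((bi + bo)) -gt 2000 ]; then
    echo "warning: heavy disk IO (bi+bo=$((bi + bo)))"
fi
```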
Command usage scenarios:
If vmstat shows us very high (near 100%), use top to see which process is responsible, then analyze that process;
If sy is very high, try strace to look at its system calls;
If IO looks abnormal, try iostat for per-device detail.
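The triage steps above can be sketched as a small decision script. The thresholds are illustrative assumptions, and the CPU percentages are hard-coded sample values rather than live vmstat output:

```shell
# Sample CPU percentages from one vmstat interval (assumed values)
us=95; sy=3; wa=1

if [ "$us" -ge 90 ]; then
    # user CPU dominates: find the hot process first
    echo "next step: top, then analyze the busiest process"
elif [ "$sy" -ge 50 ]; then
    # kernel CPU dominates: inspect system calls
    echo "next step: strace the suspect process"
elif [ "$wa" -ge 20 ]; then
    # CPU is mostly waiting on disk
    echo "next step: iostat for per-device IO detail"
else
    echo "CPU looks healthy"
fi
```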
iostat (easy to understand)
Running iostat with no arguments shows statistics accumulated from system boot up to the time of execution;
avg-cpu: overall CPU usage statistics; for multi-core CPUs this is the average across all CPUs;
Device: IO statistics for each disk device;
tps: IO operations per second;
kB_read/s: amount of data read from the device per second;
kB_wrtn/s: amount of data written to the device per second;
kB_read: total amount of data read;
kB_wrtn: total amount of data written.
iostat -k 5 2
Runs 2 reports, 5 seconds apart; without the count, iostat keeps printing continuously.
iostat -x shows more detailed information (important):
rrqm/s: number of read requests merged per second for this device (requests for the same block are merged);
wrqm/s: number of write requests merged per second for this device;
r/s: number of read requests completed per second (rio);
w/s: number of write requests completed per second (wio);
rsec/s: number of sectors read per second (rsect);
rkB/s: amount of data read per second, in kilobytes;
avgqu-sz: average I/O queue length;
await: average wait time per device I/O operation (ms);
svctm: average service time per device I/O operation (ms); svctm close to await indicates little time spent waiting in the queue;
%util: how busy the device is; 80% means the device is already close to saturation.
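These fields can be picked out of an iostat -x device line with awk. A minimal sketch over a hard-coded sample line; column positions vary between sysstat versions, so the field numbers here are assumptions tied to the sample header shown in the comment:

```shell
# One hard-coded sample iostat -x device line (not live output);
# assumed column order:
# dev rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util
line='sda 0.02 1.50 0.40 2.10 12.80 28.90 33.36 0.05 18.50 2.10 0.52'

await=$(echo "$line" | awk '{print $10}')
svctm=$(echo "$line" | awk '{print $11}')
util=$(echo "$line"  | awk '{print $12}')
echo "await=${await}ms svctm=${svctm}ms util=${util}%"

# await much larger than svctm suggests requests are queueing
awk -v a="$await" -v s="$svctm" 'BEGIN { if (a > 5 * s) print "warning: IO requests are queueing" }'
```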
Symptoms of an IO bottleneck:
1. %util is very high;
2. await is much larger than svctm;
3. avgqu-sz is large.
Linux Monitoring Combat-2