Those performance parameter metrics for Linux servers


While a Linux-based server is running, it also exposes a wide range of parameter information. Ops engineers and system administrators are naturally very sensitive to these data, but the parameters matter to developers too, especially when your program is misbehaving: these clues often help you locate and track down the problem quickly.
This article only introduces a few simple tools for viewing the relevant system parameters; of course, many of them work by analyzing the data under /proc and /sys, while more detailed, professional performance monitoring and tuning may require more specialized tools (perf, SystemTap, etc.) and techniques. After all, system performance monitoring is a deep subject in its own right.

First, CPU and memory class

1.1 top

$ top

The three values at the end of the first line are the average system load over the previous 1, 5, and 15 minutes, from which you can tell whether the load is rising, steady, or declining. When these values exceed the number of executable CPU units (cores), the CPU is saturated and has become a bottleneck.
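
A quick way to cross-check this, assuming a standard GNU coreutils/procps environment, is to compare the load averages with the number of logical cores:

$ uptime    # prints the same three load averages as top's first line
$ nproc     # number of logical CPU cores to compare the load against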

The second line summarizes the system's task states. running needs no explanation: it covers tasks currently executing on a CPU and those waiting to be scheduled; sleeping is usually a task waiting for some event (such as an IO operation) to complete, and is further divided into interruptible and uninterruptible sleep; stopped is a suspended task, usually the result of sending SIGSTOP or pressing Ctrl-Z on a foreground task; zombie is a terminated task whose resources have been automatically reclaimed, but whose task descriptor cannot be released until the parent process reaps it, so the process shows up in the defunct state. Whether the parent exited early or simply never called wait(), such a process deserves extra attention because it usually points to a design error in the program.
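
As an illustrative sketch (the field names are standard ps output; the awk filter is just one way to do it), defunct tasks and their parents can be listed like this:

$ ps -eo stat,pid,ppid,comm | awk 'NR==1 || $1 ~ /^Z/'   # header plus any zombie (defunct) tasks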

The third line breaks CPU utilization down into the following categories:
(us) user: CPU time spent in user space by processes with a low nice value (high priority, nice<=0). Under normal circumstances, as long as the server is not too busy, most of the CPU time should be spent running these programs.
(sy) system: CPU time spent in kernel space. The operating system enters the kernel through system calls from user space to perform specific services; this value is usually small, but it grows when the server does a lot of IO-intensive work.
(ni) nice: CPU time spent in user space by processes with a high nice value (low priority, nice>0). Newly started processes default to nice=0 and are not counted here unless their nice value is changed manually with renice or setpriority().
(id) idle: CPU time spent in the idle state (executing the kernel's idle handler).
(wa) iowait: time spent waiting for IO to complete.
(hi) irq: time the system spends handling hardware interrupts.
(si) softirq: time the system spends handling soft interrupts. Remember that soft IRQ handling is split into softirqs, tasklets (actually a special case of the former) and work queues; it is not clear which of these are counted here, since work queues do not run in interrupt context anyway.
(st) steal: only meaningful on a virtual machine, because a VM's CPUs share the underlying physical CPUs. This is the time the virtual machine spent waiting for the hypervisor to schedule it onto a CPU, which also means that during this time the hypervisor scheduled the CPU to run something else and the CPU resources were "stolen". On my KVM VPS this value is non-zero, but only on the order of 0.1; could it perhaps be used to judge whether a VPS host is oversold?
A high CPU usage rate can mean many different things, which also suggests the corresponding troubleshooting approaches when a server's CPU utilization is high (see the /proc/interrupts sketch after this list for case (e)):
(a) When the user share is too high, usually a few individual processes are hogging the CPU, and they are easy to find with top. If you then suspect the program is misbehaving, tools such as perf can be used to find the hot call paths for further investigation;
(b) When the system share is too high, heavy IO (including terminal IO) may account for it, for example on file servers, database servers and similar machines; otherwise (say >20%) some part of the kernel or a driver module may have a problem;
(c) When the nice share is high it is usually deliberate: whoever started a CPU-hungry process set its nice value so that it does not crowd out CPU requests from other processes;
(d) When the iowait share is too high, it usually means some program's IO operations are very inefficient, or the IO device's performance is so poor that reads and writes take a long time to complete;
(e) When the irq/softirq share is too high, it is likely that some peripheral has a problem and is generating a large number of IRQ requests; examining the /proc/interrupts file can help dig into the issue;
(f) When the steal share is too high, the unscrupulous vendor has oversold its virtual machines!
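
For case (e), a simple way to see which interrupt source is climbing, assuming a standard procfs layout, is to watch /proc/interrupts and highlight what changes between refreshes:

$ watch -d -n1 cat /proc/interrupts   # -d highlights counters that changed since the previous refresh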

Lines four and five show physical memory and virtual memory (swap partition) information:
total = free + used + buff/cache. Nowadays the buffers and cached Mem figures are combined into a single column, but the relationship between buffers and cached Mem is not explained clearly in many places. In fact, comparing the numbers shows that these two values are the Buffers and Cached fields in /proc/meminfo: Buffers is the block cache for raw disks, mainly caching file system metadata (such as superblock information) in raw-block form, and this value is generally small (around 20M); Cached is the read cache for specific files, used to improve file access efficiency, and can be regarded as the file cache of the file system.
avail Mem is a newer field that indicates how much memory can be given to a newly started program without swapping; it is roughly equal to free + buff/cache, which confirms the point above: free + buffers + cached Mem is the physical memory that is really available. Also, using the swap partition is not necessarily a bad thing, so swap usage by itself is not a critical metric; frequent swap in/out, however, is not a good sign, and that situation deserves attention because it usually indicates a shortage of physical memory.
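
To cross-check these figures against the raw data top reads, you can pull the same fields straight out of /proc/meminfo (the field names are standard on recent kernels):

$ grep -E '^(MemTotal|MemFree|MemAvailable|Buffers|Cached|SwapTotal|SwapFree):' /proc/meminfo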

The last part is the per-process resource usage list, where CPU utilization is the sum of the usage across all CPU cores. While top is running, the top program itself performs a large number of reads under /proc, so top itself usually ranks near the top of the list.
top is very powerful, but it is mostly used for real-time monitoring of system information at the console; it is not suitable for monitoring system load over long periods (days, months), and it also misses short-lived processes and cannot report statistics for them.
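
If you do want to log top's view over a longer stretch, one simple option (a sketch; the interval, count and log path are arbitrary choices) is batch mode:

$ top -b -d 60 -n 1440 >> /var/log/top-daily.log   # one snapshot per minute for roughly a day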

1.2 vmstat

vmstat is another commonly used system monitoring tool besides top. The output below shows the system load while I compile Boost with -j4.

r is the number of runnable processes, and the figure roughly agrees with top; b is the number of processes in uninterruptible sleep. swpd is the amount of virtual memory in use and means the same thing as top's swap used. As the manual says, buffers is usually much smaller than cached Mem, generally on the order of 20M. bi and bo in the io section are the blocks received from and sent to block devices per second (blocks/s); in, in the system section, is the number of interrupts per second (including clock interrupts), and cs is the number of context switches caused by process switching.
Speaking of which, this reminds me of the old argument about whether the -j parameter for compiling the Linux kernel should be the number of CPU cores or cores+1. By varying the -j value while compiling Boost and the Linux kernel with vmstat monitoring turned on (see the sketch below), I found that the context-switch rate barely changed in either case, and only increases noticeably when -j is raised much higher. So it does not seem worth agonizing over this parameter, although I have not measured the exact compilation times. The literature says that outside of system boot or benchmark conditions, a context-switch rate above 100000 means the program definitely has a problem.
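
A sketch of the experiment above, assuming Boost's b2 build driver (the job counts and log name are only for illustration); keep an eye on the cs column while the build runs:

$ vmstat 1 > vmstat-j4.log &   # sample once per second in the background
$ ./b2 -j4                     # rebuild with 4 jobs; repeat with -j8, -j16 and compare the cs column
$ kill %1                      # stop the background vmstat when the build finishes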

1.3 pidstat

If you want to track a process comprehensively and specifically, nothing is more suitable than pidstat: stack space, page faults, voluntary and involuntary context switches and more can all be seen. The most useful parameter of this command is -t, which lists detailed information for each thread in the process.
-r: displays page faults and memory usage. A page fault occurs when the program accesses a page that is mapped in its virtual address space but has not yet been loaded into physical memory; there are two main types:
(a) minflt/s refers to minor faults: the physical page already exists in physical memory for some reason (shared pages, the caching mechanism, etc.) and is merely not yet referenced in the current process's page table, so the MMU only needs to set up the corresponding entry; the cost is quite small.
(b) majflt/s refers to major faults: the MMU needs to obtain a free physical page from the currently available physical memory (if no free page is available, other physical pages must first be swapped out to swap space to free one up), then load the data from external storage into that page and set up the corresponding entry; the cost is quite high, several orders of magnitude more than a minor fault.
-s: stack usage, including stksize, the stack space reserved for the thread, and stkref, the stack space actually used. Using ulimit -s I found that the default stack size on CentOS 6.x is 10240K, while on CentOS 7.x and the Ubuntu series the default is 8192K.

-u: CPU usage; the fields are similar to those described earlier.
-w: the number of thread context switches, subdivided into cswch/s, voluntary switches caused by waiting for resources and similar factors, and nvcswch/s, involuntary switches caused by the thread exhausting its CPU time slice.
Running ps to get a program's PID before every pidstat invocation is tedious, so the killer option -C lets you specify a string: any command containing that string has its statistics printed; -l displays the full program name and arguments.
$ pidstat -w -t -C "ailaw" -l
So, when looking at a single, specific multi-threaded task, pidstat works better than the usual ps!
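
A combined invocation, as a sketch (the string "ailaw" follows the example above and 1 is the sampling interval in seconds):

$ pidstat -r -u -w -t -C "ailaw" -l 1   # page faults, CPU and context switches for every matching thread, once per second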

1.4 Others

When individual CPU cores need to be monitored separately, besides htop you can use mpstat to check whether the workload on each core of an SMP processor is balanced and whether some hotspot thread is monopolizing a core.
$ mpstat -P ALL 1
If you just want to monitor the resources used by a single process, you can either filter out other users' unrelated processes with top -u taozj, or use the approach below; the ps command can customize which fields are printed:

$ while :; do ps -eo user,pid,ni,pri,pcpu,psr,comm | grep 'ailawd'; sleep 1; done

If you want to see the parent/child relationships between processes clearly, the following commonly used parameters display the process tree, and the output is even prettier than pstree:
$ ps axjf

Second, disk IO class

iotop can intuitively display the real-time disk read/write rate of each process and thread; lsof can show not only the open-file information of ordinary files (and which user holds them), but also the open information for device files such as /dev/sda1, for example when a partition refuses to umount. You can use lsof to find out how a disk partition is being used, and adding the +fg parameter additionally displays the file open flags (see the sketch below).
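
A small sketch of the umount case; /dev/sda1 here is an assumed device name:

$ lsof /dev/sda1        # which processes still hold files open on the partition that refuses to umount
$ lsof +fg /dev/sda1    # the same listing with the FILE-FLAG column showing the open flags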

2.1 iostat

$ iostat -xz 1
In practice iostat -xz 1 serves much the same purpose as sar -d 1; the key parameters for disks are:
avgqu-sz: the average queue length of I/O requests sent to the device; for a single disk a value >1 indicates the device is saturated, except for logical disks backed by multi-disk arrays;
await (r_await, w_await): the average time (ms) each device I/O request takes, i.e. the sum of the time the request spends waiting in the queue and the time it is being serviced;
svctm: the average service time (ms) of I/O requests sent to the device. If svctm is close to await, there is little I/O queueing and disk performance is good; otherwise the disk queue wait is long and the disk responds poorly;
%util: device utilization, the fraction of each second spent doing I/O work. The performance of a single disk degrades when %util>60% (reflected in a rising await), and the device is close to saturation as it approaches 100%; again, logical disks backed by multi-disk arrays are an exception;
Also, even when the monitored disk performance is poor, it does not necessarily affect the application's response time: the kernel typically uses asynchronous I/O and read/write caching to improve performance, although this is in turn constrained by the physical memory limits discussed above.
The parameters above also apply to network file systems.
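
When only one device matters, a quick way to keep watching its await and %util (sda is an assumed device name) is to filter the continuous iostat output:

$ iostat -xz 1 | awk '/^Device/ || /^sda/'   # keep the header line and the sda rows only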

Third, the network class

The importance of network performance to a server is self-evident. The tool iptraf can intuitively show the NIC's real-time send and receive rates, and similar throughput information can also be obtained conveniently with sar -n DEV 1. Since network cards come with a nominal maximum rate, for example a gigabit card on a gigabit LAN, it is easy to work out the device's utilization.
Generally speaking, the NIC's transmission rate is not what network development cares about most; more important are the packet loss rate, retransmission rate, latency and similar information for specific UDP and TCP connections.
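
A minimal utilization check, assuming the sysstat package is installed and eth0 is the interface of interest:

$ sudo ethtool eth0 | grep -i speed   # the NIC's negotiated line rate
$ sar -n DEV 1                        # compare rxkB/s and txkB/s against that rate to estimate utilization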

3.1 netstat

$ netstat -s
This displays the aggregate statistics for each protocol since the system started. Although the information is rich and useful, the values are cumulative, so short of diffing two runs you cannot get the current network state from them, though you can use watch to eyeball the trend (see the sketch at the end of this subsection). Therefore netstat is more commonly used to inspect port and connection information:

netstat --all(-a) --numeric(-n) --tcp(-t) --udp(-u) --timers(-o) --listening(-l) --program(-p)

--numeric disables reverse DNS lookups and speeds up the display; the most commonly used combinations are:

$ netstat -antp    # list all TCP connections
$ netstat -nltp    # list all local TCP listening sockets; no need to add the -a parameter
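
As mentioned above, the counters of netstat -s are cumulative, so one way to eyeball their trend (the grep pattern is only an example) is:

$ watch -d -n1 'netstat -s | grep -iE "retrans|overflow|dropped"'   # -d highlights the values that changed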

3.2 sar

sar is an extremely powerful tool that covers everything: CPU, disks, page swapping and more. Here -n is used mainly to analyze network activity. Although sar further breaks network data down by protocol and layer, NFS, IP, ICMP, SOCK and so on, we only care about TCP and UDP. The commands below display, besides the usual segment and datagram send/receive statistics, the following:
TCP
$ sudo sar -n TCP,ETCP 1

active/s: locally initiated TCP connections per second, e.g. via connect(), where the TCP state moves from CLOSED to SYN-SENT;
passive/s: remotely initiated TCP connections per second, e.g. via accept(), where the TCP state moves from LISTEN to SYN-RCVD;
retrans/s (tcpRetransSegs): the number of TCP segments retransmitted per second. Retransmission usually happens when network quality is poor or when packets are dropped because the server is overloaded, and TCP's acknowledgement/retransmission mechanism kicks in;
isegerr/s (tcpInErrs): segments received in error per second (e.g. checksum failures);
UDP
$ sudo sar -n UDP 1
noport/s (udpNoPorts): the number of datagrams received per second for which no application was listening on the destination port;
idgmerr/s (udpInErrors): the number of datagrams received by the local host that could not be delivered for reasons other than the above.
Of course, these figures can explain network reliability to some extent, but they only become meaningful in the context of a specific business scenario.

3.3 tcpdump

tcpdump has to be called a good thing. Everyone likes to use Wireshark for local debugging, but what do you do when a problem appears on a production server? The references in the appendix suggest an approach: restore the environment and capture packets with tcpdump; when the problem reproduces (for example the log prints a certain message or a certain state appears), end the capture. tcpdump itself has the -C/-W parameters to cap the size of the capture files, and when the limit is reached the saved packet data is rotated automatically, so the total amount captured stays under control. Afterwards you take the capture file offline and examine it in Wireshark however you like, how nice is that! Although tcpdump has no GUI, its capture capability is not weak at all: you can filter by interface, host, port, protocol and so on, and the captured packets carry complete timestamps, so offline packet analysis of an online program can be just that simple.
Here is a small test (see the capture sketch below): you can see that once started, Chrome automatically initiated three connections to the web server. Because of the dst port filter, the server's response packets were filtered out; but opening the capture in Wireshark, the SYN and ACK of the connection-establishment handshake are still obvious! When using tcpdump, configure the filter conditions as tightly as possible: on the one hand it makes the later analysis easier, and on the other hand tcpdump itself affects the performance of the NIC and the system, which in turn can affect the online service.
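
A sketch of such a capture, with the interface, host and port as assumptions for illustration; -C caps each file at roughly 100 MB and -W keeps at most five rotated files:

$ sudo tcpdump -i eth0 -C 100 -W 5 -w /tmp/web.pcap 'host 192.168.1.10 and tcp dst port 80'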

That's all for this article!

References
    • Linux Performance Analysis in 60,000 Milliseconds
    • Linux Programmer's Manual: proc(5)
    • Command Line Tools to Monitor Linux Performance
    • Linux Performance Guide
    • Understanding Linux CPU Stats
    • Linux Performance Monitoring: CPU, Memory, IO, Network
    • Linux System Performance Metrics
    • Performance Analysis
    • Linux Performance Analysis and Tools
    • Uncover the Meaning of top's Statistics
    • Using tcpdump and Wireshark Together to Pull Out the Problematic Machine
    • Super Detailed Usage of tcpdump
    • How to Surprise by Being a Linux Performance "Know-It-All"

This article title: Performance parameter metrics for Linux servers

Last update: 2017-01-04, 22:41:16

Original link: https://taozj.org/201701/linux-performance-basic.html

License: Attribution-NonCommercial-ShareAlike 4.0. When reproducing, please keep the original link and credit the author.

