Linux Performance Monitoring Commands
Linux Performance Monitoring: CPU
CPU usage depends largely on what kind of work is running on the CPU. Copying a file, for example, usually takes little CPU time, because most of the work is done by DMA (Direct Memory Access); the CPU is only notified by an interrupt once the copy is complete. Scientific computation, by contrast, is usually CPU-heavy: most of the calculation has to be done on the CPU, while memory and the disk subsystem only provide temporary data storage.

To monitor and understand CPU performance you need some basic operating-system knowledge: interrupts, process scheduling, context switching, the run queue, and so on. Here Vpsee uses an analogy to sketch these concepts and their relationships. The CPU is an innocent, hard-working employee: at every moment there is work to do (processes and threads), and he keeps his own to-do list (the run queue). The boss (the process scheduler) decides what he should work on next; he has to communicate with the boss to learn the boss's intentions and adjust his work in time (context switching), and he has to report to the boss as soon as part of the work is done (interrupts). So besides doing his own work, the employee (CPU) spends a lot of time and energy on communication and reporting.
The CPU is also a hardware resource, and like any other hardware device it needs a driver and a management program before it can be used. We can regard the kernel's process scheduler as the CPU's management program: it manages and allocates CPU resources, schedules processes onto the CPU in a reasonable order, and decides which process gets to use the CPU and which process has to wait. The scheduler in the operating system kernel dispatches two kinds of resources, processes (or threads) and interrupts, and assigns them different priorities: hardware interrupts have the highest priority, followed by kernel (system) processes, and finally user processes. Each CPU maintains a run queue that holds the threads that are ready to run. A thread is either sleeping (blocked, waiting for IO for example) or runnable; if the current load on the CPU is so high that new requests keep arriving faster than the scheduler can handle them, the new runnable threads have to wait in the run queue.

Vpsee is here to talk about performance monitoring, so what do these concepts have to do with it? A great deal. If you were the boss, how would you check the efficiency (performance) of your employees? We would generally use the following information to judge whether an employee is slacking off:
· How many tasks the employee has accepted, completed, and reported to the boss (interrupts);
· How much the employee communicates with the boss and negotiates the progress of each piece of work (context switches);
· Whether the employee's to-do list is full (run queue);
· How the employee actually works, i.e. whether he is slacking off (CPU utilization).
Now substitute the CPU for the employee: we can monitor CPU performance by watching these key parameters: interrupts, context switches, the run queue, and CPU utilization.
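All four of these parameters can be seen at once in vmstat output, which this article also uses later (r = run queue, in = interrupts per second, cs = context switches per second, us/sy/id = CPU utilization):

$ vmstat 1    # prints one line of statistics per second; press Ctrl-C to stop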
Bottom line
Linux Performance Monitoring: Introduction mentioned that we need to know the bottom line before monitoring, so what is the bottom line for CPU performance? Usually we expect our system to reach the following goals:
· CPU utilization: if the CPU is at 100% utilization, it should reach roughly this balance: 65%-70% user time, 30%-35% system time, 0%-5% idle time;
· Context switching: context switching should be considered together with CPU utilization; a large number of context switches is acceptable as long as the CPU utilization balance above is maintained;
· Run queue: each run queue should hold no more than 1-3 threads per processor; for example, on a dual-processor system the run queue should not exceed 6 threads (see the quick check after this list).
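A quick sanity check of this run-queue bottom line (a sketch assuming coreutils' nproc is available; the awk skips vmstat's two header lines, and the sample values are illustrative):

$ nproc
2
$ vmstat 1 3 | awk 'NR > 2 { print "runnable:", $1 }'
runnable: 1
runnable: 0
runnable: 2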
CPU monitoring falls into two categories, CPU utilization and IPC/CPI, and the two apply to different scenarios:
· CPU utilization monitoring is the common case, and the operating system basically provides it out of the box;
· IPC/CPI monitoring requires the assistance of performance experts; the operating system basically ships no relevant commands.
For CPU utilization we need to watch both user-mode CPU usage and system-mode CPU usage: the former is the percentage of total CPU time spent running application code on the CPU, and the latter is the percentage of total CPU time taken by the operating system itself. 100% user-mode CPU would be the ideal condition, but it is usually impossible: whenever there is process scheduling, thread context switching, or IO interaction, system CPU usage rises. Be clear that an application consuming a lot of CPU does not necessarily mean that performance or scalability has peaked or hit a bottleneck. However, if system CPU usage stays high for a long time, it deserves attention: the program may be written inelegantly, or a failing disk may be dragging out IO times; in either case CPU monitoring data is needed to analyze the cause.
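One direct way to watch the user/system split is mpstat from the sysstat package (assuming it is installed; vmstat's us/sy columns give the same information system-wide):

$ mpstat -P ALL 1 3    # per-CPU %usr, %sys and %idle, sampled every second, 3 samples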
For compute-intensive applications, monitoring CPU utilization is not enough; we also need to monitor IPC (instructions per cycle) or CPI (cycles per instruction). Why do we need this data? Both IPC and CPI reflect the percentage of CPU clock cycles spent executing no instruction, in short the percentage of time the CPU spends waiting for instructions or data to be loaded from memory into registers, i.e. stalls. A stall occurs when the CPU is executing an instruction whose operand is not in a register or cache; the CPU must then wait for the data to be loaded from memory into a register, and a single stall typically wastes hundreds of CPU clock cycles. To improve the performance of compute-intensive applications you need IPC/CPI monitoring data so you can reduce stalls, reduce CPU waiting time, or improve cache usage.
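A minimal sketch of how IPC can be obtained on Linux with the perf tool (assuming perf is installed and ./my_app stands in for the workload; the counts shown are illustrative):

$ perf stat -e cycles,instructions ./my_app
...
 2,000,000,000      cycles
 1,000,000,000      instructions    #  0.50  insn per cycle
...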
Linux Performance Monitoring: Memory
The "Memory" mentioned here includesPhysical MemoryAndVirtual Memory, virtual Memory extends the memory space of the computer to the hard disk, physical memory (RAM) and part of the hard disk space (SWAP) as virtual memory provides a consistent virtual memory space for the computer, and the advantage is that we have more memory than we have. Can run more, larger programs, the disadvantage is that part of the hard disk when the overall performance of memory is affected, hard disk read and write faster than memory several orders of magnitude, and the exchange between RAM and swap increased the burden of the system.
In the operating system, virtual memory is divided into pages; on x86 systems each page is 4KB. The Linux kernel reads and writes virtual memory in units of pages: moving pages from memory out to the swap space (SWAP) on disk, and reading them from swap back into memory, are both done page by page. This exchange between memory and SWAP is called paging. It is worth noting that paging and swapping are two completely different concepts; many reference books in China confuse the two and translate both as "exchange". In operating systems, swapping means moving an entire program out to disk to free memory for a new program, while paging swaps out only part of a program (some of its pages). Pure swapping is already hard to find in modern operating systems, because swapping a whole program to disk is time-consuming, laborious, and unnecessary; modern operating systems are basically paging or paging/swapping hybrids. Swapping was originally implemented on UNIX System V.
Virtual memory management is the most complex part of the Linux kernel; to truly understand it might take a whole book. Here Vpsee only describes the two kernel daemons related to performance monitoring: kswapd and pdflush.
The kswapd daemon checks the pages_high and pages_low watermarks: if available memory falls below pages_low, kswapd starts scanning and tries to free 32 pages at a time, repeating this scan-and-free process until available memory rises back above pages_high. For each page it scans, it checks three things: 1) if the page is unmodified, it is placed on the free list; 2) if the page has been modified and is backed by the file system, its contents are written to disk; 3) if the page has been modified but is not backed by the file system, it is written to swap space.
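On modern kernels the per-zone watermarks go by the names min, low, and high (corresponding to pages_low/pages_high above) and can be peeked at in /proc/zoneinfo (the page counts below are illustrative):

$ grep -A3 "pages free" /proc/zoneinfo
  pages free     3975
        min      3610
        low      4512
        high     5415
...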
The pdflush daemon synchronizes file-related memory pages to disk. For example, when a file is opened it is brought into memory; after the file is modified and saved, the kernel does not write it back to disk immediately but lets pdflush decide when to write the appropriate pages out. This is controlled by the kernel parameter vm.dirty_background_ratio; for example, a value of 10 means writing to disk begins when dirty pages reach 10% of all memory pages.
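The current threshold can be checked with sysctl (the value 10 below is illustrative):

$ /sbin/sysctl -n vm.dirty_background_ratio
10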
Linux Performance Monitoring: Disk IO
The disk is usually the slowest subsystem of a computer and the one most prone to performance bottlenecks, because the disk is the farthest from the CPU and accessing it involves mechanical operations such as spinning the spindle and seeking tracks. The speed difference between accessing the hard disk and accessing memory is measured in orders of magnitude, like the difference between a day and a minute. To monitor IO performance we must understand the fundamentals and how Linux handles IO between the hard disk and memory. As mentioned in Linux Performance Monitoring: Memory, IO between memory and the hard disk is done in pages; on Linux a page is 4KB. The system's default page size can be viewed with the following command:
$ /usr/bin/time -v date
...
Page size (bytes): 4096
...
Page Faults
Linux uses virtual memory to greatly expand the program address space, so that programs the physical memory could not hold can still run: by continuously exchanging pages between memory and the hard disk (moving temporarily unused memory pages out to disk, and reading needed pages from disk into memory), the system wins more usable memory, and it looks as if the physical memory has been enlarged. This process is completely transparent to programs: a program does not care which parts of itself are on disk or when they are swapped into memory; the kernel's virtual memory manager takes care of everything. When a program starts, the Linux kernel first checks the CPU cache and physical memory; if the data is already in memory it is used directly; if not, a page fault is raised, and the missing pages are read from the hard disk and cached in physical memory.

Page faults are divided into major page faults (Major Page Fault) and minor page faults (Minor Page Fault). A fault that has to read data from disk is a major page fault; when the data has already been read into memory and cached, a fault served from that memory cache rather than by reading directly from the hard disk is a minor page fault.

The memory cache above plays the role of read-ahead for the hard disk: the kernel first looks for the page in physical memory; if that raises a page fault it looks in the memory cache; only if the page is still not found does it read from the hard disk. Obviously, spare memory used as a cache improves access speed, but there is a hit-rate issue: with luck, if every page fault can be served from the memory cache, performance improves greatly. An easy way to increase the hit rate is to enlarge the cache area: the larger the cache, the more pages it stores, and the higher the hit rate. The following time command shows how many major and minor page faults were generated when a program was first started:
$ /usr/bin/time -v date
...
Major (requiring I/O) page faults: 1
Minor (reclaiming a frame) page faults: 260
...
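Running the same command again right away typically shows zero major page faults, because the needed pages are already cached in memory (an illustrative second run):

$ /usr/bin/time -v date
...
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 260
...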
File Buffer Cache
Reading pages from the memory cache (also called the file buffer cache) is much faster than reading them from the hard disk, so the Linux kernel tries to produce minor page faults (served from the file buffer) as much as possible and to avoid major page faults (served from the hard disk) as much as possible. In this way, as minor page faults accumulate, the file buffer cache gradually grows, until the system has only a small amount of free physical memory left and Linux starts releasing some unused pages. After running Linux for a while, we find that although few programs are running on the system there is always only a little free memory, which gives us the illusion that Linux manages memory inefficiently; in fact Linux is efficiently using the temporarily idle physical memory as cache. The following shows the physical memory and file cache on one of Vpsee's Sun servers:
$ cat /proc/meminfo
MemTotal:      8182776 kB
MemFree:       3053808 kB
Buffers:        342704 kB
Cached:        3972748 kB
This server has 8GB of physical memory in total (MemTotal), 3GB of free memory (MemFree), about 343MB used as disk buffers (Buffers), and about 4GB used as file cache (Cached). Linux really does use a lot of physical memory for caching, and this cache can keep growing.
Page Types
There are three types of memory pages in Linux:
· Read pages (read-only pages, or code pages): pages read in from the hard disk via major page faults that cannot be modified, including static files, executables, library files, and so on. The kernel reads them into memory when it needs them; when memory runs low, the kernel releases them to the free list, and when a program needs them again it must read them back in through another page fault.
· Dirty pages: pages whose data has been modified in memory, for example text files being edited. These pages are synchronized to the hard disk by pdflush; when memory runs low, kswapd and pdflush write the data back to the hard disk and free the memory.
· Anonymous pages: pages that belong to a process but are not associated with any file; they cannot be synchronized to the hard disk. When memory runs low, kswapd is responsible for writing them to the swap partition and freeing the memory.
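The current amounts of dirty and anonymous pages can be read directly from /proc/meminfo (field values are illustrative):

$ grep -E "^(Dirty|AnonPages)" /proc/meminfo
Dirty:              1236 kB
AnonPages:        794356 kB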
IOs per Second (IOPS)
Each disk IO request takes a certain amount of time, and the wait is simply unbearable compared with accessing memory. On a typical 1GHz PC with a circa-2001 hard drive, one random access to a word takes 8,000,000 nanosec = 8 millisec and one sequential access takes about 200 nanosec, while accessing a word in memory takes only a few nanosec. (Data from: Teach Yourself Programming in Ten Years.) Such a hard drive can provide 125 IOPS (1000 ms / 8 ms).
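The arithmetic behind that 125 IOPS figure, as a shell one-liner (1000 ms per second divided by 8 ms per random access):

$ echo $((1000 / 8))
125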
Sequential IO and Random IO
IO can be divided into sequential IO and random IO; before monitoring you need to understand whether the system's workload leans toward sequential IO or random IO. Sequential IO means requesting large amounts of data at once, for example a database running large queries or a streaming media service; sequential IO can move large volumes of data quickly. To evaluate IOPS performance, divide the bytes read and written per second by the number of read and write operations per second: rkB/s divided by r/s, and wkB/s divided by w/s. Below are two seconds of IO activity, showing that the amount of data per write IO is increasing (45060.00 / 99.00 = 455.15 KB per IO, 54272.00 / 112.00 = 484.57 KB per IO). Compared with random IO, sequential IO should pay more attention to the throughput of each IO (KB per IO):
$ iostat -kx 1
avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
           0.00   0.00     2.50    25.25    0.00  72.25

Device:  rrqm/s   wrqm/s   r/s    w/s    rkB/s    wkB/s  avgrq-sz  avgqu-sz   await  svctm  %util
sdb       24.00 19995.00 29.00  99.00  4228.00 45060.00    770.12     45.01  539.65   7.80  99.80

avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
           0.00   0.00     1.00    30.67    0.00  68.33

Device:  rrqm/s   wrqm/s   r/s    w/s    rkB/s    wkB/s  avgrq-sz  avgqu-sz   await  svctm  %util
sdb        3.00 12235.00  3.00 112.00   768.00 54272.00    957.22    144.85  576.44   8.70 100.10
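The KB-per-IO figures quoted above can be reproduced from this output (awk used as a calculator; the operands are the wkB/s and w/s columns):

$ awk 'BEGIN { printf "%.2f KB/IO\n", 45060.00/99.00; printf "%.2f KB/IO\n", 54272.00/112.00 }'
455.15 KB/IO
484.57 KB/IO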
Random IO means data is requested at random; its IO speed does not depend on the size or layout of the data but on how many IO operations the disk can perform per second. For workloads such as web services and mail services, each request carries very little data, but random IO generates many requests per second, so the number of IOs the disk can handle per second is the key:
$ iostat -kx 1
avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
           1.75   0.00     0.75     0.25    0.00  97.26

Device:  rrqm/s  wrqm/s   r/s    w/s   rkB/s   wkB/s  avgrq-sz  avgqu-sz  await  svctm  %util
sdb        0.00   52.00  0.00  57.00    0.00  436.00     15.30      0.03   0.54   0.23   1.30

avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
           1.75   0.00     0.75     0.25    0.00  97.24

Device:  rrqm/s  wrqm/s   r/s    w/s   rkB/s   wkB/s  avgrq-sz  avgqu-sz  await  svctm  %util
sdb        0.00   56.44  0.00  66.34    0.00  491.09     14.81      0.04   0.54   0.19   1.29
Using the formula above: 436.00 / 57.00 = 7.65 KB per IO, 491.09 / 66.34 = 7.40 KB per IO. Compared with sequential IO, the KB per IO of random IO is small enough to be almost negligible; what matters for random IO is the number of IOs per second (IOPS), not the throughput of each IO (KB per IO).
SWAP
Swap devices come into use when the system does not have enough physical memory to handle all requests; a swap device can be a file or a disk partition. But be careful: the cost of using swap is very high. If the system has run out of physical memory it will swap frequently, and if the swap device and the data the program is accessing are on the same file system, it will run into serious IO problems and eventually the whole system slows down or even crashes. The swapping activity between the swap device and memory is an important reference for judging a Linux system's performance, and we already have many tools to monitor swap usage and swapping activity, such as top, cat /proc/meminfo, vmstat, etc.:
$ cat /proc/meminfo
MemTotal:      8182776 kB
MemFree:       2125476 kB
Buffers:        347952 kB
Cached:        4892024 kB
SwapCached:        112 kB
...
SwapTotal:     4096564 kB
SwapFree:      4096424 kB
...
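Alongside /proc/meminfo, vmstat shows swapping as it happens; its si (swap-in) and so (swap-out) columns staying non-zero means the system is actively swapping:

$ vmstat 1 5    # watch the si and so columns over five one-second samples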
Linux Performance Monitoring: Network
Network monitoring is the most complex of all the Linux subsystems. There are too many factors involved, such as latency, congestion, collisions, and packet loss; worse, the routers, switches, and wireless links that the Linux host connects to all affect the overall network, and it is hard to tell whether a problem lies in the Linux network subsystem or in other equipment, which adds to the complexity of monitoring and diagnosis. All the network cards we use nowadays are so-called adaptive NICs, meaning they can automatically adjust their speed and working mode to match different network devices. We can use the ethtool tool to view the NIC's configuration and working mode:
# /sbin/ethtool eth0
Settings for eth0:
        Supported ports: [ TP ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Half 1000baseT/Full
        Supports auto-negotiation: Yes
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Half 1000baseT/Full
        Advertised auto-negotiation: Yes
        Speed: 100Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 1
        Transceiver: internal
        Auto-negotiation: on
        Supports Wake-on: g
        Wake-on: g
        Current message level: 0x000000ff (255)
        Link detected: yes
The example above shows that the NIC supports 10baseT, 100baseT, and 1000baseT and has currently auto-negotiated to 100baseT (Speed: 100Mb/s). You can use the ethtool tool to force the NIC to work at 1000baseT:
# /sbin/ethtool -s eth0 speed 1000 duplex full autoneg off
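Afterwards, running ethtool again should confirm the new mode (a hypothetical check on the same NIC):

# /sbin/ethtool eth0 | grep -E "Speed|Duplex"
        Speed: 1000Mb/s
        Duplex: Full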
Linux Performance Monitoring: Common Commands
1: uptime — view the current system load. The three load-average values are the averages over the last 1, 5, and 15 minutes.
09:50:21 up 15:07, 1 user, load average: 0.27, 0.33, 0.3
2: dmesg | tail — view the tail of the kernel ring buffer (boot messages and recent kernel messages).
3: vmstat — monitor system status (processes, memory, swap, IO, system, CPU).
vmstat 1 means collect once per second.
vmstat 1 10 means collect once per second, 10 times in total.
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0      0 3932924 230676 1280388   0    0     0     5    4   16  5  6 89  0  0
 0  0      0 3932916 230676 1280388   0    0     0     0 1147 1314  7  9 84  0  0
 0  0      0 3932908 230676 1280388   0    0     0    16  439 1245  0  0 99  0  0
 0  0      0 3932908 230676 1280388   0    0     0     0  699 1381  1  0 99  0  0
 1  0      0 3932908 230676 1280388   0    0     0     0 1106 1328  6  8 86  0  0
 0  0      0 3932908 230676 1280388   0    0     0     0  660 1332  2  2 96  0  0
 1  0      0 3932908 230676 1280388   0    0     0    20 1122 1250  7  8 85  0  0
 2  0      0 3932916 230676 1280388   0    0     0     4 2001 1463 14 19 67  0  0
 0  0      0 3932792 230676 1280388   0    0     0     0 1111 1375  5  4 90  0  0
 0  0      0 3932792 230676 1280388   0    0     0     0  589 1295  1  0 99  0  0
procs (processes):
r: the number of processes waiting for CPU resources. If this value stays higher than the number of hardware threads for a long time, you should pay attention.
(The number of hardware threads can be checked with dmesg | grep CPU.)
b: the number of blocked processes (blocked means waiting for some resource, such as IO).
swap (swap partition): if either of the two values below is large, system memory is tight.
si: the amount of memory swapped in from disk per second;
so: the amount of memory swapped out to disk per second.
memory:
swpd: the amount of swap space in use. A value greater than 0 means the machine's physical memory is insufficient; if this is not caused by a memory leak in some program, you should upgrade the memory or migrate the memory-hungry tasks to other machines. However, if swpd is non-zero while si and so remain 0 over a long period, system performance is not affected.
free: the amount of free physical memory.
buff: buffers, used to cache things such as directory contents and file metadata (permissions, etc.).
cache: the page cache, used to cache the contents of the files we open.
io:
bi: the number of blocks per second read from disk or swap into RAM.
bo: the number of blocks per second written from RAM to disk or swap.
Note: for random disk reads and writes, the larger these two values (e.g. above 1024K), the more time the CPU spends waiting on IO.
system:
in: the number of interrupts per second, including clock interrupts.
cs: the number of context switches per second.
For example, every call to a system function requires a context switch, as do thread switches and process context switches. The smaller this value is, the better; if it is too large, consider lowering the number of threads or processes. For web servers such as Apache and Nginx, when we do performance testing with thousands or even tens of thousands of concurrent connections, we keep reducing the web server's process or thread peak under pressure until cs drops to a relatively small value; that process and thread count is then a reasonable setting. The same logic applies to system calls: every call to a system function makes our code enter kernel space and causes a context switch, which is resource-intensive, so we should also avoid calling system functions too frequently. Too many context switches means the CPU wastes most of its time switching contexts instead of doing serious work, so the CPU is not fully utilized; this is undesirable.
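To find out which processes are generating the context switches, pidstat from the sysstat package can report per-process voluntary and involuntary switches (a sketch; output abbreviated and values illustrative):

$ pidstat -w 1
...  PID  cswch/s  nvcswch/s  Command
... 1234    56.00       3.00  nginx
...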
cpu:
us: user CPU time. If it stays above 50% for a long time, we should consider optimizing the program's algorithms or otherwise speeding things up.