I. CPU Performance Evaluation
1. vmstat [-V] [-n] [delay [count]]
-V: Print version information (optional)
-n: Display the header only once instead of on every cycle of output
delay: Time interval, in seconds, between two samples
count: Number of samples taken at the interval given by delay; defaults to 1
Example: vmstat 1 3
user1@user1-desktop:~$ vmstat 1 3
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd    free   buff  cache   si   so    bi    bo   in    cs us sy id wa
 0  0      0 1051676 139504 477028    0    0    46    31  130   493  3  1 95  2
 0  0      0 1051668 139508 477028    0    0     0     4  377  1792  3  1 95  0
 0  0      0 1051668 139508 477028    0    0     0     0  327  1741  3  1 95  0
r: Number of processes running or waiting for a CPU time slice (if r is consistently greater than the number of CPUs, the CPUs are insufficient and more are needed)
b: Number of processes blocked waiting on a resource (e.g. waiting for I/O or a memory swap)
swpd: Amount of memory swapped out to swap space, in KB
free: Currently idle physical memory, in KB
buff: Memory used as buffer cache, generally for caching reads and writes to block devices
cache: Memory used as page cache, generally for caching file system data; frequently accessed files are cached here
si: Amount of memory swapped in from disk per second (swap in), in KB/s
so: Amount of memory swapped out to disk per second (swap out), in KB/s
bi: Amount of data read from block devices (disk reads) per second, in KB/s
bo: Amount of data written to block devices (disk writes) per second, in KB/s
in: Number of device interrupts per second observed in the interval
cs: Number of context switches per second
us: Percentage of CPU time consumed by user processes (key metric)
sy: Percentage of CPU time consumed by the kernel (key metric)
id: Percentage of CPU time spent idle (key metric)
wa: Percentage of CPU time spent waiting for I/O
If si and so are nonzero for a long time, the system is short of memory and more should be added.
A reference value for bi+bo is 1000; if it exceeds 1000 and wa is large, the system has an I/O problem and disk read/write performance should be improved.
The larger in and cs are, the more CPU time the kernel is consuming.
A reference value for us+sy is 80%; if it stays above 80%, CPU resources may be insufficient.
In summary, CPU performance evaluation focuses on the r, us, sy, and id columns.
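The us+sy rule of thumb above can be checked mechanically. A minimal sketch (POSIX sh + awk) using the sample `vmstat 1 3` output above as stand-in data; in practice, pipe live vmstat output into the same awk program:

```shell
# Average the us ($13) and sy ($14) columns across all samples, skipping
# the two header lines, and compare the sum with the 80% reference value.
avg=$(awk 'NR>2 { us+=$13; sy+=$14; n++ } END { printf "%.1f", (us+sy)/n }' <<'EOF'
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd    free   buff  cache   si   so    bi    bo   in    cs us sy id wa
 0  0      0 1051676 139504 477028    0    0    46    31  130   493  3  1 95  2
 0  0      0 1051668 139508 477028    0    0     0     4  377  1792  3  1 95  0
 0  0      0 1051668 139508 477028    0    0     0     0  327  1741  3  1 95  0
EOF
)
echo "avg us+sy = ${avg}%"
# awk handles the floating-point comparison that plain sh cannot.
if [ "$(awk -v a="$avg" 'BEGIN { print (a > 80) }')" = 1 ]; then
  echo "WARNING: us+sy above the 80% reference value"
fi
```

For the sample data this prints `avg us+sy = 4.0%`: the machine is nearly idle, consistent with the id column sitting at 95.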
2. sar [options] [-o filename] [interval [count]]
Options:
-A: Display the status of all system resources (CPU, memory, disk)
-u: Display the load on all CPUs over the sampling time
-P: Display the usage of a specified CPU (CPU numbering starts from 0)
-d: Display the usage of all hard disk devices over the sampling time
-r: Display memory usage over the sampling time
-b: Display buffer usage over the sampling time
-v: Display process, file, inode, and lock table status
-n: Display network statistics; followed by DEV (network interfaces), EDEV (network error statistics), SOCK (sockets), or FULL (all three). These can be used alone or together
-q: Display the run queue size, equivalent to the system load average at that time
-R: Display memory page statistics during the sampling time
-y: Display TTY device activity during the sampling time
-W: Display system swap activity during the sampling time
-o filename: Store the results in the specified file in binary format
interval: Sampling interval in seconds; required
count: Number of samples; defaults to 1
Example: sar -u 1 3
user1@user1-desktop:~$ sar -u 1 3
Linux 2.6.35-27-generic (user1-desktop)   March 05, 2011   _i686_   (2 CPU)
09:27:18    CPU   %user   %nice  %system  %iowait  %steal   %idle
09:27:19    all    1.99    0.00     0.50     5.97    0.00   91.54
09:27:20    all    3.90    0.00     2.93     5.85    0.00   87.32
09:27:21    all    2.93    0.00     1.46     4.39    0.00   91.22
Average:    all    2.95    0.00     1.64     5.40    0.00   90.02
%user: Percentage of CPU time consumed by user processes
%nice: Percentage of CPU time consumed by processes running at altered (nice) priority
%system: Percentage of CPU time consumed by kernel (system) processes
%iowait: Percentage of CPU time spent waiting for I/O
%steal: Percentage of time the virtual CPU spent waiting while the hypervisor serviced another virtual processor
%idle: Percentage of CPU time spent idle
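As with vmstat, the sar output lends itself to scripted summaries. A minimal sketch (POSIX sh + awk) that recomputes the average %iowait from the three sample rows above; the "Average:" line sar prints should agree:

```shell
# %iowait is column 6 of a `sar -u` data row (time, CPU, %user, %nice,
# %system, %iowait, %steal, %idle). Average it across the samples.
avg_iowait=$(awk '{ s+=$6; n++ } END { printf "%.2f", s/n }' <<'EOF'
09:27:19 all 1.99 0.00 0.50 5.97 0.00 91.54
09:27:20 all 3.90 0.00 2.93 5.85 0.00 87.32
09:27:21 all 2.93 0.00 1.46 4.39 0.00 91.22
EOF
)
echo "avg %iowait = $avg_iowait"
```

This prints `avg %iowait = 5.40`, matching the Average row in the transcript.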
3. iostat [-c | -d] [-k] [-t] [-x [device]] [interval [count]]
-c: Show CPU usage
-d: Show disk usage
-k: Display statistics in KB per second
-t: Print the time at which each report starts
-x device: Show extended statistics for the named disk device; defaults to all disk devices
interval: Time interval between two reports
count: Number of reports
Example: iostat -c
user1@user1-desktop:~$ iostat -c
Linux 2.6.35-27-generic (user1-desktop)   March 05, 2011   _i686_   (2 CPU)
avg-cpu:  %user   %nice  %system  %iowait  %steal   %idle
           2.51    0.02     1.27     1.40    0.00   94.81
(The fields have the same meanings as in sar)
4. uptime, for example:
user1@user1-desktop:~$ uptime
10:13:30 up 1:15, 2 users, load average: 0.00, 0.07, 0.11
The output shows: the current system time, how long the system has been running since the last boot, the number of users currently logged in, and the system load averages over the last 1, 5, and 15 minutes.
Note: The three load average values should generally not exceed the number of CPUs in the system; otherwise the CPUs are overloaded.
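The load-versus-CPU-count rule can be applied directly to an uptime line. A minimal sketch (POSIX sh) using the sample line above with its 2 CPUs; on a live system the line would come from running uptime and the CPU count from nproc:

```shell
# Extract the 1-minute load average from an uptime-style line and compare
# it with the CPU count (2 in the sample above).
line='10:13:30 up 1:15, 2 users, load average: 0.00, 0.07, 0.11'
load1=$(printf '%s\n' "$line" | sed 's/.*load average:[ ]*//' | cut -d, -f1)
# awk performs the floating-point comparison that plain sh cannot.
verdict=$(awk -v l="$load1" -v c=2 'BEGIN { print ((l > c) ? "busy" : "ok") }')
echo "1-min load $load1 vs 2 CPUs: $verdict"
```

For the sample the verdict is `ok`, as expected for a load of 0.00 on 2 CPUs.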
II. Memory Performance Evaluation
1. free
2. watch combined with free. watch is followed by the command to run, and it repeats that command automatically, every 2 seconds by default. For example:
Every 2.0s: free    Sat Mar  5 10:30:17 2011
             total       used       free     shared    buffers     cached
Mem:       2060496    1130188     930308          0     261284     483072
-/+ buffers/cache:     385832    1674664
Swap:      3000316          0    3000316
(-n specifies the repeat interval, and -d highlights changes between runs)
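Note that the "-/+ buffers/cache" free column is simply free + buffers + cached: memory genuinely available to applications, since buffer and page cache can be reclaimed. A minimal sketch (POSIX sh + awk) verifying this against the sample above:

```shell
# In the `free` Mem: row, field 4 is free, field 6 is buffers, field 7 is
# cached. Their sum should equal the "-/+ buffers/cache" free value.
avail=$(awk '/^Mem:/ { print $4 + $6 + $7 }' <<'EOF'
             total       used       free     shared    buffers     cached
Mem:       2060496    1130188     930308          0     261284     483072
EOF
)
echo "available: ${avail} KB"
```

This prints `available: 1674664 KB`, matching the 1674664 shown in the sample's -/+ buffers/cache line.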
3. Use vmstat, focusing on the swpd, si, and so columns
4. sar -r, for example:
user1@user1-desktop:~$ sar -r 2 3
Linux 2.6.35-27-generic (user1-desktop)   March 05, 2011   _i686_   (2 CPU)
10:34:11  kbmemfree kbmemused  %memused  kbbuffers  kbcached  kbcommit  %commit
10:34:13     923548   1136948     55.18     265456    487156   1347736    26.63
10:34:15     923548   1136948     55.18     265464    487148   1347736    26.63
10:34:17     923548   1136948     55.18     265464    487156   1347736    26.63
Average:     923548   1136948     55.18     265461    487153   1347736    26.63
kbmemfree: Free physical memory
kbmemused: Physical memory in use
%memused: Percentage of total memory in use
kbbuffers: Buffer cache size
kbcached: Page cache size
kbcommit: Amount of memory currently committed to by applications
%commit: Committed memory as a percentage of total memory (RAM plus swap)
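%memused is simply kbmemused / (kbmemfree + kbmemused) × 100. A minimal check (awk) against the sample numbers above:

```shell
# Recompute %memused from the kbmemfree and kbmemused columns of the
# sar -r sample: 1136948 used out of 923548 + 1136948 total KB.
pct=$(awk 'BEGIN { printf "%.2f", 1136948 / (923548 + 1136948) * 100 }')
echo "%memused = $pct"
```

This prints `%memused = 55.18`, matching the value sar reports.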
III. Disk I/O Performance Evaluation
1. sar -d, for example:
user1@user1-desktop:~$ sar -d 1 3
Linux 2.6.35-27-generic (user1-desktop)   March 05, 2011   _i686_   (2 CPU)
10:42:27   DEV     tps  rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz  await  svctm  %util
10:42:28   dev8-0  0.00     0.00      0.00      0.00      0.00   0.00   0.00   0.00
10:42:28   DEV     tps  rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz  await  svctm  %util
10:42:29   dev8-0  2.00     0.00     64.00     32.00      0.02   8.00   8.00   1.60
10:42:29   DEV     tps  rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz  await  svctm  %util
10:42:30   dev8-0  0.00     0.00      0.00      0.00      0.00   0.00   0.00   0.00
Average:   DEV     tps  rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz  await  svctm  %util
Average:   dev8-0  0.67     0.00     21.33     32.00      0.01   8.00   8.00   0.53
DEV: Disk device name
tps: Number of transfers per second to the physical disk, i.e. I/O requests per second; multiple logical requests can be merged into a single physical I/O request
rd_sec/s: Number of sectors read from the device per second (1 sector = 512 bytes)
wr_sec/s: Number of sectors written to the device per second
avgrq-sz: Average size (in sectors) of each device I/O operation
avgqu-sz: Average I/O queue length
await: Average wait time per device I/O operation (ms)
svctm: Average service time per device I/O operation (ms)
%util: Percentage of each second spent performing I/O operations
Normally svctm should be less than await. svctm depends on disk performance, and CPU and memory load also affect it; too many requests will indirectly increase svctm as well.
await generally depends on svctm, the I/O queue length, and the I/O request pattern. If svctm is close to await, there is almost no I/O waiting and disk performance is good; if await is much higher than svctm, the I/O queue wait is too long and applications running on the system slow down. The problem can then be addressed by replacing the disk with a faster one.
If %util is close to 100%, the disk is receiving too many I/O requests and the I/O subsystem is working at full capacity: the disk may be a bottleneck. Over the long term this is bound to hurt system performance; it can be resolved by optimizing the program or by replacing the disk with a faster one.
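The two conditions above can be turned into a quick per-device triage. A minimal sketch (POSIX sh + awk); the 90% utilization and "await more than twice svctm" cutoffs are illustrative assumptions, not values sar itself defines, and the input is the busy sample row from the transcript above:

```shell
# Classify a `sar -d` data row (DEV tps rd_sec/s wr_sec/s avgrq-sz
# avgqu-sz await svctm %util): await is $7, svctm is $8, %util is $9.
check_dev() {
  awk '{ if ($9 > 90)          print $1 ": saturated (%util=" $9 ")";
         else if ($7 > 2 * $8) print $1 ": queueing (await=" $7 " svctm=" $8 ")";
         else                  print $1 ": ok" }'
}
status=$(echo 'dev8-0 2.00 0.00 64.00 32.00 0.02 8.00 8.00 1.60' | check_dev)
echo "$status"
```

For the sample row await equals svctm and %util is 1.60, so the device is classified `dev8-0: ok`.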
2. iostat -d
user1@user1-desktop:~$ iostat -d 2 3
Linux 2.6.35-27-generic (user1-desktop)   March 05, 2011   _i686_   (2 CPU)
Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda               5.89       148.87        57.77    1325028     514144
Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda               0.00         0.00         0.00          0          0
Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda               0.00         0.00         0.00          0          0
Blk_read/s: Number of data blocks read per second
Blk_wrtn/s: Number of data blocks written per second
Blk_read: Total number of blocks read
Blk_wrtn: Total number of blocks written
If Blk_read/s is large, there are many direct disk reads, and the data being read could be cached in memory; if Blk_wrtn/s is large, the disk is being written to frequently, and the disk or the program should be optimized. There are no fixed thresholds for these two values, and they differ between systems, but sustained heavy reads and writes are abnormal and will certainly affect system performance.
3. iostat -x /dev/sda 2 3, to report extended statistics for the specified disk only
4. vmstat -d
IV. Network Performance Evaluation
1. ping
The time value shows the network latency between the two hosts; if it is large, the network delay is large. Packet loss indicates the network packet-loss rate; the smaller it is, the better the network quality.
2. netstat -i, for example:
user1@user1-desktop:~$ netstat -i
Kernel Interface table
Iface   MTU Met   RX-OK RX-ERR RX-DRP RX-OVR  TX-OK TX-ERR TX-DRP TX-OVR Flg
eth0   1500   0 6043239      0      0      0  87311      0      0      0 BMRU
lo    16436   0    2941      0      0      0   2941      0      0      0 LRU
Iface: Network interface name
MTU: Maximum transmission unit, in bytes
RX-OK/TX-OK: Number of packets received/sent without error
RX-ERR/TX-ERR: Number of errors while receiving/sending packets
RX-DRP/TX-DRP: Number of packets dropped while receiving/sending
RX-OVR/TX-OVR: Number of packets lost due to overruns
Flg: Interface flags, where:
L: The interface is a loopback device
B: A broadcast address is set
M: All packets are received (promiscuous mode)
R: The interface is running
U: The interface is up
O: ARP is disabled on the interface
P: The interface is a point-to-point connection
Normally RX-ERR, RX-DRP, RX-OVR, TX-ERR, TX-DRP, and TX-OVR should all be 0. If they are nonzero and large, there is definitely a problem with network quality, and network transmission performance will certainly degrade.
When there is a problem with network transmission, check whether the network card is faulty and whether the deployment environment is reasonable.
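The "all error counters should be 0" check is easy to automate. A minimal sketch (POSIX sh + awk) scanning netstat -i style rows, using the sample interface table above as stand-in data:

```shell
# In a `netstat -i` row, the error/drop/overrun counters are fields
# 5-7 (RX-ERR, RX-DRP, RX-OVR) and 9-11 (TX-ERR, TX-DRP, TX-OVR).
status=$(awk '{ bad = $5 + $6 + $7 + $9 + $10 + $11;
                print $1 ": " ((bad > 0) ? "errors detected" : "clean") }' <<'EOF'
eth0 1500 0 6043239 0 0 0 87311 0 0 0 BMRU
lo 16436 0 2941 0 0 0 2941 0 0 0 LRU
EOF
)
echo "$status"
```

Both sample interfaces report all-zero counters, so each line prints as `clean`.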
3. netstat -r (the row whose destination is "default" shows the system's default route)
4. sar -n, where -n is followed by DEV (network interface statistics), EDEV (network error statistics), SOCK (socket statistics), or FULL (show all)
wangxin@wangxin-desktop:~$ sar -n DEV 2 3
Linux 2.6.35-27-generic (wangxin-desktop)   March 05, 2011   _i686_   (2 CPU)
11:55:32   IFACE  rxpck/s  txpck/s  rxkB/s  txkB/s  rxcmp/s  txcmp/s  rxmcst/s
11:55:34   lo        2.00     2.00    0.12    0.12     0.00     0.00      0.00
11:55:34   eth0      2.50     0.50    0.31    0.03     0.00     0.00      0.00
11:55:34   IFACE  rxpck/s  txpck/s  rxkB/s  txkB/s  rxcmp/s  txcmp/s  rxmcst/s
11:55:36   lo        0.00     0.00    0.00    0.00     0.00     0.00      0.00
11:55:36   eth0      1.50     0.00    0.10    0.00     0.00     0.00      0.00
11:55:36   IFACE  rxpck/s  txpck/s  rxkB/s  txkB/s  rxcmp/s  txcmp/s  rxmcst/s
11:55:38   lo        0.00     0.00    0.00    0.00     0.00     0.00      0.00
11:55:38   eth0     14.50     0.00    0.88    0.00     0.00     0.00      0.00
Average:   IFACE  rxpck/s  txpck/s  rxkB/s  txkB/s  rxcmp/s  txcmp/s  rxmcst/s
Average:   lo        0.67     0.67    0.04    0.04     0.00     0.00      0.00
Average:   eth0      6.17     0.17    0.43    0.01     0.00     0.00      0.00
IFACE: Network interface name
rxpck/s: Number of packets received per second
txpck/s: Number of packets sent per second
rxkB/s: Number of kilobytes received per second
txkB/s: Number of kilobytes sent per second
rxcmp/s: Number of compressed packets received per second
txcmp/s: Number of compressed packets sent per second
rxmcst/s: Number of multicast packets received per second