Understanding load average in Linux systems (graphic version)
Blog Category:
Linux load Nagios

1. What is load average?
Load on a Linux system is a measure of how much work the system is currently doing (Wikipedia: "the system load is a measure of the amount of work that a computer system is doing"). Put simply, it is the length of the process run queue.
Load average is the average of that load over a period of time (the last 1 minute, 5 minutes, and 15 minutes).
We can view the current load average with the system command w:
# w
20:01:55 up ... days, 8:20, 6 users, load average: 1.30, 1.48, 1.69
The output above shows a load average of "1.30, 1.48, 1.69". What do these three values mean?
- The first value, 1.30, is the average load over the last 1 minute.
- The second value, 1.48, is the average load over the last 5 minutes.
- The third value, 1.69, is the average load over the last 15 minutes.
PS: the Linux kernel samples the load every 5 seconds.
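For scripting, the same figures can also be read from /proc/loadavg. A minimal sketch (the field layout described in the comment is standard Linux):

# /proc/loadavg holds five fields: the 1-, 5- and 15-minute load averages,
# the number of currently runnable entities over the total number of entities,
# and the PID most recently assigned by the kernel.
cat /proc/loadavg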
2. What load average means

2.1 Single-core processor
Suppose our system has a single CPU with a single core. Picture it as a one-lane road, and each CPU task as a car. When there are few cars, load < 1; when the cars exactly fill the road, load = 1; when the road is full and more cars are queuing behind it, load > 1.
(Figures: load < 1, load = 1, load > 1)
2.2 Multi-core processors
We often find that a server with load > 1 is still doing fine; that is because the server has a multi-core processor.
Suppose the server's CPU has 2 cores. That gives us 2 lanes, and only when load = 2 are all lanes full of vehicles.
(Figure: load = 2, the road is full.)
# Check the number of CPU cores
grep 'model name' /proc/cpuinfo | wc -l

3. Which load average values should put you on alert?
- 0.7 < load < 1: a healthy state; if more cars come in, the road can still cope.
- load = 1: the road is about to jam; there are no resources left for extra tasks, so look into what is going on.
- load > 5: serious congestion; the road is so busy that no car can move quickly.
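On a multi-core machine these thresholds apply per core, so it is handy to divide the load by the core count obtained with the grep command above. A minimal sketch (the awk arithmetic is only illustrative):

# Divide the 1-minute load average by the number of cores and compare the
# result against the 0.7 / 1 / 5 thresholds listed above.
cores=$(grep -c 'model name' /proc/cpuinfo)
load1=$(awk '{print $1}' /proc/loadavg)
awk -v l="$load1" -v c="$cores" 'BEGIN { printf "load per core: %.2f\n", l / c }'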
4. Which of the three load values should I look at?
Usually we look at the 15-minute load first. If it is high, we then check the 1-minute and 5-minute values to see whether the load is trending downward.
If the 1-minute value is above 1, we don't need to worry yet; but if the 15-minute load is above 1, we should find out what is going on right away. In short, read all three values together in light of the actual situation.
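A rough trend check along these lines can be sketched with awk (purely illustrative; it only compares the 1-minute and 15-minute fields of /proc/loadavg):

# If the 1-minute load is already below the 15-minute load, the spike is easing;
# otherwise the load is flat or still rising.
awk '{ if ($1 + 0 < $3 + 0) print "easing: 1m=" $1 " < 15m=" $3;
       else                 print "flat or rising: 1m=" $1 " >= 15m=" $3 }' /proc/loadavg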
5. Configure load monitoring alarms with Nagios
See article: http://heipark.iteye.com/blog/1340190
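The usual building block for this is the standard check_load plugin shipped with the Nagios plugins package. Below is a minimal sketch of invoking it by hand; the plugin path and the warning/critical thresholds are assumptions for illustration, not taken from the linked article:

# Warn when the 1/5/15-minute load exceeds 5.0/4.0/3.0, go critical at 10.0/8.0/6.0
# (plugin path and threshold values are illustrative assumptions).
/usr/lib64/nagios/plugins/check_load -w 5.0,4.0,3.0 -c 10.0,8.0,6.0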
Reference:
Understanding Linux CPU Load - when should you be worried?
http://blog.scoutapp.com/articles/2009/07/31/understanding-load-averages
Unix/Linux load: an introductory explanation
http://www.dbanotes.net/arch/unix_linux_load.html
--Heipark
Linux-load Average Analysis
Blog Category:
Load Average
Reposted from: http://www.blogjava.net/sliverfancy/archive/2013/04/17/397947.html
1.1: What is load? What is load Average?
Load is a measure of how much work the computer is doing (Wikipedia: "the system load is a measure of the amount of work that a computer system is doing").
Put simply, it is the length of the process queue. Load average is the average load over a period of time (1 minute, 5 minutes, 15 minutes). (Reference: "UNIX Load Average Part 1: How It Works")
1.2: Commands to view it:
w, uptime, procinfo or top
load average: 0.02, 0.27, 0.17
(the 1-minute, 5-minute and 15-minute averages, respectively)
1.3: How to tell whether the system is overloaded?
For a typical system, judge by the number of CPUs. If the load average stays around 1.2 on a machine with 2 CPUs, the CPUs are not being fully used. In other words, a load average below the number of CPUs means the system is not overloaded.
1.4: Load and capacity planning
Capacity planning is generally based on the 15-minute load average first.
1.5: Common misunderstandings about load:
1: A high system load must mean a performance problem.
Truth: a high load may simply reflect CPU-intensive computation.
2: A high system load must mean the CPUs are too weak or too few.
Truth: a high load only means that too many tasks have piled up in the run queue. The tasks in the queue may indeed be CPU-intensive, but they may also be waiting on I/O or other resources.
3: If the load stays high for a long time, add CPUs first.
Truth: load is a symptom, not the root cause. Adding CPUs may make the load drop for a while, but that treats the symptom, not the cause.
2: How to identify the system bottleneck when the load average is high.
Is the CPU insufficient, is the I/O too slow, or is memory running short?
2.1: Viewing system load with vmstat
vmstat
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 0  0 100152   2436  97200 289740    0    1    34    45   99   33  0  0 99  0
procs
The r column shows the number of processes that are running or waiting for a CPU time slice. If it stays above 1 (strictly, above the number of CPUs) for long periods, the CPU is insufficient and more CPU capacity is needed.
The b column shows the number of processes blocked waiting for a resource, such as I/O or memory swapping.
cpu (CPU usage)
The us column shows the percentage of CPU time spent in user mode. A high us value means user processes are consuming a lot of CPU time; if it stays above 50% for long periods, consider optimizing the application.
The sy column shows the percentage of CPU time spent in kernel mode. A reference value for us + sy is 80%; if us + sy stays above 80%, the CPU may be insufficient.
The wa column shows the percentage of CPU time spent waiting for I/O. A reference value for wa is 30%; if wa exceeds 30%, I/O wait is severe, which may be caused by heavy random disk access or by a bandwidth bottleneck on the disk or disk controller (mainly block operations).
The id column shows the percentage of time the CPU is idle.
system (activity during the sampling interval)
The in column shows the number of device interrupts per second observed during the interval.
The cs column shows the number of context switches per second; if cs is much higher than the disk I/O and network packet rates, it should be investigated further.
memory
swpd: the amount of memory (KB) swapped out to the swap area. A non-zero or fairly large swpd (say, over 100 MB) is not a problem as long as si and so stay at 0; system performance is still normal.
free: the amount of free memory (KB).
buff: the amount of memory used as buffer cache; reads and writes to block devices are generally buffered here.
cache: the amount of memory used as page cache, generally serving as the file system cache. A large cache means many files are being cached; if bi in the io section is small at the same time, the file system is working efficiently.
swap
si: the amount of memory swapped in from the swap area per second.
so: the amount of memory swapped out to the swap area per second.
io
bi: the total amount of data read from block devices (disk reads), in KB per second.
bo: the total amount of data written to block devices (disk writes), in KB per second.
A reference value for bi + bo is 1000; if it exceeds 1000 and wa is also large, consider balancing the disk load, and analyze it together with the iostat output.
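A small sketch of how vmstat is typically run for this, with an awk filter that flags samples whose wa exceeds the 30% reference value mentioned above (the column position assumes the output layout shown earlier, where wa is the 16th field):

# Sample every 2 seconds, 10 times; NR > 2 skips the two header lines.
vmstat 2 10 | awk 'NR > 2 && $16 + 0 > 30 { print "high iowait sample:", $0 }'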
2.2: Viewing disk load with iostat
The command samples disk I/O information every 2 seconds until you press Ctrl+C: the -d option reports disk statistics, -k displays values in KB, -t prints a timestamp, and the trailing 2 is the sampling interval in seconds. The first report shows the disk I/O load accumulated since the system booted; each subsequent report shows the average I/O load over the interval.
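Assuming the sysstat package is installed, the invocation that paragraph describes would simply be:

# Per-disk statistics (-d), in KB (-k), with timestamps (-t), sampled every 2 seconds.
iostat -d -k -t 2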
# iostat -x 1 10
Linux 2.6.18-92.el5xen 02/03/2009
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.10    0.00    4.82   39.54    0.07   54.46
Device:    rrqm/s  wrqm/s     r/s    w/s    rsec/s   wsec/s avgrq-sz avgqu-sz   await   svctm   %util
sda          0.00    3.50    0.40   2.50      5.60    48.00    18.48     0.00    0.97    0.97    0.28
sdb          0.00    0.00    0.00   0.00      0.00     0.00     0.00     0.00    0.00    0.00    0.00
sdc          0.00    0.00    0.00   0.00      0.00     0.00     0.00     0.00    0.00    0.00    0.00
sdd          0.00    0.00    0.00   0.00      0.00     0.00     0.00     0.00    0.00    0.00    0.00
sde          0.00    0.10    0.30   0.20      2.40     2.40     9.60     0.00    1.60    1.60    0.08
sdf         17.40    0.50  102.00   0.20  12095.20     5.60   118.40     0.70    6.81    2.09   21.36
sdg        232.40    1.90  379.70   0.50  76451.20    19.20   201.13     4.94   13.78    2.45   93.16
rrqm/s: the number of merged read requests per second. delta(rmerge)/s
wrqm/s: the number of merged write requests per second. delta(wmerge)/s
r/s: the number of read I/O requests completed per second. delta(rio)/s
w/s: the number of write I/O requests completed per second. delta(wio)/s
rsec/s: the number of sectors read per second. delta(rsect)/s
wsec/s: the number of sectors written per second. delta(wsect)/s
rkB/s: KB read per second; half of rsec/s, since each sector is 512 bytes. (needs to be calculated)
wkB/s: KB written per second; half of wsec/s. (needs to be calculated)
avgrq-sz: the average size (in sectors) of each device I/O operation. delta(rsect+wsect)/delta(rio+wio)
avgqu-sz: the average I/O queue length, i.e. delta(aveq)/s/1000 (aveq is in milliseconds).
await: the average wait time (in milliseconds) of each device I/O operation. delta(ruse+wuse)/delta(rio+wio)
svctm: the average service time (in milliseconds) of each device I/O operation. delta(use)/delta(rio+wio)
%util: the proportion of each second spent on I/O, i.e. how much of each second the I/O queue is non-empty. delta(use)/s/1000 (use is in milliseconds).
If %util is close to 100%, too many I/O requests are being generated, the I/O system is running at full capacity, and the disk may be a bottleneck.
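As a quick way to spot saturated disks, the %util column can be filtered with awk. A minimal sketch; the 90% cut-off and the device-name pattern are illustrative assumptions, and %util is taken to be the last field as in the output shown above:

# Print any device line whose %util exceeds 90%.
iostat -x 1 2 | awk '/^sd|^dm-/ && $NF + 0 > 90 { print $1, "util =", $NF "%" }'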
If idle is below 70%, the I/O pressure is fairly heavy and reads generally involve significant waiting.
You can also cross-check with vmstat: the b column (the number of processes waiting for resources) and the wa column (the percentage of CPU time spent waiting on I/O; above 30% means heavy I/O pressure).
Some additional rules of thumb:
svctm is normally smaller than await (because the wait times of simultaneously waiting requests are counted repeatedly).
The size of svctm is generally related to disk performance; CPU/memory load also affects it, and an excessive number of requests can indirectly increase svctm.
The size of await generally depends on the service time (svctm), the length of the I/O queue, and the pattern in which I/O requests are issued.
If svctm is close to await, the I/O has almost no waiting time.
If await is much larger than svctm, the I/O queue is too long and the application is getting slow response times. If the response time exceeds what users can tolerate, consider replacing the disk with a faster one, tuning the kernel's elevator (I/O scheduler) algorithm, optimizing the application, or upgrading the CPU.
The queue length (avgqu-sz) can also serve as a measure of system I/O load, but since avgqu-sz is an average over the sampling interval, it does not reflect instantaneous I/O bursts.
Someone has given a good analogy (the I/O system vs. queuing at a supermarket):
For example, how do we decide which checkout lane to join at the supermarket? First we look at the length of the line: surely 5 people will be faster than 20? We also look at what the people ahead of us are buying; if there is an aunt in front with a week's worth of groceries, it may be worth switching lanes. Then there is the speed of the cashier: if you run into a novice who cannot even count the change, you will be in for a wait. Timing matters too: a checkout that was packed five minutes ago may be empty now, and paying then is a breeze, provided, of course, that whatever you did in those five minutes was more meaningful than queuing (though I have yet to find anything more boring than queuing).
I/O systems also have many similarities with supermarket queues:
r/s + w/s is like the total number of people who have arrived.
The average queue length (avgqu-sz) is like the average number of people queuing per unit time.
The average service time (svctm) is like the cashier's checkout speed.
The average wait time (await) is like each person's average waiting time.
The average I/O request size (avgrq-sz) is like the average number of items each person buys.
The I/O utilization (%util) is like the proportion of time someone is standing at the checkout.
From these numbers we can infer the pattern of the I/O requests, as well as the speed and response time of the I/O.
Below is someone else's analysis of such output:
# iostat -x 1
avg-cpu:  %user   %nice    %sys   %idle
          16.24    0.00    4.31   79.44
Device:            rrqm/s  wrqm/s    r/s    w/s  rsec/s  wsec/s   rkB/s   wkB/s avgrq-sz avgqu-sz   await   svctm   %util
/dev/cciss/c0d0      0.00   44.90   1.02  27.55    8.16  579.59    4.08  289.80    20.57    22.35   78.21    5.00   14.29
/dev/cciss/c0d0p1    0.00   44.90   1.02  27.55    8.16  579.59    4.08  289.80    20.57    22.35   78.21    5.00   14.29
/dev/cciss/c0d0p2    0.00    0.00   0.00   0.00    0.00    0.00    0.00    0.00     0.00     0.00    0.00    0.00    0.00
The iostat output above shows 28.57 device I/O operations per second: total I/O per second = r/s (reads) + w/s (writes) = 1.02 + 27.55 = 28.57, with writes dominating (w:r = 27:1).
Each device I/O operation takes on average only 5 ms to complete, yet each I/O request waits 78 ms. Why? Because so many I/O requests arrive (about 29 per second). Assuming they are all issued at the same moment, the average wait time can be estimated as:
average wait time = single I/O service time * (1 + 2 + ... + (total requests - 1)) / total requests
Applied to the example: average wait time = 5 ms * (1 + 2 + ... + 28) / 29 = 70 ms, which is quite close to the 78 ms await reported by iostat. This in turn indicates that the I/O requests are issued essentially concurrently.
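The arithmetic can be reproduced with a one-liner that simply restates the formula above:

# 5 ms * (1 + 2 + ... + 28) / 29 = 5 * 406 / 29, which prints 70.0 ms
awk 'BEGIN { s = 0; for (i = 1; i <= 28; i++) s += i; printf "%.1f ms\n", 5 * s / 29 }'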
With about 29 I/O requests per second but an average queue length that is not long (only around 2), the 29 requests must arrive unevenly; most of the time the I/O is idle.
The I/O queue contains requests during 14.29% of each second; in other words, the I/O system has nothing to do 85.71% of the time, and all 29 I/O requests are handled within about 142 ms.
delta(ruse+wuse)/delta(io) = await = 78.21 ms, so delta(ruse+wuse) per second = 78.21 * delta(io)/s = 78.21 * 28.57 = 2232.8, meaning the I/O requests issued each second spend a total of 2232.8 ms waiting. The average queue length should therefore be 2232.8 ms / 1000 ms = 2.23, yet iostat reports an average queue length (avgqu-sz) of 22.35. Why? Because of a bug in iostat: the avgqu-sz value should be 2.23, not 22.35.
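The corrected value follows directly from the figures above (await times I/O per second, converted from milliseconds to seconds), which can be checked with another one-liner:

# avgqu-sz = await * (r/s + w/s) / 1000 = 78.21 * 28.57 / 1000, which prints 2.23
awk 'BEGIN { printf "%.2f\n", 78.21 * 28.57 / 1000 }'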