More and more people are getting their hands on Linux, from VPSes to flashed router firmware such as OpenWrt and Tomato. Sooner or later, in some probe or monitoring interface, you will run into the term "system average load" or "load average". Unlike Windows or macOS, which report CPU and memory consumption as percentages, Linux reports a few space-separated floating-point numbers as the system's average load. What exactly do these numbers mean? How do they measure the load and stability of the system?
System average load-basic explanation
Under a Linux shell, many commands show the load average, for example:
root@host:~# uptime
 12:49:10 up 182 days, 16:54,  2 users,  load average: 0.08, 0.04, 0.01
root@host:~# w
 12:49:18 up 182 days, 16:54,  2 users,  load average: 0.11, 0.07, 0.01
root@host:~# top
top - 12:50:28 up 182 days, 16:55,  2 users,  load average: 0.02, 0.05, 0.00
Roughly, these three numbers mean: the average number of processes in the system's run queue over the last 1 minute, 5 minutes, and 15 minutes, respectively.
The run queue holds processes that are running or waiting for a CPU; processes sleeping, waiting on I/O, or being killed are not in it. (Strictly speaking, Linux also counts processes in uninterruptible sleep, usually waiting on disk I/O, toward the load average, which is why heavy I/O can push the load up even when the CPUs are idle.)
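To see roughly what is in the run queue right now, you can tally process states with `ps`. This is a quick sketch; the state letters are the standard Linux `ps` codes:

```shell
# Count processes by state letter:
#   R = running or runnable (in the run queue)
#   S = interruptible sleep, D = uninterruptible (disk) sleep
#   Z = zombie, T = stopped
# Linux counts both R and D processes toward the load average.
ps -eo stat= | cut -c1 | sort | uniq -c
```

On a lightly loaded machine you will mostly see `S` entries, with an `R` for `ps` itself.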
There is also a command that displays the system's average load directly:
root@host:~# cat /proc/loadavg
0.10 0.06 0.01 1/72 29632
Besides the first three numbers, which are the load averages, the fourth field is a fraction: the denominator is the total number of processes on the system, and the numerator is the number currently running. The last number is the PID of the most recently created process.
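The five fields can be read and labeled with a short shell sketch, in the field order just described:

```shell
# /proc/loadavg fields: 1-, 5-, 15-minute load averages,
# running/total process count, and the most recent PID.
read one five fifteen procs lastpid < /proc/loadavg
echo "1-min load:      $one"
echo "5-min load:      $five"
echo "15-min load:     $fifteen"
echo "running/total:   $procs"
echo "most recent PID: $lastpid"
```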
System average load-advanced interpretation
The one-sentence explanation above is barely an explanation at all. I am writing this article because I came across a foreign author's article on load average that explains it very well, so I decided to translate it, partly in my own words.
@scoutapp, thanks for your article "Understanding Linux CPU Load"; I am translating it and sharing it with a Chinese audience.
To understand system load better, let's use road traffic as an analogy.
1, single-core CPU - single-lane bridge - a number between 0.00 and 1.00 is normal
Think of the CPU as a single-lane bridge with a traffic operator. The operator tells drivers: if the bridge ahead is congested, you must wait; if the way ahead is clear, you can drive straight across.
Specifically:
A number between 0.00 and 1.00 means traffic is flowing well: there is no congestion, and vehicles cross without hindrance.
Exactly 1.00 means the bridge is at capacity. Traffic still flows, but it could easily worsen into congestion. The system has no spare resources, and the administrator should start optimizing.
Above 1.00 means traffic is backing up. At 2.00, for every vehicle crossing the bridge there is another vehicle waiting to get on; there is a whole bridge's worth of traffic queued up. You need to investigate what is going on.
2, multi-core CPU - multi-lane bridge - (number / CPU core count) between 0.00 and 1.00 is normal
On a multi-core CPU, the full-load number is 1.00 * number of cores: 2.00 for a dual-core CPU, 4.00 for a quad-core CPU.
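So on a multi-core machine, the number to watch is the load divided by the core count. A minimal sketch, using `nproc` to obtain the count of logical CPUs:

```shell
# Divide the 1-minute load average by the number of CPUs.
# A result near 1.00 means every core is, on average, saturated.
cores=$(nproc)
load1=$(cut -d ' ' -f1 /proc/loadavg)
awk -v l="$load1" -v c="$cores" \
    'BEGIN { printf "load per core: %.2f (%s over %d cores)\n", l / c, l, c }'
```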
3, what is a safe average load?
The author considers a per-core load below 0.70 safe; anything above 0.70 is a signal that you need to investigate and optimize.
4, which number should you look at: 1 minute, 5 minutes, or 15 minutes?
The author suggests focusing on the 5-minute and 15-minute values, i.e. the last two numbers.
5, how do I know how many cores my CPU has?
Use the following command to get the number of CPU cores directly:
grep 'model name' /proc/cpuinfo | wc -l
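Two other common ways to get the same number are `nproc` and `getconf`. One caveat: on CPUs with hyper-threading, all of these count logical CPUs (hardware threads), not physical cores:

```shell
# Three common ways to count logical CPUs on Linux;
# with hyper-threading these report threads, not physical cores.
grep -c 'model name' /proc/cpuinfo
nproc
getconf _NPROCESSORS_ONLN
```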
Conclusion
Get the number of CPU cores N, watch the last two load numbers, and divide each by N; if the result stays below 0.70, you can relax.
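Putting the whole article together, here is a small sketch that applies the author's 0.70 rule of thumb to the 15-minute average (the threshold is a guideline, not a hard limit):

```shell
# Compare the 15-minute load per core against the 0.70 threshold.
load15=$(cut -d ' ' -f3 /proc/loadavg)
cores=$(nproc)
awk -v l="$load15" -v c="$cores" 'BEGIN {
    r = l / c
    printf "15-min load per core: %.2f\n", r
    if (r < 0.70) print "OK: plenty of headroom"
    else          print "WARNING: worth investigating"
}'
```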