Under Linux/Unix, CPU time is divided into user time, system time, and idle time: the time the CPU spends executing user-mode code, the time it spends executing kernel code, and the time it sits idle. What is usually called CPU utilization is: the time the CPU spends doing non-idle work divided by the total CPU time. (The method used in the preceding code is 1 − CPU idle time / total CPU time, which computes the same thing. For example, if 250 of 1000 total jiffies were idle, utilization is 1 − 250/1000 = 75%.)
In the Linux kernel there is a global variable, jiffies, which represents time. Its unit varies with the hardware platform; the kernel defines a constant HZ, the number of timer ticks per second, so one jiffy is 1/HZ seconds. On Intel platforms a jiffy is 1/100 second, which is the smallest time interval the system can distinguish. On every timer tick, jiffies is incremented by 1. CPU utilization can then be expressed as the jiffies spent in user mode plus system mode, divided by the total jiffies.
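To check the tick rate on a given machine, here is a minimal sketch in plain C; it assumes nothing beyond the standard sysconf(_SC_CLK_TCK) call, which reports USER_HZ, the unit of the jiffies-based counters shown later in /proc/stat:

/* Print the tick rate the kernel exposes to user space. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long ticks = sysconf(_SC_CLK_TCK);   /* ticks (jiffies) per second */
    if (ticks < 0) {
        perror("sysconf");
        return 1;
    }
    printf("USER_HZ = %ld, so one jiffy = %.4f s\n", ticks, 1.0 / ticks);
    return 0;
}

On typical Linux systems this prints USER_HZ = 100, matching the 1/100 second figure above.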
There is also a term that is often confused with CPU utilization: CPU load. CPU load is measured by the length of the CPU run queue rather than by utilization, because on an overloaded host CPU utilization sits close to 100% and can no longer reflect how overloaded the machine actually is, whereas the queue length reflects the load directly. For example, take two systems, one with 3 processes in the queue and the other with 6: measured by CPU utilization both are likely near 100%, but measured by queue length their loads are clearly different.
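The kernel exposes this queue-based view directly in /proc/loadavg, whose fourth field is "runnable/total" scheduling entities alongside the three load averages. A minimal sketch in C that reads it (the parsing details are illustrative, but the field layout is standard):

#include <stdio.h>

int main(void)
{
    double load1, load5, load15;
    int running, total;
    FILE *fp = fopen("/proc/loadavg", "r");

    if (!fp) {
        perror("fopen /proc/loadavg");
        return 1;
    }
    /* Format: "0.43 0.97 1.05 1/208 10118" -> 1/5/15 min averages,
     * runnable/total entities, PID of the most recently created process. */
    if (fscanf(fp, "%lf %lf %lf %d/%d",
               &load1, &load5, &load15, &running, &total) != 5) {
        fprintf(stderr, "unexpected /proc/loadavg format\n");
        fclose(fp);
        return 1;
    }
    fclose(fp);
    printf("load: %.2f %.2f %.2f, runnable: %d of %d\n",
           load1, load5, load15, running, total);
    return 0;
}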
How should we understand CPU load? A single-core processor can be likened to a single-lane bridge. Then:

0.00 means there is no traffic on the bridge. In fact, anything between 0.00 and 1.00 amounts to the same situation: arriving vehicles can cross without waiting.

1.00 means the traffic exactly fills the bridge. This is not a bad situation in itself, there is no backlog yet, but there is also no spare capacity, and any additional traffic will make things slower and slower.

Above 1.00, the bridge is over capacity and traffic is backing up. How bad is it? A load of 2.00 means there is twice as much traffic as the bridge can carry: one bridge's worth of vehicles is crossing while another full bridge's worth waits anxiously. At 3.00 the situation is worse still: the bridge is essentially overwhelmed, with twice its capacity in vehicles waiting to get on.

This is very similar to processor load. The time a car spends on the bridge is like the time the processor actually spends executing a thread; Unix defines a process's run length as its time on all processor cores plus the time it spends waiting in the run queue.

Like the toll collector, you would rather cars did not have to wait. So, ideally, you want the load average to stay below 1.00. Occasional peaks above 1.00 are not a problem, but if the load stays above 1.00 in the long term, you have a problem and should be worried.
On a multiprocessor system, the load average must be interpreted relative to the number of cores. At 100% load, the figure is 1.00 for a single processor, 2.00 for two processors, and 4.00 for a host with four processors. Back to the bridge analogy: 1.00 means "one fully occupied lane". On a one-lane bridge, 1.00 means the bridge is completely full; on a two-lane bridge, the same 1.00 means only half the capacity is in use, because there is still another lane to drive through.

So while a single processor is fully loaded at 1.00, a dual-processor system is only fully loaded at 2.00; at 1.00 it still has as much capacity again left to use.
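A minimal sketch of this per-core normalization, assuming glibc's getloadavg() and sysconf(_SC_NPROCESSORS_ONLN) are available (both are standard on Linux):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    double avg[3];
    long cores = sysconf(_SC_NPROCESSORS_ONLN);  /* online CPU cores */

    if (getloadavg(avg, 3) != 3 || cores < 1) {
        fprintf(stderr, "cannot read load average or core count\n");
        return 1;
    }
    /* 1.00 per core is "one full lane"; divide to compare across machines. */
    printf("1/5/15 min load: %.2f %.2f %.2f (per core: %.2f %.2f %.2f)\n",
           avg[0], avg[1], avg[2],
           avg[0] / cores, avg[1] / cores, avg[2] / cores);
    return 0;
}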
In fact, many Linux tools report the CPU load average to describe the current load on the system, for example the top command:
long@long-ubuntu:~$ top
top - 20:12:45 up 3:05, 6 users, load average: 1.16, 1.27, 1.14
Tasks: 208 total, 1 running, 206 sleeping, 0 stopped, 1 zombie
%Cpu(s): 11.8 us, 3.7 sy, 0.0 ni, 84.4 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem:  2067372 total, 1998832 used, 68540 free, 54104 buffers
KiB Swap: 2095100 total, 25540 used, 2069560 free, 449612 cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 6635 long      20   0  435m  79m  32m S  7.3  3.9 11:31.39 rhythmbox
 4523 root      20   0  110m  61m 4804 S  5.3  3.0  8:34.14 Xorg
 5316 long       9 -11  162m 5084 4088 S  4.3  0.2  6:01.53 pulseaudio
 5793 long      20   0  114m  22m  13m S  4.3  1.1  0:23.38 gnome-terminal
......

The first line ends with "load average: 1.16, 1.27, 1.14".
The uptime command gives similar output:
long@long-ubuntu:~$ uptime
20:15:01 up 3:07, 6 users, load average: 0.43, 0.97, 1.05

These three numbers are the system load averages over the last one, five, and fifteen minutes. In other words, reading them from right to left shows the trend of the system load. This is exactly what CPU load is meant to measure: the load average does not count processes or threads waiting on I/O, the network, data, or other non-CPU resources; it is concerned only with processes or threads actively demanding CPU time. This is quite different from CPU utilization.
The load average differs from CPU utilization in two main ways: (1) the load average measures the trend of CPU demand, not the state at a single moment; (2) the load average reflects all demand for the CPU, not just the demand active at the instant of measurement.
Section III: How to calculate CPU utilization

On a Linux system, the /proc/stat file can be used to compute CPU utilization (a worked sketch follows the field table below). This file records all CPU activity; every value in it accumulates from system boot to the current moment. For example:
long@long-ubuntu:~$ cat /proc/stat
cpu  426215 701 115732 2023866 27329 4 557 0 0 0
cpu0 218177 117 57458 1013633 8620 0 6 0 0 0
cpu1 208038 584 58274 1010233 18709 4 550 0 0 0
intr 21217894 119 18974 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 146350 0 647836 370 86696 3 146156 0 ... (one counter per interrupt source; the long run of zeros is omitted here)
ctxt 38682044
btime 1362301653
processes 10118
procs_running 1
procs_blocked 0
softirq 11177991 0 6708342 2178 148765 86792 0 14537 1507468 29072 2680837
Explanation of the output (the meaning of each field on the cpu lines, for the aggregate cpu line as well as each per-core line such as cpu0 and cpu1; the values in parentheses below are taken from the first line):
Parameter | Explanation
user (426215) | Time spent in user mode from system boot to the present (in jiffies), excluding processes with a negative nice value. 1 jiffy = 0.01 s here.
nice (701) | Time spent by processes with a negative nice value from system boot to the present (in jiffies).
system (115732) | Time spent in kernel mode from system boot to the present (in jiffies).
idle (2023866) | Idle time from system boot to the present, excluding time spent waiting on disk I/O (in jiffies).
iowait (27329) | Time spent waiting on disk I/O from system boot to the present (in jiffies).
irq (4) | Time spent servicing hardware interrupts from system boot to the present (in jiffies).
softirq (557) | Time spent servicing soft interrupts from system boot to the present (in jiffies).
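Putting this together, here is a minimal sketch of the calculation referenced above, in plain C: sample the aggregate cpu line of /proc/stat twice and compute utilization over the interval as 1 − Δidle/Δtotal, that is, the article's 1 − idle time / total time. The one-second interval is an arbitrary illustrative choice, the read_cpu helper is introduced here for illustration rather than being any standard API, and only the seven fields from the table are read (newer kernels append further columns such as steal, which are ignored here):

#include <stdio.h>
#include <unistd.h>

/* Read the first seven counters of the aggregate "cpu" line:
 * user, nice, system, idle, iowait, irq, softirq (all in jiffies). */
static int read_cpu(unsigned long long *idle, unsigned long long *total)
{
    unsigned long long v[7];
    FILE *fp = fopen("/proc/stat", "r");

    if (!fp)
        return -1;
    if (fscanf(fp, "cpu %llu %llu %llu %llu %llu %llu %llu",
               &v[0], &v[1], &v[2], &v[3], &v[4], &v[5], &v[6]) != 7) {
        fclose(fp);
        return -1;
    }
    fclose(fp);

    *idle = v[3];                 /* the 4th field is idle */
    *total = 0;
    for (int i = 0; i < 7; i++)
        *total += v[i];
    return 0;
}

int main(void)
{
    unsigned long long idle1, total1, idle2, total2;

    if (read_cpu(&idle1, &total1) != 0)
        return 1;
    sleep(1);                     /* sampling interval (illustrative) */
    if (read_cpu(&idle2, &total2) != 0)
        return 1;

    /* utilization = 1 - (delta idle) / (delta total) */
    double util = 1.0 - (double)(idle2 - idle1) / (double)(total2 - total1);
    printf("CPU utilization over the last second: %.1f%%\n", util * 100.0);
    return 0;
}

Because the counters only ever accumulate, taking the difference between two samples gives the utilization for just that interval, which is what tools like top display.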