This command has two main uses: one is to see how long your machine has been up (for example, whether it was recently restarted because of a hardware problem), and the other is to check the CPU load.
uptime
10:19:04 up 257 days, 18:56, users, load average: 2.10, 2.10, 2.09
1. 10:19:04 — the current system time.
2. up 257 days, 18:56 — how long the host has been running; the longer this is, the more stable your machine has been.
3. users — the number of connections; note this is the total number of connections, not the number of distinct users.
4. load average — the system's average load over the last 1, 5, and 15 minutes.
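On Linux, the load averages that uptime prints are read from /proc/loadavg, so a script can get at the raw fields without parsing uptime's text. A minimal sketch, assuming a Linux /proc filesystem (field layout as documented in proc(5)):

```shell
# /proc/loadavg fields: 1-min 5-min 15-min running/total last-pid
read one five fifteen rest < /proc/loadavg
echo "1-min: $one  5-min: $five  15-min: $fifteen"
```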
The first three fields are easy to understand. For the fourth, here is a very accessible explanation found online.
Many people understand load average like this: the three numbers represent the system's average load over different periods (one minute, five minutes, and fifteen minutes), and of course the smaller the better. The higher the numbers, the heavier the load on the server, which may also be a signal that something is wrong with it.
The reality is not quite that simple. What actually determines the magnitude of the load average? How do you tell whether your current numbers are "good" or "bad"? And at what values should you start paying attention?
Before answering these questions, you first need to know some of the facts behind these values. Let's start with the simplest example: a server with a single-core processor.
A single-core processor can be likened to a single-lane bridge. Imagine you are the operator collecting tolls on that bridge: your job is to manage the vehicles waiting to cross. First you need to know the current load, that is, how many vehicles are on the bridge and how many are waiting to get on. If no vehicle is waiting, you can wave the next driver through; if many are waiting, you have to tell them they may need to wait a while.
So we need some specific numbers to describe the current traffic conditions, for example:
• 0.00 means there is no traffic on the bridge at all. In fact, anything between 0.00 and 1.00 means the same thing: arriving vehicles can cross without waiting.
• 1.00 means the bridge is exactly at capacity. This is not yet a bad situation, but any additional traffic will cause things to slow down.
• Over 1.00 means the bridge is over capacity and congested. How bad is it? A value of 2.00 means there is twice as much traffic as the bridge can carry: one bridge-load of vehicles is crossing while a whole additional bridge-load waits anxiously. At 3.00 things are worse still: the bridge is essentially saturated, with twice its capacity waiting to get on.
The situation above is very similar to processor load. The time a car spends crossing the bridge is like the time a processor spends actually working on a thread. What Unix reports as load is, summed across all processor cores, the processes currently running plus the threads waiting in the run queue.
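On Linux you can see this run queue directly: the fourth field of /proc/loadavg shows the number of currently runnable entities over the total. A small sketch, again assuming a Linux /proc filesystem:

```shell
# The fourth field of /proc/loadavg looks like "2/467":
# runnable scheduling entities / total scheduling entities
awk '{ split($4, q, "/"); print "runnable:", q[1], "of", q[2] }' /proc/loadavg
```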
Like the toll collector, you certainly hope no cars are left waiting anxiously. So ideally you want the load average to stay below 1.00. Occasional peaks above 1.00 are fine, but if it stays there over the long term, there is a problem, and that is when you should worry.
"So you're saying the ideal load average is 1.00?"
Well, not exactly. A load of 1.00 means the system has no spare resources. In practice, an experienced system administrator draws the line at 0.70:
• The "needs investigation" rule: if your system load exceeds 0.70, take the time to understand why before things get worse.
• The "fix it now" rule: 1.00. If your server's load hovers around 1.00 for a long time, solve the problem immediately; otherwise you will get a call from your boss in the middle of the night, which is not a pleasant thing.
• The "up at 3:30 a.m." rule: 5.00. If your server's load exceeds 5.00, you are in serious trouble: you will lose sleep over it and have to explain in a meeting why it happened. Do not let it get that far.
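The three rules of thumb above are easy to automate. The sketch below grades the one-minute load average against those 0.70 / 1.00 / 5.00 thresholds (the classify helper and its messages are illustrative names of my own, not anything standard; awk handles the fractional comparison):

```shell
#!/bin/sh
# Read the 1-minute load average and grade it against the three rules.
load=$(cut -d' ' -f1 /proc/loadavg)

classify() {
    # $1: a load-average value
    awk -v l="$1" 'BEGIN {
        if      (l >= 5.00) print "emergency"
        else if (l >= 1.00) print "fix it now"
        else if (l >= 0.70) print "investigate"
        else                print "ok"
    }'
}

classify "$load"
```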
"But I have many processors. My load average is 3.00 and the system works fine."
You have a four-processor host? Then a load average of 3.00 is perfectly normal.
On multiprocessor systems, the load average is interpreted relative to the number of cores. At 100% utilization, 1.00 means one processor is fully loaded, 2.00 means two processors are, and 4.00 means the host's four processors are.
Back to our bridge analogy: 1.00 meant "one lane of traffic, full". On a one-lane bridge, a load of 1.00 means the bridge is completely full of cars. On a dual-processor system, however, that same load means 50% of the system's resources are still free, because there is a second lane for traffic to pass.
So a load that saturates a single processor (1.00) still leaves a dual-processor system with the same amount again in reserve; a dual-processor system is not fully loaded until 2.00.
Multi-core vs. multi-processor
A brief digression: what is the difference between a multi-core processor and multiple processors? From a performance perspective, a host with one multi-core processor and a host with the same number of single-core processors can be considered roughly equivalent. Of course, the real picture is more complex: cache sizes, processor frequencies, and other factors all cause performance differences.
But even though those factors make actual performance differ slightly, the system still calculates the load average against the total number of processor cores. That gives us two more rules:
• The "number of cores = maximum load" rule: on a multi-core system, your load average should not exceed the total number of cores.
• The "a core is a core" rule: how the cores are distributed across physical chips does not matter. Two quad-core processors = four dual-core processors = eight single-core processors. Either way, you have eight cores to work with.
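Combining these rules: on a multi-core machine, what matters is the load average divided by the number of cores. A minimal sketch, assuming Linux's /proc is available:

```shell
#!/bin/sh
# Normalize the 15-minute load average by the number of logical cores.
cores=$(grep -c '^processor' /proc/cpuinfo)
load15=$(cut -d' ' -f3 /proc/loadavg)

# Values approaching 1.00 per core mean the machine is saturated.
awk -v l="$load15" -v c="$cores" 'BEGIN { printf "load per core: %.2f\n", l / c }'
```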
Let's take a look at another uptime output:
uptime
23:05 up, 6:08, 7 users, load averages: 0.65 0.42 0.36
This is a dual-core processor, and the output shows plenty of spare resources. In fact, even when the load peaked at 1.7, I never worried about it.
But with three numbers, which one should really concern us? We know that 0.65, 0.42, and 0.36 are the load averages for the last minute, the last five minutes, and the last fifteen minutes respectively. That raises another question:
Which number should we use — the one-minute, the five-minute, or the fifteen-minute value?
We have actually said a lot about these numbers already; I think you should look at the five-minute or fifteen-minute average. To be honest, if the load over the last minute hits 1.00, the server may well still be fine. But if the fifteen-minute value stays at 1.00, that is worth attention (in my experience, that is when you should increase the number of processors).
So how do I know how many processor cores my system has?
Under Linux, you can use
cat /proc/cpuinfo
to get information about each processor in your system. If you just want a single number telling you how many CPUs there are, use the following command:
grep 'model name' /proc/cpuinfo | wc -l
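As a side note, counting "model name" lines counts logical CPUs. On systems with GNU coreutils installed, nproc normally reports the same number directly (only "normally", because nproc honours CPU affinity masks and so can report fewer):

```shell
# Two ways to count logical CPUs; they normally agree
grep -c 'model name' /proc/cpuinfo
nproc
```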