Objective
A good timer helps developers find a program's performance bottlenecks or compare the performance of different algorithms. However, accurately measuring a program's running time is not easy, because process switching, interrupts, multi-user sharing, network traffic, cache behavior, and branch prediction all affect the measurement.
Below are several timing methods, compiled for your reference.
Method One:
If you want to measure how long an entire program takes to run, you can use the time command.
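For example, running

time ./a.out

prints three lines when the program exits: real (elapsed wall-clock time), user (CPU time spent in user mode), and sys (CPU time spent in the kernel). Here ./a.out stands for your own executable.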
Method Two:
If you want to time a single function or statement, there is another way: the gettimeofday function. Here is sample code:
#include <sys/time.h>

void f()
{
    // ...
}

int main()
{
    struct timeval t1, t2;
    gettimeofday(&t1, NULL);
    f();
    gettimeofday(&t2, NULL);
    // The time that f ran, in microseconds, is:
    // delta_t = (t2.tv_sec - t1.tv_sec) * 1000000 + (t2.tv_usec - t1.tv_usec)
    return 0;
}
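The elapsed time above is shown only in a comment. Here is a minimal self-contained sketch that actually computes and prints it; the loop in f is just a placeholder workload, not part of the original example:

#include <stdio.h>
#include <sys/time.h>

/* Placeholder workload; replace with the code you want to time. */
static void f(void)
{
    volatile long sum = 0;
    for (long i = 0; i < 10000000; i++)
        sum += i;
}

int main(void)
{
    struct timeval t1, t2;
    gettimeofday(&t1, NULL);
    f();
    gettimeofday(&t2, NULL);
    /* Elapsed wall-clock time in microseconds, per the formula above. */
    long delta_us = (t2.tv_sec - t1.tv_sec) * 1000000L
                  + (t2.tv_usec - t1.tv_usec);
    printf("f took %ld microseconds\n", delta_us);
    return 0;
}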
gettimeofday is accurate only to microseconds, and it is affected by the system clock: it works by reading the system clock, so if another program adjusts the system clock while you are timing, the result will be wrong.
What if you want nanosecond accuracy? Keep reading:
Method Three:
#include <time.h>

void f()
{
    // ...
}

int main()
{
    struct timespec t1, t2;
    clock_gettime(CLOCK_MONOTONIC, &t1);
    f();
    clock_gettime(CLOCK_MONOTONIC, &t2);
    // The time that f ran, in nanoseconds, is:
    // delta_t = (t2.tv_sec - t1.tv_sec) * 10^9 + (t2.tv_nsec - t1.tv_nsec)
    return 0;
}
Everything above measures wall-clock time. If you want CPU execution time instead, or want an explanation of clock_gettime's parameters and their possible values, consult the man page (man clock_gettime).
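As a sketch of the CPU-time case: POSIX defines the clock CLOCK_PROCESS_CPUTIME_ID, which counts CPU time consumed by the calling process rather than wall-clock time. Again, the loop in f is only a placeholder workload:

#include <stdio.h>
#include <time.h>

/* Placeholder workload; replace with the code you want to time. */
static void f(void)
{
    volatile long sum = 0;
    for (long i = 0; i < 10000000; i++)
        sum += i;
}

int main(void)
{
    struct timespec t1, t2;
    /* CLOCK_PROCESS_CPUTIME_ID counts CPU time used by this process,
       not wall-clock time. */
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &t1);
    f();
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &t2);
    long long delta_ns = (t2.tv_sec - t1.tv_sec) * 1000000000LL
                       + (t2.tv_nsec - t1.tv_nsec);
    printf("f used %lld nanoseconds of CPU time\n", delta_ns);
    return 0;
}

Note that on older glibc versions (before 2.17), clock_gettime lives in librt, so you may need to link with -lrt.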
Summary
The above covers several ways to time a C++ program on Unix/Linux. I hope it helps you with your own C++ programs. If you have questions, please leave a comment for discussion.