In Linux, when timing at second granularity, everyone has probably used time() and similar functions. But what if you need more precision, say milliseconds or microseconds?
Take a look at the following source code to understand:
#include <sys/time.h>
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

void function() /* consumes some time; the result is not used */
{
    unsigned int i, j;
    double y;
    for (i = 0; i < 10000; i++)
        for (j = 0; j < 10000; j++)
            y = sin((double)i);
    (void)y;
}

int main(int argc, char **argv)
{
    struct timeval tpstart, tpend;
    float timeuse;

    gettimeofday(&tpstart, NULL);
    function();
    gettimeofday(&tpend, NULL);

    /* elapsed time in microseconds, then converted to seconds */
    timeuse = 1000000 * (tpend.tv_sec - tpstart.tv_sec)
              + tpend.tv_usec - tpstart.tv_usec;
    timeuse /= 1000000;

    printf("Used Time: %f\n", timeuse);
    exit(0);
}
The program relies on the gettimeofday function, which fills in the following structure:
struct timeval {
    long tv_sec;   /* seconds */
    long tv_usec;  /* microseconds */
};