In Linux, timing in whole seconds is straightforward; most of us have used time() and related functions for that. But what if you need finer resolution, down to milliseconds or microseconds?
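For context, time() only reports whole seconds, as this minimal sketch (my illustration, not from the original article) shows:

#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t now = time(NULL);  /* whole seconds since the Epoch; no sub-second part */
    printf("seconds since Epoch: %ld\n", (long)now);
    return 0;
}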
The following source code shows one way to do it:
#include <sys/time.h>
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* function() just burns CPU time; the work itself is meaningless. */
void function(void)
{
    unsigned int i, j;
    double y;
    for (i = 0; i < 10000; i++)
        for (j = 0; j < 10000; j++)
            y = sin((double)i);
}

int main(int argc, char **argv)
{
    struct timeval tpstart, tpend;
    float timeuse;

    gettimeofday(&tpstart, NULL);  /* time stamp before the work */
    function();
    gettimeofday(&tpend, NULL);    /* time stamp after the work */

    /* elapsed time in microseconds */
    timeuse = 1000000 * (tpend.tv_sec - tpstart.tv_sec)
              + tpend.tv_usec - tpstart.tv_usec;
    timeuse /= 1000000;            /* convert to seconds */

    printf("Used Time: %f\n", timeuse);
    exit(0);
}
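To try it out, compile with the math library (sin() lives in libm) and run it; the file name timer.c here is just an example, and the printed time will vary from machine to machine:

gcc timer.c -o timer -lm
./timer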
The program relies on gettimeofday(), which fills in the following structure:
struct timeval {
    long tv_sec;   /* seconds */
    long tv_usec;  /* microseconds */
};
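Building on that structure, here is a minimal sketch of converting the gap between two struct timeval stamps into milliseconds; the helper name elapsed_ms is my own invention, not something from the article:

#include <sys/time.h>
#include <stdio.h>

/* Hypothetical helper: span between two struct timeval stamps, in milliseconds. */
static long elapsed_ms(const struct timeval *start, const struct timeval *end)
{
    return (end->tv_sec - start->tv_sec) * 1000
         + (end->tv_usec - start->tv_usec) / 1000;
}

int main(void)
{
    struct timeval t0, t1;
    gettimeofday(&t0, NULL);
    /* ... work to be measured goes here ... */
    gettimeofday(&t1, NULL);
    printf("elapsed: %ld ms\n", elapsed_ms(&t0, &t1));
    return 0;
}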