First, let's get acquainted with clock() and GetTickCount():
1. clock()
clock() is a timing function in C/C++, and its associated data type is clock_t. MSDN gives the prototype of the clock function as follows:
clock_t clock(void);
In simple terms, it measures the time the program has consumed from startup to the point where the function is called. The function returns the number of clock ticks elapsed between "this program's process was started" and "clock() is called in the program", which MSDN calls the wall-clock time; if the wall-clock time is unavailable, it returns -1. clock_t is the data type used to hold this time value.
We can find its definition in the time.h header file:
#ifndef _CLOCK_T_DEFINED
typedef long clock_t;
#define _CLOCK_T_DEFINED
#endif
It is clear that clock_t is a long integer. The time.h file also defines a constant, CLOCKS_PER_SEC, which indicates how many clock ticks make up one second; it is defined as follows:
#define CLOCKS_PER_SEC ((clock_t)1000)
As you can see, the value returned by clock() increases by 1 every 1/1000 of a second (every millisecond). Simply put, clock() is accurate to 1 millisecond.
Under Linux, the value of CLOCKS_PER_SEC may be different; on the Linux system currently in use, the printed value is 1000000, meaning one tick is a microsecond. This is a point to note.
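To check which value your platform uses, you can simply print the constant; a minimal sketch:

#include <stdio.h>
#include <time.h>

int main(void)
{
    /* Typically prints 1000 on Windows (one tick = 1 ms) and
       1000000 on Linux, where POSIX fixes CLOCKS_PER_SEC at one million. */
    printf("CLOCKS_PER_SEC = %ld\n", (long)CLOCKS_PER_SEC);
    return 0;
}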
2. GetTickCount()
The GetTickCount function returns the number of milliseconds that have elapsed from operating-system startup to the present, as a DWORD. It is often used to measure how long a piece of code takes to execute. Its prototype is DWORD GetTickCount(void). Because the return value is stored in a 32-bit double word (DWORD), the maximum value it can hold is (2^32 - 1) ms, or roughly 49.71 days; if the system runs longer than that, the count wraps around to 0. MSDN notes this as well: "Retrieves the number of milliseconds that have elapsed since the system was started, up to 49.7 days." Therefore, if you are writing server-side programs, you must be extremely careful here to avoid unexpected behavior.
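A side note not in the original article: because DWORD subtraction is performed modulo 2^32, subtracting two tick counts still gives the correct interval across a single wraparound, as long as the interval itself is shorter than 49.7 days. A minimal sketch:

#include <windows.h>
#include <iostream>
using namespace std;

int main()
{
    DWORD start = GetTickCount();
    Sleep(500);                 // the work being timed
    DWORD end = GetTickCount();
    // Unsigned wrap-around: end - start is correct even if the
    // tick counter passed 0 between the two calls.
    DWORD elapsed = end - start;
    cout << "elapsed: " << elapsed << " ms" << endl;
    return 0;
}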
Special note: this counter is not updated in real time; the system updates it only about once every 18 ms, so its minimum resolution is roughly 18 ms. When timing finer than 18 ms is required, a stopwatch-style high-resolution timer should be used instead.
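As an illustration of such a stopwatch-style timer, here is a minimal sketch using the Win32 QueryPerformanceCounter API (my example, not from the original text):

#include <windows.h>
#include <iostream>
using namespace std;

int main()
{
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);   // counter ticks per second
    QueryPerformanceCounter(&start);
    Sleep(1000);                        // the work being timed
    QueryPerformanceCounter(&end);
    // Convert the tick difference to milliseconds.
    double ms = (end.QuadPart - start.QuadPart) * 1000.0 / (double)freq.QuadPart;
    cout << "QueryPerformanceCounter: " << ms << " ms" << endl;
    return 0;
}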
Using clock() to calculate a time difference:
#include <stdio.h>
#include <time.h>
#include <windows.h>    /* for Sleep() */

int main(void)
{
    clock_t start, end;
    double cost;

    start = clock();
    Sleep(1000);                  /* 1000 ms assumed; the duration was garbled in the source */
    end = clock();
    cost = (double)(end - start); /* elapsed ticks; one tick is 1 ms on Windows */
    printf("%f\n", cost);
    return 0;
}
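Note that the printed value is a raw tick count, not seconds. To convert the difference to seconds portably (including on Linux, where one tick is a microsecond), divide by CLOCKS_PER_SEC:

double seconds = (double)(end - start) / CLOCKS_PER_SEC;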
Using GetTickCount() to calculate a time difference:
#include <iostream>
#include <windows.h>
using namespace std;

int main()
{
    DWORD start = GetTickCount();
    Sleep(1000);                  // 1000 ms assumed; the duration was garbled in the source
    DWORD end = GetTickCount();
    cout << "GetTickCount:" << end - start << endl;
    return 0;
}
As you can see, clock() is more precise than GetTickCount(), so clock() is recommended when a higher-precision time difference is required.
Calculating the difference between two times under Windows (accurate to milliseconds)
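One way to do this (a sketch under the assumption that wall-clock timestamps are wanted; not the original author's code): GetSystemTimeAsFileTime returns the current time in 100-nanosecond units, which can be reduced to milliseconds and subtracted:

#include <windows.h>
#include <iostream>
using namespace std;

// Current wall-clock time in milliseconds since January 1, 1601,
// derived from FILETIME's 100-nanosecond units.
static ULONGLONG NowMilliseconds()
{
    FILETIME ft;
    GetSystemTimeAsFileTime(&ft);
    ULARGE_INTEGER t;
    t.LowPart  = ft.dwLowDateTime;
    t.HighPart = ft.dwHighDateTime;
    return t.QuadPart / 10000;          // 100 ns -> 1 ms
}

int main()
{
    ULONGLONG start = NowMilliseconds();
    Sleep(1000);
    ULONGLONG end = NowMilliseconds();
    cout << "elapsed: " << (end - start) << " ms" << endl;
    return 0;
}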