Problem 1: multiple calls to GetSystemTime/GetLocalTime within the same 15 ms window (the corresponding function in Java is System.currentTimeMillis()) all return the same value.
Solution: use GetSystemTime as the baseline, and use the high-resolution timer QueryPerformanceCounter provided by Windows (the corresponding function in Java is System.nanoTime()) to measure elapsed time. The precise clock is then baseline + elapsed timer time.
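A minimal Java sketch of this baseline-plus-offset idea (the PreciseClock class name and its fields are illustrative, not part of any library):

public class PreciseClock {
    // Wall-clock baseline, read once via System.currentTimeMillis()
    // (GetSystemTimeAsFileTime on Windows).
    private final long baselineMillis = System.currentTimeMillis();
    // High-resolution reference point, read at the same moment via
    // System.nanoTime() (QueryPerformanceCounter on Windows).
    private final long baselineNanos = System.nanoTime();

    // Current time in nanoseconds since the epoch: baseline + high-resolution offset.
    public long nowNanos() {
        long elapsedNanos = System.nanoTime() - baselineNanos;
        return baselineMillis * 1_000_000L + elapsedNanos;
    }

    // Current time in milliseconds, with finer-than-15-ms resolution between calls.
    public long nowMillis() {
        return nowNanos() / 1_000_000L;
    }
}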
Problem 2: QueryPerformanceCounter/QueryPerformanceFrequency
This problem depends mainly on how Windows implements QueryPerformanceCounter. The early implementation used the CPU-level timestamp counter (TSC). Because the tick count can differ significantly between cores, values computed from this timer are unpredictable when a thread migrates across cores. An early workaround was to read the timer from only one thread and bind that thread to a single core, but a Java thread cannot set its CPU affinity.
On Windows XP SP2, Windows Server 2003 SP2, and later systems, QueryPerformanceCounter can instead use the ACPI Power Management Timer (PMTimer); add the /usepmtimer option to Boot.ini to select this implementation. Because PMTimer is a timer on the motherboard, it has no multi-core synchronization problem.
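For reference, selecting PMTimer amounts to appending /usepmtimer to the operating-system entry in Boot.ini. The entry below is only an illustrative example; the exact ARC path and description depend on the machine (see the KB articles in the references):

[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /fastdetect /usepmtimer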
Conclusion:
1. Check the frequency reported by QueryPerformanceFrequency (via its output parameter):
   - If the value is 3,579,545, the system is using PMTimer --> go to step 3.
   - If the value is approximately the CPU frequency, the system is using the TSC --> go to step 2.
2. If the operating system is Windows XP SP2 / Windows Server 2003 SP2 or later, add the /usepmtimer switch to C:\Boot.ini (see the example entry above) and restart the system.
3. Use GetSystemTime + QueryPerformanceCounter to implement the precise clock, as sketched under Problem 1 above.

References:
---------------------------------------------------------------------------------
http://msdn.microsoft.com/zh-cn/windows/hardware/gg463347
http://support.microsoft.com/kb/833721
http://support.microsoft.com/kb/895980
http://blogs.oracle.com/dholmes/entry/inside_the_hotspot_vm_clocks
Windows' use of clocks and timers varies considerably from platform to platform and is plagued by problems. Again, this isn't necessarily Windows' fault, just as it wasn't the VM's fault: the hardware support for clocks/timers is actually not very good; the references above lead you to more information on the timing hardware available. The following relates to the "NT" family (Win2K, XP, 2003) of Windows.
There are a number of different "clock" APIs available in Windows. Those used by HotSpot are as follows:
- System.currentTimeMillis() is implemented using GetSystemTimeAsFileTime, which essentially just reads the low-resolution time-of-day value that Windows maintains. Reading this global variable is naturally very quick, around 6 cycles according to reported information. This time-of-day value is updated at a constant rate regardless of how the timer interrupt has been programmed; depending on the platform this will either be 10 ms or 15 ms (this value seems tied to the default interrupt period).
- System.nanoTime() is implemented using the QueryPerformanceCounter/QueryPerformanceFrequency API (if available, else it returns currentTimeMillis() * 10^6). QueryPerformanceCounter (QPC) is implemented in different ways depending on the hardware it's running on. Typically it will use either the programmable interval timer (PIT), the ACPI power management timer (PMT), or the CPU-level timestamp counter (TSC). Accessing the PIT/PMT requires execution of slow I/O port instructions, and as a result the execution time for QPC is on the order of microseconds. In contrast, reading the TSC is on the order of 100 clock cycles (to read the TSC from the chip and convert it to a time value based on the operating frequency). You can tell if your system uses the ACPI PMT by checking whether QueryPerformanceFrequency returns the signature value of 3,579,545 (i.e. 3.57 MHz). If you see a value around 1.19 MHz then your system is using the old 8254 PIT chip. Otherwise you should see a value approximately that of your CPU frequency (modulo any speed throttling or power management that might be in effect). A small Java probe of these two clocks is sketched after this list.
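The following probe is illustrative only (the ClockProbe name is not from the original article): it samples System.currentTimeMillis() until the value ticks over, to estimate its update quantum, and times a batch of System.nanoTime() calls to get a rough per-call cost, which hints at whether QPC is backed by the slow PIT/PMT path or the cheap TSC path:

public class ClockProbe {
    public static void main(String[] args) {
        // Estimate the update interval of the low-resolution time-of-day clock:
        // spin until currentTimeMillis() ticks over twice and report the gap.
        long t0 = System.currentTimeMillis();
        while (System.currentTimeMillis() == t0) { /* spin */ }
        long t1 = System.currentTimeMillis();
        while (System.currentTimeMillis() == t1) { /* spin */ }
        long t2 = System.currentTimeMillis();
        System.out.println("currentTimeMillis() quantum ~ " + (t2 - t1) + " ms");

        // Rough per-call cost of nanoTime(): a cost in the microsecond range
        // suggests the PIT/PMT I/O-port path behind QueryPerformanceCounter,
        // while a much cheaper call suggests the TSC.
        final int calls = 100_000;
        long start = System.nanoTime();
        for (int i = 0; i < calls; i++) {
            System.nanoTime();
        }
        long elapsed = System.nanoTime() - start;
        System.out.println("nanoTime() cost ~ " + (elapsed / calls) + " ns per call");
    }
}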
The default mechanism used by QPC is determined by the hardware abstraction layer (HAL), but some systems allow you to explicitly control it using options in Boot.ini, such as /usepmtimer, which explicitly requests use of the power management timer. This default changes not only across hardware but also across OS versions. For example, Windows XP Service Pack 2 changed things to use the power management timer (PMTimer) rather than the processor timestamp counter (TSC) due to problems with the TSC not being synchronized on different processors in SMP systems, and due to the fact that its frequency can vary (and hence its relationship to elapsed time) based on power-management settings. (The issues with the TSC, in particular for AMD systems, and how AMD aims to provide a stable TSC in future processors, are discussed in Rich Brunner's article. You can also read how the Linux kernel folk have abandoned use of the TSC until a new stable version appears in CPUs.)
The timer-related APIs for doing timed waits all use the WaitForMultipleObjects API, as previously mentioned. This API only accepts timeout values in milliseconds, and its ability to recognize the passage of time is based on the timer interrupt programmed through the hardware.
Typically a Windows machine has a default 10 ms timer interrupt period, but some systems have a 15 ms period. This timer interrupt period may be modified by application programs using the timeBeginPeriod/timeEndPeriod APIs. The period is still limited to milliseconds and there is no guarantee that a requested period will be supported. However, usually you can request a 1 ms timer interrupt period (though its accuracy has been questioned in some reports). The HotSpot VM in fact uses this 1 ms period to allow for higher-resolution Thread.sleep calls than would otherwise be possible. The sample Sleeper.java will cause this higher interrupt rate to be used, thus allowing experimentation with a 1 ms versus 10 ms period. It simply calls Thread.sleep(Integer.MAX_VALUE), which (because it is not a multiple of 10 ms) causes the VM to switch to a 1 ms period for the duration of the sleep; in this case that is "forever", and you'll have to Ctrl-C the "java Sleeper" execution.
public class Sleeper {
    public static void main(String[] args) throws Throwable {
        Thread.sleep(Integer.MAX_VALUE);
    }
}
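A rough way to see this effect from Java itself (in addition to the perfmon procedure described next) is to compare sleeps whose length is a multiple of 10 ms with sleeps that are not. The SleepProbe class below is only an illustrative sketch, and the exact numbers it prints will vary by system:

public class SleepProbe {
    // Returns the average duration, in milliseconds, of 'count' calls to Thread.sleep(millis).
    static double averageSleep(long millis, int count) throws InterruptedException {
        long start = System.nanoTime();
        for (int i = 0; i < count; i++) {
            Thread.sleep(millis);
        }
        return (System.nanoTime() - start) / (double) count / 1_000_000.0;
    }

    public static void main(String[] args) throws InterruptedException {
        // Thread.sleep(10) is a multiple of 10 ms, so the VM leaves the default
        // 10-15 ms interrupt period in place and the sleep is correspondingly coarse.
        System.out.println("avg Thread.sleep(10): " + averageSleep(10, 50) + " ms");
        // Thread.sleep(5) is not a multiple of 10 ms, so the VM requests a 1 ms
        // interrupt period for the duration of the sleep, giving finer resolution.
        System.out.println("avg Thread.sleep(5):  " + averageSleep(5, 50) + " ms");
    }
}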
You can see what interrupt period is being used in Windows by running the perfmon tool. After you bring it up you'll need to add a new item to watch (click the + icon above the graph, even if it appears grayed/disabled). Select the Interrupts/sec item and add it. Then right-click on Interrupts/sec under the graph and edit its properties. On the "Data" tab, change the "Scale" to 1, and on the "Graph" tab set the vertical max to 1000. Let the system settle for a few seconds and you should see the graph drawing a steady line. If you have a 10 ms interrupt then it will be 100, for 1 ms it will be 1000, for 15 ms it will be 66.6, etc. Note: on a multiprocessor system, show the Interrupts/sec for each processor individually, not the total; one processor will be fielding the timer interrupts.
Note that any application can change the timer interrupt period and that it affects the whole system. Windows only allows the period to be shortened, thus ensuring that the shortest period requested by any application is the one that is used. If a process doesn't reset the period, then Windows takes care of it when the process terminates. The reason why the VM doesn't just arbitrarily change the interrupt rate when it starts (it could do this) is that there is a potential performance impact to everything on the system due to the 10x increase in interrupts. However, other applications do change it, typically multimedia viewers/players. Be aware that a browser running the JVM as a plug-in can also cause this change in interrupt rate if there is an applet running that uses the Thread.sleep method in a similar way to Sleeper.
Further note that after Windows suspends or hibernates, the timer interrupt period is restored to the default, even if an application using a higher interrupt rate was running at the time of suspension/hibernation.