jiffies is incremented by 1 on every clock interrupt. This leads to a problem: no matter how many bits jiffies has (it is normally an unsigned long), it will eventually overflow and wrap around to 0.
Take an extreme case: the current jiffies value is 0xFFFFFFFF, and at the next tick it wraps around to 0x0. If you now compute a delay/time difference naively, the new value 0x0 looks earlier than the old value 0xFFFFFFFF and the apparent difference is 0xFFFFFFFF ticks, even though only one clock tick has actually passed, so the error is huge.
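To make the failure concrete, here is a minimal user-space sketch of the naive approach (the names before and after and the snapshot values are purely illustrative; this is not kernel code):

    #include <stdio.h>

    int main(void)
    {
        /* Illustrative jiffies snapshots straddling the 32-bit wrap point. */
        unsigned long before = 0xFFFFFFFFUL;  /* recorded just before the overflow */
        unsigned long after  = 0x0UL;         /* one clock tick later              */

        /* Naive unsigned comparison: the newer value looks older. */
        if (after > before)
            printf("time moved forward, as expected\n");
        else
            printf("time appears to have gone backwards!\n");

        /* Naive difference taken the "obvious" way: 0xFFFFFFFF ticks. */
        printf("apparent difference: %lu ticks\n", before - after);
        return 0;
    }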
So how can this situation be avoided? It is actually very simple.
If t1 is the jiffies value recorded earlier and t2 is the current jiffies, compute the time difference as:
(long)t2 - (long)t1
This successfully avoids the problem described above.
Because:
(long)0xFFFFFFFF is -1
(long)0x0 is 0
so (long)0x0 - (long)0xFFFFFFFF = 0 - (-1) = 1, which is exactly the correct result: one clock tick has elapsed. (Note that this relies on long having the same width as the unsigned long that holds jiffies.)
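Here is a minimal user-space sketch of the same trick (uint32_t/int32_t stand in for the 32-bit unsigned long/long of a 32-bit kernel, and the names t1, t2 and elapsed are just for illustration):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t t1 = 0xFFFFFFFF;  /* jiffies value recorded earlier        */
        uint32_t t2 = 0x0;         /* current jiffies, one clock tick later */

        /* Cast both values to a signed type of the same width and subtract:
         * (int32_t)0x0 - (int32_t)0xFFFFFFFF = 0 - (-1) = 1.
         */
        int32_t elapsed = (int32_t)t2 - (int32_t)t1;

        printf("elapsed: %d tick(s)\n", (int)elapsed);  /* prints 1 */
        return 0;
    }

The kernel itself packages this idea in the time_after(), time_before(), time_after_eq() and time_before_eq() macros in include/linux/jiffies.h: they subtract the two unsigned long values and interpret the result as a signed long, so kernel code should normally use those macros instead of open-coding the cast.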