I used the difference between two time.time() calls to measure how long my app takes to complete a network request. The app runs on a Debian server. The problem is that I periodically get a negative difference between these values, as if time were going backwards.
I wrote a simple example:
    import time

    while True:
        print("OK")
        start_time = time.time()
        time.sleep(0.01)
        end_time = time.time()
        diff = end_time - start_time
        if diff < 0:
            print(start_time)
            print(end_time)
            print(diff)
            break
and this is the output (the behavior is repeatable and it takes only a few seconds to catch time decreasing):
...
OK
OK
OK
OK
OK
OK
OK
1558957392.940343
1558957342.089879
-50.8504638671875
On my Mac this code shows no such problem. I have now switched to time.perf_counter()
on my server and everything works fine. The documentation says:
While this function normally returns non-decreasing values, it can return a lower value than a previous call if the system clock has been set back between the two calls.
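For reference, the step-proof way I'm timing intervals now, sketched minimally (time.monotonic() would work equally well here):

```python
import time

# time.perf_counter() (like time.monotonic()) is unaffected by NTP
# steps to the system clock, so the difference between two calls
# can never be negative.
start = time.perf_counter()
time.sleep(0.01)
elapsed = time.perf_counter() - start

assert elapsed >= 0.01  # sleep guarantees at least this much elapsed time
print(f"elapsed: {elapsed:.4f} s")
```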
But I want to know: why does it take only a few seconds to hit this error on my Debian server, and why is the time difference always about 50.85 seconds (last 3 runs: 50.84628367424011, 50.84458661079407, 50.84599566459656)?