I'm reading Learning Python by Mark Lutz, and in the chapter on timing he uses the function time.time().
Reading the documentation for this function, I see:
Return the time in seconds since the epoch as a floating point number. Note that even though the time is always returned as a floating point number, not all systems provide time with a better precision than 1 second.
Now, because some of my tests return results like 0.004040... and 0.00037289..., I asked myself: how can I get such good precision if the documentation says that not all systems provide better precision than 1 second? How can I find out what precision my system actually has?
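To check this myself, I tried something like the sketch below (I'm assuming time.get_clock_info() from Python 3.3+ is the right way to query this, and measure_tick is just my own throwaway helper that looks for the smallest observable change in time.time()):

```python
import time

# Resolution reported for the clock backing time.time()
info = time.get_clock_info('time')
print("reported resolution:", info.resolution)

def measure_tick():
    """Busy-wait until time.time() changes and return the observed step."""
    t0 = time.time()
    while True:
        t1 = time.time()
        if t1 != t0:
            return t1 - t0

print("observed tick:", measure_tick())
```

Is this a reasonable way to measure it, or does the busy loop itself distort the result?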
One more thing I don't understand: if every computer has a circuit dedicated to the real-time clock (as stated in Understanding time.perf_counter() and time.process_time()), how can the timer precision differ from platform to platform (i.e. between Unix and Windows)? Shouldn't it always be the same? I mean, if the circuit refreshes its counter every 1 µs, shouldn't the precision be 1 µs regardless of the platform?
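For context, here is what the different clocks report on my machine (again assuming time.get_clock_info() is the right tool; the implementation string shows which OS call Python uses under the hood, which is why I suspect the numbers differ between Unix and Windows):

```python
import time

# Compare the OS-level implementation and reported resolution of each clock
for name in ('time', 'perf_counter', 'process_time', 'monotonic'):
    info = time.get_clock_info(name)
    print(f"{name:14} implementation={info.implementation:35} resolution={info.resolution}")
```

Does the difference come from the OS exposing the same hardware counter through different APIs, or from the hardware itself?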