I want to capture timestamps with millisecond (sub-second) precision in Python. The standard answer seems to be int(time.time() * 1000).
However, since time.time() returns a float, won't there be precision problems? Some fractional times can't be represented exactly as a float, and I'm worried the resulting timestamp could jump forward or backward in those cases.
Is that a valid concern?
If so, what's the work-around?
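To make the concern concrete, here's a quick check I sketched (using math.ulp, Python 3.9+) of how far apart adjacent representable floats are near a current Unix timestamp:

```python
import math
import time

# Quick check: how far apart are adjacent representable doubles
# near a current Unix timestamp?
t = time.time()
spacing = math.ulp(t)  # gap between t and the next representable float

# Near t ~ 1.7e9 seconds, the spacing is well under a microsecond,
# so a float can hold the current time to sub-millisecond accuracy.
# What I'm unsure about is whether the *1000 multiply plus int()
# truncation can still land on the wrong side of a ms boundary.
print(f"t = {t}, float spacing = {spacing:.3e} s")
print(int(t * 1000))
```

The spacing comes out orders of magnitude smaller than a millisecond, but I don't know if that fully rules out the truncation issue.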