3

I have been using a hack like this in a lot of my code:

import time
if not hasattr(time, 'time_ns'):
    time.time_ns = lambda: int(time.time() * 1e9)

It works around the limitation of Python 3.6 and earlier, which did not have a time.time_ns function. The issue is that the above workaround is based on time.time, which returns a float. For Unix timestamps in 2019, this is accurate only to roughly the microsecond scale.
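A quick way to see that precision limit (a sketch; the exact spacing depends on the magnitude of the timestamp, and the 2019 date below is just an illustrative choice):

```python
import math

# Spacing between adjacent double-precision floats near a 2019-era
# Unix timestamp: this bounds the resolution time.time() can deliver.
t = 1546300800.0  # 2019-01-01 00:00:00 UTC
_, exponent = math.frexp(t)
ulp = math.ldexp(1.0, exponent - 53)  # one unit in the last place, in seconds
print(ulp)  # about 2.4e-07 s, i.e. a few hundred nanoseconds at best
```

So even before rounding, a float cannot represent a 2019 timestamp more finely than a few hundred nanoseconds, and multiplying by 1e9 cannot recover the lost digits.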

How would I implement time_ns for older versions of Python with full nanosecond resolution? (Primarily targeting UNIX-like systems.)

s-m-e
  • 3,433
  • 2
  • 34
  • 71

1 Answer

2

Looking at the CPython source code, the following can be derived:

import ctypes

CLOCK_REALTIME = 0

class timespec(ctypes.Structure):
    _fields_ = [
        ('tv_sec', ctypes.c_int64),  # seconds, https://stackoverflow.com/q/471248/1672565
        ('tv_nsec', ctypes.c_int64), # nanoseconds
        ]

libc = ctypes.CDLL('libc.so.6', use_errno=True)
clock_gettime = libc.clock_gettime
clock_gettime.argtypes = [ctypes.c_int64, ctypes.POINTER(timespec)]
clock_gettime.restype = ctypes.c_int64

def time_ns():
    tmp = timespec()
    ret = clock_gettime(CLOCK_REALTIME, ctypes.pointer(tmp))
    if ret != 0:
        raise OSError(ctypes.get_errno(), 'clock_gettime failed')
    return tmp.tv_sec * 10 ** 9 + tmp.tv_nsec

The above works on 64-bit UNIX-like systems with glibc, since it hard-codes the libc.so.6 soname.
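To avoid hard-coding the glibc soname, one could look the library up at runtime. A hedged sketch (not part of the answer's code; on glibc older than 2.17, clock_gettime lives in librt rather than libc):

```python
import ctypes
import ctypes.util

# Locate the C library portably instead of hard-coding 'libc.so.6',
# which only exists under glibc.
libc = ctypes.CDLL(ctypes.util.find_library('c'), use_errno=True)
try:
    clock_gettime = libc.clock_gettime
except AttributeError:
    # Fallback for old glibc, where clock_gettime is exported by librt.
    librt = ctypes.CDLL(ctypes.util.find_library('rt'), use_errno=True)
    clock_gettime = librt.clock_gettime
```

The rest of the snippet (argtypes, restype, the time_ns wrapper) stays the same.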
