86

Is there a way to measure time with high precision in Python --- more precise than one second? I doubt that there is a cross-platform way of doing that; I'm interested in high-precision time on Unix, particularly Solaris running on a Sun SPARC machine.

timeit seems to be capable of high-precision time measurement, but rather than measure how long a code snippet takes, I'd like to directly access the time values.

fuad
  • 2
    You mean 'elapsed time' or 'wall clock time', not 'CPU time'. Also, <1s is not considered high-precision. And when you say 'cross-platform', do you only mean 'across Linuxes', or also Windows? – smci Jul 10 '17 at 14:04
  • Related Q&A: [How to get millisecond and microsecond-resolution timestamps in Python](https://stackoverflow.com/questions/38319606/how-to-get-millisecond-and-microsecond-resolution-timestamps-in-python) – Gabriel Staples Aug 23 '18 at 03:47
  • 2
    [PEP-418](https://www.python.org/dev/peps/pep-0418/#operating-system-time-functions) (which introduces `time.perf_counter`) and [PEP-564](https://www.python.org/dev/peps/pep-0564/#annex-clocks-resolution-in-python) provide a wealth of information about timing performance on a wide variety of operating systems, including **tables** for resolution, etc. – djvg Nov 22 '21 at 14:26

15 Answers

104

The standard time.time() function provides sub-second precision, though that precision varies by platform. On Linux and macOS the precision is about ±1 microsecond (0.001 milliseconds). On Windows it is only about ±16 milliseconds, because the system clock is only updated at the interrupt-timer rate (roughly 15.6 ms by default). The timeit module can provide higher resolution if you're measuring execution time (a short timeit sketch follows the example below).

>>> import time
>>> time.time()        #return seconds from epoch
1261367718.971009      
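
If you do want to time a snippet rather than read the clock, here is a rough timeit sketch (the statement and repeat count are arbitrary placeholders):

import timeit

# time an arbitrary snippet; timeit picks a high-resolution clock for the platform
elapsed = timeit.timeit("sum(range(100))", number=10000)
print(elapsed / 10000, "seconds per call on average")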

Python 3.7 introduces new functions to the time module that provide higher resolution:

>>> import time
>>> time.time_ns()
1530228533161016309
>>> time.time_ns() / (10 ** 9) # convert to floating-point seconds
1530228544.0792289
daf
  • 48
    Note that on Windows time.time() has ~16 milliseconds precision. – alexanderlukanin13 Dec 16 '11 at 10:24
  • How much delay is in getting this time ? Is this accurate? – Coderaemon May 21 '15 at 10:24
  • 4
    inaccurate on windows (useless) – Szabolcs Dombi Aug 21 '17 at 12:47
  • 2
    time_ns does provide nano second resolution. But is it really precise on Windows? Seems no description about it in official document – Jcyrss Jun 19 '19 at 09:14
  • 12
    @Jcyrss, `time_ns` does not provide nanosecond resolution, it just returns the time in nanoseconds. This may or may not improve the resolution. Measured clock resolutions for Linux and Windows are documented in [PEP 564](https://www.python.org/dev/peps/pep-0564/#annex-clocks-resolution-in-python). For Windows, the measured resolution of `time_ns` was **318 us**. – wovano Jul 06 '19 at 08:36
  • Inaccuracy on Windows can be addressed by calling WinAPI's [`timeBeginPeriod()`](https://learn.microsoft.com/en-us/windows/win32/api/timeapi/nf-timeapi-timebeginperiod). The default behavior is supposed to reduce power consumption. As an example, most video game engines will call it for better frame timing, since it also affects the accuracy with which a call to `Sleep()` will return. – Zyl Aug 23 '19 at 19:30
  • On my macbook, `time.time` (and thus time_ns) has 1e-6 resolution, on a linux machine, it's 1e-9. Check with `time.get_clock_info('time')`. My mac's `time_ns()` always ends in triple zeros. (python 3.7.7). On linux: `python3.7 -c 'import time; print(time.time_ns() - time.time_ns())'` yields `-11454` `-13131` `-15365` etc. Take that for what you will, I take it to mean your error bars on anything you are trying to time (using `time_ns`, `timeit` may be internal) are at _least_ 11-15 µs. – DeusXMachina Apr 06 '20 at 16:48
  • 8
    In python >=3.3 the answer is [time.perf_counter()](https://docs.python.org/3.5/library/time.html#time.perf_counter) and time.process_time() – Berwyn Nov 14 '20 at 04:28
  • Building on what @Zyl mentioned, `NtSetTimerResolution` can usually set timer to 0.5ms. `NtQueryTimerResolution` can be used to find the current, min and max resolutions possible. Both are still "undocumented" but have been unchanged since at least Vista. – NolePTR Mar 02 '21 at 16:48
  • For **0.5us** (0.5 microsecond) resolution timestamps in Python in Windows and Linux _before version 3.7_, including all the way back to Python 3.3 and earlier, see my answer here: [How can I get millisecond and microsecond-resolution timestamps in Python?](https://stackoverflow.com/a/38319607/4561887) – Gabriel Staples Aug 25 '22 at 04:28
28

David's post was attempting to show what the clock resolution is on Windows. I was confused by his output, so I wrote some code that shows that time.time() on my Windows 8 x64 laptop has a resolution of 1 msec:

import time

# measure the smallest time delta by spinning until the time changes
def measure():
    t0 = time.time()
    t1 = t0
    while t1 == t0:
        t1 = time.time()
    return (t0, t1, t1-t0)

samples = [measure() for i in range(10)]

for s in samples:
    print(s)

Which outputs:

(1390455900.085, 1390455900.086, 0.0009999275207519531)
(1390455900.086, 1390455900.087, 0.0009999275207519531)
(1390455900.087, 1390455900.088, 0.0010001659393310547)
(1390455900.088, 1390455900.089, 0.0009999275207519531)
(1390455900.089, 1390455900.09, 0.0009999275207519531)
(1390455900.09, 1390455900.091, 0.0010001659393310547)
(1390455900.091, 1390455900.092, 0.0009999275207519531)
(1390455900.092, 1390455900.093, 0.0009999275207519531)
(1390455900.093, 1390455900.094, 0.0010001659393310547)
(1390455900.094, 1390455900.095, 0.0009999275207519531)

And a way to do a 1000 sample average of the delta:

sum(measure()[2] for i in range(1000)) / 1000.0

Which output on two consecutive runs:

0.001
0.0010009999275207519

So time.time() on my Windows 8 x64 has a resolution of 1 msec.

A similar run on time.clock() returns a resolution of 0.4 microseconds:

def measure_clock():
    t0 = time.clock()
    t1 = time.clock()
    while t1 == t0:
        t1 = time.clock()
    return (t0, t1, t1-t0)

sum(measure_clock()[2] for i in range(1000000)) / 1000000.0

Returns:

4.3571334791658954e-07

Which is ~0.4e-06

An interesting thing about time.clock() is that it returns the time since the function was first called, so if you wanted microsecond-resolution wall time you could do something like this:

class HighPrecisionWallTime():
    def __init__(self,):
        self._wall_time_0 = time.time()
        self._clock_0 = time.clock()

    def sample(self,):
        dc = time.clock()-self._clock_0
        return self._wall_time_0 + dc

(This would probably drift after a while, but you could re-sync occasionally, for example whenever dc > 3600, which would correct it every hour; a sketch follows.)
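
A minimal sketch of that periodic re-sync, assuming the one-hour threshold mentioned above (note that time.clock() was deprecated in Python 3.3 and removed in 3.8; time.perf_counter() is the modern equivalent):

import time

class HighPrecisionWallTime:
    """Wall-clock time with time.clock() precision, re-synced to limit drift."""
    def __init__(self, resync_after_s=3600.0):
        self._resync_after_s = resync_after_s
        self._resync()

    def _resync(self):
        self._wall_time_0 = time.time()
        self._clock_0 = time.clock()

    def sample(self):
        dc = time.clock() - self._clock_0
        if dc > self._resync_after_s:
            self._resync()
            dc = time.clock() - self._clock_0
        return self._wall_time_0 + dc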

cod3monk3y
  • this is great work cod3monk3y... thank you for sharing! – ojblass Apr 17 '14 at 14:00
  • 6
    On Windows, `time.clock` measures elapsed time to high precision. On OS X and Linux, it measures CPU time. As of Python 3.3 it is [deprecated](https://docs.python.org/3/library/time.html#time.process_time) in favor of `perf_counter` to measure elapsed time and `process_time` to measure CPU. – George Jun 09 '16 at 03:51
  • 3
    This answer is misleading. Just because your windows has timer resolution set to 1ms at the time you ran this script does not guarantee that another process cannot or will not set it to a higher resolution. The default resolution is 15.6ms, any process can come along and change that value. I ran your script and I got 15ms delta, then I used https://github.com/tebjan/TimerTool and set it to 1ms and ran it again and got 1ms time delta. Be wary of assuming that windows is holding 1ms timer resolution, you should be explicit and set it yourself at the start of your script if needed. – Kevin S Mar 19 '19 at 18:02
27

Python tries hard to use the most precise time function for your platform to implement time.time():

/* Implement floattime() for various platforms */

static double
floattime(void)
{
    /* There are three ways to get the time:
      (1) gettimeofday() -- resolution in microseconds
      (2) ftime() -- resolution in milliseconds
      (3) time() -- resolution in seconds
      In all cases the return value is a float in seconds.
      Since on some systems (e.g. SCO ODT 3.0) gettimeofday() may
      fail, so we fall back on ftime() or time().
      Note: clock resolution does not imply clock accuracy! */
#ifdef HAVE_GETTIMEOFDAY
    {
        struct timeval t;
#ifdef GETTIMEOFDAY_NO_TZ
        if (gettimeofday(&t) == 0)
            return (double)t.tv_sec + t.tv_usec*0.000001;
#else /* !GETTIMEOFDAY_NO_TZ */
        if (gettimeofday(&t, (struct timezone *)NULL) == 0)
            return (double)t.tv_sec + t.tv_usec*0.000001;
#endif /* !GETTIMEOFDAY_NO_TZ */
    }

#endif /* !HAVE_GETTIMEOFDAY */
    {
#if defined(HAVE_FTIME)
        struct timeb t;
        ftime(&t);
        return (double)t.time + (double)t.millitm * (double)0.001;
#else /* !HAVE_FTIME */
        time_t secs;
        time(&secs);
        return (double)secs;
#endif /* !HAVE_FTIME */
    }
}

( from http://svn.python.org/view/python/trunk/Modules/timemodule.c?revision=81756&view=markup )

Joe Koberg
22

If Python 3 is an option, you have two choices:

  • time.perf_counter, which always uses the highest-resolution clock available on your platform. It DOES include time spent outside of the process (e.g. sleeping), so it measures elapsed wall-clock time.
  • time.process_time, which returns the CPU time of the current process. It does NOT include time spent outside of the process.

The difference between the two can be shown with:

from time import (
    process_time,
    perf_counter,
    sleep,
)

print(process_time())
sleep(1)
print(process_time())

print(perf_counter())
sleep(1)
print(perf_counter())

Which outputs:

0.03125
0.03125
2.560001310720671e-07
1.0005455362793145
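
A minimal sketch of the usual timing pattern with perf_counter (the workload is an arbitrary placeholder):

from time import perf_counter

start = perf_counter()
total = sum(i * i for i in range(1000000))  # arbitrary workload
elapsed = perf_counter() - start
print(f"elapsed: {elapsed:.6f} s")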
ereOn
  • I tried the code on Windows, and got: 0.0625, 0.0625, 1.0497794, 2.0537843. The output of the `perf_counter()` should be greater than that of the `process_time() + 1`, isn't it? – starriet Sep 17 '21 at 00:39
  • This should now be the accepted answer IMHO. Also see tables with timing results in [PEP-418](https://www.python.org/dev/peps/pep-0418/#operating-system-time-functions) and [PEP-564](https://www.python.org/dev/peps/pep-0564/#annex-clocks-resolution-in-python). The latter mentions `100ns` precision measured for `perf_counter` on Windows 8. – djvg Nov 22 '21 at 14:38
14

You can also use time.clock(). It counts the time used by the process on Unix, and the time since the first call to it on Windows. It's more precise than time.time().

It's the function usually used to measure performance.

Just call

import time
t_ = time.clock()
#Your code here
print 'Time in function', time.clock() - t_

EDIT: Oops, I misread the question; you want to know the exact time, not the time spent...

kynan
Khelben
11

Python 3.7 introduces 6 new time functions with nanosecond resolution, for example instead of time.time() you can use time.time_ns() to avoid floating point imprecision issues:

import time
print(time.time())
# 1522915698.3436284
print(time.time_ns())
# 1522915698343660458

These 6 functions are described in PEP 564:

time.clock_gettime_ns(clock_id)

time.clock_settime_ns(clock_id, time:int)

time.monotonic_ns()

time.perf_counter_ns()

time.process_time_ns()

time.time_ns()

These functions are similar to the version without the _ns suffix, but return a number of nanoseconds as a Python int.
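
For example, a minimal sketch of taking a delta in integer nanoseconds, so no floating-point rounding is involved (the sleep is an arbitrary stand-in for real work):

import time

t0 = time.perf_counter_ns()
time.sleep(0.001)             # arbitrary workload
t1 = time.perf_counter_ns()

elapsed_ns = t1 - t0          # exact integer arithmetic
print(elapsed_ns, "ns =", elapsed_ns / 1e9, "s")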

Chris_Rands
  • It will be interesting to see a benchmark of this on a Raspberry Pi 3. – SDsolar Jun 07 '18 at 17:31
  • 4
    These functions are just as accurate (or inaccurate) as the original functions. The only that changed is the *format* of the returned data: instead of being floating point numbers they are integers. For example 1ms is returned 1000000 rather than 0.001. – Arthur Tacca Jan 10 '20 at 15:46
5

time.clock() has 13 decimal places on Windows but only two on Linux. time.time() has 17 decimal places on Linux and 16 on Windows, but the actual precision is different.

I don't agree with the documentation that time.clock() should be used for benchmarking on Unix/Linux. It is not precise enough, so which timer to use depends on the operating system.

On Linux, the time resolution is high in time.time():

>>> time.time(), time.time()
(1281384913.4374139, 1281384913.4374161)

On Windows, however, the time function seems to keep returning the last updated value:

>>> time.time()-int(time.time()), time.time()-int(time.time()), time.time()-time.time()
(0.9570000171661377, 0.9570000171661377, 0.0)

Even if I write the calls on different lines on Windows, it still returns the same value, so the real precision is lower.

So for serious measurements, a platform check (import platform, platform.system()) has to be done in order to determine whether to use time.clock() or time.time() (see the sketch after the note below).

(Tested on Windows 7 and Ubuntu 9.10 with python 2.6 and 3.1)
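
A minimal sketch of that platform check (Python 2 era; on Python 3.3+ time.perf_counter() or timeit.default_timer() makes this unnecessary):

import platform
import time

if platform.system() == 'Windows':
    timer = time.clock   # high resolution on Windows (removed in Python 3.8)
else:
    timer = time.time    # high resolution on Linux/Unix

t0 = timer()
# ... code to measure ...
print(timer() - t0)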

livibetter
David
  • 7
    Surely it's not that it returns the last value, but that you make multiple calls in a shorter span of time than the clock resolution. – daf Jan 31 '13 at 01:48
  • Your code for linux is different from your code for windows. The Linux code shows a tuple of two time values and your Windows code shows just the fractional portions and a delta. Output of similar code would be easier to compare. If this code shows anything it's that your Linux box has a 2.2 microsecond `time.time()` resolution or that your Linux calls are taking a while (box is really slow, caught the timer on a transition, etc.). I'll post some code that shows a way to resolve the question you've raised here. – cod3monk3y Jan 23 '14 at 05:50
  • 5
    In order to avoid platform-specific code, use timeit.default_timer() – tiho Mar 27 '14 at 17:21
4

The original question specifically asked for Unix, but multiple answers have touched on Windows, and as a result there is misleading information about Windows. The default timer resolution on Windows is 15.6 ms; you can verify here.

Using a slightly modified script from cod3monk3y, I can show that the Windows timer resolution is ~15 milliseconds by default. I'm using a tool available here to modify the resolution.

Script:

import time

# measure the smallest time delta by spinning until the time changes
def measure():
    t0 = time.time()
    t1 = t0
    while t1 == t0:
        t1 = time.time()
    return t1-t0

samples = [measure() for i in range(30)]

for s in samples:
    print(f'time delta: {s:.4f} seconds') 

(Screenshots omitted: with the default timer resolution the printed deltas are ~0.0156 seconds; after setting the timer to 1 ms with the tool, they drop to ~0.0010 seconds.)

These results were gathered on Windows 10 Pro 64-bit running Python 3.7 64-bit.
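
As a Windows-only sketch, the same resolution change the tool makes can be requested directly from Python via the winmm timeBeginPeriod/timeEndPeriod calls mentioned in the comments above (this is an assumption on my part, not something the answer itself does):

import ctypes

winmm = ctypes.WinDLL('winmm')
winmm.timeBeginPeriod(1)                      # request a 1 ms system timer
try:
    samples = [measure() for i in range(30)]  # measure() as defined above
finally:
    winmm.timeEndPeriod(1)                    # restore the previous resolution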

Kevin S
2

The comment left by tiho on Mar 27 '14 at 17:21 deserves to be its own answer:

In order to avoid platform-specific code, use timeit.default_timer()
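
A minimal usage sketch (on Python 3.3+, timeit.default_timer is simply time.perf_counter):

from timeit import default_timer

start = default_timer()
# ... code to measure ...
print(default_timer() - start, "seconds elapsed")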

Justin
2

I observed that the resolution of time.time() is different between Windows 10 Professional and Education versions.

On a Windows 10 Professional machine, the resolution is 1 ms. On a Windows 10 Education machine, the resolution is 16 ms.

Fortunately, there's a tool that increases Python's time resolution in Windows: https://vvvv.org/contribution/windows-system-timer-tool

With this tool, I was able to achieve 1 ms resolution regardless of the Windows version. You will need to keep it running while executing your Python code.

dbdq
1

For those stuck on Windows (version >= Server 2012 or Windows 8) and Python 2.7:

import ctypes

class FILETIME(ctypes.Structure):
    _fields_ = [("dwLowDateTime", ctypes.c_uint),
                ("dwHighDateTime", ctypes.c_uint)]

def time():
    """Accurate version of time.time() for Windows: returns UTC time in terms
    of seconds since 01/01/1601."""
    file_time = FILETIME()
    ctypes.windll.kernel32.GetSystemTimePreciseAsFileTime(ctypes.byref(file_time))
    return (file_time.dwLowDateTime + (file_time.dwHighDateTime << 32)) / 1.0e7

GetSystemTimePreciseAsFileTime function
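
A quick usage sketch, assuming the time() function above (11644473600 is the number of seconds between the 1601 epoch and the Unix epoch):

t_1601 = time()                  # seconds since 1601-01-01 (UTC)
t_unix = t_1601 - 11644473600    # seconds since 1970-01-01 (Unix epoch)
print(t_unix)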

Terry Shi
  • Nice answer. How much time does it take to invoke the function itself? For example, on my Linux machine, invoking `time.time()-time.time()` prints a number slightly smaller than 1e-6 (1 us). – user4815162342 Mar 27 '19 at 13:48
1

On the same Windows 10 system, the two measurement approaches below differ by roughly 0.5 ms in their mean time delta, as the results at the end show. If you care about sub-millisecond precision, check my code below.

The code is a modified version of code from users cod3monk3y and Kevin S.

OS: Python 3.7.3 (default, date, time) [MSC v.1915 64 bit (AMD64)]

import sys
import time

def measure1(mean):
    for i in range(1, my_range+1):
        x = time.time()

        td = x - samples1[i-1][2]
        if i-1 == 0:
            td = 0
        td = f'{td:.6f}'
        samples1.append((i, td, x))
        mean += float(td)
        print (mean)
        sys.stdout.flush()
        time.sleep(0.001)
    
    mean = mean/my_range
    
    return mean

def measure2(nr):
    t0 = time.time()
    t1 = t0
    while t1 == t0:
        t1 = time.time()
    td = t1-t0
    td = f'{td:.6f}'
    return (nr, td, t1, t0)

samples1 = [(0, 0, 0)]
my_range = 10
mean1    = 0.0
mean2    = 0.0

mean1 = measure1(mean1)

for i in samples1: print (i)

print ('...\n\n')

samples2 = [measure2(i) for i in range(11)]

for s in samples2:
    #print(f'time delta: {s:.4f} seconds')
    mean2 += float(s[1])
    print (s)
    
mean2 = mean2/my_range

print ('\nMean1 : ' f'{mean1:.6f}')
print ('Mean2 : ' f'{mean2:.6f}')

The measure1 results:

nr, td, t0
(0, 0, 0)
(1, '0.000000', 1562929696.617988)
(2, '0.002000', 1562929696.6199884)
(3, '0.001001', 1562929696.620989)
(4, '0.001001', 1562929696.62199)
(5, '0.001001', 1562929696.6229906)
(6, '0.001001', 1562929696.6239917)
(7, '0.001001', 1562929696.6249924)
(8, '0.001000', 1562929696.6259928)
(9, '0.001001', 1562929696.6269937)
(10, '0.001001', 1562929696.6279945)
...

The measure2 results:

nr, td , t1, t0
(0, '0.000500', 1562929696.6294951, 1562929696.6289947)
(1, '0.000501', 1562929696.6299958, 1562929696.6294951)
(2, '0.000500', 1562929696.6304958, 1562929696.6299958)
(3, '0.000500', 1562929696.6309962, 1562929696.6304958)
(4, '0.000500', 1562929696.6314962, 1562929696.6309962)
(5, '0.000500', 1562929696.6319966, 1562929696.6314962)
(6, '0.000500', 1562929696.632497, 1562929696.6319966)
(7, '0.000500', 1562929696.6329975, 1562929696.632497)
(8, '0.000500', 1562929696.633498, 1562929696.6329975)
(9, '0.000500', 1562929696.6339984, 1562929696.633498)
(10, '0.000500', 1562929696.6344984, 1562929696.6339984)

End result:

Mean1 : 0.001001 # (measure1 function)

Mean2 : 0.000550 # (measure2 function)

ZF007
1

Here is a Python 3 solution for Windows building upon the answer posted above by CyberSnoopy (using GetSystemTimePreciseAsFileTime). We borrow some code from jfs's answer to "Python datetime.utcnow() returning incorrect datetime" and get a precise timestamp (Unix time) in microseconds:

#! python3
import ctypes.wintypes

def utcnow_microseconds():
    system_time = ctypes.wintypes.FILETIME()
    #system call used by time.time()
    #ctypes.windll.kernel32.GetSystemTimeAsFileTime(ctypes.byref(system_time))
    #getting high precision:
    ctypes.windll.kernel32.GetSystemTimePreciseAsFileTime(ctypes.byref(system_time))
    large = (system_time.dwHighDateTime << 32) + system_time.dwLowDateTime
    return large // 10 - 11644473600000000

for ii in range(5):
    print(utcnow_microseconds()*1e-6)

References
https://learn.microsoft.com/en-us/windows/win32/sysinfo/time-functions
https://learn.microsoft.com/en-us/windows/win32/api/sysinfoapi/nf-sysinfoapi-getsystemtimepreciseasfiletime
https://support.microsoft.com/en-us/help/167296/how-to-convert-a-unix-time-t-to-a-win32-filetime-or-systemtime

jschoebel
1

For precision timing in Python:

1. Python 3.7 or later

If using Python 3.7 or later, use the modern, cross-platform time module functions such as time.monotonic_ns(), here: https://docs.python.org/3/library/time.html#time.monotonic_ns. It provides nanosecond-resolution timestamps.

import time

time_ns = time.monotonic_ns()  # note: unspecified epoch
# or on Unix or Linux you can also use (a clock id is required):
time_ns = time.clock_gettime_ns(time.CLOCK_MONOTONIC)
# or on Windows:
time_ns = time.perf_counter_ns()

# etc. etc. There are others. See the link above.

See also this note from my other answer from 2016, here: How can I get millisecond and microsecond-resolution timestamps in Python?:

You might also try time.clock_gettime_ns() on Unix or Linux systems. Based on its name, it appears to call the underlying clock_gettime() C function which I use in my nanos() function in C in my answer here and in my C Unix/Linux library here: timinglib.c.


Unspecified epoch:

Note that when using time.monotonic() or time.monotonic_ns(), the official documentation says:

The reference point of the returned value is undefined, so that only the difference between the results of two calls is valid.

So, if you need an absolute, datetime-style timestamp (one containing year, month, day, etc.) rather than a relative, precision timestamp, then you should consider using datetime instead. See this answer here, my comment below it, and the official datetime documentation here, specifically for datetime.now() here. Here is how to get a timestamp with that module:

from datetime import datetime

now_datetime_object = datetime.now()

Do not expect it to have the resolution, precision, or monotonicity of time.clock_gettime_ns(), however. So, for timing small differences or doing precision timing work, prefer time.clock_gettime_ns() instead.

Another option is time.time(), which is also not guaranteed to have "better precision than 1 second". You can convert it back to a local or UTC struct_time using time.localtime() or time.gmtime(). See here. Here's how to use it:

>>> import time
>>> time.time()
1691442858.8543699
>>> time.localtime(time.time())
time.struct_time(tm_year=2023, tm_mon=8, tm_mday=7, tm_hour=14, tm_min=14, tm_sec=36, tm_wday=0, tm_yday=219, tm_isdst=0)

Or, even better: time.time_ns():

>>> import time
>>> time.time_ns()
1691443244384978570
>>> time.localtime(time.time_ns()/1e9)
time.struct_time(tm_year=2023, tm_mon=8, tm_mday=7, tm_hour=14, tm_min=20, tm_sec=57, tm_wday=0, tm_yday=219, tm_isdst=0)
>>> time.time_ns()/1e9
1691443263.0889063

2. Python 3.3 or later

In Python 3.3 or later (on Windows and other platforms), you can use time.perf_counter(), as shown by @ereOn here. See: https://docs.python.org/3/library/time.html#time.perf_counter. On Windows this provides roughly a 0.5 us resolution timestamp, in floating-point seconds. Ex:

import time

# For Python 3.3 or later; both functions are cross-platform
time_sec = time.perf_counter()  # highest-resolution clock, unspecified start point
# or:
time_sec = time.monotonic()     # monotonic clock, unspecified start point

3. Pre-Python 3.3 (ex: Python 3.0, 3.1, 3.2), or later

Summary:

See my other answer from 2016 here for 0.5-us-resolution timestamps, or better, in Windows and Linux, for versions of Python as old as 3.0, 3.1, or 3.2 even! We do this by calling C or C++ shared object libraries (.dll on Windows, or .so on Unix or Linux) using the ctypes module in Python.

I provide these functions:

millis()
micros()
delay()
delayMicroseconds()

Download GS_timing.py from my eRCaGuy_PyTime repo, then do:

import GS_timing

time_ms = GS_timing.millis()
time_us = GS_timing.micros()
GS_timing.delay(10)                # delay 10 ms
GS_timing.delayMicroseconds(10000) # delay 10000 us

Details:

In 2016, I was working in Python 3.0 or 3.1, on an embedded project on a Raspberry Pi, which I also tested and ran frequently on Windows. I needed nanosecond resolution for some precise timing I was doing with ultrasonic sensors. The Python language at the time did not provide this resolution, and neither did any answer to this question, so I came up with this separate Q&A here: How can I get millisecond and microsecond-resolution timestamps in Python?. I stated in the question at the time:

I read other answers before asking this question, but they rely on the time module, which prior to Python 3.3 did NOT have any type of guaranteed resolution whatsoever. Its resolution is all over the place. The most upvoted answer here quotes a Windows resolution (using their answer) of 16 ms, which is 32000 times worse than my answer provided here (0.5 us resolution). Again, I needed 1 ms and 1 us (or similar) resolutions, not 16000 us resolution.

Zero, I repeat: zero answers here on 12 July 2016 had any resolution better than 16-ms for Windows in Python 3.1. So, I came up with this answer which has 0.5us or better resolution in pre-Python 3.3 in Windows and Linux. If you need something like that for an older version of Python, or if you just want to learn how to call C or C++ dynamic libraries in Python (.dll "dynamically linked library" files in Windows, or .so "shared object" library files in Unix or Linux) using the ctypes library, see my other answer here.

Gabriel Staples
-1
import threading
import time

# Method of a class that defines self.send_request: fires it every sec_arg seconds,
# sleeping until the next multiple of sec_arg to avoid cumulative drift.
def start(self):
    sec_arg = 10.0
    cptr = 0
    time_init = time.time()
    while True:
        cptr += 1
        time_start = time.time()
        time.sleep((time_init + (sec_arg * cptr)) - time_start)

        # AND YOUR CODE .......
        t00 = threading.Thread(name='thread_request', target=self.send_request, args=([]))
        t00.start()
forrest