
Background

I have been trying to write a reliable timer with a resolution of at least microseconds in Python (3.7). The purpose is to run a specific task every few microseconds, continuously, over a long period of time. After some research I settled on perf_counter_ns because of its higher consistency and measured resolution compared with the alternatives (monotonic_ns, time_ns, process_time_ns, and thread_time_ns), details of which can be found in the time module documentation and PEP 564.
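
For a quick side-by-side comparison, the declared resolution of each candidate clock can be inspected with time.get_clock_info(); a minimal sketch (note that thread_time is not available on every platform):

import time

# Print the implementation and declared resolution of each candidate clock.
# get_clock_info() takes the base clock name, which also covers its *_ns variant.
for name in ('perf_counter', 'monotonic', 'time', 'process_time', 'thread_time'):
  info = time.get_clock_info(name)
  print(f'{name}: implementation={info.implementation}, resolution={info.resolution * 1e9:.0f} ns')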

Test

To assess the precision (and accuracy) of perf_counter_ns, I set up a test that collects the delays between consecutive timestamps, as shown below.

import time
import statistics as stats
# import resource

def practical_res_test(clock_timer_ns, count, expected_res):
  counter = 0
  diff = 0
  timestamp = clock_timer_ns() # initial timestamp
  diffs = []
  while counter < count:
    new_timestamp = clock_timer_ns()
    diff = new_timestamp - timestamp
    if diff > 0: # ignore zero diffs (consecutive calls within the same clock tick)
      diffs.append(diff)
      timestamp = new_timestamp
      counter += 1

  print('Mean: ', stats.mean(diffs))
  print('Mode: ', stats.mode(diffs))
  print('Min: ', min(diffs))
  print('Max: ', max(diffs))

  outliers = list(filter(lambda diff: diff >= expected_res, diffs))
  print('Outliers Total: ', len(outliers))

if __name__ == '__main__':

  count = 10000000

  # ideally, resolution of at least 1 us is expected
  # but let's just do 10 us for the sake of this test
  expected_res = 10000 

  practical_res_test(time.perf_counter_ns, count, expected_res)

  # other method benchmarks
  # practical_res_test(time.time_ns, count, expected_res)
  # practical_res_test(time.process_time_ns, count, expected_res)
  # practical_res_test(time.thread_time_ns, count, expected_res)
  # practical_res_test(
  #   lambda: int(resource.getrusage(resource.RUSAGE_SELF).ru_stime * 10**9),
  #   count,
  #   expected_res
  # )

Problem and Question

Question: Why are there occasional significant skips in time between timestamps? Multiple tests with a count of 10,000,000 on my Raspberry Pi 3 Model B V1.2 yielded similar results, one of which is as follows (times are, of course, in nanoseconds):

Mean:  2440.1013097
Mode:  2396
Min:  1771
Max:  1450832 # huge skip as I mentioned
Outliers Total:  8724 # delays that are more than 10 us

Another test on my Windows desktop:

Mean:  271.05812 # higher end machine - better resolution
Mode:  200
Min:  200
Max:  30835600 # but there are still skips, even more significant ones
Outliers Total:  49021

Although I am aware that resolution differs across systems, the resolution measured in my test is much lower than what is rated in PEP 564. Most importantly, occasional skips are observed.

Please let me know if you have any insight into why this is happening. Does it have anything to do with my test, or is perf_counter_ns bound to fail in such use cases? If so, do you have any suggestions for a better solution? Do let me know if there is any other info I need to provide.

Additional Info

For completeness, here is the clock info from time.get_clock_info():

On my raspberry pi:

Clock: perf_counter
Adjustable: False
Implementation: clock_gettime(CLOCK_MONOTONIC)
Monotonic: True
Resolution(ns): 1

On my Windows desktop:

Clock: perf_counter
Adjustable: False
Implementation: QueryPerformanceCounter()
Monotonic: True
Resolution(ns): 100

It is also worth mentioning that I am aware of time.sleep(), but from my tests and for my use case it is not particularly reliable, as others have discussed here.
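
For reference, a minimal sketch of measuring that unreliability, assuming an arbitrary 100 us target interval:

import time

# Measure how far time.sleep() overshoots a short target interval.
# The 100 us target is arbitrary, chosen only for illustration.
target_ns = 100_000
overshoots = []
for _ in range(1000):
  start = time.perf_counter_ns()
  time.sleep(target_ns / 1e9)
  overshoots.append(time.perf_counter_ns() - start - target_ns)

print('Min overshoot (ns): ', min(overshoots))
print('Max overshoot (ns): ', max(overshoots))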

  • I am not a Python dev, but could it have something to do with GC, scheduling, or simply the fact that Python is not made as a real-time language? – munHunger Jul 29 '19 at 07:27
  • @munHunger thanks for the input. By GC you mean garbage collection? I agree that Python might not be efficient for some real-time applications. – vnphanquang Jul 29 '19 at 07:38
  • Yes, I mean garbage collection. As I said, I am not a Python dev, so I am not 100% sure about how it is actually done, but if we assume that it implements a "stop-the-world" GC, then it might explain the issue – munHunger Jul 29 '19 at 07:39
  • You can disable cycle collection by importing `gc` at the top of your module, then calling `gc.disable()` just before beginning your tests (and `gc.enable()` after). – ShadowRanger Jul 29 '19 at 15:57
  • At the time scales you are considering, other processes could also take up a significant amount of time. Do you see the same anomaly if you test `process_time_ns`? – jirassimok Jul 29 '19 at 16:10
  • Also, if you want to get the most precision possible in pure Python, I would consider taking out all code except calls to `perf_counter_ns` from the timer loop, because at the nanosecond scale, everything you do takes up noticeable time. – jirassimok Jul 29 '19 at 16:13
  • From my own quick tests, using `process_time_ns` reduces the largest difference to the range of 40,000–100,000, so sleep time may account for some of what you're seeing, but it definitely doesn't account for all of it. – jirassimok Jul 29 '19 at 16:34
  • @jirassimok I am curious, what system were your tests run on? I did some comparison just now and they didn't seem to give much difference. Also, practically speaking, we want to include sleep time (which perf_counter does and process_time doesn't), don't we? – vnphanquang Jul 29 '19 at 16:39
  • @ShadowRanger, I tried disabling and enabling gc in multiple tests but noticed no consistent difference to the original method. – vnphanquang Jul 29 '19 at 16:49
  • I was running on OS X, which could explain any differences. The reason I suspected sleep time could come into play is that if the process ends up being preempted by another process for any amount of time, `perf_counter` would see a larger time between consecutive calls, while `process_time` would not (see the sketch after this thread). (My test also indicated that `process_time` has much lower precision on my system, which made the results less useful than I'd hoped.) – jirassimok Jul 29 '19 at 17:08
  • @jirassimok thanks for putting in the effort! process_time did not really work on my Windows machine but produced relatively close results compared to perf_counter on my Raspberry Pi, so I assume it's a compatibility issue. – vnphanquang Jul 29 '19 at 18:38
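
To illustrate the distinction discussed in the thread above (a minimal sketch): perf_counter keeps counting while the process sleeps or is preempted, whereas process_time only counts CPU time spent in the process.

import time

# Compare the two clocks across a deliberate 10 ms sleep:
# perf_counter includes the sleep, process_time does not.
pc_start = time.perf_counter_ns()
pt_start = time.process_time_ns()
time.sleep(0.01)
print('perf_counter delta (ns): ', time.perf_counter_ns() - pc_start) # roughly 10,000,000
print('process_time delta (ns): ', time.process_time_ns() - pt_start) # near zero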

1 Answer


If you plot the list of time differences, you will see a rather low baseline with peaks that grow over time.

This is caused by the append() operation, which occasionally has to reallocate the underlying array (which is how the Python list is implemented).
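
A quick way to observe this is to watch a list's allocated size as elements are appended; each jump in the output corresponds to a reallocation (a sketch; exact sizes vary by CPython version):

import sys

# Each change in getsizeof() marks a reallocation of the list's backing array.
lst = []
last_size = sys.getsizeof(lst)
for i in range(100):
  lst.append(i)
  size = sys.getsizeof(lst)
  if size != last_size:
    print(f'len={len(lst)}: {last_size} -> {size} bytes')
    last_size = size

Pre-allocating the list avoids these reallocations during the timed loop, and the results improve: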

import time
import statistics as stats
import gc
import matplotlib.pyplot as plt

def practical_res_test(clock_timer_ns, count, expected_res):
  counter = 0
  diffs = [0] * count # pre-allocate so no reallocation happens during timing
  gc.disable() # keep the garbage collector from pausing the loop
  timestamp = clock_timer_ns() # initial timestamp
  while counter < count:
    new_timestamp = clock_timer_ns()
    diff = new_timestamp - timestamp
    if diff > 0:
      diffs[counter] = diff
      timestamp = new_timestamp
      counter += 1
  gc.enable()
  print('Mean: ', stats.mean(diffs))
  print('Mode: ', stats.mode(diffs))
  print('Min: ', min(diffs))
  print('Max: ', max(diffs))

  outliers = list(filter(lambda diff: diff >= expected_res, diffs))
  print('Outliers Total: ', len(outliers))
  plt.plot(diffs)
  plt.show()

if __name__ == '__main__':

  count = 10000000

  # ideally, resolution of at least 1 us is expected
  # but let's just do 10 us for the sake of this test
  expected_res = 10000 

  practical_res_test(time.perf_counter_ns, count, expected_res)

These are the results I get:

Mean:  278.6002
Mode:  200
Min:  200
Max:  1097700
Outliers Total:  3985

In comparison, these are the results on my system with the original code:

Mean:  333.92254
Mode:  300
Min:  200
Max:  50507300
Outliers Total:  2590

To get even better performance, you might want to run on Linux and use SCHED_FIFO, as sketched below. But always remember that real-time tasks with microsecond precision are not done in Python. If your problem is soft real-time, you can get away with it, but it all depends on the penalty for missing a deadline and on your understanding of the time complexities of both your code and the Python interpreter.
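
A minimal sketch of electing SCHED_FIFO (Linux only; typically requires root or the CAP_SYS_NICE capability, and the priority value here is arbitrary):

import os

# Switch the current process (pid 0) to the SCHED_FIFO real-time policy
# so the scheduler will not preempt it for ordinary time-shared processes.
priority = 50 # arbitrary mid-range real-time priority, for illustration
os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))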

  • Hey there, thanks for the feedback. Allocating the array beforehand is definitely an improvement; I did not think of that. And yes, I understand Python is by no means best for this particular real-time task. However, in this project, Python has really surprised me with what it can achieve if you carefully think about the performance of every line of code you write. Anyway, my original question here was more out of curiosity about such inconsistency in these time functions. Cheers! – vnphanquang Oct 08 '20 at 01:23