
I'm currently trying to have a function called every 10ms to acquire data from a sensor.

Basically I was triggering the callback from a GPIO interrupt, but I changed my sensor and the one I'm currently using doesn't have an INT pin to drive the callback.

So my goal is to have the same behavior but with an internal interrupt generated by a timer.

I tried this approach, taken from this topic:

import threading
import time

def work():
    threading.Timer(0.25, work).start()
    print(time.time())
    print("stackoverflow")

work()

But when I run it, I can see that the timer is not really precise and drifts over time, as you can see:

1494418413.1584847
stackoverflow
1494418413.1686869
stackoverflow
1494418413.1788757
stackoverflow
1494418413.1890721
stackoverflow
1494418413.1992736
stackoverflow
1494418413.2094712
stackoverflow
1494418413.2196639
stackoverflow
1494418413.2298684
stackoverflow
1494418413.2400634
stackoverflow
1494418413.2502584
stackoverflow
1494418413.2604961
stackoverflow
1494418413.270702
stackoverflow
1494418413.2808678
stackoverflow
1494418413.2910736
stackoverflow
1494418413.301277
stackoverflow

So the timer is deviating by 0.2 milliseconds every 10 milliseconds, which is quite a big bias after a few seconds.

I know that Python is not really made for "real-time", but I think there should be a way to do it.

If someone has already had to handle time constraints with Python, I would be glad to have some advice.

Thanks.

Arkaik
  • It's drifting because you aren't allowing for the time the timer itself takes and the printing. If you base the timer on the difference between actual time and when the next 10ms will expire, then your timing won't drift. For example, get the time before you start the loop and maintain a 'target' increasing in increments of 10ms, and start the Timer for a period of (target-currenttime), then when the timer expires add 10ms to the target and start again. You need to confirm for yourself whether the jitter you get is acceptable - i.e. measure average and peak under all your usage scenarios – DisappointedByUnaccountableMod May 10 '17 at 14:43
  • Thanks for your answer. I know I'm not measuring the time of the timer's scheduling, but I thought it would not add such a drift. My point is I want to be very precise for this 10ms; the problem with this method is that I will always have the drift from the timer's scheduling. – Arkaik May 10 '17 at 15:40
  • Is there a way to define this only once at the beginning and have the timer automatically start again after expiring? – Arkaik May 10 '17 at 15:49
  • With my suggested approach there will be jitter about the 10ms intervals but no drift, because at each timeout the timer is started with an appropriate delay less the previous jitter. Of course you are completely free to do this however you want - why not do some experiments and try to come up with a better scheme? – DisappointedByUnaccountableMod May 10 '17 at 19:43
  • I tried your approach, but either I didn't understand what you said or my code is bugged. If you can look at it, maybe you'll see what I'm doing wrong – Arkaik May 12 '17 at 09:20
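The drift-free scheme described in the first comment above can be sketched like this (my own illustration, not code from the thread; `INTERVAL` and the sample count are arbitrary):

```python
import threading
import time

INTERVAL = 0.01        # 10 ms period, as in the question
N_SAMPLES = 50         # arbitrary sample count for this sketch
samples = []           # per-tick jitter in seconds
done = threading.Event()

def work(target):
    # Jitter = how late this call ran relative to its absolute target time.
    samples.append(time.time() - target)
    if len(samples) >= N_SAMPLES:
        done.set()
        return
    # Key idea: schedule against the absolute target, not "now + INTERVAL",
    # so scheduling overhead shows up as jitter but never accumulates as drift.
    nxt = target + INTERVAL
    threading.Timer(max(0.0, nxt - time.time()), work, [nxt]).start()

first = time.time() + INTERVAL
threading.Timer(INTERVAL, work, [first]).start()
done.wait()
print("ticks: %d, mean jitter: %.6f s" % (len(samples), sum(samples) / len(samples)))
```

Each tick is a little late (jitter), but because the next delay is always computed from the fixed target sequence, the lateness never compounds.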

2 Answers


This code works on my laptop - it logs the delta between target and actual time. The main thing is to minimise what is done in the work() function, because e.g. printing and scrolling the screen can take a long time.

The key thing is to start the next timer based on the difference between the time when that call is made and the target.

I slowed the interval down to 0.1 s so it is easier to see the jitter, which on my Win7 x64 can exceed 10 ms - that would cause problems by passing a negative value to the Timer() call :-o

This logs 100 samples, then prints them - if you redirect the output to a .csv file you can load it into Excel to display graphs.

from multiprocessing import Queue
import threading
import time

# this accumulates record of the difference between the target and actual times
actualdeltas = []

INTERVAL = 0.1

def work(queue, target):
    # first thing to do is record the jitter - the difference between target and actual time
    actualdeltas.append(time.clock()-target+INTERVAL)
#    t0 = time.clock()
#    print("Current time\t" + str(time.clock()))
#    print("Target\t" + str(target))
#    print("Delay\t" + str(target - time.clock()))
#    print()
#    t0 = time.clock()
    if len(actualdeltas) > 100:
        # print the accumulated deltas then exit
        for d in actualdeltas:
            print(d)
        return
    threading.Timer(target - time.clock(), work, [queue, target+INTERVAL]).start()

myQueue = Queue()

target = time.clock() + INTERVAL
work(myQueue, target)

Typical output (i.e. don't rely on millisecond timing on Windows in Python):

0.00947008617187
0.0029628920052
0.0121824719378
0.00582923077099
0.00131316206917
0.0105631524709
0.00437298744466
-0.000251418553351
0.00897956530515
0.0028528821332
0.0118192949105
0.00546301269675
0.0145723546788
0.00910063698529
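A side note from a later Python perspective (not part of the original answer): `time.clock()` was deprecated in Python 3.3 and removed in 3.8. For measuring intervals like these, `time.perf_counter()` is the usual replacement:

```python
import time

# time.perf_counter() is monotonic and has the highest available resolution,
# which makes it the usual choice for measuring short intervals like these.
t0 = time.perf_counter()
time.sleep(0.01)
elapsed = time.perf_counter() - t0
print("slept for about %.4f s" % elapsed)
```

`time.monotonic()` is a similar alternative; unlike `time.time()`, neither can jump backwards if the wall clock is adjusted.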
  • Hi @barny, finally my problem was the use of time.clock(); when I tried it with time.time() instead, it worked well. I can't believe I stayed stuck on that for so long ^^ For your information, I have absolutely no drift and an average jitter of around 200 µs on my Raspberry Pi with the latest raspbian-lite, which is perfect for my application. Thank you very much for your help ;) – Arkaik May 15 '17 at 12:07
  • @Arkaik: Did you solve the problem by using your code (without `Queue`) or barny's code (with `Queue`)? I'd like to do exactly the same (run some task every 10 ms) using Raspberry Pi. – Chupo_cro Jun 07 '18 at 10:02
  • @Chupo_cro: I have added my working solution to my answer if you want to take inspiration from it ;) – Arkaik Jun 08 '18 at 15:02
  • This works well (when using `time.time()` instead of `time.clock()`) for sending the data over I2C using Raspberry Pi. The `multiprocessing.Queue()` seems to be redundant since it is not used in this example, but if the queue had to be used - shouldn't it then be `Queue.Queue()` which is for using with threads and not `multiprocessing.Queue()` which is for using with multiprocessing? – Chupo_cro Aug 08 '20 at 23:30

I tried your solution, but I got strange results.

Here is my code:

from multiprocessing import Queue
import threading
import time

def work(queue, target):
    t0 = time.clock()
    print("Target\t" + str(target))
    print("Current time\t" + str(t0))
    print("Delay\t" + str(target - t0))
    print()
    threading.Timer(target - t0, work, [queue, target+0.01]).start()

myQueue = Queue()

target = time.clock() + 0.01
work(myQueue, target)

And here is the output:

Target  0.054099
Current time    0.044101
Delay   0.009998

Target  0.064099
Current time    0.045622
Delay   0.018477

Target  0.074099
Current time    0.046161
Delay   0.027937999999999998

Target  0.084099
Current time    0.0465
Delay   0.037598999999999994

Target  0.09409899999999999
Current time    0.046877
Delay   0.047221999999999986

Target  0.10409899999999998
Current time    0.047211
Delay   0.05688799999999998

Target  0.11409899999999998
Current time    0.047606
Delay   0.06649299999999997

So we can see that the target is increasing by 10 ms each time, and for the first loop the delay for the timer seems to be good.

The point is, instead of starting again at current_time + delay, it starts again at 0.045622, which represents a delay of 0.001521 instead of 0.010000.

Did I miss something? My code seems to follow your logic, doesn't it?


Working example for @Chupo_cro

Here is my working example

from multiprocessing import Queue
import RPi.GPIO as GPIO
import threading
import time
import os

INTERVAL = 0.01
ledState = True

GPIO.setmode(GPIO.BCM)
GPIO.setup(2, GPIO.OUT, initial=GPIO.LOW)

def work(queue, target):
    # the global declaration must come before ledState is used in this function
    global ledState
    try:
        threading.Timer(target-time.time(), work, [queue, target+INTERVAL]).start()
        GPIO.output(2, ledState)
        ledState = not ledState
    except KeyboardInterrupt:
        GPIO.cleanup()

try:
    myQueue = Queue()

    target = time.time() + INTERVAL
    work(myQueue, target)
except KeyboardInterrupt:
    GPIO.cleanup()
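As a side note (my own sketch, not from the original thread): the same absolute-target idea also works in a plain loop with `time.sleep()`, which avoids spawning a new Timer thread every 10 ms; `tick()` below is a hypothetical placeholder for the real per-interval work:

```python
import time

INTERVAL = 0.01   # 10 ms, as in the question
N_TICKS = 20      # arbitrary for this sketch

def tick(i):
    # Hypothetical placeholder for the real work: read the sensor,
    # toggle the LED, push data onto a queue, etc.
    pass

start = time.time()
target = start + INTERVAL
for i in range(N_TICKS):
    delay = target - time.time()
    if delay > 0:
        time.sleep(delay)      # sleep until the absolute target time
    tick(i)
    target += INTERVAL         # fixed step: jitter is possible, drift is not

elapsed = time.time() - start
print("ran %d ticks in %.3f s" % (N_TICKS, elapsed))
```

Whether the Timer-based or loop-based form fits better depends on whether the rest of the program needs the main thread free.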
Arkaik
  • You should be setting t0 immediately before starting the timer, or preferably using current time in the timer call, i.e. threading.Timer(target-time.clock(), ...) – DisappointedByUnaccountableMod May 11 '17 at 18:24
  • Don't forget that converting numbers to strings, printing, scrolling the screen, etc. affects how long the work() function takes. Try increasing the delta to (say) 1s and see if the basic principle works, then figure out how little you can do in the work() function - e.g. store timer values in a list for 1000 calls, then print the results afterwards. On my win7 laptop with 1s delta I get jitter about 5ms, but no drift because each timer is driven from the absolute target time. – DisappointedByUnaccountableMod May 11 '17 at 18:36
  • Thank you very much for the code! I'll try to squeeze sending about 60 bytes of data through UART or I2C or SPI inside the `work()`. Regards – Chupo_cro Jun 08 '18 at 21:54
  • Works well for sending the data over I2C using Raspberry Pi. However, `multiprocessing.Queue()` seems to be redundant since it is not used at all. But if the queue had to be used - shouldn't it then be `Queue.Queue()` which is for using with threads and not `multiprocessing.Queue()` which is for using with multiprocessing? – Chupo_cro Aug 08 '20 at 23:34