
I have written a script that reads from an instrument and writes the data to a CSV file. The time interval between samples can be set by the user; a one-second sampling rate is quite common. I use time.sleep and subtract the script's processing time with timeElapsed = timeEnd - timeBegin. The problem is that this isn't accurate enough: the timing drifts, so every now and then the script skips a second. On my computer this happens roughly every 2-3 minutes. My question is how I can increase the accuracy of the timing.

import csv
import datetime
import time
import os

no_of_meas = 200
cur_meas = 1
time_interval = 1    # time between samples in seconds; 1 second is the lowest recommended value

with open('test.csv', 'a', newline='') as fp:
    writer = csv.writer(fp, delimiter='\t')
    while cur_meas <= no_of_meas:
        timeBegin = time.time()
        cur_time = datetime.datetime.now().strftime('%H:%M:%S.%f')
        data = [[cur_time, cur_meas]]
        writer.writerows(data)
        fp.flush()   # flush and os.fsync to be sure the data is written to disk
        os.fsync(fp.fileno())
        print(', '.join(map(str, data)))
        cur_meas += 1
        timeEnd = time.time()
        timeElapsed = timeEnd - timeBegin
        time.sleep(time_interval - timeElapsed)   # drifts: overhead outside the measured span is never compensated

2 Answers


You should not reset your time base on every iteration, because each reset starts a new, slightly offset time scale. Instead, compute each target time from a single global start time. Use floats so that Python does not do any integer rounding.

Just the relevant parts:

interval = 1.0
start_time = time.time()

with open('test.csv', 'a', newline='') as fp:
    while cur_meas <= no_of_meas:
        # Target time for this measurement, computed from the fixed start
        # time so that per-iteration errors cannot accumulate.
        next_in = (start_time + (cur_meas - 1.0) * interval) - time.time()
        if next_in > 0.0:
            time.sleep(next_in)
        # your measuring code

This way the code calculates when the current measurement is due and sleeps until then, or continues immediately if that time has already passed.
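
For reference, a minimal end-to-end sketch of how this scheduling could be combined with the CSV writing from the question (the file name and column layout are taken from the question; a real instrument read would replace the row construction):

import csv
import datetime
import os
import time

no_of_meas = 200
cur_meas = 1
interval = 1.0
start_time = time.time()

with open('test.csv', 'a', newline='') as fp:
    writer = csv.writer(fp, delimiter='\t')
    while cur_meas <= no_of_meas:
        # Sleep until this measurement is due on the global schedule.
        next_in = (start_time + (cur_meas - 1.0) * interval) - time.time()
        if next_in > 0.0:
            time.sleep(next_in)
        cur_time = datetime.datetime.now().strftime('%H:%M:%S.%f')
        writer.writerow([cur_time, cur_meas])
        fp.flush()
        os.fsync(fp.fileno())
        cur_meas += 1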

Klaus D.
  • Thank you, this works really well. I have run the script overnight on two computers, approx. 62,000 samples, and it hasn't skipped a single second. – Thomas Gundersen Jul 21 '15 at 05:38

You need to base the delay on the actual time rather than on a per-iteration delay. The following is a possible alternative that will not drift over a long run, at the cost of fractionally more CPU.

import time

def tick(time_interval):
    next_tick = time.time() + time_interval
    while True:
        time.sleep(0.2)     # minimum delay, so late iterations catch up in 0.2 s steps
        while time.time() < next_tick:
            time.sleep(0.2)

        yield True
        next_tick += time_interval

time_interval = 1.0
no_of_meas = 200
cur_meas = 1
ticker = tick(time_interval)

while cur_meas <= no_of_meas:
    # Do measurement code
    cur_meas += 1
    next(ticker)

Individual iterations will land on the next 0.2 s boundary (or however fine you need). If an iteration is delayed (e.g. by CPU load), the generator ticks every 0.2 s until the schedule catches up and the timing is synchronised again.
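
To see the catch-up behaviour, one can deliberately stall an iteration and print when each tick fires; the 1.5 s stall below is purely illustrative and reuses the tick generator defined above:

import time

ticker = tick(1.0)          # tick() as defined above
start = time.time()
for i in range(5):
    next(ticker)
    print('tick %d at %.2f s' % (i, time.time() - start))
    if i == 1:
        time.sleep(1.5)     # simulate one slow measurement; the following
                            # ticks fire in 0.2 s steps until back on schedule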

Martin Evans
  • This didn't work for me. It could be that I mistyped something, but the timing jumped quite a lot. Sometimes I got 2-3 samples per second; other times it skipped over a second or two. – Thomas Gundersen Jul 21 '15 at 05:43
  • Can you copy/paste it again? I made a change a little while after posting. 2-3 samples will only occur if the script froze for over a second; it will then tick every 0.2 s until it catches up. – Martin Evans Jul 21 '15 at 06:52
  • It works better now. It samples every second and the accuracy seems pretty good. – Thomas Gundersen Jul 21 '15 at 07:12