
I have a unit of work that I want to run every N seconds. If I use the simplistic approach

import time

minute = 60
while True:
    doSomeWork()
    time.sleep(minute)

then, depending on how long doSomeWork() takes, the real loop period will be one minute plus that time. If the time doSomeWork() takes is not deterministic, the period becomes even more unpredictable.
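(To make the drift concrete, here's a throwaway sketch with a shortened period and a fake doSomeWork(); the printed period comes out as the sleep plus the work, never just the sleep:)

import time

period = 1.0                 # shortened from a minute so the drift shows up quickly
last = time.time()
while True:
    time.sleep(0.1)          # stand-in for doSomeWork()
    time.sleep(period)       # the naive fixed sleep
    now = time.time()
    print("actual period: %.3f s" % (now - last))   # prints roughly 1.1 s, not 1.0 s
    last = now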

What I'd like to do is something like this:

import time

minute = 60
start = time.process_time()  # ? I can imagine using this, but maybe there's something better?
while True:
    doSomeWork()
    start += minute
    sleep_until(start)  # ? this is the function I'm in search of

(I'm using Python 3.3.)

Update:

On Linux/OS X, I can use an interval timer from the signal module to do what I'm looking for:

import datetime
import signal

def tick(_, __):
    # doSomeWork()
    print(datetime.datetime.now())

# Install the handler before arming the timer; fire after 60 seconds, then every 60 seconds.
signal.signal(signal.SIGALRM, tick)
signal.setitimer(signal.ITIMER_REAL, 60, 60)

while True:
    signal.pause()

It looks like the tulip work being developed for Python 3.4 (what became asyncio) will also make this easy to do.
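Roughly, I imagine it would look something like this (untested; written against the asyncio API that tulip turned into, where loop.call_at() schedules against the loop's own clock, so the period shouldn't drift):

import asyncio
import datetime

minute = 60

def tick(loop, when):
    # doSomeWork()
    print(datetime.datetime.now())
    when += minute
    loop.call_at(when, tick, loop, when)   # reschedule at an absolute loop time

loop = asyncio.get_event_loop()
first = loop.time() + minute
loop.call_at(first, tick, loop, first)
loop.run_forever()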

Travis Griggs

1 Answer


sleep_until(timestamp) is basically time.sleep(timestamp - time.time()).
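If you want that as a named helper, a minimal sleep_until might look like this (sleep_until is just the name from your question, not something in the standard library):

import time

def sleep_until(timestamp):
    # Sleep until the given absolute time.time() timestamp; do nothing if it has already passed.
    delay = timestamp - time.time()
    if delay > 0:
        time.sleep(delay)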

Your code is actually fine (though making sure you don't pass negative times to sleep is still a good idea):

import time

minute = 60
next_time = time.time()
while True:
    doSomeWork()
    next_time += minute
    sleep_time = next_time - time.time()
    if sleep_time > 0:        # skip the sleep entirely if doSomeWork() overran the slot
        time.sleep(sleep_time)


I personally would make a generator of 60-second-spaced timestamps and use it:

import time
import itertools

minute = 60

for next_time in itertools.count(time.time() + minute, minute):
    doSomeWork()
    sleep_time = next_time - time.time()
    if sleep_time > 0:        # again, don't sleep if we're already past next_time
        time.sleep(sleep_time)
Pavel Anossov
  • Won't this still introduce some skew (albeit less)? The expression that computes the time burnt up already, plus the time it takes to queue the remaining sleep, will burn up some (small but) unaccounted for time. – Travis Griggs Mar 17 '13 at 00:09
  • How much accuracy do you need? `time.time() - start` takes 100 ns on my machine. You'll have to doSomeWork 10 million times for a second to accumulate. – Pavel Anossov Mar 17 '13 at 00:13
  • And if you're doing this over and over again, you could even account for the amount of time that `time.time` takes (on average) by using `timeit`. – mgilson Mar 17 '13 at 00:14
  • @PavelAnossov I guess it's the principle of the thing. I've done that idiom in many other environments (languages/libraries). Once you can do that, you don't ever have to care about how long that part takes and do the math of how much matters. – Travis Griggs Mar 17 '13 at 00:16
  • I'm suspecting by the comments here that maybe python just doesn't have this functionality, which surprises me I guess. – Travis Griggs Mar 17 '13 at 00:17
  • Updated my answer with another way to do it. Could you provide examples of other languages that have it built in? – Pavel Anossov Mar 17 '13 at 00:19
  • Very nice. I'd keep just the second version. – user4815162342 Mar 17 '13 at 00:21
  • 3
    You probably want to skip `sleep` if the next_schedule is before the current time - which will happen if doSomeWork takes over a minute. And since you're using an `itertool`, you can use `for` instead of `while`: `for next_time in scnedule: if next_time >= time.time(): sleep(next_time - time.time()) ...` – user4815162342 Mar 17 '13 at 00:26
  • Very true, but two calls to `time` bother me (still a slim possibility of an `IOError` with negative `sleep` argument). – Pavel Anossov Mar 17 '13 at 00:29
  • What happens if the user puts his laptop to sleep and doSomeWork() appears to take 3 hours (180 minutes)? Do you want your application to call doSomeWork() 180 times without sleeping to make up for the lost time, or do you want the backlog of "missed" calls to have a maximum length? – picomancer Mar 17 '13 at 06:14
  • I think I missed an update? The code you have there is exactly what I want now. Or maybe I paid too much attention to the generator version. The version where you maintain and increment next_time is perfect. And while it may have a 100 ns bias, it will be regular. Thanks! – Travis Griggs Mar 17 '13 at 18:43
  • I just swapped the generator and non-generator versions :) – Pavel Anossov Mar 17 '13 at 22:57