
Complete newbie here, so bear with me. I've got a number of devices that report status updates to a single location. As more sites have been added, drift with time.sleep(x) has become more noticeable, and with as many sites as are connected now it has completely doubled the sleep time between iterations.

import concurrent.futures
import os
import time

import pandas


...
def client_list():
    sites = pandas.read_csv('sites')
    return sites['Site']


def logs(site):
    time.sleep(x)
    stamp = time.strftime('%Y-%m-%d,%H:%M:%S')
    hit_path = os.path.join(f'{site}/target/', 'hit')
    result = 'hit' if os.path.isfile(hit_path) else 'miss'
    with open(f"{site}/log", 'a') as log:
        log.write(f",{stamp},{site},{result}\n")
    if result == 'hit':
        os.remove(hit_path)
...


if __name__ == '__main__':
    while True:
        try:
            with concurrent.futures.ThreadPoolExecutor() as executor:
                executor.map(logs, client_list())
...

I did try adding calculations for drift with this:

import time
from datetime import datetime, timedelta


def logs(site):
    first_called = datetime.now()
    num_calls = 1
    drift = timedelta()
    time_period = timedelta(seconds=5)
    while True:
        time.sleep(5 - drift.microseconds / 1000000.0)
        current_time = datetime.now()
        num_calls += 1
        difference = current_time - first_called
        drift = difference - time_period * num_calls
        if os.path.isfile(os.path.join(f'{site}/target/', 'hit')):
...

It ends up with duplicate entries in the log, and the process still drifts. Is there a better way to schedule the function to run every x seconds and account for the drift in start times?

pippo1980
2 Answers


Create a variable equal to the desired system time at the next interval. Increment that variable by 5 seconds each time through the loop. Calculate the sleep time so that the sleep will end at the desired time. The timings will not be perfect because sleep intervals are not super precise, but errors will not accumulate. Your logs function will look something like this:

def logs(site):
    next_time = time.time() + 5.0
    while True:
        # Sleep until the next deadline; clamp at zero in case
        # the work overran the interval.
        time.sleep(max(0.0, next_time - time.time()))
        next_time += 5.0
        if os.path.isfile(os.path.join(f'{site}/target/', 'hit')):
            # do something that takes a while
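As a quick sanity check, the deadline-based pattern can be demonstrated in a self-contained sketch (names like run_at_fixed_interval, task, and interval are mine for illustration, not from the original post; the interval is shortened so it finishes quickly). Each sleep targets an absolute deadline, so an overshoot in one iteration is absorbed by the next rather than accumulating:

```python
import time

def run_at_fixed_interval(task, interval, iterations):
    # Illustrative sketch of the deadline-based loop described above.
    # `task`, `interval`, and `iterations` are hypothetical names.
    next_time = time.time() + interval
    ticks = []
    for _ in range(iterations):
        # Sleep until the absolute deadline; clamp at zero in case
        # the previous task ran longer than the interval.
        time.sleep(max(0.0, next_time - time.time()))
        ticks.append(time.time())
        task()
        next_time += interval
    return ticks

# Run a no-op task 5 times at a 0.1-second interval.
ticks = run_at_fixed_interval(lambda: None, 0.1, 5)
```

The span between the first and last tick stays close to the scheduled total even when individual sleeps wake late, which is the property the answer describes.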
Paul Cornelius

So I managed to find another route that doesn't drift; the other method still drifted over time. By capturing the current time and checking whether it is divisible by x (5 in the example below), I was able to keep the time from deviating.

def timer(t1, t2):
    return t1 % t2 == 0

def logs(site):
    while True:
        try:
            if timer(round(time.time(), 0), 5.0):
                if os.path.isfile(os.path.join(f'{site}/target/', 'hit')):
                    # do something that takes a while
                    time.sleep(1)  # keeps it from running again immediately
                                   # if the work takes less than 1 second
...
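The modulo check itself can be exercised in isolation. This is a minimal sketch (the helper name wait_for_boundary and its timeout parameter are mine, not from the answer) showing that polling round(time.time()) % period does land on a period boundary; note that, unlike the deadline approach, this polls the clock repeatedly, which is why the one-second guard sleep above is needed to avoid duplicate entries within the same boundary second:

```python
import time

def wait_for_boundary(period, timeout=10.0):
    # Hypothetical helper: poll the clock until the rounded timestamp
    # is a multiple of `period` seconds, or give up after `timeout`.
    deadline = time.time() + timeout
    while time.time() < deadline:
        now = time.time()
        if round(now) % period == 0:
            return now
        time.sleep(0.05)  # brief pause so the loop doesn't spin the CPU
    return None

t = wait_for_boundary(2)
```

With period 2, a matching second arrives within a couple of seconds at most, so the call returns well before the timeout.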