I am attempting to test misfired tasks with APScheduler, but I am not seeing the missed tasks run when I restart APScheduler. I have configured APScheduler as follows:
scheduler.py

```python
from apscheduler.jobstores.sqlalchemy import SQLAlchemyJobStore
from apscheduler.schedulers.blocking import BlockingScheduler
from decouple import config  # assuming python-decouple for reading DATABASE_URL

from tasks import test_task  # the periodic task being scheduled


def configure_scheduler():
    jobstores = {
        'default': SQLAlchemyJobStore(url=config('DATABASE_URL'))
    }
    sched = BlockingScheduler()
    sched.configure(jobstores=jobstores)
    sched.add_job(
        test_task,
        'interval',  # the trigger is positional and must precede the keyword arguments
        id='test_task',
        hours=1,
        coalesce=True,
        max_instances=1,
        misfire_grace_time=360,
        replace_existing=True
    )
    return sched


if __name__ == '__main__':
    scheduler = configure_scheduler()
    scheduler.start()
```
When I start the scheduler for the first time, `test_task` is added to the `apscheduler_jobs` table in my Postgres database with a `next_run_time` of one hour from when I start the scheduler. I then attempt to test a misfire by:
- Changing `next_run_time` in my database to the current time
- Waiting 15 seconds (well within the 360-second `misfire_grace_time`)
- Starting the scheduler
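For reference, the manual update in the first step looks roughly like this (assuming the default `apscheduler_jobs` schema, where `next_run_time` is stored as a UTC epoch timestamp in a double-precision column):

```sql
-- Set the stored next_run_time to "now" so the job is already due on restart
UPDATE apscheduler_jobs
SET next_run_time = EXTRACT(EPOCH FROM now())
WHERE id = 'test_task';
```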
When I follow this procedure, the missed job does not run; instead, `next_run_time` is again set to an hour from the current time. The update appears to happen in the `update_job` method of the SQLAlchemy job store. I have seen one similar question about persistent-job-store tasks not running after a misfire, and the solution to most other questions I have found is to pass the `misfire_grace_time` argument to `add_job`. I have tried this, per my configuration above, but have had no luck running missed jobs on scheduler startup. Am I missing something about how the `replace_existing` and `misfire_grace_time` arguments interact? Do I need to manually check whether the `next_run_time` of any job is in the past, and run those jobs myself before starting the scheduler?
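To make that last question concrete, the manual pre-start check I have in mind would be something along these lines (an untested sketch; `find_missed_jobs` is a hypothetical helper, and the job objects only need a `next_run_time` attribute, as on the objects returned by `scheduler.get_jobs()`):

```python
from datetime import datetime, timezone


def find_missed_jobs(jobs, now=None):
    """Return the jobs whose stored next_run_time is already in the past.

    `jobs` can be any iterable of objects exposing a `next_run_time`
    attribute (a timezone-aware datetime, or None if unscheduled).
    """
    now = now or datetime.now(timezone.utc)
    return [job for job in jobs if job.next_run_time and job.next_run_time < now]
```

I would then run each missed job's function once before calling `scheduler.start()`, but this feels like reimplementing what `misfire_grace_time` is supposed to do.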
I am using v3.6.1 of the APScheduler library.
For additional context, I will be deploying the scheduler on Heroku, and I am attempting to work around Heroku's automatic dyno cycling, which occurs at least once per day.