Is there a way to modify the retry delay for Celery tasks at runtime, or a global config value that can be changed to override the 180s default?
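For context, the 180s seems to come from default_retry_delay on Celery's base Task class (it defaults to 3 * 60 seconds and is used whenever retry() is called without a countdown or eta). A minimal sketch of overriding it on a task base class, assuming that attribute is in fact what is kicking in here (TestFriendlyTask is just a name for illustration):

from celery import Task

class TestFriendlyTask(Task):
    # Celery's built-in default is 3 * 60 = 180 seconds; retry() falls
    # back to it when no countdown/eta reaches it. Assumption: this is
    # the value my tasks are hitting.
    default_retry_delay = 1  # seconds

But I'd rather not hard-code a test-friendly value in production code.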
I have set up tasks with exponential back-off (as described here: Retry Celery tasks with exponential back off), but I want to override the delay when running integration tests.
The reason is that I often end up hitting the 180s default when an exception is raised inside the exception handler, which seems to bypass and ignore the countdown argument.
import logging
import random

from celery import Celery

celery = Celery(__name__)  # app instance; broker config omitted here
logger = logging.getLogger(__name__)


class BaseTask(celery.Task):

    def on_retry(self, exc, task_id, args, kwargs, einfo):
        """Log the exception at each retry."""
        logger.exception(exc)
        logger.warning('Retry: {}.'.format(self.request))
        super().on_retry(exc, task_id, args, kwargs, einfo)

    def on_failure(self, exc, task_id, args, kwargs, einfo):
        """Log the exception on failure."""
        logger.exception(exc)
        logger.error('Failure: {}.'.format(self.request))
        super().on_failure(exc, task_id, args, kwargs, einfo)

    @property
    def backoff_countdown(self):
        # Exponential back-off with jitter: a 2-4s base raised to the retry count.
        return int(random.uniform(2, 4) ** self.request.retries)


@celery.task(bind=True, base=BaseTask)
def process(self, data):
    try:
        return some_task(data)  # the actual work, defined elsewhere
    except Exception as exc:
        raise self.retry(exc=exc, countdown=self.backoff_countdown)
Regardless of what I set for self.backoff_countdown (even just returning 1), I end up with tasks being retried in 180s, which makes it really hard to run integration tests with reasonable timeouts.
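Ideally I'd have an escape hatch along these lines, so integration tests can collapse the back-off. This is only a sketch of what I'm after; the CELERY_TEST_RETRY_DELAY environment variable is a name I made up, not a real Celery setting:

import os
import random

from celery import Task

class BaseTask(Task):

    @property
    def backoff_countdown(self):
        # Hypothetical test override: CELERY_TEST_RETRY_DELAY is a
        # made-up variable, read here so tests can shrink the delay.
        override = os.environ.get('CELERY_TEST_RETRY_DELAY')
        if override is not None:
            return int(override)
        # Normal path: exponential back-off with jitter.
        return int(random.uniform(2, 4) ** self.request.retries)

Is there a built-in way to do this, or a setting I'm missing?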