This is what I have:
```python
import youtube_dl  # in case this matters

class ErrorCatchingTask(Task):
    # Request = CustomRequest
    def on_failure(self, exc, task_id, args, kwargs, einfo):
        # If I comment this out, all is well
        r = requests.post(server + "/error_status/")
        ....

@app.task(base=ErrorCatchingTask, bind=True, ignore_result=True, max_retires=1)
def process(self, param_1, param_2, param_3):
    ...
    raise IndexError
    ...
```
The worker throws the exception and then seemingly spawns a new task with a different task id: `Received task: process[{task_id}]`
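One thing worth ruling out is the HTTP call itself hanging or raising inside the hook, since a hook that blocks or throws can mask what the worker is really doing. A minimal sketch of a guarded reporter (pure Python; `safe_report` is a helper name I made up, not Celery API):

```python
def safe_report(report_error, *args, **kwargs):
    """Call the error reporter, but never let its own failure escape the hook."""
    try:
        report_error(*args, **kwargs)
        return True
    except Exception as exc:
        # A failure hook should never raise; record the problem and move on.
        print(f"error reporting failed: {exc!r}")
        return False
```

Inside `on_failure` you would then call something like `safe_report(requests.post, server + "/error_status/", timeout=5)`, so a slow or dead error endpoint cannot block the worker indefinitely.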
Here are a couple of things I've tried:
- Importing `Request` via `from celery.worker.request import Request` and overriding `on_failure` and `on_success` there instead
- Setting `app.conf.broker_transport_options = {'visibility_timeout': 99999999999}`
- Turning off `DEBUG` mode
- Setting logging to `info`
- Setting `CELERY_IGNORE_RESULT` to false (Can I use Python requests with celery?)
- `import requests as apicall` to rule out a namespace conflict
- Monkey patching `requests` (Celery + Eventlet + non blocking requests)
- Moving `ErrorCatchingTask` into a separate file
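As far as I know, a Celery retry (and a broker redelivery after the visibility timeout) keeps the same task id, while only a fresh publish gets a new uuid, so comparing the id in the first `Received task` log line with the second one is a quick triage step. A tiny pure-Python sketch (`classify_second_delivery` is an illustrative name, not a Celery function):

```python
def classify_second_delivery(first_task_id, second_task_id):
    """Rough triage: retries and broker redeliveries reuse the task id,
    so a different id suggests something published a brand-new task."""
    if first_task_id == second_task_id:
        return "retry or redelivery"
    return "new task was published"
```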
If I don't use any of the hook functions, the worker just throws the exception and stays idle until the next task is scheduled, which is what I expect to happen even when I do use the hooks. Is this a bug? I searched through and through the GitHub issues, but couldn't find the same problem. How do you debug a problem like this?
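One debugging approach is to exercise the hook logic in isolation with the HTTP call stubbed out, which confirms whether the hook itself misbehaves before involving the worker at all. A sketch using stdlib `unittest.mock` (`ErrorHook` here is a stand-in class, not the real Celery `Task` subclass):

```python
from unittest import mock

class ErrorHook:
    """Stand-in for ErrorCatchingTask with only the hook logic, no Celery."""
    def __init__(self, poster):
        self.poster = poster  # requests.post in the real code

    def on_failure(self, exc, task_id, args, kwargs, einfo):
        # A short timeout keeps a dead error server from blocking the worker.
        self.poster("https://example.invalid/error_status/", timeout=5)

fake_post = mock.Mock()
ErrorHook(fake_post).on_failure(IndexError(), "abc-123", (), {}, None)
assert fake_post.call_count == 1  # the hook reported exactly once
```

If the isolated hook behaves, the respawning is more likely coming from the worker/broker side (acknowledgement, retry, or redelivery behavior) than from your code.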
Django 1.11.16, Celery 4.2.1