34

I have set up Celery to work with my Django application using their daemonization instructions (http://docs.celeryproject.org/en/latest/tutorials/daemonizing.html#daemonizing).

Here is my test task:

from datetime import timedelta

from celery.task import periodic_task
from celery.utils.log import get_task_logger

logger = get_task_logger(__name__)

@periodic_task(run_every=timedelta(seconds=10))
def debugger():
    logger.info("Running debugger")
    raise Exception('Failed')

I need a way of knowing that this task (debugger) failed due to the exception. Celery's log file shows the logger.info("Running debugger") line, but it does not log the exception. Am I missing something, or am I supposed to find failed tasks some other way?

zimkies
  • What do you want from Celery? It can't crash like a desktop app. You could use two easy approaches: 1. use a result backend and mark the task as failed, or 2. wrap your task code in try/except. – Rustem Jun 18 '13 at 21:23
  • @Rustem I'd like Celery to catch exceptions and write them to a log file instead of apparently swallowing them... – Dan Passaro Sep 18 '14 at 19:23
  • I had the same problem. – Jamil Noyda Oct 17 '18 at 11:08

5 Answers

22

The question:

I'd like Celery to catch exceptions and write them to a log file instead of apparently swallowing them...

The current top answer here is so-so as a professional solution. Many Python developers consider blanket, case-by-case error catching a red flag. A reasonable aversion to it was well-articulated in a comment:

Hang on, I'd expect there to be something logged in the worker log, at the very least, for every task that fails...

Celery does catch the exception; it just isn't doing what the OP wanted it to do with it (it stores it in the result backend, where it can be inspected, as the sketch below shows).
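For instance, here is a minimal sketch of pulling a stored failure back out of the result backend (assuming one is configured; the myapp.tasks import path is hypothetical):

from myapp.tasks import debugger  # hypothetical import path

result = debugger.delay()
try:
    # get() re-raises the exception that was stored in the result backend
    result.get(timeout=30)
except Exception as exc:
    print('Task failed with: %r' % exc)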

The following gist is the best the internet has to offer on this problem. It's a little dated, but note the number of forks and stars:

https://gist.github.com/darklow/c70a8d1147f05be877c3

The gist takes the failure case and does something custom with it, which is a superset of the OP's problem. Here is how to adjust the gist's solution to log the exception:

import logging

from celery import Task

logger = logging.getLogger('your.desired.logger')


class LogErrorsTask(Task):
    def on_failure(self, exc, task_id, args, kwargs, einfo):
        # pass the exception so the full traceback is included in the log
        logger.exception('Celery task failure!', exc_info=exc)
        super(LogErrorsTask, self).on_failure(exc, task_id, args, kwargs, einfo)

You will still need to make sure all your tasks inherit from this task class. The gist shows how to do this if you're using the @task decorator (with the base=LogErrorsTask kwarg); a minimal sketch follows below.

The benefit of this solution is that it doesn't nest your code in any additional try/except blocks; it piggybacks on the failure code path Celery already uses.
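A minimal sketch of the wiring, assuming an existing Celery app instance named app:

@app.task(base=LogErrorsTask)
def fail_task():
    raise ValueError('this exception will be logged by LogErrorsTask.on_failure')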

AlanSE
8

You can look at the Celery User Guide:

from celery.utils.log import get_task_logger

logger = get_task_logger(__name__)

@app.task  # `app` is your Celery application instance
def div():
    try:
        1 / 0
    except ZeroDivisionError:
        # logger.exception logs at ERROR level and appends the traceback
        logger.exception("Task error")

From the documentation for the Python logging module:

Logger.exception(msg, *args)

Logs a message with level ERROR on this logger. The arguments are interpreted as for debug(). Exception info is added to the logging message. This method should only be called from an exception handler.

Max Kamenkov
8

To receive all unhandled exceptions from Celery tasks, I registered a signal handler. In it, I format a logging.error message, which the default Python logging configuration can then handle.

Here is the relevant part:

import logging
from traceback import format_tb

from celery import signals

log = logging.getLogger(__name__)

@signals.task_retry.connect
@signals.task_failure.connect
@signals.task_revoked.connect
def on_task_failure(**kwargs):
    """Log task retries, failures and revocations."""
    # celery exceptions will not be published to `sys.excepthook`,
    # therefore we have to create another handler here.
    # format_tb(None) returns [], so a missing traceback is harmless.
    log.error('[task:%s:%s]\n%s\n%s',
              kwargs.get('task_id'),
              kwargs['sender'].request.correlation_id,
              ''.join(format_tb(kwargs.get('traceback'))),
              kwargs.get('exception', ''))

Note that this signal handler works for all tasks automatically, i.e. it does not require changing your task decorators.
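One caveat: the module defining the handler has to be imported when the worker starts, or the connect decorators never run. A minimal sketch, assuming a hypothetical myproject/signal_handlers.py module and a standard celery.py app module:

# myproject/celery.py (hypothetical layout)
from celery import Celery

app = Celery('myproject')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()

# importing the module registers the signal handlers as a side effect
import myproject.signal_handlers  # noqa: F401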

pansen
  • This works well, but it seems to log duplicate tracebacks in the logs, anywhere from two to four. Anyone else have this issue? – Banjer Jul 09 '21 at 17:17
1

Use the traceback module to capture the traceback as a string and send it to the logger:

import traceback

try:
    ...
except Exception:
    # format_exc() returns the current traceback as a string
    logger.info(traceback.format_exc())
joshua
  • FWIW, the Python logger can include the traceback at any level; all you have to do is add exc_info=1 to the call, e.g. logger.info('something failed b/c of the other thing', exc_info=1) – user2399268 Jul 11 '14 at 15:24
1

You may also override the Celery app class to avoid adding the base kwarg to each @app.task decorator:

import logging
from celery import Celery, Task

logger = logging.getLogger(__name__)

class LoggingTask(Task):
    def on_failure(self, exc, task_id, args, kwargs, einfo):
        logger.exception('Task failed: %s' % exc, exc_info=exc)
        super(LoggingTask, self).on_failure(exc, task_id, args, kwargs, einfo)

class LoggingCelery(Celery):
    def task(self, *args, **kwargs):
        # default the task base class to LoggingTask unless one is passed
        kwargs.setdefault('base', LoggingTask)
        return super(LoggingCelery, self).task(*args, **kwargs)

app = LoggingCelery(__name__)
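With this in place, every task defined through the app picks up LoggingTask automatically; a quick sketch:

@app.task
def fail():
    # LoggingTask.on_failure will log this exception
    raise ValueError('boom')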
Jakub QB Dorňák