
I'm trying to make a worker run only one task at a time and then shut down. I've got the shutdown part working correctly (some background here: celery trying shutdown worker by raising SystemExit in task_postrun signal but always hangs and the main process never exits), but when it shuts down I'm getting an error:

[2013-02-13 12:19:05,689: CRITICAL/MainProcess] Couldn't ack 1, reason:AttributeError("'NoneType' object has no attribute 'method_writer'",)
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/kombu/transport/base.py", line 104, in ack_log_error
    self.ack()
  File "/usr/local/lib/python2.7/site-packages/kombu/transport/base.py", line 99, in ack
    self.channel.basic_ack(self.delivery_tag)
  File "/usr/local/lib/python2.7/site-packages/amqplib/client_0_8/channel.py", line 1742, in basic_ack
    self._send_method((60, 80), args)
  File "/usr/local/lib/python2.7/site-packages/amqplib/client_0_8/abstract_channel.py", line 75, in _send_method
    self.connection.method_writer.write_method(self.channel_id,
AttributeError: 'NoneType' object has no attribute 'method_writer'

Why is this happening? Not only does the task not get acked, but all of the other tasks left in the queue are purged as well (a big problem).

How do I fix this?
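For reference, the shutdown hook is essentially this (a sketch; the exact handler is in the linked question):

from celery.signals import task_postrun

@task_postrun.connect
def shutdown_worker(**kwargs):
    # Raising SystemExit here is what triggers the failed ack above.
    raise SystemExit()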

UPDATE

Below is the stack trace with everything updated (pip install -U kombu amqp amqplib celery):

[2013-02-13 11:58:05,357: CRITICAL/MainProcess] Internal error: AttributeError("'NoneType' object has no attribute 'method_writer'",)
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/celery/worker/__init__.py", line 372, in process_task
    req.execute_using_pool(self.pool)
  File "/usr/local/lib/python2.7/dist-packages/celery/worker/job.py", line 219, in execute_using_pool
    timeout=task.time_limit)
  File "/usr/local/lib/python2.7/dist-packages/celery/concurrency/base.py", line 137, in apply_async
    **options)
  File "/usr/local/lib/python2.7/dist-packages/celery/concurrency/base.py", line 27, in apply_target
    callback(target(*args, **kwargs))
  File "/usr/local/lib/python2.7/dist-packages/celery/worker/job.py", line 333, in on_success
    self.acknowledge()
  File "/usr/local/lib/python2.7/dist-packages/celery/worker/job.py", line 439, in acknowledge
    self.on_ack(logger, self.connection_errors)
  File "/usr/local/lib/python2.7/dist-packages/kombu/transport/base.py", line 98, in ack_log_error
    self.ack()
  File "/usr/local/lib/python2.7/dist-packages/kombu/transport/base.py", line 93, in ack
    self.channel.basic_ack(self.delivery_tag)
  File "/usr/local/lib/python2.7/dist-packages/amqp/channel.py", line 1562, in basic_ack
    self._send_method((60, 80), args)
  File "/usr/local/lib/python2.7/dist-packages/amqp/abstract_channel.py", line 57, in _send_method
    self.connection.method_writer.write_method(
AttributeError: 'NoneType' object has no attribute 'method_writer'
  • I upgraded every related Python lib I could think of and still came up with an error (slightly different stack trace, though). See updated question. – Darryl Lickt Feb 13 '13 at 18:00

1 Answer


Exiting in task_postrun is not recommended as task_postrun is executed outside of the "task body" error handling.

Exactly what happens when a task calls sys.exit is not well defined; it actually depends on the pool being used.

With multiprocessing the child process will simply be replaced by a new one. In other pools the worker will shut down, but this is likely to change so that it's consistent with the multiprocessing behavior.

Calling exit outside of the task body is regarded as an internal error (crash).

The "task body" is whatever executes at task.__call__()

I think a better solution here would be to use a custom execution strategy:

from celery.worker import strategy
from functools import wraps

# staticmethod: the strategy is stored as an attribute on the Task
# class, so this keeps it from being bound as a method.
@staticmethod
def shutdown_after_strategy(task, app, consumer):
    # Build the default execution strategy, then wrap it so a
    # SystemExit is raised once the incoming request has been handled.
    default_handler = strategy.default(task, app, consumer)

    def _decorate_to_exit_after(fun):
        @wraps(fun)
        def _inner(*args, **kwargs):
            try:
                return fun(*args, **kwargs)
            finally:
                raise SystemExit()
        return _inner
    return _decorate_to_exit_after(default_handler)

@celery.task(Strategy=shutdown_after_strategy)
def shutdown_after():
    print('will shutdown after this')

This isn't exactly beautiful, but the execution strategy is there to optimize task execution, not to be easily extendable (the worker "precompiles" the execution path for each task type by caching Task.Strategy).

In Celery 3.1 you can extend the worker and consumer using "bootsteps", so there will likely be a prettier solution then.
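In the meantime, a sketch of a workaround that avoids raising SystemExit altogether (not from the code above; it assumes a single-worker setup and uses the broadcast "shutdown" control command): request a warm shutdown from inside the task body, so the worker finishes and acks the current task before exiting.

from celery import Celery

celery = Celery('app', broker='amqp://guest@localhost//')  # broker URL is an assumption

@celery.task
def run_once_then_shutdown():
    print('doing the work')
    # Ask the worker to warm-shutdown: it stops consuming, finishes
    # (and acks) this task, then exits. Note: without a destination
    # argument this broadcasts the shutdown to every worker.
    celery.control.broadcast('shutdown')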

– asksol