
I have celery running in a docker container processing tasks from rabbitmq. I am trying to stop and remove the celery container while allowing its currently running tasks to complete. The docs suggest that sending the TERM or INT signal to the main process should trigger a warm shutdown of celery, but I am finding that the child processes are just being killed.

When I send TERM to the running process it throws:

WorkerLostError('Worker exited prematurely: signal 15 (SIGTERM).',)

When I send INT the running process just exits with no error, but it likewise doesn't allow the tasks to finish as the docs suggest.

I am starting the docker container with the command: su -m celery_user -c "python manage.py celery worker -Q queue-name"

Any thoughts on why this might be happening? Could it be that the signal is terminating the container as well as the celery process?

I am sending the signal with: docker kill --signal="TERM" containerid or docker exec containerid kill -15 1

Andrew

1 Answer


docker kill will kill the container. What you need to do is send the signal only to the main celery process.
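For example, since PID 1 in your container is the su wrapper rather than celery itself, something along these lines should deliver TERM to the worker's master process only. This is a rough sketch: the pgrep pattern is an assumption and needs to match however the worker actually appears in your container's process list.

# Send SIGTERM to the oldest (master) process whose full command line
# matches the worker invocation; adjust the pattern for your setup.
docker exec containerid sh -c 'kill -TERM $(pgrep -of "celery worker")'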

Personally I use supervisord inside the docker container to manage the celery worker. By default supervisord will send SIGTERM to stop the process.

Here's a sample supervisor config for celery

[program:celery]
command=celery worker -A my.proj.tasks --loglevel=DEBUG -Ofair --hostname celery.host.domain.com --queues=celery
environment=PYTHONPATH=/etc/foo/celeryconfig:/bar/Source,PATH=/foo/custom/bin:/usr/kerberos/bin
user=celery-user
autostart=true
stdout_logfile=/var/log/supervisor/celery.log
redirect_stderr=true
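One thing to watch: by default supervisord only waits 10 seconds after sending SIGTERM before escalating to SIGKILL, so if your tasks run longer than that you may want to raise the timeout in the same section. The 600 below is just an illustrative value:

stopsignal=TERM
stopwaitsecs=600

Then supervisorctl stop celery performs the warm shutdown, and you can stop the container once the worker has drained its tasks.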
scytale
  • Ahh -- yes, I had it in my head that supervisord would cause the same problem. Supervisord is working as expected, thanks. – Andrew Feb 02 '16 at 17:12