I am using Fabric to deploy a Celery broker (running RabbitMQ) and multiple Celery workers with `celeryd` daemonized through `supervisor`. I cannot for the life of me figure out how to reload the `tasks.py` module short of rebooting the servers.
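For context, the update step in my fabfile looks roughly like this (a simplified sketch with illustrative names; the actual functions are in the gist linked at the end):

```python
# fabfile.py -- simplified sketch of the worker update step
from fabric.api import cd, sudo

def update_workers():
    # Pull the new code onto the worker box.
    with cd('/fab-mrv/celeryd'):
        sudo('git pull')
    # Restart the daemonized worker (one of the attempts listed below).
    sudo('supervisorctl restart celeryd')
```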
**/etc/supervisor/conf.d/celeryd.conf**

```ini
[program:celeryd]
directory=/fab-mrv/celeryd
environment=[RABBITMQ credentials here]
command=xvfb-run celeryd --loglevel=INFO --autoreload
autostart=true
autorestart=true
```
**celeryconfig.py**

```python
import os

## Broker settings
BROKER_URL = "amqp://%s:%s@hostname" % (os.environ["RMQU"], os.environ["RMQP"])

# List of modules to import when celery starts.
CELERY_IMPORTS = ("tasks", )

## Using the database to store task state and results.
CELERY_RESULT_BACKEND = "amqp"

CELERYD_POOL_RESTARTS = True
```
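For reference, `tasks.py` is an ordinary module of task definitions along these lines (a hypothetical minimal version; my real tasks are more involved):

```python
# tasks.py -- hypothetical minimal version of the module I want reloaded
from celery import Celery

celery = Celery()
celery.config_from_object('celeryconfig')

@celery.task
def add(x, y):
    return x + y
```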
**Additional information**

- `celery --version`: 3.0.19 (Chiastic Slide)
- `python --version`: 2.7.3
- `lsb_release -a`: Ubuntu 12.04.2 LTS
- `rabbitmqctl status`: ... 2.7.1 ...
Here are some things I have tried:

- The `celeryd --autoreload` flag
- `sudo supervisorctl restart celeryd`
- `celery.control.broadcast('pool_restart', arguments={'reload': True})` (full call shown below)
- `ps auxww | grep celeryd | grep -v grep | awk '{print $2}' | xargs kill -HUP`
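For the `pool_restart` broadcast, the full call I am making looks like this (a sketch; I am assuming the app picks up its configuration from `celeryconfig.py` as shown above):

```python
# Broadcast a pool_restart to all workers. Per the Celery 3.0 workers
# guide, this should restart the pools and re-import the modules in
# CELERY_IMPORTS when CELERYD_POOL_RESTARTS = True is set (as it is
# in celeryconfig.py above).
from celery import Celery

celery = Celery()
celery.config_from_object('celeryconfig')

celery.control.broadcast('pool_restart', arguments={'reload': True})
```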
And unfortunately, nothing causes the workers to reload the `tasks.py` module (e.g. after running `git pull` to update the file). The gist of the relevant fab functions is available here. The brokers/workers run fine after a reboot.