30

If I make a change to tasks.py while Celery is running, is there a mechanism by which it can reload the updated code? Or do I have to shut Celery down and reload it?

I read that Celery had an --autoreload argument in older versions, but I can't find it in the current version:

celery: error: unrecognized arguments: --autoreload

JasonGenX

4 Answers

57

Unfortunately --autoreload doesn't work anymore; it was deprecated and has since been removed.

You can use Watchdog, which provides watchmedo, a shell utility that performs actions based on file events.

pip install watchdog

You can start the worker with

watchmedo auto-restart -- celery worker -l info -A foo

By default it watches all files in the current directory. This can be changed by passing the corresponding parameters.

watchmedo auto-restart -d . -p '*.py' -- celery worker -l info -A foo

Add the -R option to watch files recursively, as in the example below.
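For example, combining the options above (still using the example app foo):

watchmedo auto-restart -d . -R -p '*.py' -- celery worker -l info -A foo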

If you are using Django and don't want to depend on watchdog, there is a simple trick to achieve this. Django has an autoreload utility which runserver uses to restart the WSGI server when code changes.

The same functionality can be used to reload Celery workers. Create a separate management command called celery (for example in yourapp/management/commands/celery.py). Write a function that kills the existing worker and starts a new one, then hook this function into autoreload as follows. For Django >= 2.2:

import shlex
import subprocess
import sys

from django.core.management.base import BaseCommand
from django.utils import autoreload


class Command(BaseCommand):
    def handle(self, *args, **options):
        # run _restart_celery under Django's autoreloader (Django >= 2.2 API)
        autoreload.run_with_reloader(self._restart_celery)

    @classmethod
    def _restart_celery(cls):
        # kill any running worker, then start a fresh one ('foo' is the example project name)
        if sys.platform == "win32":
            cls.run('taskkill /f /t /im celery.exe')
            cls.run('celery -A foo worker --loglevel=info --pool=solo')
        else:  # probably ok for linux2, cygwin and darwin. Not sure about os2, os2emx, riscos and atheos
            cls.run('pkill celery')
            cls.run('celery worker -l info -A foo')

    @staticmethod
    def run(cmd):
        subprocess.call(shlex.split(cmd))

For Django < 2.2:

import shlex
import subprocess
import sys

from django.core.management.base import BaseCommand
from django.utils import autoreload


class Command(BaseCommand):
    def handle(self, *args, **options):
        # run _restart_celery under Django's autoreloader (Django < 2.2 API)
        autoreload.main(self._restart_celery)

    @classmethod
    def _restart_celery(cls):
        # kill any running worker, then start a fresh one ('foo' is the example project name)
        if sys.platform == "win32":
            cls.run('taskkill /f /t /im celery.exe')
            cls.run('celery -A foo worker --loglevel=info --pool=solo')
        else:  # probably ok for linux2, cygwin and darwin. Not sure about os2, os2emx, riscos and atheos
            cls.run('pkill celery')
            cls.run('celery worker -l info -A foo')

    @staticmethod
    def run(cmd):
        subprocess.call(shlex.split(cmd))

Now you can run the Celery worker with python manage.py celery, and it will autoreload when the codebase changes.

This is only for development purposes; do not use it in production.

Chillar Anand
  • would this work with or how could this work with a celery worker on a separate docker container? – Ryan Skene Mar 30 '18 at 20:38
  • If your directory is mounted on docker, so that any changes on host are reflected on docker, then it should work. – Chillar Anand Mar 31 '18 at 04:54
  • What about shutting down gracefully? I want pending tasks to finish. – Boris Verkhovskiy Feb 13 '20 at 21:17
  • I think Celery handles KeyboardInterrupt and waits for pending tasks to finish. Since this is for a development environment I thought losing tasks is okay. Need to check the celery source code and see how to handle the same in management command. @Boris – Chillar Anand Mar 20 '20 at 14:11
  • Should add that `watchmedo auto-restart` doesn't do much without `-R` flag – Suor Apr 15 '20 at 08:07
2

You could try sending SIGHUP to the parent worker process; it restarts the worker, but I'm not sure if it picks up new tasks. Worth a shot, though :)
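For example (a sketch, assuming the worker was started with --pidfile so the parent PID is known):

celery worker -l info -A foo --pidfile=/tmp/celery.pid
kill -HUP "$(cat /tmp/celery.pid)"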

ACimander
2

FYI, for anyone using Docker, I couldn't find an easy way to make the above options work, but I found (along with others) another little script here which does use watchdog and works perfectly.

Save it as a some_name.py file in your main directory, add psutil and watchdog to requirements.txt (or pip install them), update the path/cmdline variables at the top, then in the worker container of your docker-compose.yml insert:

command: python ./some_name.py
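Since the script itself sits behind the link, here is a minimal sketch of what such a watchdog/psutil restart script might look like; the path, patterns, and cmdline values below are assumptions to adapt to your project:

"""some_name.py: restart the Celery worker whenever a watched file changes."""
import subprocess
import time

import psutil
from watchdog.events import PatternMatchingEventHandler
from watchdog.observers import Observer

# Adjust these to your project; they are only examples.
path = "."
patterns = ["*.py"]
cmdline = "celery worker -l info -A foo"


def start_worker():
    # start the worker as a child process
    return subprocess.Popen(cmdline.split())


def stop_worker(pid):
    # terminate the worker together with all of its child processes
    parent = psutil.Process(pid)
    for child in parent.children(recursive=True):
        child.terminate()
    parent.terminate()


class RestartHandler(PatternMatchingEventHandler):
    def __init__(self):
        super().__init__(patterns=patterns)
        self.worker = start_worker()

    def on_any_event(self, event):
        # any change to a matching file: kill the old worker, start a new one
        stop_worker(self.worker.pid)
        self.worker = start_worker()


if __name__ == "__main__":
    handler = RestartHandler()
    observer = Observer()
    observer.schedule(handler, path, recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        stop_worker(handler.worker.pid)
        observer.stop()
    observer.join()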
Ryan Skene
  • [Here](https://gist.github.com/jsheedy/fda57e82c27f612d9aa875d9d869003f) is a gist with a Dockerfile and docker-compose.yml which starts up an autoreloading celery worker using the above options – Joseph Sheedy Jul 02 '18 at 19:19
0

Watchmedo doesn't work for me inside a Docker container.

This is the way I made it work with Django:

# worker_dev.py (put it next to manage.py)
from django.utils import autoreload


def run_celery():
    # the Celery app defined in projectname/celery.py ('projectname' is a placeholder)
    from projectname import celery_app

    # equivalent to: celery worker -A projectname -l info -P solo
    celery_app.worker_main(["-Aprojectname", "-linfo", "-Psolo"])


print("Starting celery worker with autoreload...")
# run_celery() is restarted whenever the autoreloader detects a code change
autoreload.run_with_reloader(run_celery)

Then run python worker_dev.py or set it as your Dockerfile CMD or docker-compose command.

Suor
  • I think django's autoreload is made for restarting the development server. So, it **may not** be efficient to reload celery with autoreload – Mohammed Shareef C Feb 02 '21 at 07:34