
I am getting this issue when rebuilding and restarting a cookiecutter-django docker-compose setup in production. I can work around it either by removing all stopped Docker containers or by adding `rm -f './celerybeat.pid'` to /compose/production/django/celery/beat/start.sh, similar to what /compose/local/django/celery/beat/start.sh already does. Is there any reason this line is not included in the production version of the compose setup?
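For reference, the workaround mirrors what the local start script already does: remove any stale pidfile before launching beat. A minimal sketch (the commented celery invocation uses cookiecutter-django's default app path, which may differ in your version):

```shell
#!/bin/sh
set -o errexit

# Simulate a stale pidfile left behind by a previous container run
touch ./celerybeat.pid

# The fix: remove it before starting beat, mirroring the local start script
rm -f './celerybeat.pid'

# Then start beat as the script normally would (app path is an assumption;
# adjust for your project):
# celery -A config.celery_app beat -l INFO

[ ! -f ./celerybeat.pid ] && echo "stale pidfile removed"
```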

Binoy Mathew
  • Hi. I'm facing the same issue. Did you figure out another way to do this? Or did you just stick with `rm ...` ? – Shadi Jun 27 '18 at 07:53

5 Answers


If you can live without a separate beat process, celery can handle periodic tasks from the worker itself if you pass the -B flag. When you do this, no .pid file is generated; a celerybeat-schedule file is generated instead, and celery won't complain about reusing it when you rerun it. As far as source control goes, just add that file to your .gitignore.

Here's the command in full form:

celery -A <appname> worker -l info -BE

mustang
    Embedded Beat Options: -B, --beat — also run the celery beat periodic task scheduler. Please note that there __must only be one instance of this service.__ Note: __-B is meant to be used for development purposes.__ For a production environment, you need to start celery beat separately. From `celery worker --help` – Jay Lim Apr 16 '19 at 09:54

Please, take a look here:

Disable pidfile for celerybeat

You can specify the pidfile option with an empty value; celery then does not write a pidfile at all, so there is nothing stale to collide with when it restarts:

--pidfile=
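For example, with a placeholder project name, the full beat command becomes:

```sh
celery -A yourproject beat -l info --pidfile=
```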
Artur Drożdżyk

An earlier post showed how to fix this issue by setting the PID file to an empty value in the run command, but the solution was incomplete and took a bit of trial and error to get working on my production system. So I figured I'd post a docker-compose file with a beats service whose command disables the pidfile entirely, so no stale celerybeat.pid is left behind between starts.

As a note I am using django-celery-beat: https://pypi.org/project/django-celery-beat/

version: '3'

services:

  redis:
    image: redis
    restart: unless-stopped
    ports:
      - "6379"

  beats:
    build: .
    user: user1
    # note the --pidfile= in this command
    command: celery --pidfile= -A YOURPROJECT beat -l info --scheduler django_celery_beat.schedulers:DatabaseScheduler
    env_file: ./.env.prod
    restart: unless-stopped
    volumes:
      - .:/code
      - tmp:/tmp
    links:
      - redis
    depends_on:
      - redis

volumes:
  tmp:

Doing this, I no longer get the `ERROR: Pidfile (celerybeat.pid) already exists` error, and I do not have to run an `rm` command.

ViaTech

You can use `celery worker --pidfile=/path/to/celeryd.pid` to specify a non-mounted path so that the pidfile is not mirrored on the host.
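A sketch of how that can look as a compose service (service and project names here are placeholders): the pidfile lives in /tmp, which is not listed under volumes, so it never appears on the host.

```yaml
celerybeat:
  build: .
  command: celery -A yourproject beat -l info --pidfile=/tmp/celerybeat.pid
  volumes:
    - .:/code   # /tmp is deliberately not mounted
```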

Siyu

Another way: create a Django management command, celery_kill.py:

import shlex
import subprocess

from django.core.management.base import BaseCommand


class Command(BaseCommand):
    def handle(self, *args, **options):
        # Force-kill every celery process (worker and beat) in the container
        kill_worker_cmd = 'pkill -9 celery'
        subprocess.call(shlex.split(kill_worker_cmd))
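As a sanity check on the command string: `shlex.split` turns it into the argument list that `subprocess.call` actually receives, a quick sketch:

```python
import shlex

kill_worker_cmd = 'pkill -9 celery'
argv = shlex.split(kill_worker_cmd)
print(argv)  # ['pkill', '-9', 'celery']
```

Note that `-9` (SIGKILL) skips celery's warm shutdown; plain `pkill celery` sends SIGTERM, which lets workers finish in-flight tasks before exiting.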

docker-compose.yml :

  celery:
    build: ./src
    restart: always
    command: celery -A project worker -l info
    volumes:
      - ./src:/var/lib/celery/data/
    depends_on:
      - db
      - redis
      - app

  celery-beat:
    build: ./src
    restart: always
    command: celery -A project beat -l info --pidfile=/tmp/celeryd.pid
    volumes:
      - ./src:/var/lib/beat/data/
    depends_on:
      - db
      - redis
      - app

and a Makefile (recall that recipe lines must be indented with tabs):

run:
    docker-compose up -d --force-recreate
    docker-compose exec app python manage.py celery_kill
    docker-compose restart
    docker-compose exec app python manage.py migrate