
How do I use Django with AWS Elastic Beanstalk and also run Celery tasks on the main node only?

jpotts18
  • If you want something lighter than celery, you can try the https://pypi.org/project/django-eb-sqs-worker/ package - it uses Amazon SQS for queueing tasks. – DataGreed Jun 22 '20 at 23:10

3 Answers


This is how I set up Celery with Django on Elastic Beanstalk, with scalability working fine.

Please keep in mind that the 'leader_only' option for container_commands only takes effect on an environment rebuild or a deployment of the app. If the service runs long enough, the leader node may be removed by Elastic Beanstalk. To deal with that, you may have to apply instance protection to your leader node. Check: http://docs.aws.amazon.com/autoscaling/latest/userguide/as-instance-termination.html#instance-protection-instance
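As a sketch (not part of the original answer), instance protection can also be applied programmatically with boto3; the Auto Scaling group name and instance id here are placeholders you would look up for your environment:

```python
def protect_leader(asg_name, instance_id):
    """Mark an instance as protected from scale-in so the Auto Scaling
    group behind Elastic Beanstalk does not terminate it.

    Hypothetical sketch; requires boto3 and AWS credentials at call time.
    """
    import boto3  # imported lazily so the module loads without AWS access

    client = boto3.client("autoscaling")
    client.set_instance_protection(
        AutoScalingGroupName=asg_name,
        InstanceIds=[instance_id],
        ProtectedFromScaleIn=True,
    )
```

Note that protection only prevents scale-in termination; it does not survive an environment rebuild.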

Add a bash script for the celery worker and beat configuration.

Add file root_folder/.ebextensions/files/celery_configuration.txt:

#!/usr/bin/env bash

# Get django environment variables
celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g' | sed 's/%/%%/g'`
celeryenv=${celeryenv%?}

# Create celery configuration script
celeryconf="[program:celeryd-worker]
; Set full path to celery program if using virtualenv
command=/opt/python/run/venv/bin/celery worker -A django_app --loglevel=INFO

directory=/opt/python/current/app
user=nobody
numprocs=1
stdout_logfile=/var/log/celery-worker.log
stderr_logfile=/var/log/celery-worker.log
autostart=true
autorestart=true
startsecs=10

; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600

; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true

; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998

environment=$celeryenv

[program:celeryd-beat]
; Set full path to celery program if using virtualenv
command=/opt/python/run/venv/bin/celery beat -A django_app --loglevel=INFO --workdir=/tmp -S django --pidfile /tmp/celerybeat.pid

directory=/opt/python/current/app
user=nobody
numprocs=1
stdout_logfile=/var/log/celery-beat.log
stderr_logfile=/var/log/celery-beat.log
autostart=true
autorestart=true
startsecs=10

; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600

; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true

; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998

environment=$celeryenv"

# Create the celery supervisord conf script
echo "$celeryconf" | tee /opt/python/etc/celery.conf

# Add configuration script to supervisord conf (if not there already)
if ! grep -Fxq "[include]" /opt/python/etc/supervisord.conf
  then
  echo "[include]" | tee -a /opt/python/etc/supervisord.conf
  echo "files: celery.conf" | tee -a /opt/python/etc/supervisord.conf
fi

# Reread the supervisord config
supervisorctl -c /opt/python/etc/supervisord.conf reread

# Update supervisord in cache without restarting all services
supervisorctl -c /opt/python/etc/supervisord.conf update

# Start/Restart celeryd through supervisord
supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd-beat
supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd-worker
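As a side note (not part of the original script), the env-munging pipeline at the top of the script can be sketched in Python to show exactly what it produces for supervisord's environment= setting:

```python
def to_supervisor_env(env_file_text):
    """Mimic the tr/sed pipeline above: turn 'export KEY=val' lines from
    /opt/python/current/env into the comma-separated KEY=val list that
    supervisord's environment= option expects."""
    s = env_file_text.replace("\n", ",")        # tr '\n' ','
    s = s.replace("export ", "")                # sed 's/export //g'
    s = s.replace("$PATH", "%(ENV_PATH)s")      # sed 's/$PATH/%(ENV_PATH)s/g'
    s = s.replace("$PYTHONPATH", "")            # sed 's/$PYTHONPATH//g'
    s = s.replace("$LD_LIBRARY_PATH", "")       # sed 's/$LD_LIBRARY_PATH//g'
    s = s.replace("%", "%%")                    # escape % for supervisord
    return s[:-1]                               # drop the trailing comma


print(to_supervisor_env("export DJANGO_SETTINGS_MODULE=django_app.settings\nexport APP_ENV=dev\n"))
# DJANGO_SETTINGS_MODULE=django_app.settings,APP_ENV=dev
```

The final `%` escaping matters because supervisord treats `%` as the start of an expansion; without it, values containing `%` break the config parse.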

Take care that the script is executed during deployment, but only on the main node (leader_only: true). Add file root_folder/.ebextensions/02-python.config:

container_commands:
  04_celery_tasks:
    command: "cat .ebextensions/files/celery_configuration.txt > /opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh && chmod 744 /opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh"
    leader_only: true
  05_celery_tasks_run:
    command: "/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh"
    leader_only: true

File requirements.txt

celery==4.0.0
django_celery_beat==1.0.1
django_celery_results==1.0.1
pycurl==7.43.0 --global-option="--with-nss"

Configure Celery for the Amazon SQS broker (get your desired endpoint from the list at http://docs.aws.amazon.com/general/latest/gr/rande.html) in root_folder/django_app/settings.py:

...
CELERY_RESULT_BACKEND = 'django-db'
CELERY_BROKER_URL = 'sqs://%s:%s@' % (aws_access_key_id, aws_secret_access_key)
# Due to an error in the lib, the N. Virginia region was used temporarily; set it to Ireland ("eu-west-1") after the fix.
CELERY_BROKER_TRANSPORT_OPTIONS = {
    "region": "eu-west-1",
    'queue_name_prefix': 'django_app-%s-' % os.environ.get('APP_ENV', 'dev'),
    'visibility_timeout': 360,
    'polling_interval': 1
}
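Since kombu creates the queue automatically, the SQS queue name ends up being the queue_name_prefix above plus Celery's default queue name, 'celery'. A small illustration (the helper function is hypothetical, not part of the settings):

```python
def sqs_queue_name(app_env, base_queue="celery"):
    # queue_name_prefix from the settings above + Celery's default queue name
    prefix = "django_app-%s-" % app_env
    return prefix + base_queue


print(sqs_queue_name("dev"))  # django_app-dev-celery
```

So with APP_ENV unset (defaulting to 'dev'), the queue created in SQS is 'django_app-dev-celery'.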
...

Celery configuration for the django_app Django application

Add file root_folder/django_app/celery.py:

from __future__ import absolute_import, unicode_literals
import os
from celery import Celery

# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'django_app.settings')

app = Celery('django_app')

# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes.
# - namespace='CELERY' means all celery-related configuration keys
#   should have a `CELERY_` prefix.
app.config_from_object('django.conf:settings', namespace='CELERY')

# Load task modules from all registered Django app configs.
app.autodiscover_tasks()

Modify file root_folder/django_app/__init__.py:

from __future__ import absolute_import, unicode_literals

# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.
from django_app.celery import app as celery_app

__all__ = ['celery_app']
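With the django_celery_beat scheduler used above (-S django), periodic tasks live in the database, but entries defined in settings are generally picked up as well. A hypothetical example of seeding one such entry ('django_app.tasks.cleanup' is an illustrative task path, not one defined in this answer):

```python
from datetime import timedelta

# Hypothetical schedule entry for settings.py; the task path is illustrative.
CELERY_BEAT_SCHEDULE = {
    "cleanup-every-15-minutes": {
        "task": "django_app.tasks.cleanup",
        "schedule": timedelta(minutes=15),
    },
}
```

Tasks themselves are plain functions decorated with @shared_task, discovered by the app.autodiscover_tasks() call above.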


smentek
  • Could you take a look at this question? I followed your example but got the following error: http://stackoverflow.com/questions/43481540/django-celery-elastic-beanstalk-supervisord-no-such-process-error – Borko Kovacev Apr 18 '17 at 21:49
  • @BorkoKovacev Thanks, I've updated the fix for the supervisorctl restart. – smentek Apr 19 '17 at 14:12
  • @smentek this may be a little late, but I did exactly what you said above and it worked fine so far, except that when I deploy I get this happening to me: https://stackoverflow.com/questions/44268539/running-celery-as-daemon-with-supervisor-and-django-on-elastic-beanstalk - seems to have something to do with pycurl not being found? – Jay Bell May 30 '17 at 18:27
  • @smentek small edit - adding | sed 's/%/%%/g' to the celeryenv line helps prevent a problem a few people are running into with this config, see https://stackoverflow.com/questions/41231489/run-celery-with-django-on-aws-elastic-beanstalk-using-environment-variables/45243273#45243273 – Keeth Jul 21 '17 at 17:41
  • Is the LAMIA dictionary something you created or does EB provide that? – Julian Aug 24 '17 at 08:10
  • "If service works long enough, leader node may be removed by Elastic Beanstalk." -> You can protect specific instances from being removed by the load balancer. – Julian Aug 24 '17 at 08:22
  • I'm unsure how to connect to a specific SQS queue. I created an IAM user to obtain an extra access_key_id and secret_key, but your configuration file never mentions the queue name? Ah, never mind, it's further below. The prefix equals the name of the SQS queue? – Julian Aug 24 '17 at 08:29
  • Since we are using celery, which relies on kombu, the queue will be created automatically. The queue name: 'queue_name_prefix': 'django_app-%s-' % os.environ.get('APP_ENV', 'dev') will give 'django_app-dev-celery'. It is worth remembering about app envs if you use them. – smentek Aug 24 '17 at 08:37
  • I see, I created an SQS queue manually, I'll try with a different prefix then. Do you know which permissions the IAM user needs to create a queue? – Julian Aug 24 '17 at 08:39
  • Thanks for mentioning instance protection. – smentek Aug 24 '17 at 08:41
  • I get: Could not connect to the endpoint URL: "https://sqs.sqs.us-west-1.amazonaws.com.amazonaws.com/" - is it possible you have one "sqs" too many? – Julian Aug 24 '17 at 11:40
  • Use the AWS region endpoint identifier instead. I modified the post for that. – smentek Aug 24 '17 at 11:47
  • Yup, tried that after the comment, seems to work. I am now getting botocore.exceptions.ClientError: An error occurred (InvalidClientTokenId) when calling the ListQueues operation: The security token included in the request is invalid. Any ideas? – Julian Aug 24 '17 at 11:53
  • Let us [continue this discussion in chat](http://chat.stackoverflow.com/rooms/152745/discussion-between-julian-and-smentek). – Julian Aug 24 '17 at 11:53
  • "but only on main node (leader_only: true)" - why only on the main node, and what about the other nodes? – user391990 Apr 05 '18 at 16:57
  • Since task initialization should happen only once. – smentek Apr 10 '18 at 09:54
  • Why do you copy the run_supervised_celeryd.sh script to /opt/elasticbeanstalk/hooks/appdeploy/post/ AND also run it as a container_command? Just copying it to /opt/elasticbeanstalk/hooks/appdeploy/post/ should be enough. Running it as a container_command is unnecessary, and is also too early - the app isn't completely installed at that point. – Scott Talbert Nov 28 '18 at 22:34
  • pycurl==7.43.0 --global-option="--with-nss" didn't work for me - I had to place another command at the top of celery.config containing PYCURL_SSL_LIBRARY=nss /opt/python/run/venv/bin/pip install pycurl==7.43.0 (and leave pycurl out of requirements.txt entirely). – cmc Jan 14 '19 at 10:30
  • Does this create an additional EC2 instance? How do I scale it if needed? – DataGreed Jan 08 '20 at 14:45
  • After following the instructions, this is the error I get; could anyone help? Thanks! https://stackoverflow.com/questions/64673507/elastic-beanstalk-celery-deployment-fatal-python-error-py-initialize – LYu Nov 04 '20 at 02:27
  • This is one solution to avoid getting your leader node removed: https://ajbrown.org/2017/02/10/leader-election-with-aws-auto-scaling-groups.html – Cyzanfar Mar 15 '22 at 22:15
  • This solution will not work easily on Amazon Linux 2, which is now the only configuration available by default through AWS EB. Notably, supervisor is not installed by default. – Scott Jul 03 '22 at 17:18
  • This is for Amazon Linux 1. On Amazon Linux 2 it needs a rework of all the paths, plus you can put it directly in your project directory inside .platform/hooks/predeploy/run_supervised_celeryd.sh (with the whole .txt content). With that you can skip the 02-python commands. – Pol Frances Nov 15 '22 at 11:39

This is how I extended the answer by @smentek to allow for multiple worker instances and a single beat instance - the same thing applies where you have to protect your leader. (I still don't have an automated solution for that.)

Please note that envvar updates to EB via the EB CLI or the web interface are not reflected by celery beat or workers until an app server restart has taken place. This caught me off guard once.

A single celery_configuration.sh file outputs two scripts for supervisord. Note that celery-beat has autostart=false; otherwise you end up with many beats after an instance restart:

# get django environment variables
celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g' | sed 's/%/%%/g'`
celeryenv=${celeryenv%?}

# create celery beat config script
celerybeatconf="[program:celeryd-beat]
; Set full path to celery program if using virtualenv
command=/opt/python/run/venv/bin/celery beat -A lexvoco --loglevel=INFO --workdir=/tmp -S django --pidfile /tmp/celerybeat.pid

directory=/opt/python/current/app
user=nobody
numprocs=1
stdout_logfile=/var/log/celery-beat.log
stderr_logfile=/var/log/celery-beat.log
autostart=false
autorestart=true
startsecs=10

; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 10

; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true

; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998

environment=$celeryenv"

# create celery worker config script
celeryworkerconf="[program:celeryd-worker]
; Set full path to celery program if using virtualenv
command=/opt/python/run/venv/bin/celery worker -A lexvoco --loglevel=INFO

directory=/opt/python/current/app
user=nobody
numprocs=1
stdout_logfile=/var/log/celery-worker.log
stderr_logfile=/var/log/celery-worker.log
autostart=true
autorestart=true
startsecs=10

; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600

; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true

; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=999

environment=$celeryenv"

# create files for the scripts
echo "$celerybeatconf" | tee /opt/python/etc/celerybeat.conf
echo "$celeryworkerconf" | tee /opt/python/etc/celeryworker.conf

# add configuration script to supervisord conf (if not there already)
if ! grep -Fxq "[include]" /opt/python/etc/supervisord.conf
  then
  echo "[include]" | tee -a /opt/python/etc/supervisord.conf
  echo "files: celerybeat.conf celeryworker.conf" | tee -a /opt/python/etc/supervisord.conf
fi

# reread the supervisord config
/usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf reread
# update supervisord in cache without restarting all services
/usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf update

Then in container_commands we restart beat only on the leader:

container_commands:
  # create the celery configuration file
  01_create_celery_beat_configuration_file:
    command: "cat .ebextensions/files/celery_configuration.sh > /opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh && chmod 744 /opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh && sed -i 's/\r$//' /opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh"
  # restart celery beat if leader
  02_start_celery_beat:
    command: "/usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd-beat"
    leader_only: true
  # restart celery worker
  03_start_celery_worker:
    command: "/usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd-worker"
Chris Berry
  • I wonder how you deployed this on AWS. Did you make use of Worker Environments like shown here: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/concepts-worker.html?icmpid=docs_elasticbeanstalk_console. What do you mean with beat instance? Running beat just sends tasks to the queue, so I don't understand why one should have a separate machine for this. Do you have a separate EC2 instance running the web application? – Greg Holst Aug 02 '19 at 12:56
  • how do you set this up? How do you make sure you won't have multiple instances of celery running when scaling occurs? – DataGreed Jan 16 '20 at 19:06
  • Multiple instances of celery workers is fine. You only want one beat though. Honestly I stopped using elastic beanstalk a while back and have moved everything to kubernetes, I recommend you do the same. @GregHolst worker environments ended up being unsuitable for some reason. – Chris Berry Jan 23 '20 at 08:18

If someone is following smentek's answer and getting the error:

05_celery_tasks_run: /usr/bin/env bash does not exist.

know that, if you are using Windows, your problem might be that the "celery_configuration.txt" file has WINDOWS EOL when it should have UNIX EOL. If using Notepad++, open the file and click on "Edit > EOL Conversion > Unix (LF)". Save, redeploy, and error is no longer there.
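The same conversion can be done without an editor; a small sketch (the sed -i 's/\r$//' step in the answer above does the equivalent at deploy time):

```python
from pathlib import Path


def convert_to_unix_eol(path):
    """Rewrite a file in place, replacing Windows CRLF line endings with
    Unix LF, so shebang lines like #!/usr/bin/env bash resolve correctly."""
    p = Path(path)
    p.write_bytes(p.read_bytes().replace(b"\r\n", b"\n"))
```

Working on bytes avoids any newline translation Python's text mode would otherwise apply.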

Also, a couple of warnings for really-amateur people like me:

  • Be sure to include "django_celery_beat" and "django_celery_results" in your "INSTALLED_APPS" in settings.py file.

  • To check celery errors, connect to your instance with "eb ssh" and then "tail -n 40 /var/log/celery-worker.log" and "tail -n 40 /var/log/celery-beat.log" (where "40" refers to the number of lines you want to read from the file, starting from the end).

Hope this helps someone, it would've saved me some hours!

jaume