
I've been having a hard time getting a successful deployment of my Django web app to AWS Elastic Beanstalk. I am able to deploy my app from the EB CLI on my local machine with no problem at all, until I add a config file with a list of container_commands inside a .ebextensions folder.

Here are the contents of my config file:

container_commands:
  01_makeAppMigrations:
    command: "django-admin.py makemigrations"
    leader_only: true
  02_migrateApps:
    command: "django-admin.py migrate"
    leader_only: true
  03_create_superuser_for_django_admin:
    command: "django-admin.py createfirstsuperuser"
    leader_only: true
  04_collectstatic:
    command: "django-admin.py collectstatic --noinput"

I've dug deep into the logs and found these messages in the cfn-init-cmd.log to be the most helpful:

2020-06-18 04:01:49,965 P18083 [INFO] Config postbuild_0_DjangoApp_smt_prod
2020-06-18 04:01:49,991 P18083 [INFO] ============================================================
2020-06-18 04:01:49,991 P18083 [INFO] Test for Command 01_makeAppMigrations
2020-06-18 04:01:49,995 P18083 [INFO] Completed successfully.
2020-06-18 04:01:49,995 P18083 [INFO] ============================================================
2020-06-18 04:01:49,995 P18083 [INFO] Command 01_makeAppMigrations
2020-06-18 04:01:49,998 P18083 [INFO] -----------------------Command Output-----------------------
2020-06-18 04:01:49,998 P18083 [INFO]   /bin/sh: django-admin.py: command not found
2020-06-18 04:01:49,998 P18083 [INFO] ------------------------------------------------------------
2020-06-18 04:01:49,998 P18083 [ERROR] Exited with error code 127

I'm not sure why it can't find that command in this latest environment. I've deployed this same app with this same config file to a prior beanstalk environment with no issues at all. The only difference now is that this new environment was launched within a VPC and is using the latest recommended platform.

Old Beanstalk environment platform: Python 3.6 running on 64bit Amazon Linux/2.9.3

New Beanstalk environment platform: Python 3.7 running on 64bit Amazon Linux 2/3.0.2

I've run into other issues during this migration related to syntax updates with this latest platform. I'm hoping this issue is also just a simple syntax issue, but I've dug far and wide with no luck...

If someone could point out something obvious that I'm missing here, I would greatly appreciate it! Please let me know if I can provide some additional info!

Yamen Alghrer
  • Is django installed? When you login to the instance, can you run these commands manually? – Marcin Jun 18 '20 at 04:18
  • @Marcin when I follow this post: https://stackoverflow.com/a/20070161/3814008 on running these commands via SSH into the instance, I am unable to get past step 2. Running "source /opt/python/run/venv/bin/activate" returns "-bash: /opt/python/run/venv/bin/activate: No such file or directory". When I followed these same steps on my previous environment I had no issue. In my requirements.txt file, I do have Django 2.2.6 listed. I don't remember having to do anything else to get Django installed on the AWS instance. – Yamen Alghrer Jun 18 '20 at 16:51
  • I'd like to note that in the post linked above, someone commented under the answer that it is no longer valid for the latest Python 3.7, Amazon Linux 2 platform in Beanstalk. Why is that the case? – Yamen Alghrer Jun 18 '20 at 16:57
  • Same problem with me using latest environment Python 3.7 running on 64bit Amazon Linux 2/3.0.2. – Waheed Jun 22 '20 at 13:05
  • @WaheedAhmed I was able to finally get it up and running on the latest Beanstalk platform. Check out my answer below! I hope it can help you out! – Yamen Alghrer Jun 24 '20 at 01:16

3 Answers


Finally got to the bottom of it all, after deep-diving through the AWS docs and forums...

Essentially, there were a lot of changes that came along with Beanstalk moving from Amazon Linux to Amazon Linux 2. A lot of these changes are vaguely mentioned here.

One major difference for the Python platform, as mentioned in the link above, is that "the path to the application's directory on Amazon EC2 instances of your environment is /var/app/current. It was /opt/python/current/app on Amazon Linux AMI platforms." This is crucial when you're trying to create the Django migration scripts, as I'll explain in more detail below, or when you eb ssh into the Beanstalk instance and navigate it yourself.
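
For example, after you eb ssh into an instance on the new platform, a quick way to orient yourself is something like this (the virtualenv directory name is generated per environment, so the glob below is only illustrative):

# the deployed application now lives here (it was /opt/python/current/app on the old platform)
cd /var/app/current
# the platform-managed virtualenv lives under /var/app/venv/<generated-name>/
ls /var/app/venv/
source /var/app/venv/*/bin/activate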

Another major difference is the introduction of Platform hooks, which are mentioned in this wonderful article here. According to this article, "Platform hooks are a set of directories inside the application bundle that you can populate with scripts." Essentially, these scripts now handle what container_commands previously handled in the .ebextensions config files. Here is the directory structure of these platform hooks:
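
The layout looks roughly like this (prebuild, predeploy and postdeploy are the three hook stages the article describes):

.platform/
└── hooks/
    ├── prebuild/      # runs before the application and web server are set up
    ├── predeploy/     # runs after the new version is staged, before it goes live
    └── postdeploy/    # runs after the new version has been deployed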

Knowing this, and walking through this forum here, where wonderful community members went through the trouble of filling in the gaps in Amazon's docs, I was able to successfully deploy with the following file setup:

(Please note that "MDGOnline" is the name of my Django app)

.ebextensions\01_packages.config:

packages:
  yum:
    git: []
    postgresql-devel: []
    libjpeg-turbo-devel: []

.ebextensions\django.config:

container_commands:
  01_sh_executable:
    command: find .platform/hooks/ -type f -iname "*.sh" -exec chmod +x {} \;
option_settings:
  aws:elasticbeanstalk:application:environment:
    DJANGO_SETTINGS_MODULE: MDGOnline.settings
  aws:elasticbeanstalk:environment:proxy:staticfiles:    
    /static: static
    /static_files: static_files
  aws:elasticbeanstalk:container:python:
    WSGIPath: MDGOnline.wsgi:application

.platform\hooks\predeploy\01_migrations.sh:

#!/bin/bash

# activate the virtualenv that Elastic Beanstalk creates for the application
source /var/app/venv/*/bin/activate
# the new application version is staged here before it becomes /var/app/current
cd /var/app/staging

python manage.py makemigrations
python manage.py migrate
python manage.py createfirstsuperuser
python manage.py collectstatic --noinput

Please note that the .sh scripts need to have Unix (LF) line endings. I ran into an error for a while where the deployment would fail with this message in the logs: .platform\hooks\predeploy\01_migrations.sh failed with error fork/exec .platform\hooks\predeploy\01_migrations.sh: no such file or directory. It turned out this was because I had created the script in my Windows dev environment. My solution was to create it on a Linux environment and copy it over to my dev directory on Windows. There are also tools that convert DOS line endings to Unix; dos2unix looks promising (see the example below).
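
For example, assuming the script was saved with Windows (CRLF) line endings, converting it from a Linux shell should be enough (dos2unix may need to be installed first):

# convert CRLF line endings to LF so the #!/bin/bash shebang is read correctly
dos2unix .platform/hooks/predeploy/01_migrations.sh
# or, without dos2unix:
sed -i 's/\r$//' .platform/hooks/predeploy/01_migrations.sh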

I really wish AWS could document this migration better, but I hope this answer can save someone the countless hours I spent getting this deployment to succeed.

Please feel free to ask me for clarification on any of the above!

EDIT: I've added a "container_command" to my config file above, as it was brought to my attention that another user also encountered a "permission denied" error for the platform hook when deploying. This "01_sh_executable" command chmods all of the .sh scripts within the app's hooks directory so that Elastic Beanstalk has permission to execute them during the deployment process. I found this container command solution in this forum here.
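
If you deploy from a git repository, an alternative worth trying is to record the executable bit in git itself, so the permission survives the source bundle that eb deploy creates (using the hook path from above):

# mark the hook as executable in the git index
git update-index --chmod=+x .platform/hooks/predeploy/01_migrations.sh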

Yamen Alghrer
  • I got this error: Stderr:.platform/hooks/predeploy/01_migrations.sh: line 3: /var/app/env/*/bin/activate: No such file or directory. It seems that they changed the instance structure. – David Weinberg Aug 18 '20 at 13:41
  • @DavidWeinberg Hey David, that error sounds like the error I mentioned in the answer above. Did you create the .sh script in a Windows environment? – Yamen Alghrer Aug 20 '20 at 23:21
  • @Adam I'm so glad to hear it's working for others! :) – Yamen Alghrer Sep 10 '20 at 00:03
  • Love you man!!!! I was looking for a solution for days and it worked! Thank you! – Quba Sep 16 '20 at 06:07
  • I followed your guide and finally managed to deploy (using MySQL), but when I try to log in through the Django admin panel it just hangs: no error, no login, it just hangs. I use an AWS RDS database as well, and had previously connected from my local project, created a superuser, and logged in fine from the local project to the AWS RDS. I would really appreciate any help with this. Thank you – rob Nov 12 '21 at 00:38
  • Hey @rob I apologize, I'm seeing this comment a few days late...were you able to get this resolved? – Yamen Alghrer Nov 18 '21 at 18:44
  • @YamenAlghrer yes, managed to get this issue resolved in the end, thank you – rob Nov 20 '21 at 15:23
  • adding commands to the `hooks` directory didn't do the trick for me. So, I just added them directly in `django.config` under `container_commands` and it worked. – JovanToroman Mar 04 '22 at 14:10

This might work. .ebextensions/django.config:

option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: mysite.wsgi:application
  aws:elasticbeanstalk:environment:proxy:staticfiles:
    /static: static
packages: 
  yum:
    python3-devel: []
    mariadb-devel: []
container_commands:
  01_collectstatic:
    command: "source /var/app/venv/staging-LQM1lest/bin/activate && python manage.py collectstatic --noinput"
  02_migrate:
    command: "source /var/app/venv/staging-LQM1lest/bin/activate && python manage.py migrate --noinput"
    leader_only: true
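
Note that staging-LQM1lest is a generated directory name and can differ between environments; you can confirm it on the instance, or avoid hard-coding it with a glob as in the answer above:

# list the generated virtualenv directory on the instance
ls /var/app/venv/
# or use a wildcard instead of the exact name:
# command: "source /var/app/venv/*/bin/activate && python manage.py migrate --noinput"
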
Prajwol KC

This works for me.

container_commands:
  01_migrate:
    command: "source /var/app/venv/*/bin/activate && python /var/app/staging/manage.py migrate --noinput"
    leader_only: true
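
If the command still fails, its output should show up in the same cfn-init-cmd.log the question pulled its error from (typically under /var/log on the instance):

# from the project directory
eb ssh
# then, on the instance
sudo tail -n 100 /var/log/cfn-init-cmd.log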