5

I am trying to deploy a Django web app to AWS Elastic Beanstalk, but the deployment fails with the following error:

cfnbootstrap.construction_errors.ToolError: Command 01_migrate failed

Traceback:

2021-08-04 09:49:56,443 [ERROR] -----------------------BUILD FAILED!------------------------
2021-08-04 09:49:56,443 [ERROR] Unhandled exception during build: Command 01_migrate failed
Traceback (most recent call last):
  File "/opt/aws/bin/cfn-init", line 176, in <module>
    worklog.build(metadata, configSets)
  File "/usr/lib/python3.7/site-packages/cfnbootstrap/construction.py", line 135, in build
    Contractor(metadata).build(configSets, self)
  File "/usr/lib/python3.7/site-packages/cfnbootstrap/construction.py", line 561, in build
    self.run_config(config, worklog)
  File "/usr/lib/python3.7/site-packages/cfnbootstrap/construction.py", line 573, in run_config
    CloudFormationCarpenter(config, self._auth_config).build(worklog)
  File "/usr/lib/python3.7/site-packages/cfnbootstrap/construction.py", line 273, in build
    self._config.commands)
  File "/usr/lib/python3.7/site-packages/cfnbootstrap/command_tool.py", line 127, in apply
    raise ToolError(u"Command %s failed" % name)
cfnbootstrap.construction_errors.ToolError: Command 01_migrate failed

My db-migrate.config on Amazon Linux 1 (AL1) was:

container_commands:
  01_migrate:
    command: "django-admin.py migrate --noinput"
    leader_only: true
option_settings:
  aws:elasticbeanstalk:application:environment:
    DJANGO_SETTINGS_MODULE: packsapp.settings

For Amazon Linux 2 (AL2) I tried this:

container_commands:
    01_migrate:
        command: "source /var/app/venv/*/bin/activate && python3 manage.py migrate"
        leader_only: true
option_settings:
  aws:elasticbeanstalk:application:environment:
    DJANGO_SETTINGS_MODULE: packsapp.settings

and I also tried this:

container_commands:
  01_migrate:
    command: "source /var/app/venv/*/bin/activate && django-admin.py migrate --noinput"
    leader_only: true
option_settings:
  aws:elasticbeanstalk:application:environment:
    DJANGO_SETTINGS_MODULE: packsapp.settings

But it still fails. What do I need to change?

Rahul Sharma
  • If you ssh to your EB instance, can you make it work manually? This try can give you some extra insights on what's happening. – Marcin Aug 09 '21 at 21:30
  • If you SSH into instance and tail `eb-activity.log`, it might have useful information on why migrate has failed. – Chillar Anand Aug 10 '21 at 02:25
  • Does [this](https://stackoverflow.com/questions/62457165/deploying-django-to-elastic-beanstalk-migrations-failed/63074781#63074781) help? – Brian Destura Aug 10 '21 at 03:54

2 Answers

3

As mentioned in the other answer, Amazon Linux 2 (AL2) is very different from Amazon Linux 1 (AL1). To add to the frustration, the AWS docs do not describe an accurate way to run migrations. After many hours of digging, here is the approach I found that works:

Hooks

AL2 introduces hooks as a method to run shell commands at different stages of deployment. The hook we are looking for is the postdeploy hook illustrated below:

First, our directory structure should look something like this:

my_project
|-- .platform/
|    |-- hooks/              
|        |-- postdeploy/
|            |-- 01_migrate.sh
|-- my_first_app
|-- my_second_app

Next, we add the 01_migrate.sh script to execute on the leader only:

#!/bin/bash

source "$PYTHONPATH/activate" && {
    
    if [[ $EB_IS_COMMAND_LEADER == "true" ]];
    then 
        # log which migrations have already been applied
        python manage.py showmigrations;
        
        # migrate
        python manage.py migrate --noinput;
    else 
        echo "this instance is NOT the leader";
    fi
    
}
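
One gotcha worth checking: in my experience Elastic Beanstalk will not run a platform hook script unless it has the execute permission, so set the bit before deploying. Assuming you deploy a git repository with the EB CLI, something like this does it:

chmod +x .platform/hooks/postdeploy/01_migrate.sh
# keep the execute bit in git so it survives the deployment bundle
git update-index --chmod=+x .platform/hooks/postdeploy/01_migrate.sh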

Logs

If everything is configured correctly, we'll see the migrations printed in the /var/log/eb-hooks.log section of the output of `eb logs`:

----------------------------------------
/var/log/eb-hooks.log
----------------------------------------
my_first_app
 [X] 0001_my_first_app_migration_1
 [X] 0002_my_first_app_migration_2
my_second_app
 [X] 0001_my_second_app_migration_1
 [X] 0002_my_second_app_migration_2

We can now remove the container command that runs our migrations.
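
For reference, db-migrate.config can then be reduced to the environment option alone, with no migrate container command (a sketch that keeps the same settings module as the question):

option_settings:
  aws:elasticbeanstalk:application:environment:
    DJANGO_SETTINGS_MODULE: packsapp.settings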

Notes

  1. You need to add the shell file above to the postdeploy hook, which, in my opinion, is completely non-intuitive.
  2. You can add other Django commands such as collectstatic; I typically just include them in the file above (see the sketch after these notes).
  3. As with commands on AL1, the shell script "hooks" are executed in alphabetical order (hence the 01 prefix).
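
As an illustration of note 2, here is a sketch of the same hook extended with collectstatic (assuming your static files setup allows it to run on every instance; adjust if you push statics to shared storage such as S3):

#!/bin/bash

source "$PYTHONPATH/activate" && {
    if [[ $EB_IS_COMMAND_LEADER == "true" ]];
    then
        python manage.py migrate --noinput;
    fi

    # collectstatic runs on every instance so each one serves current static files
    python manage.py collectstatic --noinput;
}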

Good luck!

Daniel
  • One gotcha I ran across switching to this approach (and admittedly I don't understand all the moving parts here) was that I needed to explicitly set the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables in Elastic Beanstalk (my database is in RDS so Django needed those values to connect to the DB). Previously, the container_commands version seems to have those magically injected from the IAM instance profile. I tried using approaches I've found that use `/opt/elasticbeanstalk/bin/get-config environment` but those values were not included from the output of that script. – Nic Nov 19 '21 at 01:07
0

Amazon Linux 2 has a fundamentally different setup than AL1, and the documentation (as of Jul 24, 2020) is out of date. The django-admin from the virtual environment that Elastic Beanstalk installs does not appear to be on the PATH, so you have to source that environment's activate script to make sure it is.

There is more detail elsewhere on how I arrived at this answer, but the solution (which I don't love) is:

container_commands:
    01_migrate:
        command: "source /var/app/venv/*/bin/activate && python3 manage.py migrate"
        leader_only: true