14

I have a Django project deployed to Elastic Beanstalk on the Amazon Linux 2 AMI. I installed PyMySQL to connect to the database and added these lines to settings.py:

import pymysql

# Spoof the version check so Django's MySQL backend accepts PyMySQL in place of mysqlclient
pymysql.version_info = (1, 4, 6, "final", 0)
pymysql.install_as_MySQLdb()
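
For reference, the database settings follow the usual pattern; a sketch (the RDS_* environment variables assume an RDS instance attached to the Beanstalk environment, substitute your own values otherwise):

import os

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.mysql",  # backed by PyMySQL via install_as_MySQLdb()
        "NAME": os.environ.get("RDS_DB_NAME"),
        "USER": os.environ.get("RDS_USERNAME"),
        "PASSWORD": os.environ.get("RDS_PASSWORD"),
        "HOST": os.environ.get("RDS_HOSTNAME"),
        "PORT": os.environ.get("RDS_PORT", "3306"),
    }
}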

I also have a .config file for migrating the database:

container_commands:
  01_migrate:
    command: "django-admin.py migrate"
    leader_only: true
option_settings:
  aws:elasticbeanstalk:application:environment:
    DJANGO_SETTINGS_MODULE: mysite.settings

Normally, I was using mysqlclient on my Linux AMI with this .config file, but it doesn't work on the Linux 2 AMI, so I installed PyMySQL. Now I'm trying to deploy the updated version of my project, but I'm getting the error below:

Traceback (most recent call last):
  File "/opt/aws/bin/cfn-init", line 171, in <module>
    worklog.build(metadata, configSets)
  File "/usr/lib/python2.7/site-packages/cfnbootstrap/construction.py", line 129, in build
    Contractor(metadata).build(configSets, self)
  File "/usr/lib/python2.7/site-packages/cfnbootstrap/construction.py", line 530, in build
    self.run_config(config, worklog)
  File "/usr/lib/python2.7/site-packages/cfnbootstrap/construction.py", line 542, in run_config
    CloudFormationCarpenter(config, self._auth_config).build(worklog)
  File "/usr/lib/python2.7/site-packages/cfnbootstrap/construction.py", line 260, in build
    changes['commands'] = CommandTool().apply(self._config.commands)
  File "/usr/lib/python2.7/site-packages/cfnbootstrap/command_tool.py", line 117, in apply
    raise ToolError(u"Command %s failed" % name)
ToolError: Command 01_migrate failed

How can I fix this issue?

Aslı Kök
  • 616
  • 8
  • 19

3 Answers

17

Amazon Linux 2 has a fundamentally different setup than AL1, and the current documentation as of Jul 24, 2020 is out of date. The django-admin installed by Beanstalk's environment does not appear to be on the PATH, so you need to source the environment's activate script to make sure it is.

I left my answer here as well, which goes into much more detail about how I arrived at this answer, but the solution (which I don't love) is:

container_commands:
    01_migrate:
        command: "source /var/app/venv/*/bin/activate && python3 manage.py migrate"
        leader_only: true

Even though I don't love it, I have verified with AWS Support that this is in fact the recommended way to do this. You must source the python environment, as with AL2 they use virtual environments in an effort to stay more consistent.
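
If you run other management commands this way (collectstatic, for example), the same sourcing pattern applies. A sketch, adjust the commands to your project:

container_commands:
    01_migrate:
        command: "source /var/app/venv/*/bin/activate && python3 manage.py migrate"
        leader_only: true
    02_collectstatic:
        command: "source /var/app/venv/*/bin/activate && python3 manage.py collectstatic --noinput"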

Nick Brady
  • 6,084
  • 1
  • 46
  • 71
  • Note that this activates the correct python (sets the PATH) but not the environment variables that are set in the EB UI. If Django uses any of them (credentials, secret name, etc.) they will not be set for the migration command. – Matan Drory Dec 21 '20 at 19:02
  • 1
    This command relies (at least for me) on environment variables I set in the GUI for some of the secrets and it works just fine. I'm not sure what you mean @MatanDrory – Nick Brady Dec 21 '20 at 19:36
  • That's strange. I just tested it on the EC2 machine my EBS created. I ran `printenv` and did not see my environment variables, then ran `source /var/app/venv/*/bin/activate && printenv` and again I don't see the environment variables; the only change is in PATH. I currently have an ugly workaround with a prebuild hook that copies the env file into export format and I source that instead. The problem with that is that the env file at "/opt/elasticbeanstalk/deployment/env" is only created after a successful deployment, forcing me to start with the sample app – Matan Drory Dec 22 '20 at 03:17
12

The answer from @nick-brady is great, and it provides the basic solution.

However, the AWS docs on migrating to Amazon Linux 2 suggest that we should do things like this using .platform hooks:

We recommend using platform hooks to run custom code on your environment instances. You can still use commands and container commands in .ebextensions configuration files, but they aren't as easy to work with. For example, writing command scripts inside a YAML file can be cumbersome and difficult to test.

and from the AWS Knowledge Center:

... it's a best practice to use platform hooks instead of providing files and commands in .ebextension configuration files.

As a bonus, output from the platform hooks is collected in a separate log file (/var/log/eb-hooks.log), which is included in bundle and tail logs by default. This makes debugging a bit easier.

The basic idea is to create a shell script in your application source bundle, e.g. .platform/hooks/postdeploy/01_django_migrate.sh. This is described in more detail in the platform hooks section in the docs for extending EB linux platforms.

The file must be executable, so: chmod +x .platform/hooks/postdeploy/01_django_migrate.sh

The file content could look like this (based on @nick-brady's answer):

#!/bin/bash

source "$PYTHONPATH/activate" && {
# log which migrations have already been applied
python manage.py showmigrations;
# migrate
python manage.py migrate --noinput;
}

You can do the same with collectstatic etc.
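
For example, a second hook file (the name 02_django_collectstatic.sh is just an illustration; it needs the same chmod +x) could look like this sketch:

#!/bin/bash

source "$PYTHONPATH/activate" && {
  # collect static files into STATIC_ROOT without prompting
  python manage.py collectstatic --noinput;
}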

Note that the path to the Python virtual environment is available to platform hooks as the environment variable PYTHONPATH. You can verify this by inspecting the file /opt/elasticbeanstalk/deployment/env on your instance, e.g. via ssh. Also see AWS knowledge center.
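
For instance, from an eb ssh session you could check it with something like this (a sketch; the file may require sudo to read):

sudo grep PYTHONPATH /opt/elasticbeanstalk/deployment/env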

For those wondering, the && in the shell script is a kind of conditional execution: only do the following if the preceding succeeded. See e.g. here.

Leader only

During deployment, there should be an EB_IS_COMMAND_LEADER environment variable, which can be tested in order to implement leader_only behavior in .platform hooks (based on this post):

...

if [[ $EB_IS_COMMAND_LEADER == "true" ]];
then 
  python manage.py migrate --noinput;
  python manage.py collectstatic --noinput;
else 
  echo "this instance is NOT the leader";
fi

...
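
Putting the pieces together, a complete postdeploy hook based on the snippets above might look like this (a sketch, adjust the commands to your project):

#!/bin/bash

# activate the platform's virtual environment (path comes from PYTHONPATH, see above)
source "$PYTHONPATH/activate" && {
  if [[ $EB_IS_COMMAND_LEADER == "true" ]]; then
    # run migrations and collect static files on the leader instance only
    python manage.py migrate --noinput;
    python manage.py collectstatic --noinput;
  else
    echo "this instance is NOT the leader";
  fi
}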
djvg
  • 11,722
  • 5
  • 72
  • 103
  • Is there a way to know if this is the leader, as you can in commands? I like this solution more because the environment variables are accessible (I haven't tested, but I see you are using $PYTHONPATH). The only issue is that I'd rather migrations only run on the leader. – Matan Drory Dec 21 '20 at 19:04
  • @MatanDrory: Not sure about that. I have not been able to find anything explicit in the documentation, and haven't tried it myself, yet. It looks like you could still use `leader_only` in .ebextensions, or maybe achieve something similar using tests in your .platform scripts (also note the distinction between [hooks and config-hooks](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/platforms-linux-extend.html)). – djvg Dec 21 '20 at 19:17
  • @MatanDrory: I found the answer: During deployment, there is an environment property called `EB_IS_COMMAND_LEADER` which you can check, as described in [this post](https://github.com/aws/elastic-beanstalk-roadmap/issues/88#issue-685912856). I'll update the answer. – djvg Jan 08 '21 at 10:34
  • NOTE: to activate python when logged in to the instance (e.g. through `eb ssh`), you can use the `get_config` tool to get the value of `PYTHONPATH`: `source "$(/opt/elasticbeanstalk/bin/get-config environment -k PYTHONPATH)/activate"` – djvg Oct 08 '21 at 12:13
1

In my case, this .config worked:

container_commands:
  01_migrate:
    command: "django-admin.py migrate"
    leader_only: true
  02_collectstatic:
    command: "django-admin.py collectstatic --noinput"

I had this command: "source /var/app/venv/*/bin/activate && python3 manage.py config until 4 Jan, and suddenly I got a deployment error