Here's my workaround, which is laborious enough to make me think I'm approaching this entirely the wrong way, but it works. Three steps:
1. A shell script that runs our commands. It's designed to be run within the pipenv environment. This is scripts/pipenv-run-tests.sh:
#!/bin/sh
set -e
# Runs the Django test suite, flake8, and generates HTML coverage reports.
# This script is called by using the shortcut defined in Pipfile:
# pipenv run tests
# You can optionally pass in a test, or test module or class, as an argument, e.g.
# ./pipenv-run-tests.sh tests.appname.test_models.TestClass.test_a_thing
TESTS_TO_RUN=${1:-tests}
coverage run --source=. ./manage.py test $TESTS_TO_RUN
flake8
coverage html
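One assumption worth stating: the script has to be executable for the shortcut in step 2 to run it. If it isn't already, something like this (using the path above) sorts that out:

# make the test script executable (only needed once)
chmod +x scripts/pipenv-run-tests.sh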
2. We define a Custom Script Shortcut in our Pipfile:
[scripts]
tests = "./scripts/pipenv-run-tests.sh"
This means that if we log into the shell on the Docker container we could do pipenv run tests and our scripts/pipenv-run-tests.sh script would be run, within the pipenv environment. Any arguments included after pipenv run tests are passed on to the script.
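For example, from a shell inside the container, running the whole suite or narrowing it to a single test class might look like this (the test path is just an illustrative placeholder, as in the script's comments):

# run the full test suite, flake8 and coverage
pipenv run tests
# run only one test class (example path)
pipenv run tests tests.appname.test_models.TestClass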
3. Finally, we have a script designed to be run from the host machine, which runs our Custom Script Shortcut within Docker. This is scripts/run-tests.sh:
#!/bin/bash
set -e
# Call this from the host machine.
# It will call the `tests` shortcut defined in Pipfile, which will run
# a script within the pipenv environment.
# You can optionally pass in a test, or test module or class, as an argument, e.g.
# ./run-tests.sh tests.appname.test_models.TestClass.test_a_thing
TESTS_TO_RUN=${1:-tests}
docker exec hines_web /bin/sh -c "pipenv run tests $TESTS_TO_RUN"
So now, on the host machine, we can do ./scripts/run-tests.sh and that will, within the Docker container, call the pipenv shortcut, which will run the scripts/pipenv-run-tests.sh script. Any argument provided is passed on to the final script.
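For instance, from the project root on the host, both of these should work (the test path is again just a placeholder):

# run everything
./scripts/run-tests.sh
# run a single test (example path)
./scripts/run-tests.sh tests.appname.test_models.TestClass.test_a_thing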
(Note that hines_web in the above is the name of my Docker container, defined in my docker-compose.yml.)
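If you're not sure what your own container is called, listing the names of the running containers is a quick way to check; your name will differ from mine:

# list the names of running containers
docker ps --format '{{.Names}}'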