I want to use a Docker container to run a utility (specifically Terraform) using local files. In order to quickly iterate on the code (such as `my_stuff.tf`) I want to bind mount my working directory. However, I want to consider some things as relatively stable and static, such as plugins. Basically there are three ways I want things to be handled:

- `.terraform/` is stable stuff that should exist in the container but not in my host directory, and it needs to be preserved even after mounting.
- `my_stuff.tf` exists both in the container (because `init` needs it) and in the host dir (because I want to edit it). I want my host directory version of this to override the container version.
- `terraform.tfstate` might not exist in either place to start with, but gets generated during a run. I want it to persist in my host directory as soon as it does exist.

(And I guess there is a 4th category, like `README.md`, where I do not care whether it is there or not.)
In my case, TF expects both `.terraform/` (where plugins are configured) and `terraform.tfstate` (one of the outputs I want to catch with my bind mount) to be in the same directory, so I cannot just use different directories for the container-internal stuff and the bind-mounted stuff.
```
# Dockerfile
FROM plugin_source AS plugins

FROM terraform_base
COPY --from=plugins terraform-provider-X /bin/
COPY my_stuff.tf /app/
WORKDIR /app
RUN /bin/terraform init
```
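
For reference, I build this image with roughly the following (the `my_terraform` tag is just whatever I use in the run command below):

```
docker build -t my_terraform .
```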
And my run command:
```
docker run --rm -i -t --mount source=$PWD,target=/app,type=bind my_terraform <some-tf-command>
```
Is there a slick way to make bind mounts behave the way named volumes do on first initialization, as described in https://docs.docker.com/storage/bind-mounts/#mount-into-a-non-empty-directory-on-the-container? At present, it seems like I might have to write a little entrypoint script that symlinks the stable stuff into my work directory (sketched below).
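
To illustrate, the fallback I have in mind is roughly the following. The `/opt/stable` path and the `entrypoint.sh` name are just placeholders; the Dockerfile would also need to stash the initialized `.terraform/` outside `/app` (e.g. a `RUN mkdir -p /opt/stable && mv /app/.terraform /opt/stable/` step) and set `ENTRYPOINT ["/entrypoint.sh"]`.

```
#!/bin/sh
# entrypoint.sh -- sketch of the workaround I would rather avoid
set -e

# The bind mount over /app hides the .terraform/ that `terraform init`
# produced at build time, so link the copy stashed in /opt/stable back in.
if [ ! -e /app/.terraform ]; then
    ln -s /opt/stable/.terraform /app/.terraform
fi

# Hand off to whatever command was passed to `docker run`.
exec "$@"
```

One downside is that the symlink also shows up (dangling) in my bind-mounted host directory, which is part of why I am hoping for something slicker.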