
Terraform has a dedicated "docker" provider which works with images and containers, and which can use a private registry and supply it with credentials (cf. the registry documentation). However, I didn't find any way to supply a Dockerfile directly without using a separate registry. The problem of handling changes to Dockerfiles themselves is already solved, e.g. in this question, albeit without the use of Terraform.

I can think of a couple of workarounds: I could avoid the dedicated docker provider and use some other provider instead (although I don't know which one). Or I could start my own private registry (possibly in a Docker container managed by Terraform), run the docker commands that build the image files locally (from Terraform this could be done using the null_resource of the null provider, roughly as sketched below) and then continue with those images.
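A rough sketch of that null_resource idea, assuming a `Dockerfile` sits next to the configuration and using `myapp:local` as a placeholder tag, might look like this:

```hcl
# Sketch only: rebuild the image whenever the Dockerfile changes.
resource "null_resource" "docker_build" {
  triggers = {
    dockerfile_sha1 = filesha1("${path.module}/Dockerfile")
  }

  provisioner "local-exec" {
    command = "docker build -t myapp:local ${path.module}"
  }
}
```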

Neither of these workarounds makes much sense to me. Is there a way to deploy Docker containers described in a Dockerfile directly using Terraform?

user8472
  • https://github.com/terraform-providers/terraform-provider-docker/issues/33 is an outstanding feature request with an abandoned pull request attached to it. If you want to avoid using a `local-exec` provisioner and use Terraform resources instead, then you'd probably need to see some traction on that pull request, either by contributing yourself or asking if anyone else can help with it. – ydaetskcoR Dec 19 '19 at 15:06

1 Answer


Terraform is a provisioning tool rather than a build tool, so building artifacts like Docker images from source is not really within its scope.

Much as the common and recommended way to deal with EC2 images (AMIs) is to have some other tool build them and have Terraform simply use them, the same principle applies to Docker images: the common and recommended path is to have some other system build your Docker images -- a CI system, for example -- and to publish the results somewhere that Terraform's Docker provider will be able to find them at provisioning time.

The primary reason for this split is that it separates the concern of building a new artifact from the concern of provisioning infrastructure using existing artifacts. This is useful in a number of ways, for example:

  • If you're changing something about your infrastructure that doesn't require a new image then you can just re-use the image you already built.
  • If there's a problem with your Dockerfile that produces a broken new image, you can easily roll back to the previous image (as long as it's still in the registry) without having to rebuild it.

It can be tempting to try to orchestrate an entire build/provision/deploy pipeline with Terraform alone, but Terraform is not designed for that and so it will often be frustrating to do so. Instead, I'd recommend treating Terraform as just one component in your pipeline, and use it in conjunction with other tools that are better suited to the problem of build automation.


If avoiding running a separate registry is your goal, I believe that can be accomplished by skipping `docker_image` altogether and just using `docker_container` with an `image` argument referring to an image that is already available to the Docker daemon indicated in the provider configuration.
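A minimal sketch of that approach, where the container name and the `myapp:local` tag are just placeholders, might look like:

```hcl
provider "docker" {}

resource "docker_container" "app" {
  name = "app"

  # No docker_image resource here: "myapp:local" is assumed to already be in
  # the local daemon's image cache, e.g. from an earlier
  # `docker build -t myapp:local .` run outside of Terraform.
  image = "myapp:local"
}
```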

`docker_image` retrieves a remote image into the daemon's local image cache, but `docker build` writes its result directly into the local image cache of the daemon used for the build process, so as long as both Terraform and `docker build` are interacting with the same daemon, Terraform's Docker provider should be able to find and use the cached image without interacting with a registry at all.

For example, you could build an automation pipeline that runs `docker build` first, obtains the raw id (hash) of the image that was built, and then runs `terraform apply -var="docker_image=$DOCKER_IMAGE"` against a suitable Terraform configuration that can then immediately use that image.
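Building on the sketch above, the Terraform side of such a pipeline might be wired up like this (the variable name and the surrounding shell commands are assumptions, not anything the provider mandates):

```hcl
# Outside of Terraform, roughly:
#   IMAGE_ID="$(docker build -q .)"                # -q prints only the image ID
#   terraform apply -var="docker_image=$IMAGE_ID"

variable "docker_image" {
  type        = string
  description = "ID (or tag) of an image already present in the Docker daemon's local cache"
}

resource "docker_container" "app" {
  name  = "app"
  image = var.docker_image
}
```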

Having such a tight coupling between the artifact build process and the provisioning process does slightly defeat the advantages of the separation, but the capability is there if you need it.

Martin Atkins
  • The original issue is thus my misunderstanding of Terraform: Terraform manages to understand whether an image has changed and re-deployment is necessary. It does NOT understand whether the "source" code of the image, i.e., the `Dockerfile`, has changed or not. This is akin to having a variant of `make` that offers all sorts of ways to deploy a binary and detect changes (and requires an Artifactory) but cannot understand that the binary is built from source files under version control. So Terraform is simply not the right tool for my use case. – user8472 Dec 20 '19 at 09:54