I have a Laravel application with a Dockerfile and a docker-compose.yml which I use for local development. It currently does some volume sharing so that code changes are reflected immediately, and the docker-compose file also spins up containers for MySQL, Redis, etc.
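
For context, the compose file looks roughly like this (service names, images, and paths are simplified, not my exact file):

```yaml
version: "3"
services:
  app:
    build: .
    volumes:
      - .:/var/www/html   # share the codebase so edits show up immediately
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: secret
  redis:
    image: redis:alpine
```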

However, in preparation for deploying my container to production (ECS), I wonder how best to structure my Dockerfile.

Essentially, in production there are several extra steps I would need to run that aren't done locally in the Dockerfile (sketched as Dockerfile instructions after the list):

  • install dependencies
  • modify permissions
  • download a production env file
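
As Dockerfile instructions, those steps would look roughly like this (the env file URL is a placeholder):

```dockerfile
RUN composer install --no-dev --optimize-autoloader    # install dependencies
RUN chown -R www-data:www-data storage bootstrap/cache # modify permissions
RUN curl -o .env https://example.com/production.env    # download a production env file (placeholder URL)
```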

My first solution was to have a build script which takes the codebase, copies it into an empty sub-folder, runs the three commands above in that folder, and then runs docker build. That way the Dockerfile doesn't need to change between dev and production, and I can include the extra steps before the build process.
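
Roughly, the script does something like this (the source path, image tag, and env URL are placeholders):

```sh
#!/bin/sh
set -e
rm -rf build
cp -R app build                                     # copy the codebase into an empty sub-folder
cd build
composer install --no-dev --optimize-autoloader     # 1. install dependencies
chown -R www-data:www-data storage bootstrap/cache  # 2. modify permissions
curl -o .env https://example.com/production.env     # 3. download a production env file
docker build -t myapp:prod .
```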

However, the drawback is that those three commands don't get captured in the Docker image layering. So even if my dependencies haven't changed in the last 100 builds, they still get downloaded from scratch every time, which is fairly time-consuming.
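
For comparison, if the dependency step lived in the Dockerfile, the usual layer-caching pattern would avoid that: copy only the manifests first, so the install layer is reused until composer.json/composer.lock change. A sketch (assumes composer is available in the base image):

```dockerfile
FROM php:7.1-fpm
WORKDIR /var/www/html

# This layer and the install below stay cached until the manifests change
COPY composer.json composer.lock ./
RUN composer install --no-dev --no-scripts --no-autoloader

# Code edits only invalidate the layers from here down
COPY . .
RUN composer dump-autoload --optimize
```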

Another option would be to have multiple Dockerfiles, but that doesn't seem very DRY.

Is there a preferred or standardized approach for handling this sort of situation?

djt
  • Well, both of your solutions seem like reasonable approaches. You could have 2 different Dockerfiles; for example, if you just need to do those 3 steps, then I would extend from the dev image and run those commands, probably using a CD application like GitLab CI or Jenkins. The other solution you've come up with is also good if you use Docker image caching – Sergiu Sep 27 '17 at 21:18
  • Try and [use the same image](https://stackoverflow.com/questions/40914696/how-do-i-build-docker-images-when-my-development-environment-mounts-my-codebase/40921548#40921548) or as close as possible – Matt Sep 27 '17 at 23:01

0 Answers