Yes, this is possible, though not recommended (I'll explain why in a moment). First, here's how you would accomplish what you asked:
Docker Build
The command to build an image in its simplest form is docker build . which performs a build with a build context pulled from the current directory. That means the entire current directory is sent to the Docker daemon, which uses it to build an image. The build context should contain all of the local resources Docker needs to build your image. By default, Docker also assumes the existence of a file named Dockerfile at the root of this context, and uses it for the actual build.
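For example (a minimal sketch; the directory and image name my-app are hypothetical):

# Assuming a project directory my-app/ with a Dockerfile at its root
cd my-app
docker build -t my-app:latest .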
However, we can override this default behavior with the -f flag, e.g. docker build -f /path/to/some.dockerfile . This command still uses your current directory as the build context, but reads the Dockerfile from the path you specify, which can live anywhere.
So in your case, let's assume the code for ProjectA lives in the directory project-a and project-deploy's code in project-deploy. You can build and tag your Docker image as project-a:latest like so:
docker build -f project-deploy/Dockerfile.ProjectA -t project-a:latest project-a/
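For reference, here's a minimal sketch of what project-deploy/Dockerfile.ProjectA might contain; the base image and file names are assumptions, not something from your question. Note that COPY paths resolve relative to the build context (project-a/), not relative to the Dockerfile's own location:

# Hypothetical Dockerfile.ProjectA; COPY paths are resolved against the
# build context (project-a/), not against project-deploy/
FROM python:3.11-slim
WORKDIR /app
COPY . .
CMD ["python", "main.py"]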
Why this is a bad idea
There are many benefits to using containers over traditional application packaging strategies. These benefits stem from the extra layer of abstraction that a container provides. It enables operators to use a simple and consistent interface for deploying applications, and it empowers developers with greater control and ownership of the environment their application runs in.
This aligns well with the DevOps philosophy, increases your team's agility, and greatly alleviates operational complexity.
However, to enjoy the advantages containers bring, you must make organizational changes that reflect them; otherwise all you're doing is adding complexity and further separating operations and development:
- If your operators are writing your dockerfiles instead of your developers, then you're just adding more complexity to their job with few tangible benefits;
- If your developers are not in charge of their application environments, the conflict between operations and development persists, and they gain essentially nothing either.
In short, Docker is a tool, not a solution. The real solution is to make organizational changes that empower and accelerate the individual through logically consistent abstractions, and Docker is a great tool designed to complement that organizational change.
So yes, while you could separate the application's environment (the Dockerfile) from its code, doing so runs counter to the DevOps philosophy. The better solution is to treat the Dockerfile as an application resource and keep it in the application's project, and to handle operational configuration (like environment variables and secrets) through Docker's support for volumes and environment variables.
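As an illustrative sketch (the variable name and paths here are hypothetical), that run-time configuration could look like:

# Hypothetical example: operational config is injected at run time via an
# environment variable and a read-only volume, not baked into the image
docker run \
  -e LOG_LEVEL=info \
  -v /etc/project-a/secrets:/run/secrets:ro \
  project-a:latest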