If you need to override the command that eventually gets run, you can specify it directly when you start the container; you shouldn't need to rebuild the image just to run a different command or point at a different input file.
docker run your-image \
  supervisord -c /etc/supervisor/supervisord_foo.conf
The way you've written your Dockerfile, though, there will never be more than one configuration file in the image, so when you COPY it in, you can give it a fixed name.
ARG ARG1
# Copy the file to its default location; then you don't need a -c option
COPY files/etc/supervisor/supervisord_${ARG1}.conf /etc/supervisor/supervisord.conf
CMD ["/usr/bin/supervisord"]
Many tools and configuration frameworks support passing most or all of their options as environment variables, and in your own application you should consider this pattern as well. Setting environment variables directly (with a docker run -e option) tends to be much easier than overriding the command.
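For example, supposing your process reads a hypothetical APP_MODE setting to decide what to launch, you could run:

# APP_MODE is a made-up name here; substitute whatever setting your application actually reads
docker run -e APP_MODE=backend your-image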
Finally: this overall setup looks like you're trying to run multiple processes in a container, and then configure the container to run different subsets of those processes. The usual best practice is to run only one process per container and to avoid process managers like supervisord. You might consider whether a tool like Docker Compose, which can start multiple containers together, is a better match for your application; you can run commands like docker-compose up -d backend to start only part of your application stack.
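Purely as a sketch (the service names and directory layout here are made up, not taken from your setup), splitting the processes into separate single-process containers could look like a Compose file along these lines:

version: "3.8"
services:
  backend:
    build: ./backend     # hypothetical build context; one process per image, no supervisord
  frontend:
    build: ./frontend    # hypothetical second service

With that layout, docker-compose up -d backend starts only the backend service, and you select which parts of the stack run by naming services rather than by rebuilding images.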