I'm not sure I'm using the correct terminology here, but I'll at least try to explain my issue:
Goal:
I want to create a Docker image that ships with all of its files preloaded and ready to go, because end users will basically only need to edit a config.json file. I want people to be able to map a volume in their docker-compose file (or on the CLI) to something like "/my/host/path/config:/config", so that when they spin up the image, all of the files are available at that location on their host machine in persistent storage.
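For example, the mapping I have in mind looks something like this (a minimal sketch; the image name my-org/selenium-tests is just a placeholder):

docker-compose.yml:
version: "3.8"
services:
  tests:
    image: my-org/selenium-tests:latest
    volumes:
      - /my/host/path/config:/config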
Issue:
When the container is spun up for the first time, the directory gets created on the host, but it contains no files for the end user to modify. They are left manually copying files into that folder to make things work, which in my humble opinion is not acceptable.
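To make the symptom concrete, this is roughly what happens with a fresh host path (again, the image name is a placeholder):

docker run -d -v /my/host/path/config:/config my-org/selenium-tests:latest
ls /my/host/path/config
# the directory exists but is empty - no config.json or RunTests.py to edit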
Quick overview of the image:
Python script that uses Selenium to perform some actions on a website
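For context, RunTests.py is along these lines (a simplified, hypothetical sketch; the config.json key "url" and the site actions are placeholders, not the real script):

import json
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

# Read the user-editable settings that live in /config
with open("/config/config.json") as f:
    config = json.load(f)

# Headless Chrome flags that are typically needed inside a container
options = Options()
options.add_argument("--headless")
options.add_argument("--no-sandbox")
options.add_argument("--disable-dev-shm-usage")

driver = webdriver.Chrome(options=options)  # chromedriver is on PATH (/usr/local/bin)
try:
    driver.get(config["url"])  # "url" is a placeholder key
    # ... perform the actual site actions here ...
finally:
    driver.quit()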
Dockerfile:
FROM python:3.9.2
RUN apt-get update
RUN apt-get install -y cron wget apt-transport-https
# install google chrome
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list'
RUN apt-get -y update
RUN apt-get install -y google-chrome-stable
# install chromedriver
RUN apt-get install -yqq unzip
RUN wget -O /tmp/chromedriver.zip http://chromedriver.storage.googleapis.com/`curl -sS chromedriver.storage.googleapis.com/LATEST_RELEASE`/chromedriver_linux64.zip
RUN unzip /tmp/chromedriver.zip chromedriver -d /usr/local/bin/
# Set TZ
RUN apt-get install -y tzdata
ENV TZ=America/New_York
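# Bake the app files (including the default config.json) into /config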
RUN mkdir -p /config
COPY . /config
WORKDIR /config
RUN pip install --no-cache-dir -r requirements.txt
# set display port to avoid crash
ENV DISPLAY=:99
# Custom Env Vars
ENV DOCKER_IMAGE=true
# Setup Volumes
VOLUME [ "/config" ]
# Run the command on container startup
CMD ["python3", "-u", "/config/RunTests.py"]
Any help would be greatly appreciated.