
I'm not sure I'm using the correct terminology here, but I'll at least try to explain my issue:

Goal:

I want to create a Docker image that ships with all of its files preloaded and ready to go, because users will basically only need to edit a config.json file. I want people to be able to map a bind mount in docker-compose (or just the CLI) like "/my/host/path/config:/config" and, when they spin up the image, have all the files available at that location on their host machine in persistent storage.
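For reference, the docker-compose mapping I have in mind looks roughly like this (the service and image names are placeholders):

```yaml
services:
  runtests:
    image: yourrepo/yourimage:latest       # placeholder image name
    volumes:
      # Host folder should end up populated with the preloaded files
      - /my/host/path/config:/config
```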

Issue:

When you spin up the container for the first time, the directory gets created on the host, but it contains no files for the end user to modify. They are left manually copying files into this folder to make it work, and that is not acceptable in my humble opinion.

Quick overview of the image:

A Python script that uses Selenium to perform some actions on a website.

Dockerfile:

FROM python:3.9.2

RUN apt-get update
RUN apt-get install -y cron wget apt-transport-https

# install google chrome
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list'
RUN apt-get -y update
RUN apt-get install -y google-chrome-stable

# install chromedriver
RUN apt-get install -yqq unzip
RUN wget -O /tmp/chromedriver.zip http://chromedriver.storage.googleapis.com/`curl -sS chromedriver.storage.googleapis.com/LATEST_RELEASE`/chromedriver_linux64.zip
RUN unzip /tmp/chromedriver.zip chromedriver -d /usr/local/bin/

# Set TZ
RUN apt-get install -y tzdata
ENV TZ=America/New_York

RUN mkdir -p /config
COPY . /config
WORKDIR /config
RUN pip install --no-cache-dir -r requirements.txt

# set display port to avoid crash
ENV DISPLAY=:99

# Custom Env Vars
ENV DOCKER_IMAGE=true

# Setup Volumes
VOLUME [ "/config" ]

# Run the command on container startup
CMD ["python3", "-u", "/config/RunTests.py"]

Any help would be greatly appreciated.

BinaryNexus
  • From what I understood, you mean that when someone runs your container with a bind mount, the host folder they are using will get populated with the files from the container? For example, if `/config` in your container has 2 files, `1.txt` and `2.txt`, when I start the container with `-v /home/config:/config` I will have the 2 files on my host at /home/config – Iduoad Apr 29 '21 at 00:05
  • That is correct @M.Iduoad, that is exactly what I want to happen, but it is not working for some reason. – BinaryNexus Apr 29 '21 at 15:19

1 Answer


This is not how Docker bind mounts work. A bind mount uses the mount system call under the hood, and it hides the existing content of the folder inside your container when a host folder is bind-mounted over it.

Running the command docker run -v /my/config:/config image will always hide (override) the content inside your container.

By contrast, if you use an empty named Docker volume (created with the docker volume command), Docker will copy the container's files into the volume before mounting it.

So docker run -v config_volume:/config image will copy your config files into the volume the first time. Then you can reuse the volume with --volumes-from or mount it in another container.
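As a sketch of the difference (the image name `myimage` is a placeholder for your own):

```shell
# Bind mount: the (empty) host folder hides the image's /config content.
docker run --rm -v /my/host/path/config:/config myimage ls /config

# Named volume: on first use, Docker copies the image's /config content
# into the volume, so the files persist and can be edited afterwards.
docker volume create config_volume
docker run --rm -v config_volume:/config myimage ls /config
```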

To learn more about this take a look at this issue.

Another workaround is to bind your volume to a host folder while creating it. More info here.

docker volume create --driver local \
    --opt type=none \
    --opt device=$configVolumePath \
    --opt o=bind \
    config_vol

For me, the best solution is to copy or symlink your configuration files on container startup.

You can do so by adding the cp or ln command to your entrypoint script.
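A minimal sketch of such an entrypoint, assuming you bake the defaults into an image path like /defaults (e.g. change `COPY . /config` to `COPY . /defaults` in the Dockerfile) and set this script as the ENTRYPOINT; the path names are assumptions, not part of the original setup:

```shell
#!/bin/sh
# entrypoint.sh -- hypothetical startup script; /defaults and /config
# are assumed paths, overridable here only to keep the sketch testable.
set -eu

# Seed the bind-mounted folder: copy each default file only if the
# user has not already created/edited a file with that name, so
# re-starting the container never clobbers their changes.
seed_config() {
    defaults_dir="$1"
    config_dir="$2"
    for f in "$defaults_dir"/*; do
        [ -e "$f" ] || continue              # defaults dir empty/missing
        name=$(basename "$f")
        if [ ! -e "$config_dir/$name" ]; then
            cp -r "$f" "$config_dir/$name"
        fi
    done
}

seed_config "${DEFAULTS_DIR:-/defaults}" "${CONFIG_DIR:-/config}"

# Hand off to the CMD (e.g. python3 -u /config/RunTests.py)
exec "$@"
```

With this in place the CMD stays the same, and the first `docker run -v /my/host/path/config:/config` populates the host folder before the script starts.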

Iduoad