61

I'm not sure whether I'm missing something obvious or whether it's simply not possible, but I'm trying to compose an entire application stack with images from Docker Hub.

One of them is MySQL, and its image supports adding custom configuration files through volumes as well as running .sql files from a mounted directory.

But I have these files on the machine where I'm running docker-compose, not on the host. Is there no way to specify files from the local machine to copy into the container before it runs its entrypoint/cmd? Do I really have to build local images of everything just for this case?

Andreas Wederbrand
  • I provided two ways to copy your data from your local machine to a host (either local or remote); see http://stackoverflow.com/a/39348811/1556338. In general you want to keep your data out of the image. That makes your image reusable in different environments (dev/testing/prod, clientA, clientB, etc.). – Bernard Sep 08 '16 at 10:52
  • Yes, `docker cp` will work, but not as part of docker-compose; it's a separate step. It would be nicer not to wrap docker-compose in a shell script just to achieve this. – Andreas Wederbrand Sep 08 '16 at 10:59
  • Unfortunately, docker-compose is limited. As your usage becomes more specific, you won't be able to avoid shell scripting for some things. Compose doesn't even work with the new swarm mode. – Bernard Sep 08 '16 at 11:04
  • What I usually do to solve this is to create Docker images for configuration. I keep these files in a VCS with a Dockerfile, and I build a versioned Docker image (FROM scratch). Later I launch the compose with a **volumes_from** indicating this "files container". – Jorge Marey Sep 08 '16 at 12:56

6 Answers

51

Option A: Include the files inside your image. This is less than ideal, since you are mixing configuration files into your image (which should really contain only your binaries, not your config), but it satisfies the requirement of using only docker-compose to send the files.

This option is achieved by using docker-compose to build your image; that build will send over any files from the build directory to the remote Docker engine. Your docker-compose.yml would look like:

version: '2'

services:
  my-db-app:
    build: db/.
    image: custom-db

And db/Dockerfile would look like:

FROM mysql:latest
COPY ./sql /sql

The entrypoint/cmd would remain unchanged. You would need to run docker-compose up --build if the image already exists and you need to change the sql files.


Option B: Use a volume to store your data. This cannot be done directly inside docker-compose. However, it's the preferred way to include files from outside the image in the container. You can populate the volume across the network by using the docker CLI and input redirection, along with a command like tar, to pack and unpack the files sent over stdin:

tar -cC sql . | docker run --rm -i -v sql-files:/sql \
  busybox /bin/sh -c "tar -xC /sql"

Run that via a script and then have that same script bounce the db container to reload that config.
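
For completeness, a minimal sketch of the compose side, assuming the named volume sql-files populated above and the my-db-app service from Option A (the /docker-entrypoint-initdb.d target is where the official mysql image looks for init scripts):

version: '2'

services:
  my-db-app:
    image: mysql:latest
    volumes:
      - sql-files:/docker-entrypoint-initdb.d

volumes:
  sql-files:
    external: true

Declaring the volume as external tells compose to reuse the sql-files volume you populated with tar, rather than creating a fresh project-scoped one.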


Option C: Use some kind of network-attached filesystem. If you can configure NFS on the host where you are running your docker CLI, you can connect to those NFS shares from the remote Docker node using one of the options below:

# create a reusable volume
$ docker volume create --driver local \
    --opt type=nfs \
    --opt o=addr=192.168.1.1,rw \
    --opt device=:/path/to/dir \
    foo

# or from the docker run command
$ docker run -it --rm \
  --mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=nfs,volume-opt=o=addr=192.168.1.1,volume-opt=device=:/host/path \
  foo

# or to create a service
$ docker service create \
  --mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=nfs,volume-opt=o=addr=192.168.1.1,volume-opt=device=:/host/path \
  foo
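
Since the question is about docker-compose, note that the same NFS-backed volume can also be declared directly in a compose file; a sketch, assuming the same export as in the commands above:

version: '3.4'

services:
  my-db-app:
    image: mysql:latest
    volumes:
      - sql-nfs:/sql

volumes:
  sql-nfs:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.1.1,rw
      device: ":/path/to/dir"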

Option D: With swarm mode, you can include files as configs. This allows configuration files, which would otherwise need to be pushed to every node in the swarm, to be sent on demand to the node where your service is running. This uses a docker-compose.yml file to define it, but swarm mode isn't using docker-compose itself, so it may not fit your specific requirements. You can run a single-node swarm mode cluster, so this option is available even if you only have one node. It does require that each of your sql files is added as a separate config. The docker-compose.yml would look like:

version: '3.4'

configs:
  sql_file_1:
    file: ./file_1.sql

services:
  my-db-app:
    image: my-db-app:latest
    configs:
      - source: sql_file_1
        target: /sql/file_1.sql
        mode: 0444

Then instead of a docker-compose up, you'd run a docker stack deploy -c docker-compose.yml my-db-stack.
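
If you aren't running swarm mode yet, the single-node setup mentioned above is a one-time step:

# one-time setup; a single-node swarm is enough to use configs
docker swarm init

# deploy (and later re-deploy) the stack
docker stack deploy -c docker-compose.yml my-db-stack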

BMitch
32

If you cannot use volumes (say, you want a stateless docker-compose.yml and are deploying to a remote machine), you can have the config file written by the container's command.

An example for an nginx config using the official image:

version: "3.7"

services:
  nginx:
    image: nginx:alpine
    ports:
      - 80:80
    environment:
      NGINX_CONFIG: |
        server {
          server_name "~^www\.(.*)$$" ;
          return 301 $$scheme://$$1$$request_uri ;
        }
        server {
          server_name example.com
          ...
        }
    command:
      /bin/sh -c "echo \"$$NGINX_CONFIG\" > /etc/nginx/conf.d/redir.conf; nginx -g \"daemon off;\""

The environment variable could also be saved in an .env file, set via Compose's env_file feature, or loaded from the shell environment (wherever you fetched it from):

https://docs.docker.com/compose/compose-file/#env_file
https://docs.docker.com/compose/compose-file/#variable-substitution

To get the original command (CMD) of a container, so you can hand off to it after writing the config:

docker container inspect [container] | jq --raw-output .[0].Config.Cmd

To investigate which file to modify, this usually works:

docker exec --interactive --tty [container] sh
Bobík
  • I'm accepting this answer, as it works at least for small files. Nice trick. I'll have to override the default entrypoint for that image, but that's fine. – Andreas Wederbrand Apr 22 '19 at 08:23
14

This is how I'm doing it with volumes:

services:
  my-db-app:
    command: /shell_scripts/go.sh
    volumes:
      - ./shell_scripts:/shell_scripts 
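
For illustration, go.sh stands in for whatever startup logic you need; a hypothetical sketch (assuming the official mysql image, whose stock entrypoint is docker-entrypoint.sh) might copy the mounted files to where the image expects them and then hand off:

#!/bin/sh
# go.sh -- hypothetical wrapper script; adjust paths to your image
set -e

# let the official entrypoint pick up any mounted .sql files
cp /shell_scripts/*.sql /docker-entrypoint-initdb.d/ 2>/dev/null || true

# hand off to the image's normal entrypoint and command
exec docker-entrypoint.sh mysqld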
Adam Spence
  • This doesn't work when shell_scripts resides on a different host from the remote docker server. – BMitch Dec 31 '16 at 22:23
  • Also worked for me simply using `volumes: - ./:/config/path/in/container` to mount files from project root directory into the container. – socona Apr 05 '19 at 15:17
6

I think you have to do this in a compose file:

volumes:
  - ./src/file:/dest/path
Cristian Monti
1

As a more recent update to this question: with a docker swarm hosted on Amazon, for example, you can define a volume that can be shared by services and is available across all nodes of the swarm (using the cloudstor driver, which in turn uses AWS EFS for persistence).

version: '3.3'
services:
  my-db-app:
    command: /shell_scripts/go.sh
    volumes:
      - shell_scripts:/shell_scripts
volumes:
  shell_scripts:
    driver: "cloudstor:aws"
Eoan
1

With Compose V2 you can simply do (as in the documentation):

docker compose cp src [service:]dest
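
For example, to copy the SQL file from the question into a running service (the service name my-db-app is just a placeholder):

docker compose cp ./my-local-file.sql my-db-app:/file-on-container.sql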

Before V2, you can use the workaround with docker cp explained in the associated issue:

docker cp /path/to/my-local-file.sql "$(docker-compose ps -q mycontainer)":/file-on-container.sql
marrco