
I was reading Project Atomic's guidance for images, which states that the two main use cases for using a volume are:

  • sharing data between containers
  • when writing large files to disk

I have neither of these use cases in my example using an Nginx image. I intended to mount a host directory as a volume in the path of the Nginx docroot in the container, so that I can push changes to a website's contents to the host rather than addressing the container. I feel this approach is easier since I can, for example, just add my ssh key to the host once.
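For reference, the kind of thing I have in mind is roughly this (the host path is just an example; /usr/share/nginx/html is the default docroot of the official Nginx image):

# mount a host directory over the Nginx docroot
docker run -d --name web -p 80:80 \
  -v /srv/mysite:/usr/share/nginx/html:ro \
  nginx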

My question is, is this an appropriate use of a data volume and if not can anyone suggest an alternative approach to updating data inside a container?

jacks

4 Answers


One of the primary reasons for using Docker is to isolate your app from the server. This means you can run your container anywhere and get the same result. This is my main use case for it.

If you look at it from that point of view, having your container depend on files on the host machine for a deployed environment is counterproductive: running the same container on a different machine may result in different output.

If you do NOT care about that, and are just using docker to simplify the installation of nginx, then yes you can just use a volume from the host system.

Think about this though...

#Dockerfile
FROM nginx
# copy the site content into the image, at the default docroot of the official Nginx image
COPY . /usr/share/nginx/html

#docker-compose.yml
web:
    build: .

You could then use docker-machine to connect to your remote server and deploy a new version of your software with a couple of commands:

docker-compose build
docker-compose up -d
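If you use docker-machine for the remote connection, pointing your local client at the remote engine is typically just a matter of this (the machine name here is made up):

# point docker and docker-compose at the remote engine
eval $(docker-machine env myserver)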

Even better, you could do:

docker build -t me/myapp .
docker push me/myapp

and then deploy with

docker pull me/myapp
docker run -d me/myapp
Paul Becotte
  • thanks for the tips regarding using compose. I wasn't clear enough in my question. I do care about isolation and reusability, but my use case is: start up Nginx in a single container, then, without needing to disrupt the service, be able to do frequent subsequent updates to the web content sitting in the Nginx root dir. So I don't want to be restarting the container with compose each time I do this. Thanks for the comment though. – jacks Dec 03 '15 at 09:44
  • Fair enough, though optimizing your process to avoid restarting nginx during a deploy does not seem like the correct optimization :) Just explaining why Project Atomic and the Docker documentation recommend doing it this way. You are of course free to make the decisions that make the most sense for your project, especially since you made it a point to ask why other people do it a different way :) – Paul Becotte Dec 03 '15 at 14:05
  • Sure. I will run a solution with compose to test it out and see how I feel about it. ;) – jacks Dec 03 '15 at 14:11

There are a number of ways to update data in containers. Host volumes are a valid approach and probably the simplest way to make your data available.

You can also copy files into and out of a container from the host. You may need to commit afterwards if you ever stop and remove the running web server container.

docker cp /src/www webserver:/www
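If you do go the copy route and later remove the container, the commit step might look like this (container and image names here are just placeholders):

# bake the container's current state into a new image so the
# copied files survive removing the container
docker commit webserver me/webserver:updated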

You can also copy files into a Docker image at build time from your Dockerfile, which amounts to the same process as above (copy and commit), then restart the webserver container from the new image.

COPY /src/www /www
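Rebuilding and swapping the container then looks roughly like this (image and container names are only illustrative):

docker build -t me/webserver .
docker stop webserver && docker rm webserver
docker run -d --name webserver -p 80:80 me/webserver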

But I think the host volume is a good choice.

docker run -v /src/www:/www webserver command

Docker data containers are also an option for mounted volumes but they don't solve your immediate problem of copying data into your data container.

If you ever find yourself thinking "I need to ssh into this container", you are probably doing it wrong.

Matt

I am not sure if I fully understand your request, but why do you need to do that to push files into the Nginx container?

Manage the volume in a separate Docker container; that's my suggestion, and it is also what the Docker documentation recommends:

Data volumes

A data volume is a specially-designated directory within one or more containers that bypasses the Union File System. Data volumes provide several useful features for persistent or shared data:

  • Volumes are initialized when a container is created. If the container's base image contains data at the specified mount point, that existing data is copied into the new volume upon volume initialization.
  • Data volumes can be shared and reused among containers.
  • Changes to a data volume are made directly.
  • Changes to a data volume will not be included when you update an image.
  • Data volumes persist even if the container itself is deleted.

Refer to: Manage data in containers
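As a rough sketch of that pattern (image, names and paths here are only illustrative, and pushing new content into the data container is still a separate step, e.g. with docker cp):

# data-only container exposing the Nginx docroot as a volume
docker create -v /usr/share/nginx/html --name webdata nginx /bin/true

# web server container mounting the volumes of the data container
docker run -d --volumes-from webdata --name webserver -p 80:80 nginx

# content still has to be pushed in somehow, for example:
docker cp /src/www/. webdata:/usr/share/nginx/html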

BMW
  • I think the OP needs a way to push data into a container, which you would still need to do with a data container. – Matt Dec 03 '15 at 01:10

As said, one of the main reasons to use Docker is to always achieve the same result. A best practice is to use a data-only container.

With docker inspect <container_name> you can find the path of the volume on the host and update the data manually, but this is not recommended;

or you can retrieve the data from an external source, like a Git repository.
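For the first option, something along these lines (the container name is a placeholder) shows where each volume lives on the host:

docker inspect --format '{{ json .Mounts }}' mynginx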

Chris
  • I want to do regular updates to web content inside the container. Whether I do a Git push to a repo on the web server container or the data container, the task is the same, right? – jacks Dec 03 '15 at 09:46