
I'm new to the Docker world: I'm at a point where I can deploy Docker containers and do some work.

Trying to get to the next level: saving my changes and moving my containers/images to another PC/server.

Currently, I'm using Docker on Windows 10, but I do have access to an Ubuntu 16.04 server to test my work.

This is where I'm stuck: I have WordPress and MariaDB images deployed on Docker.
My WordPress site is running perfectly OK. I have installed a few themes and created a few pages with images.

At this point, I'd like to save my work and send it to my friend, who will deploy my image and do further work on this same WordPress site.

What I have read online is: I should run the docker commit command to save my container as a Docker image, then docker save it in .tar format and send this image file (.tar) to my friend. He will run docker load -i on my file to load it as an image into his Docker, and then create a container from it, which should give him all of my work on WordPress.
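The workflow described above, sketched as commands (the container and image names below are assumptions, not from the original post):

```shell
# On my machine: snapshot the container's filesystem into an image,
# then write that image out to a tar archive.
docker commit my_wordpress my_wordpress_image
docker save -o my_wordpress_image.tar my_wordpress_image

# On my friend's machine: load the archive back into an image
# and start a container from it.
docker load -i my_wordpress_image.tar
docker run -d --name wp my_wordpress_image
```

Note that (as the answers below explain) this only captures the image layers, not data stored in volumes.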

Just to clarify, I'm committing both the WordPress and MariaDB containers.
I don't have any external volumes mounted, so all the work is being saved in the containers.

I do remember putting a check mark on drives C and D in the Docker settings, but I don't know if that has anything to do with volumes.

I don't get any errors during the commit or while moving the .tar files. But once my friend creates his containers from my committed images, he gets a clean WordPress (like a new installation of WordPress, starting from the WP setup pages).

Another thing I noticed is that the image I create has the same file size as the original image I pulled. When I run docker images, I see my image is 420MB, and the WordPress image is also 420MB.

I think my image should be a little bigger, since I have installed themes and plugins and uploaded images to WordPress. It should be at least 3 to 5 MB larger than the original image. Please help. Thank you.

Running docker system df gives me this:

TYPE                TOTAL               ACTIVE              SIZE                RECLAIMABLE
Images              5                   3                   1.259GB             785.9MB (62%)
Containers          3                   3                   58.96kB             0B (0%)
Local Volumes       2                   2                   311.4MB             0B (0%)
Build Cache         0                   0                   0B                  0B
tgogos
smaqsood

5 Answers


Make sure, as shown here, to commit the running container (to avoid any data cleanup):

docker commit CONTAINER_ID yourImage

After the docker commit command, you can use docker save to save your image in a tar, and docker load to import it back, as shown here.

VonC
  • Those were exactly my steps: I did docker commit with the container ID and my image name, then docker save to save the image to a .tar file. Then I moved the file to another PC running Docker, used docker load to load the .tar file into an image, and then used the image IDs in a docker-compose.yml file to create containers from both the WordPress and MariaDB images. In the end what I get is a WordPress installation from scratch. – smaqsood Jan 15 '19 at 08:02
  • @smaqsood The problem is that a typical Wordpress image does declare a volume: https://github.com/docker-library/wordpress/blob/d775299ddae6dafa232166f44b7c03d23ddf7bb6/php7.1/apache/Dockerfile: you would need to export the volume as well (https://docs.docker.com/v17.03/engine/tutorials/dockervolumes/#backup-restore-or-migrate-data-volumes). Same for MariaDB. – VonC Jan 15 '19 at 08:06
  • Deployment was two docker run commands... I wish two commands could also wrap up my work into an image to move... this is what I had in mind before touching Docker. – smaqsood Jan 15 '19 at 08:36
  • @smaqsood You would need to script the data volume export, as in https://stackoverflow.com/a/26339869/6309. Possibly using a tool like https://github.com/discordianfish/docker-backup – VonC Jan 15 '19 at 08:44
  • Very informative article. Exactly my situation: backing up a WordPress container with all its data. Looks like the core issue is dangling volumes... I still can't believe why Docker made such an essential need this big a deal and so complicated. – smaqsood Jan 15 '19 at 19:06
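The volume export VonC links to can be sketched as follows (the container names and the busybox helper image are assumptions; the volume paths match the official wordpress and mariadb images):

```shell
# Back up the WordPress volume (/var/www/html) from the running container
# into a tarball on the host, using a throwaway container that mounts
# the same volumes.
docker run --rm --volumes-from my_wordpress -v "$PWD":/backup \
    busybox tar czf /backup/wordpress-data.tar.gz /var/www/html

# Same idea for the MariaDB data directory.
docker run --rm --volumes-from my_mariadb -v "$PWD":/backup \
    busybox tar czf /backup/mariadb-data.tar.gz /var/lib/mysql

# On the other machine, restore into a freshly created container's volumes.
docker run --rm --volumes-from my_wordpress -v "$PWD":/backup \
    busybox tar xzf /backup/wordpress-data.tar.gz -C /
```

These tarballs travel alongside the committed images; the image alone never contains the volume data.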

You should never run docker commit.

To answer your immediate question, containers that run databases generally store their data in volumes; they are set up so that the data is stored in an anonymous volume even if there was no docker run -v option given to explicitly store data in a named volume or host directory. That means that docker commit never persists the data in a database, and you need some other mechanism to copy the actual data around.
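You can see this for yourself (the container name here is an assumption): Docker will list the anonymous volumes it created for a container even though you never passed -v.

```shell
# Show the mounts attached to a container; the wordpress and mariadb
# images both declare VOLUMEs, so entries appear even without -v flags.
docker inspect --format '{{ json .Mounts }}' my_wordpress

# List all volumes known to the daemon, including anonymous ones.
docker volume ls
```

Any path that appears in .Mounts is excluded from docker commit.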

At a more practical level, your colleague can ask questions like "where did this 400 MB tarball come from, why should I trust it, and how can I recreate it if it gets damaged in transit?" There are also good questions like "the underlying database has a security fix I need, so how do I get the changes I made on top of a newer base image?" If you're diligent you can write down everything you do in a text file. If you then have a text file that says "I started from mysql:5.6, then I ran ..." that's very close to being a Dockerfile. The syntax is straightforward, and Docker has a good tutorial on building and running custom images.

When you need a custom image, you should always describe what goes into it using a Dockerfile, which can be checked into source control, and can rebuild an image using docker build.
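As a sketch, such a Dockerfile might look like this (the assumption here is that you have pre-downloaded your themes and plugins into local folders; the /usr/src/wordpress layout follows the official wordpress image, which copies its sources into /var/www/html on first start):

```dockerfile
# Hypothetical example: bake themes and plugins into a custom image
# instead of committing a running container.
FROM wordpress:php7.2-apache
COPY themes/  /usr/src/wordpress/wp-content/themes/
COPY plugins/ /usr/src/wordpress/wp-content/plugins/
```

This file can live in source control, and anyone can rebuild the exact image with docker build.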

For your use case it doesn't sound like you actually need a custom image. I would probably suggest setting up a Docker Compose YAML file that describes your setup and stores the data in local directories. The database half of it might look like:

version: '3'
services:
  db:
    image: 'mysql:8.0'
    volumes:
      - './mysql:/var/lib/mysql'
    ports:
      - '3306:3306'

The data will be stored on the host, in a mysql subdirectory. Now you can tar up this directory tree and send the tar file to your colleague, who can then untar it and recreate the same environment with its associated data.
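Packaging that directory for transfer can be as simple as this (the service name db and the mysql/ path match the compose snippet above; everything else is an assumption):

```shell
# Stop the database first so the files on disk are consistent,
# then archive the bind-mounted data directory.
docker-compose stop db
tar czf mysql-data.tar.gz mysql/

# On the colleague's machine, inside the same compose project directory:
tar xzf mysql-data.tar.gz
docker-compose up -d
```

Since the compose file and the data archive together describe the whole environment, there is nothing image-specific left to hand-carry.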

David Maze
  • Your reply sounds promising but I couldn't get the process. I have looked into Dockerfile usage and it is more like creating images using existing images on Docker Hub. I couldn't find any article that actually describes my situation; all online help points to using commit. I have a feeling that I need to use a Dockerfile but it might take me some time to understand its usage. I never thought Docker would be this complicated; I thought the core purpose of Docker containers is to work and move. – smaqsood Jan 15 '19 at 08:12
  • Why does it exist if you should never do it? – Peter Kionga-Kamau Apr 12 '22 at 16:22

Use docker build (changes to the image should be captured in the Dockerfile).

Now if you have multiple services, just use Docker's brother, docker-compose. One extra step you have to do is create a docker-compose.yml (don't be afraid yet, my friend, it's nothing complicated). All you're doing in this file is listing out your services (along with defining where the Dockerfile for each image is; it could be in a subfolder per image). You can also define some other properties there if you'd like.
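A minimal docker-compose.yml along those lines might look like this (the service names and subfolder paths are assumptions):

```yaml
version: '3'
services:
  wordpress:
    build: ./wordpress   # directory containing that service's Dockerfile
    ports:
      - '8080:80'
  db:
    build: ./mariadb     # a second Dockerfile for the database image
```

Running docker-compose build then rebuilds both images from their Dockerfiles, and docker-compose up starts the whole stack.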

Meeko
  • I understand docker-compose.yml, but my understanding is you use a docker-compose.yml file to create containers from images. I didn't know you could also use docker-compose to create images from containers (the reverse). – smaqsood Jan 15 '19 at 08:17
  • This answer is vague - how do you use docker-compose to save a running container with all its artifacts to an image that can be run on another machine? – Peter Kionga-Kamau Apr 09 '22 at 21:56

Notice that certain directories are considered volume directories by Docker, meaning that they are container-specific and therefore never saved in the image. The /data directory is such an example. When docker commit my_container my_image:my_tag is executed, all of the container's filesystem is saved, except for /data. To work around it, you could do:

# inside the container: copy the volume's contents to a non-volume path
mkdir /data0
cp -r /data/* /data0/

Then, outside the container:

docker commit my_container my_image:my_tag

Then you would perhaps want to copy the data from /data0 back to /data, in which case you could make a new image.

On the Dockerfile:

FROM my_image:my_tag
CMD ["sh", "-c", "cp -r /data0/* /data/ && my_other_CMD"]

Notice that trying to copy content to /data in a RUN command will not work, since a new container is created for every layer and, in each of them, the contents of /data are discarded. After the container has been instantiated, you could also do:

docker exec -d my_container /bin/bash -c "cp /data0/* /data"
Gabriel Fernandez

You have to use volumes to store your data. You can find the documentation here: https://docs.docker.com/storage/volumes/

For example, you can do something like this in your docker-compose.yml:

version: '3.1'

services:
  wordpress:
    image: wordpress:php7.2-apache
    ports:
      - "8080:80"
    environment:
      WORDPRESS_DB_HOST: databasename
      WORDPRESS_DB_USER: username
      WORDPRESS_DB_PASSWORD: password
      WORDPRESS_DB_NAME: namedatabase
    volumes:
      - name_volume:/var/www/html
volumes:
  name_volume:

or

    volumes:
      - ./yourpath:/var/www/html
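The bind-mount variant behaves as you'd hope for portability (the folder name and container name below are assumptions): files written into the mounted path survive the container itself.

```shell
# Bind-mount a host directory into the container; WordPress populates
# /var/www/html on first start, and the files land in ./wordpress.
mkdir -p ./wordpress
docker run -d --name wp \
    -v "$PWD/wordpress":/var/www/html \
    wordpress:php7.2-apache

# Remove the container entirely...
docker rm -f wp

# ...and the files are still on the host.
ls ./wordpress
```

That ./wordpress directory is what you would archive and send, alongside a database dump or the database's own data directory.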
Mirco
  • Provide some specifics, the answer should contain most of information to solve the question problem. – pomo_mondreganto Jan 15 '19 at 10:49
  • I hope I did it better now – Mirco Jan 15 '19 at 12:33
  • Please correct me if I'm wrong; after reading all the replies this is my conclusion. I read more on volumes and created a folder 'wordpress' on my Windows computer and added 'volumes: - ./wordpress:/var/www/html' to my docker-compose.yml file. Once WordPress is up and running I went to my wordpress folder and I see all the WordPress files in there. Basically this wordpress folder gives me access to the WordPress files in the container (correct me please). I can back up the WordPress files like this and use phpMyAdmin to back up the DB, but this is not the portability solution I was expecting from Docker. – smaqsood Jan 15 '19 at 20:29
  • When you use a Docker volume like ./folderA:/folderB, the files in folderA and folderB vary together while the container runs: if you change something inside folderA it will be changed in folderB too, and vice versa. I suggest you try it with an Apache server and a folder/volume, and put an index.html inside to get some practice with it. Try changing the index while the container is still running, or something like that. – Mirco Jan 16 '19 at 08:41
  • What happens when I kill the container? Will folderA remain on my PC with all the files? – smaqsood Jan 16 '19 at 23:21
  • Yep, it will remain with all files – Mirco Jan 17 '19 at 08:32
  • This is unclear. If you create that volume and your container runs and saves to that volume folder, then you commit, will it just work? (*spoiler: it will not*). What is the entire set of steps to get that volume data into an image that can be run on another machine? – Peter Kionga-Kamau Apr 09 '22 at 22:13