
When committing a running container with `docker commit`, does this create a consistent snapshot of the filesystem?

I'm considering this approach for backing up containers. You would just have to `docker commit <container> <container>:<date>` and push it to a local registry.
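
Roughly, the workflow I have in mind looks like this (the container name and the registry address localhost:5000 are just placeholders):

```
# Commit the running container's filesystem as a new image layer,
# tag the result for a local registry and push it there
docker commit mycontainer mycontainer:2014-06-16
docker tag mycontainer:2014-06-16 localhost:5000/mycontainer:2014-06-16
docker push localhost:5000/mycontainer:2014-06-16
```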

The backup would be incremental, as the commit would just create a new layer.

Also, would a large number of layers drastically hurt the I/O performance of the container? Is there a way to remove intermediate layers at a later point in time?

Edit

By consistent I mean that every application that is designed to survive a power loss should be able to recover from such a snapshot. Basically, this means that no file may change after the snapshot is started.

Meanwhile I have found out that Docker now supports multiple storage drivers (aufs, devicemapper, btrfs). Unfortunately, there is hardly any documentation about the differences between them and the options they support.
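
At least `docker info` shows which driver a given daemon is running with; the driver itself is chosen when the daemon starts (e.g. via its --storage-driver option):

```
# Show which storage driver this daemon is using
docker info | grep -i 'storage driver'
```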

Florian Gutmann
    if your docker image has volumes, be aware that `docker commit` won't ever commit the files within those mounted volumes – Thomasleveil Sep 27 '14 at 01:13
  • Possible duplicate of [Is it "safe" to commit a running container in docker?](http://stackoverflow.com/questions/27288070/is-it-safe-to-commit-a-running-container-in-docker) – techraf Apr 07 '16 at 07:00

2 Answers


I guess consistency is what you define it to be.

In terms of flattening and the downsides of stacking too many AUFS layers, see: https://github.com/dotcloud/docker/issues/332

`docker flatten` is linked there.
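
If you don't want to wait for a proper `docker flatten`, a workaround that is often suggested is to export and re-import the container, which leaves you with a single flat layer (the names below are placeholders; note that export/import drops image metadata such as CMD, ENV and exposed ports):

```
# Export the container's filesystem and re-import it as a fresh,
# single-layer image; image metadata is not preserved
docker export mycontainer | docker import - myimage:flat
```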

till
  • Thanks for the hint to the flattening issue. But my main question remains unanswered. I edited the question to better reflect what I understand by consistency. – Florian Gutmann Jun 16 '14 at 10:59

I am in a similar situation. I am thinking about not using a dedicated data volume container and instead committing regularly to get some kind of incremental backup. Besides the incremental backup, the big benefit is for a team development approach: as a newcomer you can simply `docker pull` a database image that already contains all the data you need to run, debug and develop.

So what I do right now is pause the container before committing:

```
docker pause happy_feynman; docker commit happy_feynman odev:`date +%s`
```

As far as I can tell, I have no problems right now. But this is a development machine, so I have no experience with heavily loaded servers.
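
Spelled out a bit more completely (the container name, image name and registry address are placeholders for whatever you use), the idea is to pause, commit, resume and optionally push the snapshot:

```
#!/bin/sh
NAME=happy_feynman
TAG=odev:$(date +%s)

# Pause the container so no files change while the commit runs,
# snapshot its filesystem as a timestamped image, then let it continue
docker pause "$NAME"
docker commit "$NAME" "$TAG"
docker unpause "$NAME"

# Optionally push the snapshot to a local registry as an off-host backup
docker tag "$TAG" "localhost:5000/$TAG"
docker push "localhost:5000/$TAG"
```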

KIC