
How do you run Docker in production with an active/active or active/standby HA setup? Are there any guides or best practices?

I am thinking of three scenarios:

1) NFS - two servers, both prepped with docker-machine, mounting a shared NFS export at /var/lib/docker/ - so both Docker nodes see the same files (using some sort of filer, like VNX, EFS, and so on).

2) Using DRBD to replicate a disk and mount it at /var/lib/docker/ - so the data is on both nodes; the active node mounts it and runs the containers, and on failover the other node mounts the disk and starts the containers.

3) Using DRBD as above, but exporting it via an NFS server and mounting the NFS share on both nodes at /var/lib/docker/ - so, as above, both nodes can mount it and run containers, with Heartbeat/Pacemaker moving the virtual IP and handling the DRBD primary switch.
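
For scenario 2/3, the Pacemaker side would look roughly like the sketch below. This is only an illustration, not a tested config: the DRBD resource name (`docker`), device, IP, and filesystem are all assumptions, and a real setup needs fencing/STONITH configured as well.

```shell
# DRBD master/slave resource (DRBD resource name "docker" is assumed)
pcs resource create drbd_docker ocf:linbit:drbd drbd_resource=docker \
    op monitor interval=30s
pcs resource master drbd_docker_ms drbd_docker \
    master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true

# Filesystem mounted at /var/lib/docker only on the DRBD primary
pcs resource create docker_fs ocf:heartbeat:Filesystem \
    device=/dev/drbd0 directory=/var/lib/docker fstype=ext4

# Floating IP that follows the active node (example address)
pcs resource create docker_vip ocf:heartbeat:IPaddr2 \
    ip=192.0.2.10 cidr_netmask=24 op monitor interval=30s

# Keep the filesystem (and VIP) on the DRBD primary, in the right order
pcs constraint colocation add docker_fs with master drbd_docker_ms INFINITY
pcs constraint order promote drbd_docker_ms then start docker_fs
pcs constraint colocation add docker_vip with docker_fs INFINITY
pcs constraint order docker_fs then docker_vip
```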

What is the best practice for running Docker containers in production so that they are highly available?

regards

Helmut Januschka
  • Interesting question! (Found it through Google.) A pity that there are no answers or comments. Have you tried Server Fault? – MariusSiuram Jan 15 '16 at 08:20
  • Haven't tried Server Fault yet. Right now I have a few standalone Docker machines with a load balancer in front - and the containers access their non-container data via shared NFS (from a filer) - but this is not 100% satisfying. – Helmut Januschka Jan 15 '16 at 09:26
  • Red Hat is pushing GlusterFS for this purpose; I have also googled up a Docker Engine plugin solution based on LINBIT DRBD... I have no personal experience, but it's surely of paramount importance to ensure Docker data volume replication in PROD. – Pierluigi Vernetto Feb 22 '18 at 16:49

1 Answer


Persistent storage is still somewhat the elephant in the room in the container/docker world.

I wouldn't recommend any of the approaches you're suggesting. The only exception would be putting some particular data onto a shared volume (using a volume mount), but never sharing the entire /var/lib/docker.
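
As a sketch of that exception: share only the application's data directory, and leave Docker's own state local to each host. The server name, export path, and image below are made up for illustration:

```shell
# Mount the filer's NFS export once per host (example server/path)
mount -t nfs filer.example.com:/export/appdata /mnt/appdata

# Bind-mount just the app data into the container;
# /var/lib/docker stays on local disk on every node
docker run -d --name web -v /mnt/appdata/uploads:/srv/uploads nginx
```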

There are lots of things going on in the container space, and there are volume plugins that integrate directly into Docker. One of the volume plugins/solutions gaining the most momentum is Flocker, which is worth looking into.
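
With the Flocker plugin installed, a replicated volume is requested through Docker's volume-driver mechanism, roughly like this (the volume name and image are illustrative; this assumes a working Flocker cluster):

```shell
# Ask the flocker volume driver to provision/attach the volume "pgdata"
docker run -d --name db \
    --volume-driver flocker \
    -v pgdata:/var/lib/postgresql/data \
    postgres
```

If the container is rescheduled to another node, the volume driver moves the dataset along with it, rather than relying on a shared /var/lib/docker.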

Once you've moved your data out of your containers, setting up an HA system becomes a lot easier, as the containers become more or less ephemeral.

You can then use something like Kubernetes, Docker Swarm, or Docker Datacenter to manage/monitor these containers.
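
For example, with Docker's swarm mode, running a stateless service replicated across nodes is a one-liner (service name and image are illustrative):

```shell
# On the first manager node
docker swarm init

# Run three replicas of the service; Swarm reschedules them on node failure
docker service create --name web --replicas 3 --publish 80:80 nginx
```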

vpetersson
  • Actually, I ended up mounting changing/dynamic data as volumes into the containers, backed by a shared NFS. – Helmut Januschka Feb 22 '18 at 22:21
  • Use **volumes** to manage stateful services/data and **containers** for stateless services, then orchestrate the containers with K8s / Docker Swarm / Apache Mesos + Marathon... Probably the more mature practice. – Light.G Sep 05 '18 at 12:54