
I have a Docker container whose volume is bind-mounted from a host directory. Here is its docker-compose service:

  core:
    image: index.docker.io/kaushal/demo_img
    volumes:
      - ./data/custom:/opt/custom:z

Here ./data/custom is a host directory and it's mounted to the /opt/custom directory inside the container. Now, to achieve high availability, I want to run multiple replicas of this container on different nodes using Docker Swarm.

When I start the swarm service, it always shows 0/2 replicas. The reason is explained here: https://stackoverflow.com/a/56707801/5353128 (tl;dr: the ./data/custom directory doesn't exist on the other swarm node).

This seems to be a common problem with Docker Swarm, but I couldn't find a straightforward solution for it. Some SO posts suggest using shared volumes, but it's not clear how to implement such a shared volume for such a simple use case.

Also, is there any alternative to shared volumes? Any reference would be appreciated. Thanks!

Kaushal28
  • On the linked question I provided a list of commands to do this with NFS and an explanation of the reason for the behavior. Are you asking for HA without external volumes? Where do you want your data to be when the node goes down? – BMitch Mar 29 '21 at 13:55

1 Answer


Docker only mounts volumes; it has no file-sharing mechanism of its own built in. The volume type you have specified is a bind mount, which simply mounts a directory on the host where the container is running to a location inside that container. See https://unix.stackexchange.com/questions/198590/what-is-a-bind-mount for what a bind mount is in the generic Linux/Unix sense, and https://docs.docker.com/storage/bind-mounts/ for the Docker-specific variety.

Another type of mount is NFS. https://stackoverflow.com/a/44825756/4930423 is an answer that covers that option fairly well.
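As a sketch of that approach: you can define a named volume backed by Docker's local driver with NFS options, so every node mounts the same export when its task starts. The server address 192.168.1.100 and the export path /srv/custom below are placeholders for your own NFS server:

    services:
      core:
        image: index.docker.io/kaushal/demo_img
        volumes:
          - custom:/opt/custom

    volumes:
      custom:
        driver: local
        driver_opts:
          type: nfs
          o: "addr=192.168.1.100,rw,nfsvers=4"
          device: ":/srv/custom"

Because the volume definition travels with the stack file, each swarm node creates the volume locally and mounts the same remote export, so the 0/2 replicas problem goes away as long as every node can reach the NFS server.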

You could introduce other filesystem types as well; CIFS/Samba is another possibility.
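A CIFS-backed volume looks much the same; here the server name (fileserver), share name (custom), and credentials are all placeholders you would replace with your own:

    volumes:
      custom:
        driver: local
        driver_opts:
          type: cifs
          device: "//fileserver/custom"
          o: "addr=fileserver,username=demo,password=secret,vers=3.0"

In practice you would keep the credentials out of the stack file (e.g. in a credentials file referenced by the mount options) rather than inlining them like this.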

Another approach that folks use is to have each node mount a shared NFS/CIFS/other directory via the host's /etc/fstab, and then use an ordinary Docker bind mount from there. Any file-sharing technology that meets the requirements of your workload will work with this setup; make sure the I/O throughput is sufficient and that the workload doesn't have any fundamental problems with the inevitably increased latency of a remote-access filesystem.
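A minimal sketch of the fstab variant, assuming a hypothetical NFS server named fileserver exporting /srv/custom; this line would go in /etc/fstab on every swarm node:

    # /etc/fstab on each swarm node (server and export path are placeholders)
    fileserver:/srv/custom  /data/custom  nfs  defaults,_netdev  0  0

With that in place, your original compose file works on every node, with one adjustment: use the absolute path /data/custom instead of the relative ./data/custom, since relative paths resolve against the compose file's location rather than the fstab mount point.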

programmerq