I have a multi-process application that runs across multiple hosts. We want to run this app inside Docker containers. We are not using Docker Swarm to launch processes in containers on remote hosts (due to some SSH issues with Docker); instead, we launch a Docker container ourselves before launching each remote process. All these processes share a Docker image that takes more than 100 GB of disk space.
The problem is that we do not want this Docker image to consume hundreds of GB on every host where containers are started. Is there a way to move the Docker data directory from a local path to an NFS path without running into problems? I know Docker can be configured to use a different path, but the Docker manual says:
--data-root is the path where persisted data such as images, volumes, and cluster state are stored. The default value is /var/lib/docker. To avoid any conflict with other daemons, set this parameter separately for each daemon.
https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-configuration-file
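To clarify what I mean, I understand the change itself would look something like the sketch below, with a per-host subdirectory under the NFS mount to respect the "set this parameter separately for each daemon" warning (the mount point /mnt/nfs and the subdirectory name are just placeholders for our real paths):

    # /etc/docker/daemon.json on each host
    {
        "data-root": "/mnt/nfs/docker-host1"
    }

followed by a daemon restart (e.g. sudo systemctl restart docker). But this still stores a separate copy of the image per host, and I am not sure whether pointing data-root at NFS is safe at all with the storage drivers Docker uses.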
Also, is there a way to load an image (from some NFS path) into the Docker daemon running on a host at the moment the docker run command is issued (i.e., without pre-loading the image before launching the container on each host)?
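The closest workaround I can think of is a wrapper that loads the image from an NFS-hosted tarball on demand, just before docker run. A rough sketch (the image tag and NFS path are placeholders for our real setup):

    #!/bin/sh
    # Placeholder names; adjust for the real image and NFS mount.
    IMAGE="myapp:latest"
    TARBALL="/mnt/nfs/images/myapp.tar"

    # Load the image from NFS only if the local daemon does not already have it.
    if ! docker image inspect "$IMAGE" >/dev/null 2>&1; then
        docker load -i "$TARBALL"
    fi

    docker run --rm "$IMAGE" "$@"

But this still copies the full 100+ GB image into each host's local data-root, which is exactly what I want to avoid.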