
I am trying to use a folder named tmp as a volume in a Docker container. To do this, I am using the following docker-compose.yml file:

version: "3"

services:
  master:
    image: singularities/spark
    command: start-spark master
    hostname: master
    ports:
      - "6066:6066"
      - "7070:7070"
      - "8080:8080"
      - "50070:50070"
      - "7077:7077"
    volumes:
      - "../data:/tmp/"
    deploy:
      placement:
        constraints:
          - node.role == manager
  worker:
    image: singularities/spark
    command: start-spark worker master
    environment:
      SPARK_WORKER_CORES: 1
      SPARK_WORKER_MEMORY: 4g
    links:
      - master
    volumes:
      - "../data:/tmp/"

The tmp folder exists in the singularities/spark image. After I run the following command, the folders and files under the tmp folder are deleted:

docker-compose up -d
ugur
    Possible duplicate of [Docker mount to folder overriding content](https://stackoverflow.com/questions/47664107/docker-mount-to-folder-overriding-content) – David Maze Aug 15 '18 at 11:01

3 Answers


The clue is in the name. The /tmp folder gets cleared at boot time (i.e. at container startup). You'll have to use a different folder name if you want persistent data.
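For instance, mounting the host directory at a target other than /tmp would keep the data out of the way (the /data path below is just an illustrative choice, not something the singularities/spark image requires):

    volumes:
      - "../data:/data/"

The container then sees the host's ../data under /data, and whatever the image ships in /tmp is left alone.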

PaulNUK

When you do a docker-compose up -d, Docker mounts your ../data host directory over /tmp while creating the containers. This hides the original contents of /tmp in the image/container and replaces them with whatever you have inside ../data on the host machine.

You might have to choose some container path other than /tmp to preserve the data created by the singularities/spark image.

EDIT 1

The docker cp command can help you copy files from/to the host and container.

You want to copy from /tmp of the container to the host, and then from the host back into tmp (I'm not sure why you want to do this; it's not recommended and is an extremely rare scenario).

However, you can use docker run with a named volume or a host bind mount to start a container and get at the data, followed by a docker cp to copy data from or to the host or container.
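As a sketch of the docker cp round trip (the container name spark_master_1 is an assumed example here; check the actual name with docker ps):

    # copy /tmp out of the running container to the host
    docker cp spark_master_1:/tmp ./tmp-backup
    # copy the contents back from the host into the container's /tmp
    docker cp ./tmp-backup/. spark_master_1:/tmp

Note that docker cp is a one-off copy, not a live synchronization; changes made after the copy are not propagated.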

vivekyad4v
  • Is there any way to synchronize data between the two sides, local and container? – ugur Aug 15 '18 at 10:50
  • Which directory? You can mount it anywhere, make it `../data:/host_data/`. Now you will see your host data in `/host_data` of your container. – vivekyad4v Aug 15 '18 at 10:53
  • An existing directory, like the tmp I declared above. I want to copy the files in tmp to local, and copy the files in local to tmp. – ugur Aug 15 '18 at 10:56
  • AFAIU, In that case, you might have to create a Dockerfile with base image as `singularities/spark` for both master & worker . Put everything in the container to some directory by using Dockerfile & do a `cp` or `mv` by using entrypoint scripts to be on the safer side. This question isn't really clarifying the expected behaviour, you can try that out by creating Dockerfile & post your concerns in a different question. – vivekyad4v Aug 15 '18 at 11:05

This only works with single files, for example:

    volumes:
      - "../data/config.properties:/tmp/config.properties"

Moritz Vogt