153

Yet another Docker symlink question. I have a bunch of files that I want to copy over to all my Docker builds. My dir structure is:

parent_dir
    - common_files
        - file.txt
    - dir1
        - Dockerfile  
        - symlink -> ../common_files

In the above example, I want file.txt to be copied over when I docker build inside dir1, but I don't want to maintain multiple copies of file.txt. Per this link, as of Docker version 0.10, docker build must

Follow symlinks inside container's root for ADD build instructions.

But I get "no such file or directory" when I build with either of these lines in my Dockerfile:

ADD symlink /path/dirname
ADD symlink/file.txt /path/file.txt

The mount option will NOT solve it for me (cross platform...). I tried tar -czh . | docker build -t without success.

Is there a way to make Docker follow the symlink and copy the common_files/file.txt into the built container?

Ravi

14 Answers

99

That is not possible and will not be implemented. Please have a look at the discussion on GitHub issue #1676:

We do not allow this because it's not repeatable. A symlink on your machine is not the same as on my machine, and the same Dockerfile would produce two different results. Also, having symlinks to /etc/passwd would cause issues because it would link the host files and not your local files.

Henrik Sachse
  • Thank you. Yes I noticed that link before but I thought that was for a much older version of docker (0.6.1). 0.10's change log sort of mentions this is possible https://github.com/docker/docker/blob/master/CHANGELOG.md#0100-2014-04-08 – Ravi Aug 07 '15 at 21:27
  • Also, if "parent_dir" is checked out on any computer and the symlink has a relative path to "common_files", it will be repeatable. – Ravi Aug 07 '15 at 21:27
  • Your quote `Follow symlinks inside container's root for ADD build instructions.` means that *inside the container* symlinks are followed, not in the build context directory. In `ADD file.txt /dir/file.txt` the directory `dir` could be a symlink. The arguments I quoted in my answer are still valid, and symlinks are still not followed in the latest version. You might run into problems (regarding repeatability) when you store symlinks in revision control systems like *git*. Therefore please refer to [this question](http://stackoverflow.com/questions/86402/how-can-i-get-git-to-follow-symlinks). – Henrik Sachse Aug 07 '15 at 21:37
  • I see your point regarding symlinks in git, but symlinks don't have to go into git: a simple setup script can prepare the local env by creating symlinks. To me, the cost of keeping n copies of a shared file is too high from a maintenance perspective. Maybe I'll have to serve it out of Apache. Thank you. – Ravi Aug 10 '15 at 14:28
  • What a shame; while I see the point, I don't follow the logic, and it bites me. Git handles symlinks just fine, and I also expect builds to work across all machines and environments where the source repo is checked out..?! – Gregor May 29 '17 at 08:31
  • It's "not repeatable"? What a joke -- it is repeatable, depending on the use case. E.g. if we are all pulling and building in some repo, it's completely legit to use symlinks within the repo when using `ADD`. – Josh M. May 04 '23 at 13:30
51

If anyone still has this issue, I found a very nice solution on superuser.com:

https://superuser.com/questions/842642/how-to-make-a-symlinked-folder-appear-as-a-normal-folder

It basically suggests using tar to dereference the symlinks and feed the result into docker build:

$ tar -czh . | docker build -
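The -h flag is what makes tar dereference (follow) the symlinks, and the trailing - tells docker build to read the build context from stdin. If you want to tag the image in the same step, a variant like this should work (the image name is a placeholder):

$ tar -ch . | docker build -t myimage -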
jdabrowski
  • I get 2 errors: `tar: Failed to clean up compressor`, `Error response from daemon: the Dockerfile (Dockerfile) cannot be empty` – Ivan Rubinson Dec 25 '20 at 11:52
  • @IvanRubinson seems like a problem with the tar command, not my solution, maybe lack of privileges? – jdabrowski Feb 19 '21 at 21:27
  • You can drop the `-z` option here (compression, since it is unneeded) and `tar` will run much faster sending the files into the docker build command. – datUser Sep 20 '22 at 14:02
16

One possibility is to run the build in the parent directory, with:

$ docker build [tags...] -f dir1/Dockerfile .

(Or equivalently, from the child directory:)

$ docker build [tags...] -f Dockerfile ..

The Dockerfile will have to be configured to do copy/add with appropriate paths. Depending on your setup, you might want a .dockerignore in the parent to leave out things you don't want to be put into the context.
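With the layout from the question, the build context is then parent_dir, so the Dockerfile paths are relative to it. A minimal sketch of dir1/Dockerfile under that assumption (the base image is a placeholder):

FROM alpine
COPY common_files/file.txt /path/file.txt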

shaunc
10

I know that it breaks portability of docker build, but you can use hard links instead of symbolic ones:

ln /some/file ./hardlink
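For the layout in the question, that could look like this, run from parent_dir (hard links require both paths to be on the same filesystem, as noted in the comments below):

ln common_files/file.txt dir1/file.txt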
Eugene
  • To be clear, this works for files, not directories. – GDorn Jun 30 '20 at 16:08
  • Hard links work! Thanks for the idea! However, if some files are hard links and some are not, be careful when updating them: updating a hard-linked file will change the file in the other directories! And hard-linked files won't show up in a different color in ls output the way symlinks do. So use it with care! – HAltos Apr 02 '21 at 08:00
  • I made a tool to help automate this: https://stackoverflow.com/a/68765508 – Venryx Aug 13 '21 at 00:26
  • Hard links won't work if the files are on a different hard disk (mount). – alanwilter Apr 19 '22 at 07:43
7

I just had to solve this issue in the same context. My solution is to use hierarchical Docker builds. In other words:

parent_dir
    - common_files
        - Dockerfile
        - file.txt
    - dir1
        - Dockerfile (FROM common_files:latest)

The disadvantage is that you have to remember to build common_files before dir1. The advantage is that if you have a number of dependent images, they are all a bit smaller due to sharing a common layer.
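A minimal sketch of that build order, run from parent_dir and assuming the common image is tagged common_files:latest as in the tree above (the dir1 tag is a placeholder):

docker build -t common_files:latest common_files
docker build -t dir1_image dir1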

Chris Smith
3

I got frustrated enough that I made a small NodeJS utility to help with this: file-syncer

Given the existing directory structure:

parent_dir
    - common_files
        - file.txt
    - my-app
        - Dockerfile
        - common_files -> symlink to ../common_files

Basic usage:

cd parent_dir

# starts live-sync of files under "common_files" to "my-app/HardLinked/common_files"
npx file-syncer --from common_files --to my-app/HardLinked

Then in your Dockerfile:

[regular commands here...]

# have docker copy/overlay the HardLinked folder's contents (common_files) into my-app itself
COPY HardLinked /

Q/A

  • How is this better than just copying parent_dir/common_files to parent_dir/my-app/common_files before Docker runs?

That would mean giving up the regular symlink, which would be a loss, since symlinks are helpful and work fine with most tools. For example, it would mean you can't see/edit the source files of common_files from the in-my-app copy, which has some drawbacks. (see below)

  • How is this better than copying parent_dir/common-files to parent_dir/my-app/common_files_Copy before Docker runs, then having Docker copy that over to parent_dir/my-app/common_files at build time?

There are two advantages:

  1. file-syncer does not "copy" the files in the regular sense. Rather, it creates hard links from the source folder's files. This means that if you edit the files under parent_dir/my-app/HardLinked/common_files, the files under parent_dir/common_files are instantly updated, and vice-versa, because they reference the same file/inode. (this can be helpful for debugging purposes and cross-project editing [especially if the folders you are syncing are symlinked node-modules that you're actively editing], and ensures that your version of the files is always in-sync/identical-to the source files)
  2. Because file-syncer only updates the hard-link files for the exact files that get changed, file-watcher tools like Tilt or Skaffold detect changes for the minimal set of files, which can mean faster live-update-push times than you'd get with a basic "copy whole folder on file change" tool.
  • How is this better than a regular file-sync tool like Syncthing?

Some of those tools may be usable, but most have issues of one kind or another. The most common one is that the tool either cannot produce hard-links of existing files, or it's unable to "push an update" for a file that is already hard-linked (since hard-linked files do not notify file-watchers of their changes automatically, if the edited-at and watched-at paths differ). Another is that many of these sync tools are not designed for instant response, and/or do not have run flags that make them easy to use in restricted build tools. (e.g. for Tilt, the --async flag of file-syncer enables it to be used in a local(...) invocation in the project's Tiltfile)

Venryx
3

One tool that can "link" a directory in a way that is accepted by docker is docker itself.

It is possible to run a temporary docker container with all necessary files/directories mounted in adequate paths, and build the image from within that container. For example:

docker run -it \
    --rm \
    -v /var/run/docker.sock:/var/run/docker.sock \
    --mount "type=bind,source=$ImageRoot/Dockerfile,destination=/Image/Dockerfile,readonly" \
    --mount "type=bind,source=$ImageRoot/.dockerignore,destination=/Image/.dockerignore,readonly" \
    --mount "type=bind,source=$ReposRoot/project1,destination=/Image/project1,readonly" \
    --mount "type=bind,source=$ReposRoot/project2,destination=/Image/project2,readonly" \
    --env DOCKER_BUILDKIT=1 \
    docker:latest \
        docker build "/Image" --tag "my_tag"

In the above example I assume the variables $ImageRoot and $ReposRoot are set. Mounting /var/run/docker.sock lets the docker CLI inside the container talk to the host's Docker daemon, so the image is actually built (and stored) on the host.

Noxitu
1

Instead of using symlinks, it is possible to solve the problem administratively by just moving files from sites_available to sites_enabled instead of copying them or making symlinks.

That way your site config exists in one copy only: in sites_available if it is disabled, or in sites_enabled if it should be used.
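As a sketch, assuming an nginx-style layout with the directory names used above (paths hypothetical):

mv /etc/nginx/sites_available/mysite.conf /etc/nginx/sites_enabled/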

Ilya Kolesnikov
0

Use a small wrapper script to copy the needed dir to the Dockerfile's location:

  • build.sh:

#!/bin/bash
# remove any stale copy of the shared directory
[ -e bin ] && rm -rf bin
# copy it next to the Dockerfile so it lands inside the build context
cp -r ../../bin .
docker build -t "sometag" .
user5359531
0

Commonly I isolate build instructions into a subfolder, so the application and logic levels are located higher up:

.
├── app
│   ├── package.json
│   ├── modules
│   └── logic
├── deploy
│   ├── back
│   │   ├── nginx
│   │   │   └── Chart.yaml
│   │   ├── Containerfile
│   │   ├── skaffold.yaml
│   │   └── .applift -> ../../app
│   ├── front
│   │   ├── Containerfile
│   │   ├── skaffold.yaml
│   │   └── .applift -> ../../app
│   └── skaffold.yaml
└── .......

I use the name ".applift" for those symbolic links: .applift -> ../../app

Now I can follow the symlink via realpath without caring about path depth:

dir/deploy/front$ docker build -f Containerfile --tag="build" `realpath .applift`

or, packed into a function:

dir/deploy$ docker_build () { docker build -f "$1"/Containerfile --tag="$2" `realpath "$1/.applift"`; }
dir/deploy$ docker_build ./back "front_builder"

so that

COPY app/logic/ ./app/

in the Containerfile will work.

Yes, in this case you lose the context for other layers. But generally there are no other context files located in the build directory anyway.

Dek4nice
0

I had a situation where parent_dir contained common libraries in common_files/ and a common docker/Dockerfile. dir1/ contained the contents of a different code repository, but I wanted that repository to have access to those parent code repository folders. I solved it without using symlinks as follows:

parent_dir
    - common_files
        - file.txt
    - docker
        - Dockerfile
    - dir1
        - docker-compose.yml --> ../common_files
                             --> ../docker/Dockerfile

So I created a docker-compose.yml file in which I specified where the files were located, relative to where the docker-compose.yml would be executed from. I also tried to minimise changes to the Dockerfile, since it would be used by both repositories, so I provided a DIR argument to specify the subdirectory to run in:

version: "3.8"

services:
  dev:
    container_name: template
    build:
      context: "../"
      dockerfile: ./docker/Dockerfile
      args:
        - DIR=${DIR}
    volumes:
      - ./dir1:/app
      - ./common_files:/common_files

I ran the following from within the dir1/ folder and it ran successfully:

export DIR=./dir1 && docker compose -f docker-compose.yml build

This is the original Dockerfile:

...
WORKDIR /app
COPY . /app

RUN my_executable
...

And this is a snippet with changes I made to the Dockerfile:

...
ARG DIR=${DIR}
WORKDIR /app
COPY . /app
RUN cd ${DIR} && my_executable && cd /app
...

This worked, and the parent repository could still run the Dockerfile with the same outcome even though I had introduced the DIR argument: if the parent repository called it, DIR would be an empty string and the build would behave as it did before.

Luke Schoen
0

What about packaging common/shared files into a standalone container image, let's call it common, and then using COPY --from=common fileA . wherever you wish to use that file?

Example:

common
  fileA
  fileB
  Dockerfile
app1
  Dockerfile
app2
  Dockerfile

common/Dockerfile

FROM scratch
WORKDIR /common
ADD . .

app1/Dockerfile

FROM alpine
COPY --from=common /common/fileA .
COPY --from=common /common/fileB .

app2/Dockerfile

FROM alpine
COPY --from=common /common/fileB .

In this way you may package lots of shared files and pick the ones you need in your apps. You may even version the common container.

And, of course, common needs to be built first whenever files change in it.
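A minimal sketch of that build sequence, run from the top-level directory shown above (the app tags are placeholders; the common tag must match the COPY --from=common references):

docker build -t common common
docker build -t app1 app1
docker build -t app2 app2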

remus
0

I just worked on building my node.js web server with certbot SSL on podman. Sure enough, I got exactly the "no such file or directory" error when accessing /etc/letsencrypt/live/my_domain/privkey.pem.

During my research, I found out that those files are actually symlinks into /etc/letsencrypt/archive/my_domain/, but I had only copied the symlinks into the container. So the solution was simply to make a volume projection directly from the host's /etc/letsencrypt to the container's /etc/letsencrypt. As a result, my node server can successfully access the actual source files through the symlinks.
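A minimal sketch of such a projection, with the image name as a placeholder and other flags omitted:

podman run -v /etc/letsencrypt:/etc/letsencrypt:ro my_node_image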

TL;DR: You cannot expect docker/podman to copy the content behind symlinks from the host into the container. Therefore, just make sure the source files are also projected or copied into the container. It is fine to use symlinks within the container, since the sources then exist there too.

Best regards, Edwin Lu

0

Assuming git is in use, git submodules can effectively mirror a common directory at multiple locations within a repo. Docker seems to handle this fine.
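A minimal sketch of setting that up, with the repository URL and target path as placeholders:

git submodule add https://example.com/common-files.git dir1/common_files
# collaborators initialize it after cloning:
git submodule update --init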

Downsides

  • Extra chores. For example, you have to remember to update every submodule reference when the common files change.
  • Submodules are directory-oriented, so the containerized system has to be okay with the common files being alone in their own directory.
  • The common files have to live in a separate repo.

I made a demo involving 3 containerized Go apps sharing a common package in this way: https://github.com/jakewan/test-monorepo-go

Jacob Wan