832

How can I include files from outside of Docker's build context using the `ADD` command in the Dockerfile?

From the Docker documentation:

The path must be inside the context of the build; you cannot ADD ../something/something, because the first step of a docker build is to send the context directory (and subdirectories) to the docker daemon.

I do not want to restructure my whole project just to accommodate Docker in this matter. I want to keep all my Docker files in the same sub-directory.

Also, it appears Docker does not yet (and may not ever) support symlinks: Dockerfile ADD command does not follow symlinks on host #1676.

The only other thing I can think of is to include a pre-build step to copy the files into the Docker build context (and configure my version control to ignore those files). Is there a better workaround than that?

tshepang
ben_frankly
  • 285
    This has got to be the worst thing about Docker. From my point of view, there is no such thing as a "Docker project". Docker is for shipping projects. It's just a tool. I don't want to have to rebuild my whole project to accommodate Docker, adding .dockerignore etc. At the end of the day, who knows how long Docker will last? It would be great to have a separation between code (i.e. an Angular project) and whatever means are used to deploy it (i.e. Docker). After all, there really is no benefit to having a Dockerfile next to everything else. It's just wiring things up in order to create an image :( – TigerBear Jan 15 '18 at 20:55
  • 13
    Yeah, this is a big downer. I'm facing the same issue and I have a large binary file (already compressed) that I don't want to copy into each Docker build context. I'd rather source it from its current location (outside the Docker build context). And I don't want to map a volume at run time, because I'm trying to COPY/ADD the file at build-time and unzip and do what I need so certain binaries are baked into the image. This way, spinning up the containers is quick. – jersey bean Mar 22 '18 at 20:42
  • I found a good structure and I explain with details at https://stackoverflow.com/a/53298446/433814 – Marcello DeSales Nov 14 '18 at 11:02
  • 10
    the problem with docker builds is the made-up concept of "context". Dockerfiles are not sufficient to define a build, unless they are placed under a strategic directory (aka context), i.e. "/" as an extreme, so you can access any path (note that that's not the right thing to do in a sane project either..., plus it makes docker builds very slow because docker scans the entire context at start). You can consider building a docker image with all the required files, and using `FROM` to continue from there. I would not change the project structure to accommodate Docker (or any build tools). – Devis L. May 03 '19 at 20:23
  • 2
    In a newish feature: if you have Dockerfile 1.4+ and buildx 0.8+ you can do something like `docker buildx build --build-context othersource=../something/something .` See the answer below – Sami Wood Oct 24 '22 at 19:22

20 Answers

691

The best way to work around this is to specify the Dockerfile independently of the build context, using -f.

For instance, this command will give the ADD command access to anything in your current directory.

docker build -f docker-files/Dockerfile .

Update: Docker now allows having the Dockerfile outside the build context (fixed in 18.03.0-ce). So you can also do something like

docker build -f ../Dockerfile .
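A minimal sketch of that layout (directory and file names here are hypothetical): the Dockerfile lives in a subdirectory, but because the context passed to `docker build` is the project root, `ADD` can reach sibling directories:

```shell
# Hypothetical project: the Dockerfile sits in docker-files/, the file we
# want to ADD sits in assets/, and the build context is the project root.
mkdir -p myproject/docker-files myproject/assets
printf 'FROM alpine:3.19\nADD assets/data.txt /app/data.txt\n' \
  > myproject/docker-files/Dockerfile
printf 'hello\n' > myproject/assets/data.txt
# From inside myproject/ the whole tree is the context, so ADD works:
#   docker build -f docker-files/Dockerfile .
ls myproject/docker-files/Dockerfile myproject/assets/data.txt
```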
Cyberwiz
  • I held off on accepting an answer because I wasn't thrilled with the first two. However, this is a clean and straightforward solution. – ben_frankly Feb 05 '16 at 03:14
  • 1
    How can I add -f flag on the docker-compose.yml build section ? – Ro. Jun 16 '16 at 19:11
  • 13
    @Ro. you use the `dockerfile:` property in the `build:` section in the Compose file https://docs.docker.com/compose/compose-file/#/compose-file-reference – Emerson Farrugia Jul 09 '16 at 10:35
  • 3
    I get "The Dockerfile must be within the build context" - I'd really like to have one Dockerfile that can reside below the current build context. In your example you have the Dockerfile within/below the current build context, which of course, works. – Alexander Mills Feb 04 '17 at 09:38
  • 2
    @AlexanderMills I see your problem. I guess you have some reason for wanting the Dockerfile outside the build context? Is there something you don't want included in the build context? Perhaps .dockerignore (https://docs.docker.com/engine/reference/builder/#/dockerignore-file) can help. – Cyberwiz Feb 06 '17 at 09:32
  • 3
    Yeah I just want a shared Dockerfile which corresponds to multiple subdirectories, all of which are "build contexts" – Alexander Mills Feb 06 '17 at 17:30
  • 118
    Does this solve the OP's problem of wanting to `ADD` a file that is outside the context directory? That's what I'm trying to do but I don't think using `-f` makes external files addable. – Sridhar Sarnobat Sep 19 '17 at 02:25
  • 2
    This just does not work. `ADD failed: Forbidden path outside the build context: ../webservice/npmkeys ()` using `docker build -f files/Dockerfile` – loretoparisi Nov 16 '17 at 17:01
  • 5
    @loretoparisi The build context includes your current directory, not parent directories. Just go up one folder before you build and ADD webservice/npmkeys without the ”..” – Cyberwiz Nov 17 '17 at 18:59
  • 13
    This solution really isn't useful if you're trying to source the file from a completely different location outside the Docker build context. E.g. suppose your file is under /src/my_large_file.zip and your Docker build context is under /home/user1/mydocker_project. I don't want to copy the file over to the Docker build context because it's large, and I want to bake some of its contents into the image so that starting up containers isn't a slow process. – jersey bean Mar 22 '18 at 20:45
  • 43
    Can't upvote this enough.. in my docker-compose.yml I have: `build: context: .., dockerfile: dir/Dockerfile`. Now my build context is the parent directory! – Mike Gleason jr Couturier Jul 12 '18 at 12:49
  • 5
    The problem remains though, i.e. a Dockerfile is not sufficient when importing files outside the context. For instance, many IDEs and build tools just take the Dockerfile and execute it. A better solution would be having a `CONTEXT` key to point to a common parent folder, although Docker really struggles with context folders containing too many files (e.g. see https://github.com/docker/toolbox/issues/613). An even better solution would be adding independent/unrelated directories to the context, all from within Dockerfile. – Devis L. May 03 '19 at 20:11
  • 13
    I'm running this from a directory with a lot of files and the result is that I'm looking at a message that says `sending build context to Docker daemon` and it appears to copy gigabytes of data. – oarfish Aug 14 '19 at 09:40
  • 3
    @oarfish Do you really need to run it from that directory then? You can use a .dockerignore file to specify files/subdirectories that you dont need. – Cyberwiz Aug 15 '19 at 10:15
  • 2
    @oarfish I think Docker copies everything inside the build context and sends it to a build daemon — also files not needed for the build, see: https://github.com/moby/moby/issues/2745#issuecomment-35318051 *"... a very long and resource intensive Uploading context" stage, which appears to (quite needlessly) copy all of the project directories to a temporary location"*, and the reply: *"the short answer is that the docker client does not parse the Dockerfile. It tgz's the context (current dir and all subdirs) up, passed it all to the server"* – KajMagnus May 19 '20 at 05:16
  • @Cyberwiz The answer is no - you can run the command from anywhere, like `docker build -f dockerfile_dir/Dockerfile context_directory/`. That is an important point although it seems trivial once you notice. – zhouji Jul 30 '20 at 15:26
  • @MikeGleasonjrCouturier Your hint worked! It's nice to be able to run `docker-compose build` instead of `docker build`. Thanks! – Niloct Oct 07 '20 at 20:15
  • 2
    Just a very basic thing which might help other beginners: the context of the Dockerfile is in which directory `docker build ...` is started, not where the Dockerfile is saved, when using this solution. You can also check this by using `RUN dir -a ./` to see where the context is. – questionto42 Feb 18 '21 at 20:39
  • Looks like Docker does not allow Dockerfile outside of build context, from docs: "By default the docker build command will look for a Dockerfile at the root of the build context. The -f, --file, option lets you specify the path to an alternative file to use instead. This is useful in cases where the same set of files are used for multiple builds. **The path must be to a file within the build context**." – segue_segway Aug 22 '21 at 19:33
  • Weirdly, elsewhere in the docs: "Traditionally, the Dockerfile is called Dockerfile and located in the root of the context. You use the -f flag with docker build to point to a Dockerfile anywhere in your file system." Not sure which is accurate. – segue_segway Aug 22 '21 at 19:48
  • 2
    The path of the files mentioned in the Dockerfile must be within and relative to the context passed in the build command. So, it's actually not solving the problem of "including files outside of Docker's build context". – Swapan Pramanick Dec 14 '21 at 00:36
91

I spent a good time trying to figure out a good pattern and how to better explain what's going on with this feature support. I realized that the best way to explain it was as follows...

  • Dockerfile: Will only see files under its own relative path
  • Context: a place in "space" where the files you want to share and your Dockerfile will be copied to

So, with that said, here's an example of the Dockerfile that needs to reuse a file called start.sh

Dockerfile

It will always load from its relative path, having the current directory of itself as the local reference to the paths you specify.

COPY start.sh /runtime/start.sh

Files

Considering this idea, we can have multiple Dockerfiles building specific things, but they all need access to start.sh.

./all-services/
   /start.sh
   /service-X/Dockerfile
   /service-Y/Dockerfile
   /service-Z/Dockerfile
./docker-compose.yaml

Considering this structure and the files above, here's the docker-compose.yaml:

docker-compose.yaml

  • In this example, your shared context directory is the all-services directory.
    • Same mental model here: think of all the files under this directory as moved over to the so-called context.
    • Similarly, specify the Dockerfile that you want copied to that same directory. You can specify that using dockerfile.
  • The directory where your main content is located is the actual context to be set.

The docker-compose.yaml is as follows:

version: "3.3"
services:

  service-X:
    build:
      context: ./all-services
      dockerfile: ./service-X/Dockerfile

  service-Y:
    build:
      context: ./all-services
      dockerfile: ./service-Y/Dockerfile

  service-Z:
    build:
      context: ./all-services
      dockerfile: ./service-Z/Dockerfile
  • all-services is set as the context; the shared file start.sh is copied there, as well as the Dockerfile specified by each dockerfile entry.
  • Each service gets built its own way, sharing the start file!
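Without Compose, each of those entries is equivalent to a plain `docker build` whose context is the shared directory. A small sketch that recreates the layout and prints the equivalent commands (image tags are hypothetical):

```shell
# Recreate the layout above and print the equivalent plain-docker builds.
mkdir -p all-services/service-X all-services/service-Y all-services/service-Z
printf '#!/bin/sh\necho starting\n' > all-services/start.sh
for svc in service-X service-Y service-Z; do
  printf 'FROM alpine:3.19\nCOPY start.sh /runtime/start.sh\n' \
    > "all-services/$svc/Dockerfile"
  # context = all-services, so every Dockerfile can COPY the shared start.sh:
  echo "docker build -f all-services/$svc/Dockerfile -t $svc all-services"
done
```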
Pang
Marcello DeSales
  • 8
    Your point on Dockerfile is not completely true. As pointed out by the accepted answer, if you are in a folder hierarchy `a/b/c`, then yes, running `docker build .` in `c` won't allow you to access `../file-in-b`. But I think the general misunderstanding in this (or at least mine was) is that the context is defined by the location stated by the first argument of the build command, not by the location of the Dockerfile. So as stated in the accepted answer: from `a`: `docker build -f a/b/c/Dockerfile .` means that in the Dockerfile `.` is now the folder `a` – β.εηοιτ.βε Feb 21 '19 at 10:26
  • 4
    Quoting from the Dockerfile docs: paths of files and directories will be interpreted as relative to the source of the context of the build. – Nishant George Agrwal Dec 18 '19 at 08:59
  • sincere thank you for carefully documenting this, really helpful.. – Robert Sinclair Feb 12 '21 at 05:47
  • 1
    @RobertSinclair, not a problem buddy! This helps me a lot during dev... I'm glad it helped!!! – Marcello DeSales Feb 12 '21 at 06:13
  • This should be the selected solution for this issue, I never used context in docker build but now I can't work without it! This is the most elegant and useful solution – Pini Cheyni Mar 07 '21 at 07:56
  • @PiniCheyni exactly how I feel :) – Marcello DeSales Mar 07 '21 at 17:09
  • What if I have some files in the `all-service` directory that need to go into every container, and also some other files in the `service-a`, `service-b`, etc. folders that need to go only into the respective container, and the `service-a`, `service-b`, etc. folders are outside the `all-service` directory? – tolache Jul 06 '23 at 16:16
84

I often find myself utilizing the --build-arg option for this purpose. For example after putting the following in the Dockerfile:

ARG SSH_KEY
RUN echo "$SSH_KEY" > /root/.ssh/id_rsa

You can just do:

docker build -t some-app --build-arg SSH_KEY="$(cat ~/file/outside/build/context/id_rsa)" .

But note the following warning from the Docker documentation:

Warning: It is not recommended to use build-time variables for passing secrets like github keys, user credentials etc. Build-time variable values are visible to any user of the image with the docker history command.
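If the file really is a secret, a safer alternative (assuming BuildKit is enabled; the `ssh_key` id and paths here are hypothetical) is a build secret, which is mounted only for the duration of a single RUN step and never written into an image layer:

```dockerfile
# syntax=docker/dockerfile:1
FROM alpine:3.19
# The secret is visible only while this RUN step executes; it does not
# appear in the image layers or in `docker history`.
RUN --mount=type=secret,id=ssh_key,target=/root/.ssh/id_rsa \
    ls -l /root/.ssh/id_rsa   # use the key here, e.g. for a git clone
```

Built with `docker build --secret id=ssh_key,src=$HOME/.ssh/id_rsa .`.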

sheldonh
aaron
  • 20
    This is poor advice without a huge warning. From the Docker documentation: "Warning: It is not recommended to use build-time variables for passing secrets like github keys, user credentials etc. Build-time variable values are visible to any user of the image with the docker history command." [1] In other words, the example given in this example discloses the private SSH key in the docker image. In some contexts, that might be fine. https://docs.docker.com/engine/reference/builder/#arg – sheldonh Jan 16 '19 at 16:26
  • 5
    Finally, to overcome this security issue, you could use techniques like squashing or multistage-builds: https://vsupalov.com/build-docker-image-clone-private-repo-ssh-key/ – Jojo Aug 15 '19 at 13:35
59

On Linux you can bind-mount other directories instead of symlinking them:

mount --bind olddir newdir

See https://superuser.com/questions/842642 for more details.

I don't know if something similar is available for other OSes. I also tried using Samba to share a folder and remount it into the Docker context which worked as well.

Günter Zöchbauer
  • 7
    Only root can bind directories – jjcf89 Jun 13 '19 at 15:04
  • Users who can access docker have some sort of root access anyway, since arbitrary docker commands can be used to break the chroot jail (or just mount the required files into the container) – SOFe Mar 16 '21 at 12:21
23

If you read the discussion in issue 2745, Docker may never support symlinks, and it may never support adding files outside your context either. The design philosophy seems to be that files that go into a docker build should explicitly be part of its context, or come from a URL where they are presumably deployed with a fixed version, so that the build is repeatable with well-known URLs or files shipped with the docker container.

I prefer to build from a version-controlled source - i.e. docker build -t stuff http://my.git.org/repo - otherwise I'm building from some random place with random files.

fundamentally, no.... -- SvenDowideit, Docker Inc

Just my opinion, but I think you should restructure to separate out the code and Docker repositories. That way the containers can be generic and pull in any version of the code at run time rather than build time.

Alternatively, use Docker as your fundamental code deployment artifact and put the Dockerfile in the root of the code repository. If you go this route, it probably makes sense to have a parent Docker container for more general system-level details and a child container for setup specific to your code.

Usman Ismail
23

I believe the simpler workaround would be to change the 'context' itself.

So, for example, instead of giving:

docker build -t hello-demo-app .

which sets the current directory as the context, and you instead wanted the parent directory as the context, just use:

docker build -t hello-demo-app ..
Anshuman Manral
  • 10
    I think this breaks .dockerignore :-\ – NullVoxPopuli Mar 09 '18 at 23:25
  • 1
    I gave up on .dockerignore and instead made Makefile managed docker folder that contains only files needed for build context... I only need to call `make build` and it pulls in all files needed if they were updated and then it calls appropriate docker build... I need to do extra work, but it works flawlessly because I'm in full control. –  Nov 26 '19 at 09:33
  • `.dockerignore` must be in the root directory of the context. If you change the context from `.` to `..`, then you have to move the `.dockerignore` file up one directory. – cowlinator Jul 11 '23 at 01:19
13

You can also create a tarball of what the image needs first and use that as your context.

https://docs.docker.com/engine/reference/commandline/build/#/tarball-contexts
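For example (a sketch with hypothetical paths), you can assemble a tarball from two unrelated directories and pipe it in as the context:

```shell
# Build a context tarball from an app directory and a sibling shared/
# directory, then feed it to docker build on stdin.
mkdir -p app shared
printf 'FROM alpine:3.19\nCOPY shared/lib.sh /opt/lib.sh\n' > app/Dockerfile
printf 'echo hello\n' > shared/lib.sh
tar -czf context.tar.gz -C app Dockerfile -C .. shared
# The tarball root becomes the build context:
#   docker build -t myimage - < context.tar.gz
tar -tzf context.tar.gz
```

Note how `-C` lets tar pick files from anywhere, so the Dockerfile ends up at the tarball root next to `shared/`, even though they came from different directories on disk.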

Laurent Picquet
  • 4
    Great tip! I discovered you can even feed docker build the tarball as context on stdin: `tar zc /dir1 /dir2 |docker build -`. This was very helpful in my case. – Tore Olsen May 09 '20 at 10:54
  • Also it's possible to source from a local existing tar, see [this](https://stackoverflow.com/a/54267763/213871) answer – ceztko Feb 15 '22 at 20:24
11

This behavior is determined by the context directory that docker or podman uses to present the files to the build process.
A nice trick here is to change the context dir during the build instruction to the full path of the directory you want to expose to the daemon, e.g.:

docker build -t imageName:tag -f /path/to/the/Dockerfile /mysrc/path

Using /mysrc/path instead of . (the current directory), you'll be using that directory as the context, so any files under it can be seen by the build process.
This example exposes the entire /mysrc/path tree to the docker daemon.
When using this with docker, the user who triggered the build must have recursive read permission on every directory and file in the context dir.

This can be useful in cases where you have the /home/user/myCoolProject/Dockerfile but want to bring to this container build context, files that aren't in the same directory.

Here is an example of building with a context dir, but this time using podman instead of docker.

Let's take as an example a Dockerfile with a COPY or ADD instruction that copies files from a directory outside of your project, like:

FROM myImage:tag
...
...
COPY /opt/externalFile ./
ADD /home/user/AnotherProject/anotherExternalFile ./
...

To build this, with the container file located at /home/user/myCoolProject/Dockerfile, just do something like:

cd /home/user/myCoolProject
podman build -t imageName:tag -f Dockerfile /

A known use case for changing the context dir is using a container as a toolchain for building your source code,
e.g.:

podman build --platform linux/s390x -t myimage:mytag -f ./Dockerfile /tmp/mysrc

or with a relative path, like:

podman build --platform linux/s390x -t myimage:mytag -f ./Dockerfile ../../

Another example using this time a global path:

FROM myImage:tag
...
...
COPY externalFile ./
ADD  AnotherProject ./
...

Notice that now the full global path for COPY and ADD is omitted in the Dockerfile command layers.
In this case the context dir must be chosen relative to where the files are; if both externalFile and AnotherProject are in the /opt directory, then the context dir for building must be:

podman build -t imageName:tag -f ./Dockerfile /opt

Note when using COPY or ADD with a context dir in docker:
the docker daemon will try to "stream" all the files visible in the context dir tree to the daemon, which can slow down the build, and it requires the user to have recursive read permission on the context dir. This behavior can be especially costly when driving the build through the API. With podman, by contrast, the build starts immediately and doesn't require recursive permissions, because podman does not enumerate the entire context dir and doesn't use a client/server architecture.
For such cases it can be much more interesting to use podman instead of docker when you need a different context dir.


isca
  • 2
    This is dangerous and not advisable. The Docker build context will be your entire machine. For one, sending that entire context to the daemon will take forever. Second, the build process itself can do whatever it wants really. A malicious Dockerfile can connect to remote servers with full filesystem read access. Lastly, your Dockerfile instructions like `ADD` become closely coupled to your machine, requiring full aka absolute paths for everything. They will no longer be portable. – Alex Povel Feb 18 '22 at 14:15
    The point here is to explain the entrypoint and how it works, not to judge best practices. Keep in mind the best is to keep everything self-contained in the same project. However the question is how to achieve such behavior and demonstrate how the entrypoint works. It will not take forever since there's no enumeration in the daemon to make it happen. The context here is defined in the build by an ID with permission on it, not by a fixed path in the daemon, so a malicious Dockerfile doesn't make sense here. – isca Mar 03 '22 at 18:18
  • Did you test the snippets of your answer? As a regular user, assuming a Unix OS, you don't even have read permission for all of `/`. It will just error out with permission denied. Running the above as `root` could (?) fix it, but is a *terrible* idea. In any case, I CTRL+C-ed out of the build process I ran for testing after 3GB of `/` had been loaded into the daemon's build context. The above doesn't work for me at all! – Alex Povel Mar 04 '22 at 14:37
  • For sure, with both cases, and it works, is not a matter of standard, but is a matter of why the context dir exists. Here, I'm using `/` as example to illustrate the exposure of it. However I improved the answer to address your concerns here – isca Mar 07 '22 at 14:21
  • Nifty trick, PROVIDED you control the build command and docker files, else naughty devs could lift any file off your build machine :) – Illegal Operator Jun 28 '23 at 17:47
9

I think as of earlier this year a feature was added in buildx to do just this.

If you have Dockerfile 1.4+ and buildx 0.8+ you can do something like this:

docker buildx build --build-context othersource=../something/something .

Then in your Dockerfile you can use the --from flag to reference that named context:

COPY --from=othersource . /stuff

See this related post.
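Put together, a minimal sketch (the `othersource` name and the copied file are hypothetical): the named context acts as an extra root that COPY can pull from, independent of the main context:

```dockerfile
# syntax=docker/dockerfile:1.4
FROM alpine:3.19
# "othersource" resolves to the directory passed via --build-context,
# not to a path inside the main build context:
COPY --from=othersource config.yaml /app/config.yaml
```

Built with `docker buildx build --build-context othersource=../something/something .`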

user16217248
Sami Wood
  • does this work for directories only? I can't pass a file path with getting error `ERROR: failed to get build context path {/builds/my_file }: not a directory` – anatol Jun 14 '23 at 10:48
5

Using docker-compose, I accomplished this by creating a service that mounts the volumes that I need and committing the image of the container. Then, in the subsequent service, I rely on the previously committed image, which has all of the data stored at the mounted locations. You will then have to copy these files to their ultimate destination, as host-mounted directories do not get committed when running a docker commit command.

You don't have to use docker-compose to accomplish this, but it makes life a bit easier.

# docker-compose.yml

version: '3'
services:
  stage:
    image: alpine
    volumes:
      - /host/machine/path:/tmp/container/path
    command: sh -c "cp -r /tmp/container/path /final/container/path"
  setup:
    image: stage
# setup.sh

# Start "stage" service
docker-compose up stage

# Commit changes to an image named "stage"
docker commit $(docker-compose ps -q stage) stage

# Start setup service off of stage image
docker-compose up setup
Pang
Kris Rivera
5

As described in this GitHub issue, the build actually happens in /tmp/docker-12345, so a relative path like ../relative-add/some-file is relative to /tmp/docker-12345. It would thus search for /tmp/relative-add/some-file, which is also shown in the error message.

It is not allowed to include files from outside the build directory, so this results in the "Forbidden path" message.

Rocksn17
5

Create a wrapper docker build shell script that grabs the file, then calls docker build, then removes the file.

A simple solution not mentioned anywhere here from my quick skim:

  • have a wrapper script called docker_build.sh
  • have it create tarballs, copy large files to the current working directory
  • call docker build
  • clean up the tarballs, large files, etc

This solution is good because (1) it doesn't have the security hole of copying in your SSH private key, and (2) it doesn't require root permission the way the `mount --bind` solution does.
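The steps above can be sketched as a wrapper script (a sketch only; all paths and the image tag are hypothetical, and the actual `docker build` is skipped if docker isn't available):

```shell
#!/bin/sh
# Wrapper: copy the outside file into the context, build, then clean up.
set -e
mkdir -p /tmp/outside build-ctx
printf 'payload\n' > /tmp/outside/model.bin        # stands in for the large outside file
printf 'FROM alpine:3.19\nCOPY model.bin /opt/model.bin\n' > build-ctx/Dockerfile

cp /tmp/outside/model.bin build-ctx/model.bin      # 1. copy the file into the context
trap 'rm -f build-ctx/model.bin' EXIT              # 3. clean up when the script exits
# 2. build (skipped here when docker is unavailable):
command -v docker >/dev/null && docker build -t myapp build-ctx || true
echo "wrapper finished"
```

The copied file should also go into your version control's ignore list, as the question suggests.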

Trevor Boyd Smith
4

I was personally confused by some answers, so I decided to explain it simply.

When you want to create an image, you should pass docker the context that your Dockerfile assumes.

I always select the root of the project as the context in the Dockerfile.

So for example, if you use a COPY command like COPY . .

the first dot (.) is the context and the second dot (.) is the container working directory.

Assuming the context is the project root, dot (.), and the code structure is like this:

sample-project/
  docker/
    Dockerfile

If you want to build the image and your path (the path from which you run the docker build command) is /full-path/sample-project/, you should do this:

docker build -f docker/Dockerfile . 

and if your path is /full-path/sample-project/docker/, you should do this:

docker build -f Dockerfile ../ 
3

Workaround with links:

ln path/to/file/outside/context/file_to_copy ./file_to_copy

In the Dockerfile, simply:

COPY file_to_copy /path/to/file

fde-capu
  • 1
    I probably won't use this because it doesn't work with soft links, only hard links – Greg Jan 15 '21 at 23:09
  • unknown instruction: LN – Sh eldeeb Apr 20 '21 at 22:01
  • @Sheldeeb `ln` would be used on Unix context, not in the Dockerfile, to create the hard link (see https://en.wikipedia.org/wiki/Ln_(Unix)). Then treat the link as a regular file. It is not capital "LN". – fde-capu Apr 22 '21 at 12:39
    this may affect the code base, i.e. override an existing file, or even modify a clean git repo. Also you may not be able to rename the file; for example you cannot modify package.json if you want to run `npm install` after creating the hard link – Sh eldeeb Apr 22 '21 at 20:46
2

An easy workaround might be to simply mount the volume (using the -v or --mount flag) to the container when you run it and access the files that way.

example:

docker run -v /path/to/file/on/host:/desired/path/to/file/in/container/ image_name

for more see: https://docs.docker.com/storage/volumes/

Ben Wex
  • 5
    Note that this only works if the volume is a runtime dependency. For build time dependencies, `docker run` is too late. – user3735633 Feb 17 '20 at 23:39
  • Also mounting volumes copies files to the location you are mounting to in context. This means it doubles up my files. What If I need to mount a volume folder with 100's of scripts? Now my host has double the amount of scripts. – Dave Apr 10 '23 at 21:16
1

I had this same issue with a project and some data files that I wasn't able to move inside the repo context for HIPAA reasons. I ended up using two Dockerfiles. One builds the main application without the stuff I needed outside the container and publishes that to an internal repo. Then a second Dockerfile pulls that image, adds the data, and creates a new image which is then deployed and never stored anywhere. Not ideal, but it worked for my purposes of keeping sensitive information out of the repo.

ErikE
1

In my case, my Dockerfile is written like a template containing placeholders which I replace with real values from my configuration file.

So I couldn't specify this file directly but pipe it into the docker build like this:

sed "s/%email_address%/$EMAIL_ADDRESS/;" ./Dockerfile | docker build -t katzda/bookings:latest . -f -;

Piping the Dockerfile alone would break the COPY command, but the invocation above solves that with -f - (explicitly reading the Dockerfile from stdin while still passing . as the context). With only - and no -f flag, stdin has to supply both the Dockerfile and the context, which is the caveat.
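A self-contained sketch of the templating step (the placeholder file and email value are hypothetical); the rendered output is what would be piped to `docker build -f - .`:

```shell
# Render a templated Dockerfile with sed; the result would be piped to:
#   sed ... Dockerfile.tpl | docker build -t myimage -f - .
EMAIL_ADDRESS="user@example.com"
printf 'FROM alpine:3.19\nLABEL maintainer="%%email_address%%"\n' > Dockerfile.tpl
sed "s/%email_address%/$EMAIL_ADDRESS/" Dockerfile.tpl
```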

Pang
Daniel Katz
  • 2
    just an FYI, you could use build-args for that – nadavvadan Jan 04 '21 at 09:00
  • This solution, which proposes using "docker build -t . -f -" also solved the problem that I had where I wanted to generate a dockerfile by bash script and input it via STDIN, but I also wanted to COPY files from the local context "." – Christopher Thomas Dec 08 '21 at 09:12
1

How to share typescript code between two Dockerfiles

I had this same problem, but for sharing files between two typescript projects. Some of the other answers didn't work for me because I needed to preserve the relative import paths between the shared code. I solved it by organizing my code like this:

api/
  Dockerfile
  src/
    models/
      index.ts

frontend/
  Dockerfile
  src/
    models/
      index.ts

shared/
  model1.ts
  model2.ts
  index.ts

.dockerignore

Note: After extracting the shared code into that top folder, I avoided needing to update the import paths because I updated api/src/models/index.ts and frontend/src/models/index.ts to re-export from shared (e.g. `export * from '../../../shared'`)

Since the build context is now one directory higher, I had to make a few additional changes:

  1. Update the build command to use the new context:

    docker build -f Dockerfile .. (two dots instead of one)

  2. Use a single .dockerignore at the top level to exclude all node_modules. (eg **/node_modules/**)

  3. Prefix the Dockerfile COPY commands with api/ or frontend/

  4. Copy shared (in addition to api/src or frontend/src)

    WORKDIR /usr/src/app
    
    COPY api/package*.json ./     <---- Prefix with api/
    RUN npm ci
    
    COPY api/src api/ts*.json ./  <---- Prefix with api/
    COPY shared usr/src/shared    <---- ADDED
    RUN npm run build
    

This was the easiest way I could send everything to docker, while preserving the relative import paths in both projects. The tricky (annoying) part was all the changes/consequences caused by the build context being up one directory.

jrasm91
1

Changing the build context is the way to go.

If you have a .NET Core project and you still want to use the Visual Studio UI to debug/publish the project with Docker, then you can change the context by adding "DockerfileContext" to your project's .csproj:

<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>net6.0</TargetFramework>
    <DockerDefaultTargetOS>Linux</DockerDefaultTargetOS>
    <DockerfileContext>..\..\.</DockerfileContext>
  </PropertyGroup>

  ...

</Project>

Do not forget to change the paths in the Dockerfile accordingly.

123
0

One quick and dirty way is to set the build context up as many levels as you need - but this can have consequences. If you're working in a microservices architecture that looks like this:

./Code/Repo1
./Code/Repo2
...

You can set the build context to the parent Code directory and then access everything, but it turns out that with a large number of repositories, this can result in the build taking a long time.

An example situation could be that another team maintains a database schema in Repo1 and your team's code in Repo2 depends on it. You want to dockerise this dependency with some of your own seed data without worrying about schema changes or polluting the other team's repository (depending on what the changes are, you may still have to change your seed data scripts, of course). The second approach is hacky but gets around the issue of long builds:

Create a sh (or ps1) script in ./Code/Repo2 to copy the files you need and invoke the docker commands you want, for example:

#!/bin/bash
rm -r ./db/schema
mkdir ./db/schema

cp  -r ../Repo1/db/schema ./db/schema

docker-compose -f docker-compose.yml down
docker container prune -f
docker-compose -f docker-compose.yml up --build

In the docker-compose file, simply set the context as the Repo2 root and use the content of the ./db/schema directory in your Dockerfile without worrying about the path. Bear in mind that you will run the risk of accidentally committing this directory to source control, but scripting cleanup actions should be easy enough.

Pang
user1007074