115

Why is Docker trying to create the folder that I'm mounting? If I cd to C:\Users\szx\Projects and run:

docker run --rm -it -v "${PWD}:/src" ubuntu /bin/bash

This command exits with the following error:

C:\Program Files\Docker Toolbox\docker.exe: Error response from daemon: error while creating mount source path '/c/Users/szx/Projects': mkdir /c/Users/szx/Projects: file exists.

I'm using Docker Toolbox on Windows 10 Home.

szx
  • Did you resolve this issue? I got the same – ferprez Nov 28 '18 at 16:30
  • In my case the error occurred because I was mounting a volume from a path/directory that contained a symlink, after I changed directory to the real path it worked. To my knowledge this bug still exists today. – szx Oct 03 '19 at 09:32
  • Yes, this bug is an existing bug. Here is the docker for windows ticket that reflects this issue: https://github.com/docker/for-win/issues/5778. You can see there when a newer version fixes it (hopefully). – Csongor Halmai Mar 05 '20 at 14:20

23 Answers

132

For anyone running macOS and encountering this, I restarted Docker Desktop in order to resolve this issue.

Edit: It would appear this also fixes the issue on Windows 10

lancepants
  • Still an issue in 2022, but this fixed it for me. FYI there's a difference between closing the Docker desktop app and actually restarting it. Restarting fixed it, but when I initially just quit out and reopened that didn't help. – MT3 Feb 25 '22 at 20:40
  • Anyone know *why* this works? – Cody May 03 '22 at 17:56
  • @Cody - Because "turning it off and back on again" fixes lots of problems – Ste May 13 '22 at 11:34
  • But that doesn't "resolve" the issue. It just makes it go away right now. Which is a good step to try the 1st time it happens. But after it's happened multiple times there's a deeper problem. If you go to a generic tech support & tell them you have a problem where after your phone's been on for more than an hour the screen glitches, they tell you to turn it off & on, you do & it works now, sure... But it didn't solve the issue in any way – LostOnTheLine Jan 16 '23 at 16:10
  • @LostOnTheLine There are several issues opened against the docker github: https://www.google.com/search?q=error+while+creating+mount+source+path+file+exists+site:github.com The closest probably being https://github.com/docker/for-win/issues/5516 which has been auto closed by the bot for inactivity. An actual resolution will have to come from a docker bugfix. – lancepants Feb 02 '23 at 00:28
  • @lancepants That issue is completely unrelated, it's about the same `file exists` problem & just reading the first few is all about capitalization. So that's a solution, but that has nothing to do with "Just restart it & it works" which I assume if someone is looking online for answers they've already tried. IMO if you are looking & haven't tried restarting you're a lost cause – LostOnTheLine Feb 02 '23 at 01:32
  • @LostOnTheLine sure, I think the point is that if a real resolution is required a user will have to open an issue against the docker gh repo and hope it gains traction, or look to patch the issue themselves and PR it – lancepants Feb 03 '23 at 04:37
  • Still a valuable "fix" in 2023! This one drove me mad! Anyone knows what causes this? Sounds hard to reproduce btw... – Raphaël Roux May 30 '23 at 13:48
51

My trouble was a fuse-mounted volume (e.g. sshfs) that got mounted again into the container. It didn't help that the fuse-mount had the same ownership as the user inside the container.

I assume the underlying problem is that the docker/root supervising process needs to get a hold of the fuse-mount as well when setting up the container.

Eventually it helped to mount the fuse volume with the allow_other option. Be aware that this opens access to any user. Better might be allow_root – not tested, as blocked for other reasons.
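As a sketch of what that looks like for sshfs (the user, host, and paths here are hypothetical; `user_allow_other` must also be enabled in `/etc/fuse.conf` before a non-root user may pass `allow_other`):

```shell
# One-time, as root: allow non-root users to pass allow_other to FUSE
sudo sed -i 's/^#user_allow_other/user_allow_other/' /etc/fuse.conf

# Remount the sshfs volume so the root-owned docker daemon can traverse it
fusermount -u "$HOME/remote" 2>/dev/null || true
sshfs -o allow_other user@remote-host:/srv/project "$HOME/remote"
```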

pico_prob
  • Thanks. I was able to solve this by using `sshfs -o allow_other` and by editing `/etc/fuse.conf` – taari Mar 23 '21 at 10:55
  • `allow_root` option also seems to do the trick. – Rafał Krypa Apr 23 '21 at 14:32
  • In our case, we were using blobfuse to mount an Azure storage drive. This fix, i.e. editing the `/etc/fuse.conf` file and passing `-o allow_root` to the `blobfuse` command, worked for us. – Satrajit A Aug 09 '22 at 18:51
46

I got this error after changing my Windows password. I had to go into Docker settings and do "Reset credentials" under "Shared Drives", then restart Docker.

melicent
    Same for me with the password. For me, I unchecked the shared drive, applied, then checked the shared drive again and applied. – NicklasF Jul 29 '19 at 10:25
  • I have the exact same kind of error, and there is no "Reset credentials" under Docker Desktop settings with v3.5.1. Do you have a solution for this on the latest version? – Baodi Di Jul 09 '21 at 23:50
  • The issue was gone after restarting Docker Desktop. :) – Baodi Di Jul 09 '21 at 23:59
7

Make sure the folder is being shared with the Docker embedded VM. This differs across the various types of Docker for desktop installs. With Toolbox, I believe you can find the shared folders in the VirtualBox configuration. You should also note that these directories are case-sensitive. One way to debug is to try:

docker run --rm -it -v "/:/host" ubuntu /bin/bash

And see what the filesystem looks like under "/host".

BMitch
    I saw my C drive was being shared, thought it was a mistake and un-shared it via the properties window. Then I ran into this error. Just using the Docker Desktop gui for windows under the 'Shared Drives' settings page I was able to fix this error by re-sharing the drive again. – 2b77bee6-5445-4c77-b1eb-4df3e5 Mar 06 '19 at 13:50
  • This is a great advice, thank you for that. I found out that docker have troubles following junctions on the host windows, thanks to this advice. – Andrew Savinykh Jan 07 '20 at 22:55
  • Related - had this issue with a mounted directory in minikube. I had to run `minikube start --mount-string="..."` to re-mount the directory, then my deployment worked. – inostia Apr 16 '21 at 09:41
7

I encountered this problem on Docker (Windows) after upgrading to 2.2.0.0 (42247). The issue was with the casing of the folder name that I provided in my arguments to the docker command.

Victor F
6

I am working in Linux (WSL2 under Windows, to be more precise) and my problem was that there existed a symlink for that folder on my host:

# docker run --rm -it -v /etc/localtime:/etc/localtime ...
docker: Error response from daemon: mkdir /etc/localtime: file exists. 


# ls -al /etc/localtime
lrwxrwxrwx 1 root root 25 May 23  2019 /etc/localtime -> ../usr/share/zoneinfo/UTC 

It worked for me to bind mount the source /usr/share/zoneinfo/UTC instead.
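A general sketch for any symlinked mount source: resolve the link on the host first and pass the real path to `-v`. (The `docker run` line is left as a comment, since the rest of its arguments depend on your setup.)

```shell
# Resolve the symlink to its real target on the host
src=$(readlink -f /etc/localtime)
echo "resolved mount source: $src"

# ...then bind mount the resolved path instead of the symlink:
# docker run --rm -it -v "$src:/etc/localtime:ro" ubuntu /bin/bash
```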

TheCooocy
5

Did you use this container before? You could try to remove all the dangling Docker volumes before re-executing your command.

docker volume rm $(docker volume ls -qf dangling=true)

I tried your command locally (MacOS) without any error.

Mornor
  • Thanks, but unfortunately this didn't help, the output of `docker volume ls -qf dangling=true` was empty – szx Jun 15 '18 at 17:22
  • `docker volume rm $(docker volume ls -q dangling=true)` without f – Abdulkarim Kanaan Jun 10 '19 at 01:43
  • @Abdulkarim. Wrong. The f switch is correct, as noted by szx. It means *filter*, the filter is dangling=true. The q is also correct as it makes the result just print the volume name without the column names. – Jay M Feb 10 '20 at 11:12
  • Sometimes it is the solution, but most of the time I also have to restart the Docker Desktop service. Anyway, the command is good to know. Just note that if the sub-command `docker volume ls -qf dangling=true` returns nothing, the whole command fails with an error. – рüффп Jan 13 '22 at 10:05
  • On Windows (PowerShell) it would be `docker volume rm $(docker volume ls -qf dangling=true)` – pirateofebay Apr 15 '22 at 20:51
  • For anyone like me who needs the above answer to be conditional on whether dangling volumes exist, and always return 0: `DANGLING=$(docker volume ls -qf dangling=true); if [[ ! -z "$DANGLING" ]]; then docker volume rm $DANGLING; fi` – Brian Gradin May 05 '23 at 18:41
4

I met this problem too. I used to run the following command to share the folder with the container:

docker run ... -v c:/seleniumplus:/dev/seleniumplus ...

But it no longer works.

I am using the Windows 10 as host. My docker has recently been upgraded to "19.03.5 build 633a0e". I did change my windows password recently.

I followed the instructions to re-share the "C" drive, restarted Docker, and even restarted the computer, but it didn't work :-(. All of a sudden, I noticed that the folder is "C:\SeleniumPlus" in the file explorer, so I ran

docker run ... -v C:/SeleniumPlus:/dev/seleniumplus ...

And it did work. So the path is case-sensitive when we specify the Windows shared folder in the latest docker ("19.03.5 build 633a0e").

lei wang
4

I had this issue when I was working with Docker in a CryFS-encrypted directory on Ubuntu 20.04 LTS. The same probably happens in other UNIX-like OSes.

The problem was that by default the CryFS-mounted virtual directory is not accessible by root, but Docker runs as root. The solution is to enable root access for FUSE-mounted volumes by editing /etc/fuse.conf: just uncomment the user_allow_other setting in it. Then mount the encrypted directory with the command cryfs <secretdir> <opendir> -o allow_root (where <secretdir> and <opendir> are the encrypted directory and the FUSE mount point for the decrypted virtual directory, respectively).

Credits to the author of this comment on GitHub for calling my attention to the -o allow_root option.

András Aszódi
3

I had this issue in WSL, likely caused by leaving some containers alive too long. None of the advice here worked for me. Finally, based on this blog post, I managed to fix it with the following commands, which wipe all the volumes completely to start fresh.

docker-compose down
docker rm -f $(docker ps -a -q)
docker volume rm $(docker volume ls -q)
docker-compose up

Then, I restarted WSL (wsl --shutdown), restarted docker desktop, and tried my command again.

Omer Raviv
  • You saved me, ofc I needed to clear out old volumes, thank you. For any other lost souls: prune and rm everything, then compose up again. – i0x539 Feb 22 '23 at 18:12
2

I had the exact same error. In my case, I used c instead of C when changing into my directory.

Andrii
2

I put user_allow_other in /etc/fuse.conf. Then, mounting as in the example below solved the problem.

$ sshfs -o allow_other user@remote_server:/directory/ 
James
2

I solved this by restarting docker and rebuilding the images.

Kipkemoi Derek
1

In case you work with a separate Windows user, with which you share the volume (C: usually): you need to make sure it has access to the folders you are working with -- including their parents, up to your home directory.

Also make sure that EFS (Encrypting File System) is disabled for the shared folders.

See also my answer here.

TheOperator
1

I had the same issue when developing using Docker. After I moved the project folder locally, Docker could not mount files that were listed with relative paths, and tried to make directories instead.

Pruning docker volumes / images / containers did not solve the issue. A simple restart of docker-desktop did the job.

1

I had this problem when the directory on my host was inside a directory mounted with gocryptfs. By default, even root can't see the directory mounted by gocryptfs; only the user who executed the gocryptfs command can. To fix this, add user_allow_other to /etc/fuse.conf and use the -allow_other flag, e.g. gocryptfs -allow_other encrypted mnt

katsu
1

This error crept up for me because my docker-compose file was looking for the APPDATA path on macOS. macOS doesn't have an APPDATA environment variable, so I just created a .env file with the contents:

APPDATA=~/Library/

And my problem was solved.

cr1pto
0

I faced this error when another running container was already using the folder being mounted in the docker run command. Please check for this and, if it's not needed, stop that container. The best solution is to create a named volume with the following command:

docker volume create

then mount this created volume if it needs to be used by multiple containers.

Abhishek Jain
0

For anyone having this issue on a Linux-based OS, try to remount the remote folders used by the docker image. This helped me on Ubuntu:

sudo mount -a
Ahmet Cetin
0

I am running Docker Desktop (docker engine v20.10.5) on Windows 10 and faced a similar error. I went ahead and removed the existing image from the Docker Desktop UI, deleted the folder in question (for me deleting the folder was an option because I was just doing some local testing), removed the existing container, restarted Docker, and it worked.

JavaTec
0

In my case, my volume path (in a .env file for docker-compose) had a space in it:

/Volumes/some\ thing/folder

which did work on Docker 3 but didn't after updating to Docker 4. So I had to set my env variable to:

"/Volumes/some thing/folder"
Alucard
0

In my specific instance, Windows couldn't tell me who owned my SSL certs (probably docker). I took ownership of the SSL certs again under Properties, added read permission for docker-users and my user, and it seemed to have fixed the problem. After tearing my hair out for 3 days with just the Daemon: Access Denied error, I finally got a meaningful error regarding another answer above, "mkdir failed" or whatever, on a mounted file (the SSL cert).

0

I had a similar experience on Linux and none of the above solutions worked. In my case, the problem was that one directory in the path I was trying to mount had lost its o+rx permissions due to a maintenance activity.

Once the permissions were added, I could successfully mount the directory once again.
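The symptom is easy to reproduce with a throwaway directory (a sketch; the docker step is left as a comment since the image and flags depend on your setup):

```shell
# A parent directory without o+x blocks traversal for other users,
# which makes the daemon's check of the mount source fail
parent=$(mktemp -d)
mkdir "$parent/data"
chmod o-rx "$parent"
# docker run --rm -v "$parent/data:/data" ubuntu true   # would fail here
chmod o+rx "$parent"                                    # mount succeeds again
ls -ld "$parent"
```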

INElutTabile