
After reading about the performance improvements of running Docker on WSL 2, I had been waiting for the official Windows 10 release that supports WSL 2. I updated Windows and Docker and switched on the Docker flag to use WSL 2, hoping for a performance boost for my Oracle Database running in a Docker container. Unfortunately, the change slowed down the container and my laptop dramatically: the container is about 10x slower, and my laptop is pretty much stuck when starting it. It seems as if the memory consumption completely uses up my 8 GB and heavy swapping starts to take place. Is there anything I can do to improve the performance of Docker on WSL 2, or at least to better understand what's wrong with my setup?

My environment:

  • Processor Intel(R) Core(TM) i7-2620M CPU @ 2.70GHz, 2 Core(s)
  • Installed Physical Memory (RAM) 8.00 GB
  • Microsoft Windows 10 Pro Version 10.0.19041 Build 19041
  • Docker version 19.03.8, build afacb8b
doberkofler
  • 4
    I have this issue as well; I have 16 GB of memory and the vmmem process is consuming the majority of it. – Glen Jun 02 '20 at 21:54
  • BTW there is a GitHub issue on this: https://github.com/microsoft/WSL/issues/4166 since all of us wsl2 users end up searching for what to do with a stuck Windows... – Pavel Biryukov Nov 24 '20 at 18:57
  • Similar problem, but with a lot of memory, and during build https://stackoverflow.com/questions/65231110/docker-takes-ages-before-starting-build – XedinUnknown Dec 10 '20 at 08:41
  • I recently switched back from Linux to Windows so I could use proprietary software a bit more easily. This was my biggest issue at first, so I ended up dual-booting into Ubuntu. Then yesterday I ran across this article: https://www.createit.com/blog/slow-docker-on-windows-wsl2-fast-and-easy-fix-to-improve-performance/ and overall what you need to do is store your Docker-accessible code in WSL2 and then `docker-compose up`. From there you can access the running container on localhost as normally experienced on a Linux OS. – ViaTech Sep 16 '22 at 22:55
  • Oracle DB under WSL2's Docker with 8 GB of RAM? You are a madman! Too many answers to this question and not one of them correct. The issue is **Windows**. After removing it, the problem should be gone. – rzlvmp Feb 14 '23 at 09:18

9 Answers

86

This comes from the "vmmem" process, which consumes as many resources as it can. To solve the problem, just go to your user folder, which for me is

C:\Users\userName

In this directory, create a file named ".wslconfig" in which you configure how many resources WSL 2 can consume:

[wsl2]
memory=900MB    # Limits VM memory in WSL 2 to 900MB
processors=1    # Makes the WSL 2 VM use one virtual processor

Now close Docker and wait for "vmmem" to disappear from the Task Manager.

Then you can restart Docker, and "vmmem" should no longer exceed the limit you have set (here 900MB). If it doesn't, restart your computer.
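
For reference, a minimal sketch of applying the new limits, assuming Docker Desktop with the WSL 2 backend (commands run from PowerShell or a command prompt):

wsl --shutdown    # stops the WSL 2 VM so the new .wslconfig is read on the next start
wsl -l -v         # optional: check that all distros report the state "Stopped"

Then start Docker Desktop again and watch "vmmem" in Task Manager.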

I hope it helped you.

Ecora
  • 4
    Thank you so much for your answer. Docker should offer to set this or something. [Here](https://learn.microsoft.com/en-us/windows/wsl/wsl-config#configure-global-options-with-wslconfig) is Microsoft's documentation on `.wslconfig` for reference. – Glen Jun 17 '20 at 14:59
  • 6
    Using a `.wslconfig` file mitigates the resource consumption but actually makes the container itself even slower than before. I guess I just don't have enough memory and will have to continue using Docker without taking advantage of wsl2. – doberkofler Jul 05 '20 at 13:17
  • 6
    I found that limiting memory via `.wslconfig` like this made a huge difference to Docker's performance. Without it Docker seems to use as many resources as it can. Limiting Docker to `memory=6GB` on my 16GB machine has made working with Docker much better. The optimum setting will probably depend on available RAM and what you're doing with Docker, but I'd guess that setting it to (quite a bit!) more than 500MB would probably be sensible in many cases! – Nick F Sep 15 '20 at 18:01
  • 1
    This adjustment also made a huge difference for me. `Vmmem` was consuming all available CPU on my machine while compiling a program during `docker build`. My machine was literally unresponsive for hours while the compilation ground to a standstill, had to `kill -9` the docker processes. After setting `memory` and `processors` each to half of what is available on my system, the same build was completed in ~10 mins. Thanks! – Mike Jarema Dec 28 '20 at 22:07
  • 1
    In my case vmmem process was not stopping by itself. It finally stopped when I executed `wsl.exe --shutdown` – Juan Calero May 13 '21 at 08:24
  • 1
    @Thezozozolino thanks for the info, however when the system starts I see System.InvalidOperationException: Failed to deploy distro docker-desktop to C:\Users\usen_name\AppData\Local\Docker\wsl\distro: exit code: -1 stdout: The system cannot find the path specified. How can I fix this? – Zaker Sep 05 '21 at 07:17
  • 1
    `swap=0` might [help too](https://stackoverflow.com/questions/63725469/how-to-increase-wsl-docker-container-performance-on-quite-common-laptop). – T.Todua Sep 29 '21 at 15:38
  • 1
    After stopping Docker, I had to run `wsl --shutdown` for Vmmem to close. It does not terminate automatically. See https://superuser.com/questions/1559170/how-can-i-reduce-the-consumption-of-the-vmmem-process – GOTO 0 Jan 05 '22 at 10:19
46

You probably have your code stored on the Windows machine in a folder similar to this...

C:\Users\YourName\projects\blahfu

But you are using Docker on WSL 2 which is a different (Linux) filesystem. So, when you do a Docker build all of the code/context gets copied from the Windows filesystem to Linux filesystem and then from there to the Docker container. This is what takes the most time and is incredibly slow.

Try to put your project into a folder like this...

/home/YourName/projects/blahfu

You should get quite a performance boost.
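
As a sketch (the paths are the illustrative ones from above, and "Ubuntu" stands in for whatever distro you use), moving the project over from inside the WSL 2 distro looks like this, since the Windows drive is mounted at /mnt/c:

# inside the WSL 2 distro (e.g. Ubuntu)
mkdir -p ~/projects
cp -r /mnt/c/Users/YourName/projects/blahfu ~/projects/blahfu
cd ~/projects/blahfu
docker build .    # the build context now lives on the native ext4 filesystem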

user883992158
Andy
  • I'm not sure I understand but in my case I have a Linux container with an Oracle Database and all of the data is within the container itself. – doberkofler Aug 14 '20 at 10:31
  • @doberkofler During development you should mount your source code in the docker container (if you don't want to destroy/create your container on every code change)...but you're talking about the db, right!? if the data directory of your database isn't stored locally (within a local mountpoint) it's slow anyway ;) – Andy Sep 11 '20 at 08:55
  • All files are stored in the Linux container and I only access the container via sql*net. – doberkofler Sep 12 '20 at 09:57
  • 7
    @Andy: Technically, your approach makes sense, but the idea of WSL is to have the flexibility to work on Windows while leveraging the power of Linux, and it should take care of performance, right? If we are moving all our stuff into the WSL folder structure, then I have my doubts. – srk Oct 06 '20 at 06:12
  • @srk with their recent updates, Docker said it gives better performance to use WSL2 instead of Hyper-V. They did not say anything about the use of it, so I understood that it was just more efficient for Docker itself, not especially to work with WSL files or to run Docker inside WSL. If I'm wrong, the communication was unclear, and I'd be curious to know if it's worth switching Docker to WSL2 when you don't use Docker in WSL anyway. – ymoreau Nov 12 '20 at 09:34
  • 1
    The official guidance is here: https://docs.docker.com/docker-for-windows/wsl/#best-practices – akauppi Jun 14 '21 at 08:33
  • I tried copying the files and updating my docker-compose file with the new local /home path, but then the Apache container stopped working. It's empty. It seems the source files don't get copied over to the Apache container. What could that be? – Claudio Ferraro Dec 27 '21 at 15:39
  • I'm not sure I'm following. I'm using docker to scrape terabytes of data off various internet services onto my external drive. How would I store all that in the Linux filesystem? Wouldn't that just download to some virtual location on my C drive (which is less than 1 terabyte)? – gargoylebident Apr 29 '22 at 07:52
  • @gargoylebident I absolutely don't know what that has to do with this topic at all!? Are you doing actual development on the scraper code while scraping these tons of data!? I doubt that. It's a development case here. Hopefully nobody ever is running that for production use! – Andy May 02 '22 at 14:02
  • @Andy The topic is docker being slow on Windows. Your "solution" is to move everything.. to Linux. So I think my confusion is very much warranted. So what do you do in production? Go back to mounting to the host and everything being slow again? – gargoylebident May 02 '22 at 18:52
  • @gargoylebident In production I don't use Windows... simple as that. No one wants to use a Windows server. And yes: in production you're mounting a storage. Let it be a Gluster or S3 or whatever else. It's a Windows problem. It has nothing to do with the mount in the docker container. P.S. I don't have that problem anymore as I can now use a desktop Linux for work, too. – Andy May 04 '22 at 11:41
  • 1
    @Andy in other words, your workaround (wouldn't call it a solution) is specific to Linux users (i.e. a minority). Those who use Windows are SOL. – gargoylebident May 05 '22 at 23:36
  • The facepalm is strong with this one. If you use a desktop Linux you don't need WSL. Please have a look at the answer of @akauppi. That's all you need to know. Over and out ;) – Andy May 09 '22 at 06:11
  • 3
    Moved the code base from Windows Drive to WSL directory, ran docker build, 10x faster! – Kaymaz May 16 '22 at 12:52
  • 1
    Yes @Andy it was the case! Thank you a lot for this solution! I moved my project into /home/ directory on ubuntu wsl and got more than 10x speed increase! – Bullwinkle Mar 02 '23 at 08:44
19

WSL distributions have their own filesystem, isolated from the Windows filesystem. The basic idea is to copy your source code from the Windows filesystem to the WSL filesystem.

From Windows you can access the WSL filesystem and copy your project into it:

navigate with Explorer to \\wsl$

Rebuild the container from this location; this will do the trick!
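
For example (the distro name "Ubuntu" and the user/path names are only placeholders), the copy can be done from PowerShell through that share:

# copy the project from the Windows drive into the WSL filesystem via \\wsl$
# (create the target directory inside the distro first if it does not exist)
Copy-Item -Recurse C:\Users\YourName\projects\blahfu \\wsl$\Ubuntu\home\yourname\projects\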

tooy
  • 3
    I do not understand. Could you elaborate on your comment? – doberkofler Nov 14 '20 at 06:42
  • As I already mentioned in here, the complete database including all the data files is stored in the container itself and there is no mounted windows file system at all. – doberkofler Nov 14 '20 at 11:58
  • OK, so your computer runs Windows 10, since you ask about wsl2, right? So when you say "the data files are stored in the container itself and there is no mounted windows file system at all", the question is: where are your docker-compose.yml file and all the other files that define your Docker container stored? These files can't be in the Docker container, because there is no container without them. So I guess the files from which you did your docker compose up are on your Windows filesystem, right? Yes? -> migrate them to WSL. No? -> I suppose they are already in WSL, so something else is making Docker slow. – tooy Nov 14 '20 at 22:39
  • Yes, I'm running on Windows 10 and asking about wsl2, but I'm not sure I understand the other comments. In my specific case the container itself is `docker run` from an image that is built on a different computer and does contain all files needed to be executed. – doberkofler Nov 14 '20 at 23:45
  • OK, so here we are. The docs say: "docker run image-name : image-name could be a docker image on your local machine[...]". When you run your image built on another computer, you still run an image located on your machine. So the question is: where is this image stored on your computer? – tooy Nov 15 '20 at 00:14
  • In my local Windows filesystem. So you are referring to the location of the image itself and would suggest starting it from the wsl2 subsystem? Correct? – doberkofler Nov 15 '20 at 00:17
  • Yep! Exactly! Copy it to WSL and run it from there! :) – tooy Nov 15 '20 at 00:23
  • 1
    I could not copy anything there - seems to be a special folder, read only. – ESP32 Mar 01 '22 at 15:54
11

If the data for the Docker container is stored on a Windows filesystem (i.e. NTFS) instead of on a native Linux filesystem (regardless of what the container's contents are, which are likely already Linux-based), then you are going to see slow performance, because you're running WSL and using the Docker container from a mounted Windows filesystem (i.e. /mnt/c/...).

If you copy your docker container to something like /usr/local, or /home/<username>/docker on WSL then you may see a 10x performance INCREASE.

Try that and see if it works?
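
A minimal illustration of the difference (the image name and paths are hypothetical): a bind mount from the Windows drive goes through WSL's Windows-to-Linux file translation layer, while a native WSL 2 path does not.

# slow: the data lives on the mounted Windows drive (NTFS, accessed via /mnt/c)
docker run -v /mnt/c/Users/YourName/oracle-data:/opt/oracle/oradata some-oracle-image

# faster: the data lives on the native WSL 2 (ext4) filesystem
docker run -v /home/yourname/oracle-data:/opt/oracle/oradata some-oracle-image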

J. Scott Elblein
atom88
  • The complete database including all the datafiles is stored in the container itself and there is no mounted windows file system. – doberkofler Sep 23 '20 at 04:14
  • Those WSL files are stored in C:\, right? If that's so, it will quickly fill up my C drive, since program files etc. are also stored there :'( – Fahmi Dec 23 '21 at 17:00
  • Apart from copying, must I change docker-compose as well? Somehow, even after changing the docker-compose path, my files are not getting copied to the containers. – Claudio Ferraro Dec 29 '21 at 14:07
  • Yes, this is documented here https://docs.docker.com/desktop/windows/wsl/ – TrojanName Apr 12 '23 at 12:25
5

You need to limit the "vmmem" resources; just add a .wslconfig file at this path:

C:\Users\<yourUserName>\.wslconfig

Configure global options with .wslconfig

Available in Windows Build 19041 and later

You can configure global WSL options by placing a .wslconfig file into the root directory of your users folder: C:\Users\<yourUserName>\.wslconfig. Many of these settings are related to WSL 2, please keep in mind you may need to run

wsl --shutdown

to shut down the WSL 2 VM and then restart your WSL instance for these changes to take effect.

Here is a sample .wslconfig file:

[wsl2]
kernel=C:\\temp\\myCustomKernel
memory=4GB # Limits VM memory in WSL 2 to 4 GB
processors=2 # Makes the WSL 2 VM use two virtual processors

See https://learn.microsoft.com/en-us/windows/wsl/wsl-config for more details.
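
After the shutdown and restart you can verify from inside the WSL 2 distro that the limits were applied, for example with standard Linux tools:

free -h    # total memory should roughly match the memory= setting
nproc      # should match the processors= setting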

xpredo
2

Open your WSL 2 distribution (Ubuntu for example) and edit the ~/.docker/config.json file.
You only need to change:

{
  "credsStore": "docker.exe"
}

"credsStore": "desktop.exe" : ultra-slow (over 2 minutes)
"credsStore": "wincred.exe" : fast
"credsStore": "" : fast

It works very well.
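
For reference, a possible one-liner to make that change from inside the WSL distro (assuming jq is installed; otherwise just edit the file by hand):

# set the credential helper to the fast wincred.exe variant
jq '.credsStore = "wincred.exe"' ~/.docker/config.json > /tmp/config.json && mv /tmp/config.json ~/.docker/config.json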

Claudio
1

If you are using VS Code, there is a command named "Remote-Containers: Clone Repository in Container Volume..." which ensures you have full-speed file access.

From the documentation:

Repository Containers use isolated, local Docker volumes instead of binding to the local filesystem. In addition to not polluting your file tree, local volumes have the added benefit of improved performance on Windows and macOS.
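
For context, the command expects the repository to carry a dev container configuration; a minimal devcontainer.json might look like this (the image name is only an example):

{
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu"
}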

Adrian Dymorz
1

As mentioned by Claudio above, setting the lines below in ~/.docker/config.json of the WSL Ubuntu server solved the problem for me.

{ 
   "credsStore": "wincred.exe"
} 

Earlier it was taking 5-10 min to build any simple image, now it is done in 1-2 seconds.

Downside: You have to make this change every time you open the server. I have tried every solution mentioned in https://github.com/docker/for-win/issues/9843 to solve this but nothing works for me.
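
One possible workaround (an untested sketch, assuming a bash shell in the WSL distro) is to re-apply the setting on every shell start, for example from ~/.bashrc:

# force credsStore back to wincred.exe in case it was rewritten
sed -i 's/"credsStore": *"[^"]*"/"credsStore": "wincred.exe"/' ~/.docker/config.json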

0

I experienced this issue with Docker on a Windows 2019 container host. It was taking over 10 minutes to do a restore that would take about 5 seconds on my own machine. I found out that the MsMpEng.exe (Defender) process was scanning the dockerd.exe (Docker daemon) process. CPU usage was 98%.

To isolate the issue, run task manager while you're doing a very slow docker build. If it's Defender Real-Time scanning you will see the CPU usage through the roof on the dockerd.exe process. Defender is basically choking the Docker build!

I'm pretty sure it was just the Docker daemon, but I also added docker.exe and gitlab-runner.exe to the process exclusion list. The magic of process exclusions is that you don't have to exclude any folders; Defender will automatically refrain from scanning any folders that the excluded process is dealing with.

And 13-minute restores are a thing of the past! That fixed it. You don't need any special params, conditions or flags on your dotnet restore.
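
For reference, the process exclusions can also be added from an elevated PowerShell prompt (gitlab-runner.exe only applies if you use GitLab runners, as in this setup):

# exclude the Docker-related processes from Defender real-time scanning
Add-MpPreference -ExclusionProcess "dockerd.exe"
Add-MpPreference -ExclusionProcess "docker.exe"
Add-MpPreference -ExclusionProcess "gitlab-runner.exe"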

Charles Owen