
I set the Docker memory limit correctly to 50GB, but the container uses only 12.64GB with isolation set to process.

Where did I make a mistake?

daemon.json

{
  "registry-mirrors": [],
  "insecure-registries": [],
  "debug": true,
  "experimental": false,
  "storage-opt": [ "dm.basesize=40G" ],
  "hosts": ["tcp://10.0.0.32:2376", "npipe://"]
}

[Screenshot]

The kill moment:

[Screenshot]

Using directives:

using Docker.DotNet;
using Docker.DotNet.Models;

Setting the memory:

return await client.Containers.CreateContainerAsync(
    new CreateContainerParameters
    {
        Env = environmentVariables,
        Name = containerName,
        Image = imageName,
        ExposedPorts = new Dictionary<string, EmptyStruct>
        {
            { "80", default(EmptyStruct) }
        },
        HostConfig = new HostConfig
        {
            Memory = containerMemory,
            Isolation = "process",
            CPUCount = numberOfCores,
            PortBindings = new Dictionary<string, IList<PortBinding>>
            {
                {
                    "80",
                    new List<PortBinding>
                    {
                        new PortBinding { HostPort = port.ToString(CultureInfo.InvariantCulture) }
                    }
                }
            },
            PublishAllPorts = true
        }
    }).ConfigureAwait(false);
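To rule out the limit simply not being applied, the values Docker actually stored for the container can be checked after creation with the standard docker CLI (a sketch; `mycontainer` is a placeholder for the name passed to CreateContainerAsync):

```shell
# Print the memory limit (in bytes) and the isolation mode
# that the daemon recorded for the container.
docker inspect --format "{{.HostConfig.Memory}} {{.HostConfig.Isolation}}" mycontainer
```

If this prints the expected byte count (50GB = 53687091200), the limit reached the daemon and the cap is being imposed elsewhere.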

In the new Docker I cannot set a RAM limit for the machine in the settings UI. I think the resources linked in the comments are much older than the current Docker version.


This did not work either. I added this at the beginning:

[Screenshot]

Rafał Developer
  • How are you using docker? I mean, are you using Docker for Windows, for instance? How did you set the amount of memory for your containers? – jccampanero Oct 17 '21 at 18:28
  • Docker does not reserve memory ... is the process inside the docker container actually using that much memory? – akop Oct 18 '21 at 06:26
  • @jccampanero I edited my post and added how I set the memory. Yes, I use Docker for Windows instances. I assign the value and then try to spawn the container. – Rafał Developer Oct 18 '21 at 08:02
  • @akop yes, the Docker for Windows instance uses only 12GB, not the 50GB I assigned. I don't understand the limit, since my VM is super powerful :) – Rafał Developer Oct 18 '21 at 08:04
  • Thank you very much for the feedback @RafałDeveloper. If you are using Docker for Windows, the product itself can be limiting the resources available to your docker container. Please, see [this related SO question](https://stackoverflow.com/questions/44533319/how-to-assign-more-memory-to-docker-container), and the [docker for windows documentation](https://docs.docker.com/desktop/windows/#resources). Perhaps the problem can be solved by just increasing the amount of memory available for the product. Sorry if I missed something, I am not used to using the docker rest API. – jccampanero Oct 18 '21 at 08:46
  • @jccampanero I think the related question is old (4 years) and this does not fix my problem – Rafał Developer Oct 18 '21 at 09:28
  • Thank you for the feedback @RafałDeveloper. Yes, but I think it is still valid in some way. Please, could you see [this other related question](https://stackoverflow.com/questions/67425618/incrementing-gb-of-ram-for-docker-container-in-windows)? I am not sure, but it may be of help. Certainly, some configuration is limiting your resources. – jccampanero Oct 18 '21 at 09:48
  • Please, see this other [related SO question](https://stackoverflow.com/questions/66172375/docker-desktop-is-using-12-gb-ram-to-run-one-container-with-24-mb-ram) as well, I think that it is related with the link provided in the previous comment related to the fact of using and configuring wls. I hope it helps. – jccampanero Oct 18 '21 at 09:53
  • @jccampanero to be honest :) I tested this before I added the question :) – Rafał Developer Oct 18 '21 at 10:25
  • @jccampanero, to be honest, I think maybe the problem is related to process isolation and there is some limit, but I didn't find the answer. – Rafał Developer Oct 18 '21 at 10:29
  • Sorry for the late reply @RafałDeveloper. Well, maybe, indeed. Could the OOM be caused by your application itself? Perhaps you have enough resources, but the application itself or the .NET Core runtime is causing the OOM error. Please, try doing a postmortem inspection of your docker container logs; it may be valuable. As the container is killed, you can still access its logs with `docker logs <container id>`. You can find the container id by running `docker ps --all` on your machine. Please, could you try? – jccampanero Oct 18 '21 at 15:46
  • @jccampanero I will try tomorrow. I didn't know the logs were available after the kill :) – Rafał Developer Oct 18 '21 at 16:12
  • @jccampanero can you post your last comment as an answer instead of a comment? – Rafał Developer Oct 19 '21 at 11:30
  • Of course @RafałDeveloper, thank you. I hope it means that the problem has been fixed. – jccampanero Oct 19 '21 at 12:33
  • @jccampanero thanks for your time. Yes, sometimes one small puzzle piece can fix a huge problem. – Rafał Developer Oct 19 '21 at 15:30
  • You are very welcome @RafałDeveloper. I am very happy to hear that the answer was helpful. – jccampanero Oct 19 '21 at 17:28

2 Answers


The storage-opt:

"storage-opt": [ "dm.basesize=40G" ],

will have no effect. It is used for device mapper which isn't used by default in any current version of docker and previously only applied to RedHat based systems that didn't have aufs/overlay support. With overlay2, docker will allow the container to use all the storage available in /var/lib/docker unless you set the container filesystem to read-only.
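If in doubt, the active storage driver can be confirmed directly from the daemon (a quick check with the docker CLI; on Windows with native containers this typically reports windowsfilter rather than overlay2):

```shell
# Show the storage driver the daemon is using;
# the dm.basesize storage-opt only applies to devicemapper.
docker info --format "{{.Driver}}"
```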

From the rest of the question, it's not clear whether you're trying to limit memory (RAM) or storage (disk). These are not the same thing. It's also not clear whether these are native Windows containers or Linux containers running in the embedded VM.

Assuming you want to limit the memory of a Linux container, simply start the container with the --memory or -m option set to your desired limit. E.g.:

docker run -m 30g some_image

This is a limit, it doesn't allocate the memory, but limits the container and will kill it with an OOM error if the container attempts to exceed it.

When docker is run from within Docker Desktop, you also need to set the memory and/or disk allocated to the embedded VM. Any container or process within that VM is then limited based on the capacity of the VM itself. Setting these varies by how you have installed Docker and details for this are found in Docker's documentation.
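For example, with Docker Desktop on the WSL 2 backend the VM size is not set in daemon.json at all, but in %UserProfile%\.wslconfig on the Windows host (a sketch; the 50GB figure mirrors the limit attempted in the question, and this only affects Linux containers running in the WSL 2 VM, not Windows process-isolated containers):

```
[wsl2]
# RAM available to the WSL 2 VM that hosts Linux containers
memory=50GB
# Example value; adjust to the host's core count
processors=8
```

After editing this file, the VM must be restarted (e.g. `wsl --shutdown`) for the new limits to take effect.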

BMitch
  • I work with Windows containers spawned in an Azure Virtual Machine. I have a problem with RAM; I checked and it is not related to disk space. Docker.DotNet sets the memory flag (the same as --memory or -m) correctly to 55GB, but the container uses only 12.5GB and cannot use more. Images: mcr.microsoft.com/dotnet/core/sdk:3.1, mcr.microsoft.com/windows/servercore:1809 – Rafał Developer Oct 18 '21 at 12:39
  • "This is a limit, it doesn't allocate the memory, but limits the container and will kill it with an OOM error if the container attempts to exceed it." Yes, and the container is killed after 13GB. – Rafał Developer Oct 18 '21 at 12:45
  • I forgot to mention that I use Windows Server, because on Windows 10 isolation=process did not work. – Rafał Developer Oct 18 '21 at 12:46
  • I understand you are a Docker master and -m should always work correctly? Hmm, maybe the problem is with Azure Virtual Machine limits? Or a Windows docker container image limit? – Rafał Developer Oct 18 '21 at 13:36

As stated in the comments, you may have enough resources, but the application itself, or the .NET Core runtime, may be causing the out-of-memory error.

Please, try doing a postmortem inspection of your docker container logs to find related problems; I think it can be valuable.

As the container is killed and not removed, you can still access its logs with:

docker logs <container id>

You can find the container id by running:

docker ps --all
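Putting the commands together, a postmortem check might look like this (a sketch; `mycontainer` is a placeholder for the killed container's name or id, and `.State.OOMKilled` may not be populated for all container types):

```shell
# List all containers, including exited ones, to find the killed container.
docker ps --all --filter "status=exited"

# Check whether the container was OOM-killed and what its exit code was.
docker inspect --format "OOMKilled={{.State.OOMKilled}} ExitCode={{.State.ExitCode}}" mycontainer

# The logs remain readable even after the container has been killed.
docker logs mycontainer
```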
jccampanero