
I am using a complex multi-container setup via docker compose. I want to limit the maximum amount of memory each container can use. I set the limit via the deploy.resources.limits.memory section in my docker compose file:

    deploy:
      resources:
        limits:
          memory: 3gb

After deployment I check the memory visible inside the container and get the following:

    $ grep MemTotal /proc/meminfo
    MemTotal:       65669412 kB

So, my application can see all 64 GB of RAM.

Is there a way to set the amount of memory my application can see from inside the Docker container? (I'm using Python apps, so this is essential for memory management.)

Nick Zorander
  • What does `docker stats` say for the container? `/proc/meminfo` will always be the host's memory, as it isn't a namespaced/limited resource. See https://stackoverflow.com/questions/72185669/what-is-the-real-memory-available-in-docker-container for more information. What is your goal for getting the available memory inside the container, compared to getting the actual limit enforced from outside the container? – MatsLindh Feb 27 '23 at 09:44
  • @MatsLindh I'm trying to solve a memory leak problem that appears when I add Elastic APM (https://stackoverflow.com/questions/75057759/fastapi-with-gunicorn-uvicorn-stops-responding). I think the problem might be related to Python's memory allocation strategy – Nick Zorander Feb 27 '23 at 16:59
  • I'm not familiar with Elastic APM, but you might want to look at something like https://github.com/bloomberg/memray to see how memory usage is handled. With `gunicorn` you can also use `--max-requests` to make gunicorn restart the workers after N requests to avoid any lingering memory allocation issues over time. – MatsLindh Feb 27 '23 at 19:53
  • Thanks for memray, gotta try it out. `--max-requests` doesn't help with gunicorn. – Nick Zorander Feb 28 '23 at 08:11
  • Any idea why `--max-requests` doesn't work? Is the master process the one leaking memory? I assume you're using the UvicornWorker implementation? Does scaling the number of workers change anything? – MatsLindh Feb 28 '23 at 09:27
  • I have 10 workers. Tried using 1 and 4 workers - it only causes performance degradation. I think the problem is how Python uses memory/caches inside the container. Python does not return memory to the operating system (the strategy is to keep memory reserved for the interpreter so it can be reused immediately, without resorting to a system call). Because of this, APM allocations go through Python's cache, the container's memory fills up (but not the host's), and Python does not release the resources. – Nick Zorander Feb 28 '23 at 09:58
  • https://github.com/elastic/apm-agent-python/issues/234 - outside the container, reloading worker processes works – Nick Zorander Feb 28 '23 at 09:59
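For reference, a worker-recycling setup like the one suggested in the comments would look roughly like this (the app module path, worker count, and request thresholds below are placeholder values, not taken from the question):

    gunicorn main:app \
        --workers 4 \
        --worker-class uvicorn.workers.UvicornWorker \
        --max-requests 1000 \
        --max-requests-jitter 50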

1 Answer


Can you try the mem_limit option in your docker compose file? Note that mem_limit is a service-level option, a sibling of deploy, not nested under it:

    deploy:
      resources:
        limits:
          memory: 3gb
    mem_limit: 1g
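If the limit is applied, `docker stats` should report it in the MEM USAGE / LIMIT column for that container (the container name below is a placeholder):

    docker stats my-app-container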

Another way is to set the container's memory limit with the --memory option when running docker run:

    docker run --memory=1g your-image-name
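Note that even with a limit set, /proc/meminfo will still show the host's memory, as mentioned in the comments. If your Python code needs to know the enforced limit from inside the container, a rough sketch is to read it from the cgroup filesystem (the exact path depends on whether the host uses cgroup v1 or v2; the helper name below is just illustrative):

    import os

    def container_memory_limit_bytes():
        """Return the cgroup memory limit in bytes, or None if no limit is found."""
        candidates = [
            "/sys/fs/cgroup/memory.max",                    # cgroup v2
            "/sys/fs/cgroup/memory/memory.limit_in_bytes",  # cgroup v1
        ]
        for path in candidates:
            if not os.path.exists(path):
                continue
            with open(path) as f:
                value = f.read().strip()
            if value == "max":  # cgroup v2 reports "max" when no limit is set
                return None
            # cgroup v1 reports a very large number when no limit is set
            return int(value)
        return None

    if __name__ == "__main__":
        limit = container_memory_limit_bytes()
        if limit is None:
            print("no memory limit found")
        else:
            print(f"memory limit: {limit} bytes ({limit / 2**30:.1f} GiB)")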