46

Basically, the title says it all: Is there any limit on the number of containers running at the same time on a single Docker host?

Golo Roden
  • Vague, open-ended question with no details, unless you are expecting a number limit like "65 is the max limit of containers that can be run" (which it isn't). – BodgeIT Aug 24 '17 at 08:24
  • 14
    This is exactly what I was interested in. Hence the accepted answer pretty perfectly explains what I wanted to know. – Golo Roden Aug 24 '17 at 08:57
  • then you'll get a better answer by providing system details etc. – BodgeIT Aug 24 '17 at 09:00
  • A realistic number could be provided by sharing the experience of similarly configured systems. From there hopefully we can project a realistic limit. – Salvador Valencia Nov 21 '17 at 19:09

3 Answers

59

There are a number of system limits you can run into (and work around), but there's a significant amount of grey area depending on:

  1. How you are configuring your docker containers.
  2. What you are running in your containers.
  3. What kernel, distribution and docker version you are on.

The figures below are from the boot2docker 1.11.1 VM image, which is based on Tiny Core Linux 7. The kernel is 4.4.8.

Docker

Docker creates or uses a number of resources to run a container, on top of what you run inside the container.

  • Attaches a virtual ethernet adaptor to the docker0 bridge (1023 max per bridge)
  • Mounts an AUFS and shm file system (1048576 mounts max per fs type)
  • Creates an AUFS layer on top of the image (127 layers max)
  • Forks 1 extra docker-containerd-shim management process (~3MB per container on avg and sysctl kernel.pid_max)
  • Docker API/daemon internal data to manage the container (~400k per container)
  • Creates kernel cgroups and namespaces
  • Opens file descriptors (~15 + 1 per running container at startup; ulimit -n and sysctl fs.file-max); see the sketch after this list for how to check these limits
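
A minimal sketch of how to check the limits mentioned in this list (standard sysctl and ulimit invocations; values will differ per system):

# Max PIDs the kernel will allocate (each container adds its own processes plus a shim)
sysctl kernel.pid_max

# System-wide and per-process file descriptor limits
sysctl fs.file-max
ulimit -n

# Current number of mounts (each container adds layer and shm mounts)
wc -l /proc/mounts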

Docker options

  • Port mapping -p will run an extra process per port number on the host (~4.5MB per port on avg pre 1.12, ~300k per port > 1.12, and also sysctl kernel.pid_max)
  • --net=none and --net=host would remove the networking overheads (see the example after this list)
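
As a quick illustration of the difference (nginx is just an arbitrary example image here):

# Each published port adds a proxy process and bridge networking overhead
docker run -d -p 8080:80 nginx

# Host networking: no veth pair on docker0 and no per-port proxy process
docker run -d --net=host nginx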

Container services

The overall limits will normally be decided by what you run inside the containers rather than Docker's overhead (unless you are doing something esoteric, like testing how many containers you can run :)

If you are running apps in a virtual machine (node, ruby, python, java), memory usage is likely to become your main issue.
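
If memory is the main issue, one common mitigation is to cap each container (a sketch; the limits and the my-node-app image name are illustrative):

# Hard cap of 256MB and a soft reservation of 128MB per container
docker run -d -m 256m --memory-reservation 128m my-node-app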

IO across 1000 processes would cause a lot of IO contention.

1000 processes trying to run at the same time would cause a lot of context switching (see the VM apps above for garbage collection).

If you create network connections from 1000 containers, the host's network layer will get a workout.

It's not much different from tuning a Linux host to run 1000 processes, just with some additional Docker overheads to include.
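
A sketch of the kind of host tuning involved (the values are illustrative, not recommendations):

# Raise process and file descriptor ceilings
sysctl -w kernel.pid_max=131072
sysctl -w fs.file-max=2097152

# More room for connection tracking and outbound ports if containers are network-heavy
sysctl -w net.netfilter.nf_conntrack_max=262144
sysctl -w net.ipv4.ip_local_port_range="1024 65535"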

Example

1023 Docker busybox containers running nc -l -p 80 -e echo host use up about 1GB of kernel memory and 3.5GB of system memory.

1023 plain nc -l -p 80 -e echo host processes running on a host use about 75MB of kernel memory and 125MB of system memory.

Starting 1023 containers serially took ~8 minutes.

Killing 1023 containers serially took ~6 minutes.
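
A sketch of how a test like this could be scripted (the image and command mirror the example above; the exact loop used is an assumption):

# Start 1023 busybox containers, each running a tiny listener
for i in $(seq 1 1023); do
  docker run -d busybox nc -l -p 80 -e echo host
done

# Tear them all down again
docker ps -aq | xargs docker rm -f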

Matt
  • Is your point that the number of simultaneous containers is restricted by available system resources? Because that's more or less what your example demonstrates. The more system resources you have, the more containers you can run, until you hit specific tunable kernel limits like `pid_max` – Ben Whaley Jun 06 '16 at 17:00
  • 2
    My point is that when using Docker's form of a container there are a lot of overheads to consider. A system _won't_ hit `pid_max` containers. That number is divided by 2 to start with due to Docker's additional processes. If it were tuned low you might get close to `max_pid/2`. If you take the standard 32768 value then that's at most 16384 container processes. Docker will be opening at least 134 225 920 file descriptors by that stage (~134GB slab of kernel mem alone). With pure containers, `pid_max` might come into play. Docker's practical limits are much lower. – Matt Jun 07 '16 at 08:08
  • So then do you say `fs.file-max` is the limit as that's what you run into first? Or the 1023 attachments to a bridge? It all depends and they all contribute. The "how many x can I y" question in Linux is never a simple one and always involves system resources and some odd kernel internals. I wanted to provide more detail on Docker's requirements so that users can apply that to their situation. – Matt Jun 07 '16 at 09:31
  • How did you start 1023 containers serially? I've been using a script to `docker run ...` busybox images n times to get n running busybox images. I've been maxing out at 280ish running busybox images. I seem to be running out of system memory. I look at the HyperV details on my MobyLinuxVM and the MemoryDemanded is: 6700MB – William Lin Aug 13 '18 at 19:05
  • @WilliamLin I don't recall doing anything special, that bit could have been running on a physical Debian host. – Matt Aug 31 '18 at 21:36
  • @Matt I think my issue was the amount of memory given to Docker. I only had 2GB allocated to Docker. Did you have more? – William Lin Sep 11 '18 at 15:17
  • @WilliamLin Yes, it would have been > 4GB, most likely 8. – Matt Sep 14 '18 at 23:59
18

From a post on the mailing list, at about 1000 containers you start running into Linux networking issues.

The reason is:

This is in the kernel: specifically, BR_PORT_BITS in net/bridge/br_private.h cannot be extended because of spanning tree requirements.
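
BR_PORT_BITS is 10, so a bridge has at most 2^10 = 1024 port numbers, which in practice leaves 1023 for container veth interfaces (matching the 1023-per-bridge figure in the answer above). To see how close a host is to that ceiling (assuming the default docker0 bridge), something like this works:

# Count the interfaces currently attached to the docker0 bridge
ls /sys/class/net/docker0/brif | wc -l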

darron
  • 4
    You can improve this answer by providing a link to the "post on the mailing list", and also explain that there is a `docker0` bridge, which has `veth` interfaces attached to it for each of the containers, and that the `BR_PORT_BITS` limit implies the maximum number of `veth` interfaces is the maximum integer that 10 bits can represent. – errordeveloper Mar 10 '15 at 10:56
  • 2
    @errordeveloper possibly here: https://groups.google.com/d/msg/docker-user/k5hqpNg8gwQ/-00mvrB2nIkJ – Bryan Mar 10 '15 at 12:06
  • Please make this clearer: what is the maximum integer that 10 bits can represent? – Jonathan Aug 28 '16 at 21:57
4

With Docker Compose, I am able to run over 6k containers on a single host (with 190GB of memory). The container image is under 10MB. But due to the bridge limitation, I have divided the containers into batches across multiple services; each service has 1k containers and a separate subnet.

docker-compose -f docker-compose.yml up --scale servicename=1000 -d

But after reaching 6k containers, even though around 60GB of memory is still available, it stops scaling and memory usage suddenly spikes. There should be benchmarking figures published by the Docker team to help with this, but unfortunately they are not available. Kubernetes, on the other hand, clearly publishes benchmarking stats about the recommended number of pods per node.
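
A rough sketch of what that batched layout might look like in a compose file (service names, subnets, and the image are assumptions, not the author's actual configuration):

version: "3"
services:
  batch1:
    image: myapp:latest   # hypothetical <10MB image
    networks: [net1]
  batch2:
    image: myapp:latest
    networks: [net2]
networks:
  net1:
    ipam:
      config:
        - subnet: 10.101.0.0/16
  net2:
    ipam:
      config:
        - subnet: 10.102.0.0/16

Each service can then be scaled separately, e.g. docker-compose up --scale batch1=1000 --scale batch2=1000 -d, so no single bridge has to carry more than about a thousand veth interfaces.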

rams time