
I understand the use of replicas in Docker Swarm mode: it is mainly to eliminate points of failure and reduce downtime. It is well explained in this post.

Since having more replicas is more useful for a system as a whole, why don't companies just initialise as many replicas as possible, e.g. 1000 replicas for a Docker service? I can imagine a large corporation running a back-end system may face multiple points of failure at any given time, and it would benefit from having more instances of the particular service.
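For example, something like this (the service and image names here are just placeholders):

    docker service create --name my-backend --replicas 1000 myorg/backend:latest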

I would like to know how many replicas are considered TOO MANY, and what factors affect the performance of a Docker Swarm?

Tom Halson
Seng Wee

1 Answer


I can think of hardware overhead as one limiting factor.

Let's say you're running a Rails app. Each instance requires 128 MB of RAM and 10% of a CPU. Nine instances is a touch over 1 GB of memory and nearly an entire CPU core.
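As a rough sketch, those per-instance numbers could be expressed as Swarm resource reservations (the service and image names here are just placeholders):

    # Reserve 128 MB of RAM and 10% of a CPU for each of the 9 replicas;
    # the scheduler will only place tasks on nodes with that capacity free.
    docker service create \
      --name rails-app \
      --replicas 9 \
      --reserve-memory 128M \
      --reserve-cpu 0.1 \
      myorg/rails-app:latest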

While that does not sound like a lot, imagine an organization with 100+ teams, each running three to five applications. The hardware requirements to operate every application at acceptable levels quickly ramp up.
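To put rough numbers on it: 100 teams × 4 apps × 9 replicas is 3,600 containers; at 128 MB and 10% of a CPU each, that is on the order of 450 GB of RAM and 360 CPU cores just to keep the replicas running, before any headroom.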

Then there is network chatter. 10 MB/s is typical in big org/corp settings. While a heartbeat check for a couple of instances is barely noticeable, heartbeats across hundreds of instances can jam up the network.
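If heartbeat traffic ever becomes the bottleneck, Swarm does let you tune the interval (the 15s value below is just an example; the default is 5s), trading slower failure detection for less chatter:

    # Raise the agent heartbeat interval from the 5s default,
    # reducing control-plane chatter at the cost of slower failure detection.
    docker swarm update --dispatcher-heartbeat 15s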

At the end of the day it comes down to constraints. What are the boundaries of the software, hardware, environment, budget, and support systems? It is often hard to imagine the pressures present when (technical) decisions are made.

David J Eddy