For deploying a large number of containers (say 25) on a single host, would it be better to build one large custom base image containing all the libraries needed by every application, or to build a custom image for each container with only the libraries that application needs?
The Docker design philosophy is to keep images light (i.e. include only what is required), but it has also been argued that when many containers use the same base image, "resources" can be shared.
A previous question on Resource Sharing, and most of the Docker documentation, say that containers don't share anything. However, this previous question on Multiple Base Images suggests that, regardless of whether I use one large image or many custom images, any overlapping layers will be shared. In that case, the large base image might have slightly higher overhead, but it would be less development work because I could naively throw everything together (and even though the image is huge, disk space is plentiful).
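To make the layer-sharing question concrete, here is a sketch of what I mean by the shared-base approach. The image name `bigbase:1.0` and the paths are made up for illustration; my understanding is that both images built this way would reference the same base layers on disk rather than duplicating them:

```dockerfile
# app-a/Dockerfile
# Hypothetical large base image containing all common libraries.
# Its layers should be stored once on the host and reused below.
FROM bigbase:1.0
# App-specific layer, unique to this image.
COPY app-a /opt/app-a
CMD ["/opt/app-a/run"]
```

```dockerfile
# app-b/Dockerfile
# Same hypothetical base image: these layers are reused, not copied.
FROM bigbase:1.0
COPY app-b /opt/app-b
CMD ["/opt/app-b/run"]
```

Is this the kind of sharing that the "many containers, same base image" argument refers to?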
Technically speaking, what are the pros and cons of one large base image versus many small custom images? And how exactly do containers built from the same base image "share resources"?