I'm running a number of short-lived Docker containers, each of which does some memory-intensive batch processing. I'm looking for a way to find the peak memory usage each container hit while it was running. Knowing this would let me optimize the infrastructure I run these containers on for future runs.
One naive way to achieve this is to redirect the streaming output of docker stats to a file: docker stats container_id > stats.log. However, this requires running a process for each container and then sifting through very verbose logs to find the peak usage. I'm wondering if there isn't an easier way.
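For concreteness, here is roughly the kind of thing I mean, sketched as a polling variant that samples docker stats --no-stream instead of redirecting the live stream (the container name batch_job and the one-second interval are placeholders, and a memory spike shorter than the sampling interval would be missed entirely):

```
# Sample the container's memory usage while it runs, then report the
# highest sample seen. Assumes GNU awk/sort and a container named "batch_job".
while docker inspect -f '{{.State.Running}}' batch_job 2>/dev/null | grep -q true; do
    docker stats --no-stream --format '{{.MemUsage}}' batch_job
    sleep 1
done |
awk '{ print $1 }' |  # "512MiB / 7.77GiB" -> keep only the "used" figure
sort -h | tail -n 1   # GNU sort -h orders the KiB/MiB/GiB suffixes
```

This works, but it's one polling process per container and it can only ever see the peaks it happens to sample.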