I have found plenty of questions regarding the TensorFlow warning:

```
tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of xxxxxxxxx exceeds 10% of system memory
```
I know that it is just a warning, not an error, and that its display can be suppressed. I also know how to (most likely) address the issue: reduce the batch size.
However, I have never seen the issue addressed the other way around:
Given my network and the current state of the system it runs on, what is the largest batch size that will fit without problems?
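For context, the kind of answer I'm after could in principle be computed by hand: memory usage is roughly a batch-independent part (weights, optimizer state) plus a per-sample part (activations, gradients). A back-of-envelope sketch of that arithmetic, with all numbers purely hypothetical, would be:

```python
def max_batch_size(free_bytes, bytes_per_sample, fixed_bytes, safety=0.8):
    """Back-of-envelope estimate of the largest batch that fits in memory.

    free_bytes:       memory available to the process
    bytes_per_sample: per-sample activation/gradient footprint (estimated)
    fixed_bytes:      weights, optimizer state, and other batch-independent memory
    safety:           headroom factor, since allocators fragment and buffer
    """
    usable = free_bytes * safety - fixed_bytes
    return max(0, int(usable // bytes_per_sample))

# Hypothetical numbers: 8 GiB free, 50 MiB of activations per sample,
# 500 MiB of weights plus optimizer state.
print(max_batch_size(8 * 2**30, 50 * 2**20, 500 * 2**20))
```

The hard part, of course, is getting `bytes_per_sample` and `fixed_bytes` right, which is exactly why I'd like to read them out of TensorFlow rather than guess.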
So, is there a way to access what TensorFlow is doing internally from within the Python (3.7) interface?
I'm running TF 2.2.0 on my CPU. I know there are ways to limit GPU memory usage (let it grow, cap it below 100% of available memory, etc.), but I have not found an equivalent for the CPU. Creating a logical device with a memory limit is not supported for CPUs (https://www.tensorflow.org/api_docs/python/tf/config/set_logical_device_configuration).
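The closest workaround I've found is to *monitor* (not limit) the process's memory from outside TensorFlow and probe batch sizes empirically. A minimal sketch using only the standard library's `resource` module (Unix-only; this is not a TensorFlow API):

```python
import resource
import sys

def peak_rss_bytes():
    """Peak resident set size of this process so far (Unix only).

    ru_maxrss is reported in kilobytes on Linux but in bytes on macOS.
    """
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return rss if sys.platform == "darwin" else rss * 1024

# Call between training steps (e.g. from a Keras callback) to see how much
# memory a given batch size actually uses relative to system memory.
print(f"peak RSS: {peak_rss_bytes() / 2**20:.1f} MiB")
```

This only tells me what a batch size *did* cost after the fact, though, not the largest one that would have fit, which is what I'm really asking about.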