I am able to train on a local machine that has 4x 1080 Tis, and as others have noted, TF grabs all the available GPU memory on my machine. After poking around for a bit, most searches lead me to solutions for base TF rather than the Object Detection API, for example:
How to prevent tensorflow from allocating the totality of a GPU memory?
How do I access these sorts of options from within the Object Detection API, so that I have the same control over training that base TF gives me? Is there a proper way to do this within the OD API / slim API?
I tried adding a GPUOptions message to the training.proto, but that didn't seem to make a difference.