I'm working on object detection of various sorts of animals using the TensorFlow Object Detection API. In the past I successfully applied MobileNet v1 in various settings and was happy with the results.
Now I've run into a problem with a new species that is about 1/3 smaller than the animals I dealt with before. Visually, the animals look the same up to scale, which means the bounding boxes to be predicted are in the range of 5-15% of the image size rather than 20-30% as before.
I have the feeling there should be some hyperparameter I need to tweak to get things working again, but I struggle to find the right one in the pipeline config. I already experimented with tuning min_scale and max_scale of the anchor_generator towards smaller values, but with no success.
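For reference, this is the kind of change I tried in the anchor_generator block of my pipeline config (the exact numbers below are just one of the variants I experimented with, starting from the defaults of the ssd_mobilenet_v1 sample config):

```
anchor_generator {
  ssd_anchor_generator {
    num_layers: 6
    # defaults are min_scale: 0.2, max_scale: 0.95;
    # lowered min_scale hoping to cover boxes around 5-15% of the image
    min_scale: 0.1
    max_scale: 0.9
    aspect_ratios: 1.0
    aspect_ratios: 2.0
    aspect_ratios: 0.5
    aspect_ratios: 3.0
    aspect_ratios: 0.3333
  }
}
```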
Interestingly, using Faster RCNN works right away on the exact same data.
Any ideas on what else could be tried?