When using the TensorFlow 2 Object Detection API, at what point do I normalize images of different sizes? Should I preprocess all the images to the same size and then annotate them with object bounding boxes? Or do the models resize the images internally and adjust the predicted bounding boxes accordingly? The pre-trained models seem to have preset input sizes.
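For reference, the pipeline.config files that ship with the pre-trained models contain an image_resizer block, which I assume is where these preset sizes come from. As far as I can tell it looks roughly like this (exact values differ per model; I'm not sure how it interacts with the annotations):

```
# From an SSD-style pipeline.config (values vary per model):
model {
  ssd {
    image_resizer {
      fixed_shape_resizer {
        height: 640
        width: 640
      }
    }
    # ...
  }
}

# The Faster R-CNN configs seem to use an aspect-ratio-preserving resizer instead:
image_resizer {
  keep_aspect_ratio_resizer {
    min_dimension: 800
    max_dimension: 1333
  }
}
```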
https://github.com/tensorflow/models/tree/master/research/object_detection/models
https://github.com/tensorflow/models/tree/master/research/object_detection
https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/index.html