I have a Detectron2 .pth model that I successfully converted to Caffe2 .pb using the converter script here: https://github.com/facebookresearch/detectron2/blob/master/tools/caffe2_converter.py
As recommended, I used the --run-eval flag to verify results during conversion, and they closely match the original Detectron2 results.
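For context, the conversion invocation was along these lines (the config file and weights path are placeholders, not my exact paths; the flags are the ones the script documents):

```shell
# Convert a Detectron2 .pth checkpoint to Caffe2 protobufs and
# evaluate the converted model on the test set (--run-eval).
python tools/caffe2_converter.py \
    --config-file configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml \
    --output ./caffe2_model \
    --run-eval \
    MODEL.WEIGHTS model_final.pth \
    MODEL.DEVICE cpu
```

This produced the model.pb and model_init.pb files mentioned below.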
To run inference on a new image with the resulting model.pb and model_init.pb files, I used the functionality here: https://github.com/facebookresearch/detectron2/blob/master/detectron2/export/api.py (mostly) and here: https://github.com/facebookresearch/detectron2/blob/master/detectron2/export/caffe2_inference.py
However, the inference results are not even close to the originals. Can anybody suggest reasons why this might happen? The Detectron2 repo says all preprocessing is done in the Caffe2 scripts, but am I missing something?
Here is my inference code:
import cv2
import torch
from detectron2.export import Caffe2Model

caffe2_model = Caffe2Model.load_protobuf(input_directory)

img = cv2.imread(input_image)  # HWC, BGR, uint8
# Convert to CHW float32 tensor, as the Detectron2 model format expects
image = torch.as_tensor(img.astype("float32").transpose(2, 0, 1))
# After the transpose, shape[1] is height and shape[2] is width
data = {"image": image, "height": image.shape[1], "width": image.shape[2]}
output = caffe2_model([data])
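One thing I am unsure about: DefaultPredictor resizes the input with ResizeShortestEdge before inference, and I don't know whether the converted Caffe2 graph includes that resize, so feeding the raw full-resolution image may be part of the mismatch. As a sanity check I recomputed the target size the way ResizeShortestEdge does (this helper is my own re-implementation; 800 and 1333 are the cfg.INPUT.MIN_SIZE_TEST / MAX_SIZE_TEST defaults, which may differ for other configs):

```python
def shortest_edge_size(h, w, min_size=800, max_size=1333):
    """Compute the resized (h, w) the way detectron2's ResizeShortestEdge does:
    scale so the shorter side equals min_size, then cap the scale so the
    longer side does not exceed max_size. (My re-implementation, for checking.)"""
    scale = min_size / min(h, w)
    if max(h, w) * scale > max_size:
        scale = max_size / max(h, w)
    return int(round(h * scale)), int(round(w * scale))

# e.g. a 480x640 image would be resized to 800x1067 before inference
print(shortest_edge_size(480, 640))
```

If the converted graph really does expect a pre-resized image, I would resize img to this size (e.g. with cv2.resize) before building the tensor, while keeping the original height/width in the data dict so the outputs are rescaled back.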