
I have a Detectron2 .pth model that I successfully converted to a Caffe2 .pb via the Detectron2 conversion tool located here: https://github.com/facebookresearch/detectron2/blob/master/tools/caffe2_converter.py

As recommended, I used the --run-eval flag to confirm results while converting, and the results are very similar to the original Detectron2 results.

To run inference on a new image using the resulting model.pb and model_init.pb files, I used the functionality located here (mostly): https://github.com/facebookresearch/detectron2/blob/master/detectron2/export/api.py and https://github.com/facebookresearch/detectron2/blob/master/detectron2/export/caffe2_inference.py

However, the inference results are not even close. Can anybody suggest reasons why this might happen? The Detectron2 repo says all preprocessing is done in the Caffe2 scripts, but am I missing something?

I can provide my inference code:

import cv2
import torch
from detectron2.export import Caffe2Model

caffe2_model = Caffe2Model.load_protobuf(input_directory)
img = cv2.imread(input_image)  # BGR, HWC uint8
image = torch.as_tensor(img.astype("float32").transpose(2, 0, 1))  # CHW float32
data = {'image': image, 'height': image.shape[1], 'width': image.shape[2]}
output = caffe2_model([data])
gg19
1 Answer


The dimensions of your input_image should be multiples of 32, so you probably need to resize your input image.

So you need:

import cv2
import torch
from detectron2.export import Caffe2Model

caffe2_model = Caffe2Model.load_protobuf(input_directory)
img = cv2.imread(input_image)
img = cv2.resize(img, (64, 64))  # both dimensions are multiples of 32
image = torch.as_tensor(img.astype("float32").transpose(2, 0, 1))
data = {'image': image, 'height': image.shape[1], 'width': image.shape[2]}
output = caffe2_model([data])
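A fixed 64×64 resize throws away a lot of detail. If the constraint is only that each side be divisible by 32 (as the answer states), a gentler approach is to round each dimension up to the nearest multiple of 32 and resize to that. This is a hedged sketch, not Detectron2 API code; the round_up helper is my own illustration:

```python
def round_up(n, divisor=32):
    # Smallest multiple of `divisor` that is >= n
    return ((n + divisor - 1) // divisor) * divisor

# Example: a 500x375 image would be resized to 512x384,
# keeping far more detail than a 64x64 resize.
h, w = 500, 375
new_h, new_w = round_up(h), round_up(w)
```

You would then call cv2.resize(img, (new_w, new_h)) before building the input tensor, exactly as in the snippet above.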

See the class detectron2.export.Caffe2Tracer in the docs: https://detectron2.readthedocs.io/en/latest/modules/export.html#detectron2.export.Caffe2Tracer

Vitor Bento