
I am trying out code for image segmentation for self-driving cars using the Berkeley DeepDrive dataset. I trained the model and tested an image on it, and got an output in tensor format (the segmented image), but I need it in image format. I tried the Image.fromarray function and got the output below:

[output image]

And the actual image is shown below:

[original image]

The model I am using is from this git repo.

mujjiga

2 Answers


For TensorFlow models I used:

import numpy as np
from PIL import Image

# prediction: the model output; original: the PIL image fed to the model
prediction = np.squeeze(prediction)            # drop batch/channel dims of size 1
r = prediction * 255                           # scale a [0, 1] mask to [0, 255]
im = Image.fromarray(r.astype('uint8'), 'L')   # single-channel grayscale image
im = im.resize(original.size)                  # match the original image size
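If the model predicts k classes rather than a single mask, the same idea extends by taking the argmax over the channel axis and mapping class indices to colors. A minimal sketch, assuming a channels-first (k, H, W) prediction and an arbitrary example palette (the shapes, class names, and colors here are assumptions, not part of the model):

```python
import numpy as np
from PIL import Image

# Hypothetical prediction of shape (k, H, W) with per-class scores
k, h, w = 3, 4, 4
rng = np.random.default_rng(0)
prediction = rng.random((k, h, w))

# Collapse the k channel scores into a single class-index map
class_map = prediction.argmax(axis=0).astype('uint8')  # (H, W), values 0..k-1

# Map each class index to an RGB color (palette is an arbitrary example)
palette = np.array([[128, 64, 128],   # e.g. road
                    [220, 20, 60],    # e.g. person
                    [0, 0, 142]],     # e.g. car
                   dtype='uint8')
rgb = palette[class_map]              # (H, W, 3)

im = Image.fromarray(rgb, 'RGB')
```

Fancy indexing with `palette[class_map]` vectorizes the index-to-color lookup, so no per-pixel loop is needed.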

If I understand correctly, your tensors are the result of the model prediction and the underlying model is U-Net. If that is the case, those tensors should represent segmentation masks. If the image used for prediction is of size 512x512 (this depends on the model architecture), then the tensor predicted by U-Net will be of size k x 512 x 512, i.e. k segmentation masks per image. You have to overlay these masks on the image with lighter colors to see how the model segmented the image, so you need access to the image you used for prediction.
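A minimal sketch of such an overlay, assuming a binary mask the same size as the image; the image, mask region, overlay color, and blend weight below are all placeholder assumptions:

```python
import numpy as np
from PIL import Image

# Hypothetical original image and predicted binary mask (assumptions)
original = Image.new('RGB', (512, 512), (100, 120, 140))
mask = np.zeros((512, 512), dtype=bool)
mask[100:200, 150:300] = True  # pretend the model segmented this region

img = np.asarray(original).astype('float32')
color = np.array([255, 0, 0], dtype='float32')  # overlay color (arbitrary)
alpha = 0.4                                     # blend weight (arbitrary)

# Blend the overlay color into the image only where the mask is set
img[mask] = (1 - alpha) * img[mask] + alpha * color

overlay = Image.fromarray(img.astype('uint8'), 'RGB')
```

With k masks, the blending step can be repeated per mask with a different color for each class.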

Since you are using fast.ai APIs, I recommend checking the code of the show_results method of the Learner object to see how they render the output. This should be a good starting point.

mujjiga