I have already trained an FCN model on fixed-size 256x256 images. Could the experts advise me on how to train the same model when the image size varies from one image to the next?
I really appreciate your advice. Thanks.
You can choose one of these strategies:
Train with each image as its own batch, and reshape the net in the forward() method (rather than in reshape()) of the data layer, thus changing the net's shape at every iteration.
+Write the reshape once in the forward method and you no longer need to worry about input shapes and sizes.
-Reshaping the net often requires allocation/deallocation of CPU/GPU memory, and therefore it takes time.
-A single image per batch may be too small a batch for stable training.
For example (assuming you are using a "Python"
layer for input):
def reshape(self, bottom, top):
    pass  # you do not reshape here

def forward(self, bottom, top):
    # reshape the top blobs themselves (not top[i].data) - this propagates
    # the new shape to the rest of the net at each iteration
    top[0].reshape( ... )  # shape of the current image
    top[1].reshape( ... )  # shape of the current label
    # feed the data to the net
    top[0].data[...] = current_img
    top[1].data[...] = current_label
You can decide on a fixed input size and then randomly crop all input images (and the corresponding ground-truth labels) to that size.
+No need to reshape every iteration (faster).
+Control over the model size during training.
-You need to implement random cropping for both images and labels.
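A minimal sketch of such a random-crop helper, assuming numpy arrays with images as (H, W, C) and labels as (H, W); `random_crop` is a hypothetical name, not part of Caffe. The one thing to get right is using the same offsets for image and label so they stay pixel-aligned:

```python
import numpy as np

def random_crop(img, label, size, rng=None):
    """Crop an image and its label map to a fixed spatial size.

    Hypothetical helper (not part of Caffe). Assumes img is (H, W, C),
    label is (H, W), and both are at least `size` in each dimension.
    """
    if rng is None:
        rng = np.random.default_rng()
    h, w = img.shape[:2]
    ch, cw = size
    # Pick one random top-left corner...
    y = rng.integers(0, h - ch + 1)
    x = rng.integers(0, w - cw + 1)
    # ...and apply it to BOTH arrays so image and label stay aligned.
    return img[y:y + ch, x:x + cw], label[y:y + ch, x:x + cw]
```

In a Caffe Python data layer you would call this once per image in forward() before copying the crops into the top blobs.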
Resize all images to the same size (as done in SSD, for example).
+Simple.
-Images are distorted if they do not all have the same aspect ratio.
-You are not invariant to scale.
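If you go the resizing route, the main pitfall is the label map: it must be resized with nearest-neighbor interpolation, since bilinear interpolation would blend class ids into invalid values. In practice you might use cv2.resize with interpolation=cv2.INTER_NEAREST; the sketch below is a plain-numpy nearest-neighbor resize (hypothetical helper, `resize_nearest`) just to make the point self-contained:

```python
import numpy as np

def resize_nearest(arr, out_h, out_w):
    """Nearest-neighbor resize of an (H, W) or (H, W, C) array.

    Hypothetical helper. For label maps, nearest-neighbor is essential:
    interpolating class ids would produce meaningless blended values.
    """
    h, w = arr.shape[:2]
    # Map each output pixel back to its nearest source pixel.
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return arr[rows[:, None], cols]
```

Images themselves can safely use a smoother interpolation (bilinear, area); only the ground-truth labels need this treatment.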