I am using a simple object detection model in PyTorch for inference.
When I loop over the images one at a time, like this:
for k, image_path in enumerate(image_list):
    image = imgproc.loadImage(image_path)   # load a single image
    print(image.shape)
    x = image.cuda()                         # move the single-image tensor to the GPU
    with torch.no_grad():
        y, feature = net(x)
it prints out variable-sized inputs such as:
torch.Size([1, 3, 384, 320])
torch.Size([1, 3, 704, 1024])
torch.Size([1, 3, 1280, 1280])
However, when I run batch inference with a DataLoader that applies the same transformation, the code fails. If I first resize all the images to 600x600, batch processing runs successfully (see the sketch below).
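For reference, this is roughly what my batched setup looks like; it is a minimal sketch assuming a standard Dataset/DataLoader with a torchvision Resize transform, and ImageDataset and the transform here are placeholders rather than my exact code:

import torch
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms
from PIL import Image

# Placeholder dataset: loads one image path and applies the same transform to every image.
class ImageDataset(Dataset):
    def __init__(self, image_list, transform):
        self.image_list = image_list
        self.transform = transform

    def __len__(self):
        return len(self.image_list)

    def __getitem__(self, idx):
        image = Image.open(self.image_list[idx]).convert("RGB")
        return self.transform(image)

# Batch processing only works for me when every image is resized to a fixed size (e.g. 600x600).
transform = transforms.Compose([
    transforms.Resize((600, 600)),
    transforms.ToTensor(),
])

loader = DataLoader(ImageDataset(image_list, transform), batch_size=8)

with torch.no_grad():
    for batch in loader:
        y, feature = net(batch.cuda())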
I have two questions:
First, why is PyTorch able to accept dynamically sized inputs to a deep learning model? Second, why does dynamically sized input fail in batch processing?
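If it helps, here is a tiny standalone snippet (independent of my model) showing what I suspect the DataLoader's default collate is effectively doing when it tries to stack differently sized images; this is my assumption, not something I have verified against my actual pipeline:

import torch

# Two images with different spatial sizes, like the shapes printed above.
a = torch.randn(3, 384, 320)
b = torch.randn(3, 704, 1024)

try:
    # The default collate_fn effectively stacks the samples into one batch tensor.
    batch = torch.stack([a, b])
except RuntimeError as e:
    print(e)  # stack expects each tensor to be equal size, but got ...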