
I have created deep learning models with different input shapes. For testing, I am manually resizing the images to match each model's input shape. I need to resize the image to the input shape of the deep model. Is there any command to find the input shape of a model in PyTorch?

model = torch.load(config.MODEL_PATH).to(config.DEVICE)  # load the trained model
im = cv2.resize(im, (INPUT_IMAGE_HEIGHT, INPUT_IMAGE_HEIGHT))  # note: cv2.resize expects (width, height)

How can I find the INPUT_IMAGE_HEIGHT from model?

Aneesh R P
  • Does this answer your question? [PyTorch model input shape](https://stackoverflow.com/questions/66488807/pytorch-model-input-shape) – GoodDeeds May 23 '22 at 10:28

1 Answer


This is a tricky question because your input size can depend on several components of your model. The short answer is: you can't.


Concerning the number of channels in your input tensor, you can infer this solely from the first convolutional layer. Assuming your model is a two-dimensional convolutional network, you can get the input channel count with:

for child in model.modules():
    if type(child).__name__ == 'Conv2d':
        # for a standard (groups=1) Conv2d, weight has shape (out_channels, in_channels, kH, kW)
        print(child.weight.size(1))  # number of input channels
        break
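
As a small follow-up (a sketch, not part of the original answer): `nn.Conv2d` also exposes an `in_channels` attribute, so the same lookup can be wrapped in a hypothetical helper such as:

import torch.nn as nn

def first_conv_in_channels(model: nn.Module) -> int:
    # Return the input channel count of the first Conv2d found in the model.
    for child in model.modules():
        if isinstance(child, nn.Conv2d):
            # for a standard convolution (groups=1) this equals child.weight.size(1)
            return child.in_channels
    raise ValueError('no Conv2d layer found in the model')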

Now for the input size, as I said, you may not be able to infer this information at all. Indeed, some convolutional networks, such as classification networks, may require specific dimensions so that the bottleneck can be flattened and fed into a fully-connected head. This is not always true though: networks that use some sort of pooling operation (average or maximum pooling) alleviate the need for a fixed input shape. Other networks, such as dense prediction networks, may not need a specific input shape at all, given the symmetry between input and output...
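
To illustrate the point with toy models (a hedged sketch using made-up architectures, not code from the question): a network that ends in adaptive pooling accepts arbitrary spatial sizes, while one that flattens straight into a fixed-size linear layer only works for the size it was built around:

import torch
import torch.nn as nn

# toy model ending in adaptive pooling: accepts any spatial input size
flexible = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),  # collapses H and W to 1x1 regardless of input size
    nn.Flatten(),
    nn.Linear(8, 10),
)

# toy model that flattens directly into a linear layer: only accepts 32x32 inputs
rigid = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 32 * 32, 10),  # hard-codes the 32x32 spatial size
)

print(flexible(torch.rand(1, 3, 64, 48)).shape)    # works: torch.Size([1, 10])
print(flexible(torch.rand(1, 3, 128, 128)).shape)  # works: torch.Size([1, 10])
print(rigid(torch.rand(1, 3, 32, 32)).shape)       # works: torch.Size([1, 10])
# rigid(torch.rand(1, 3, 128, 128))                # would raise a shape-mismatch RuntimeError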

This all depends on the network's design and as such, I'm afraid there is no definitive answer to your question.

Ivan
  • Suppose I have a model trained with input dim 256x256, is it possible to test the model with image sizes 200x100 or 128x128? – Aneesh R P May 24 '22 at 05:06
  • Result: 3, torch.Size([16, 3, 3, 3]). Can we find any relation with the input shape, like the aspect ratio? – Aneesh R P May 24 '22 at 05:12
  • You can't infer this based on the input and output shapes alone, since nothing is stopping you from having layers in your model which do not decrease the spatial dimensionality by `x`. For instance, pooling layers don't have this property. However, if you have the input shape, you *can* find out the output shape with a simple inference on a noise input, for example: `model(torch.rand(1,3,h,w))`, where `h` and `w` would be your desired input height and width respectively (see the sketch below). – Ivan May 24 '22 at 07:26
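
For completeness, a minimal sketch of the dry-run trick from the last comment, assuming the `model` loaded in the question and a 3-channel input (the 256x256 size is just a placeholder):

import torch

h, w = 256, 256  # placeholder: the input size you intend to test with
model.eval()
with torch.no_grad():
    out = model(torch.rand(1, 3, h, w))  # random noise with the desired input shape
print(out.shape)  # the corresponding output shape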