
I have a 4-D tensor `a` with `a.shape == torch.Size([32, 46, 55, 46])`. Here dimension 0 is the batch size, dimensions 1 and 3 are the image height and width, and dimension 2 is the volume (depth) of the image. When I try to feed it into a `Conv3d` I get this error: RuntimeError: Expected 5-dimensional input for 5-dimensional weight [100, 55, 3, 3, 3], but got 4-dimensional input of size [32, 55, 46, 46] instead

import torch
import torch.nn as nn
import torch.nn.functional as F

simple_model = nn.Sequential(
    nn.Conv3d(55, 100, kernel_size=3, stride=1, padding=1),
)
for images in train_dl:  # train_dl yields batches of shape [32, 46, 55, 46]
    print('images.shape:', images.shape)
    out = simple_model(images.permute(0, 2, 1, 3))  # -> [32, 55, 46, 46], still 4-D
    print('out.shape:', out.shape)
    break

This is the code I was working on; `next(iter(train_dl)).shape` is `torch.Size([32, 46, 55, 46])`.
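As a point of comparison, here is a minimal sketch of the 2-D route (random data standing in for the DataLoader batch; `model_2d` is a placeholder name): treating the 55 volume slices as input channels, a `Conv2d` accepts the permuted 4-D batch directly.

```python
import torch
import torch.nn as nn

# Random data with the same shape as a batch from train_dl: [N, H, D, W]
images = torch.randn(32, 46, 55, 46)
x = images.permute(0, 2, 1, 3)  # -> [32, 55, 46, 46], i.e. [N, C, H, W]

# The 55 depth slices become the in_channels of a 2-D convolution
model_2d = nn.Conv2d(55, 100, kernel_size=3, stride=1, padding=1)
out = model_2d(x)
print(out.shape)  # torch.Size([32, 100, 46, 46])
```

With `kernel_size=3` and `padding=1` the spatial size 46x46 is preserved, so only the channel count changes.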

  • Check the [docs](https://pytorch.org/docs/stable/generated/torch.nn.Conv3d.html): according to your description, and assuming "volume" is the number of channels, your input is missing one extra dimension (D). Are you sure `nn.Conv3d` is the correct layer to process your input? – aretor Mar 03 '22 at 10:57
  • @aretor Yes, as you pointed out, using conv2d solved the problem. But why not use conv3d for volumes? Where exactly is conv3d used, then? –  Mar 03 '22 at 11:32
  • You need to add a singleton channel dimension to your data if you are using a 3D conv. – Shai Mar 03 '22 at 11:46
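A minimal sketch of the `Conv3d` route the last comment suggests, assuming the 55 slices should be treated as a true depth axis: insert a singleton channel dimension with `unsqueeze` so the input becomes 5-D, `[N, C, D, H, W]`. Random data stands in for the DataLoader batch, and `model_3d` is a placeholder name.

```python
import torch
import torch.nn as nn

# Random data with the same shape as a batch from train_dl: [N, H, D, W]
images = torch.randn(32, 46, 55, 46)

# Move depth in front of height/width, then add a channel dimension of size 1
vol = images.permute(0, 2, 1, 3).unsqueeze(1)  # -> [32, 1, 55, 46, 46]

# in_channels is now 1; the kernel convolves over D, H, and W jointly
model_3d = nn.Conv3d(1, 100, kernel_size=3, stride=1, padding=1)
out = model_3d(vol)
print(out.shape)  # torch.Size([32, 100, 55, 46, 46])
```

This is the key difference from the 2-D version: a 3-D kernel slides along the depth axis as well, so relationships between neighbouring slices are learned rather than each slice being an independent channel.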

0 Answers