import torch.nn as nn
import torchvision as tv

class ResNet(nn.Module):
    def __init__(self, output_features, fine_tuning=False):
        super(ResNet, self).__init__()
        self.resnet152 = tv.models.resnet152(pretrained=True)

        # freeze the feature extraction layers unless fine-tuning
        for param in self.resnet152.parameters():
            param.requires_grad = fine_tuning

        # self.features = self.resnet152.features  # ResNet has no 'features' module

        self.num_fts = 512
        self.output_features = output_features

        # linear layer mapping num_fts to output_features
        self.classifier = nn.Linear(self.num_fts, self.output_features)
        nn.init.xavier_uniform_(self.classifier.weight)

        self.tanh = nn.Tanh()

    def forward(self, x):
        h = self.resnet152(x)
        print('h:', h.shape)
        return h

image_model_resnet152 = ResNet(output_features=10).to(device)
image_model_resnet152

Here, after printing the image_model_resnet152, I get:

[screenshot of the printed model architecture]

Here, what is the difference between (avgpool): Linear(in_features=2048) and (classifier): Linear(in_features=512)?

I am implementing an image captioning model, so which in_features should I use for the image?

  • How was `ResNet` defined? – Ivan Aug 22 '21 at 10:14
  • Yes, I have updated the question. Would you mind checking once more? – Rafi Aug 22 '21 at 10:17
  • Did you implement this module on your own? So, are you asking about the difference between an adaptive average pool layer and a fully connected layer? – Ivan Aug 22 '21 at 10:18
  • Yes, I previously implemented it using VGG16. That model has a module named features, and whenever I passed an image of shape [1,3,224,224] to it, the module returned a tensor of shape [1,512,7,7]. But ResNet152 has no module like that, so I am now a bit confused. – Rafi Aug 22 '21 at 10:23

1 Answer


ResNet is not as straightforward as VGG: it is not a sequential model, i.e. there is model-specific logic inside the forward definition of torchvision.models.resnet152, for instance the flattening of features between the CNN and the classifier. You can take a look at its source code.
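For reference, the tail of ResNet's forward pass looks roughly like this (a paraphrased sketch of the torchvision source, not a verbatim copy; exact method names may differ between versions):

# sketch of the end of torchvision's ResNet forward pass:
# after the four convolutional stages (layer1..layer4), the features
# are pooled, flattened, and passed through a single fc layer
x = self.layer4(x)       # -> (N, 2048, 7, 7) for a 224x224 input
x = self.avgpool(x)      # AdaptiveAvgPool2d -> (N, 2048, 1, 1)
x = torch.flatten(x, 1)  # -> (N, 2048)
x = self.fc(x)           # Linear(2048, 1000) -> (N, 1000)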

The easiest thing to do in this case is to attach a hook to the last convolutional block of the CNN, layer4, and log that layer's output in an external dict. This is done with register_forward_hook.

Define the hook:

out = {}
def result(module, input, output):
    out['layer4'] = output

Attach the hook to the submodule resnet.layer4:

>>> x = torch.rand(1,3,224,224)
>>> resnet = torchvision.models.resnet152()

>>> resnet.layer4.register_forward_hook(result)

After inference you will have access to the result inside of out:

>>> resnet(x)
>>> out['layer4'].shape
torch.Size([1, 2048, 7, 7])

You can look at another answer of mine for more in-depth usage of forward hooks.
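One detail worth knowing (a minimal sketch, not taken from the linked answer): register_forward_hook returns a handle, so you can detach the hook once you no longer need it:

# register_forward_hook returns a RemovableHandle;
# call .remove() to detach the hook when you are done with it
handle = resnet.layer4.register_forward_hook(result)
resnet(x)        # hook fires, fills out['layer4']
handle.remove()  # later forward passes no longer trigger it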


A possible implementation would be:

class NN(nn.Module):
    def __init__(self):
        super().__init__()
        self.resnet = torchvision.models.resnet152()
        # register the bound method as the hook
        self.resnet.layer4.register_forward_hook(self.result)
        self.out = {}

    def result(self, module, input, output):
        self.out['layer4'] = output

    def forward(self, x):
        self.resnet(x)  # fills self.out via the hook
        return self.out['layer4']

You can then define additional layers for your custom classifier and call them inside forward.
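As a rough sketch of that last step (the layer sizes and the CaptionEncoder name here are my own assumptions, not part of the answer above): pool the (N, 2048, 7, 7) hook output down to (N, 2048) and feed it to your own linear layer, e.g. as an image-captioning encoder:

import torch
import torch.nn as nn
import torchvision

class CaptionEncoder(nn.Module):
    """Hypothetical encoder: ResNet152 features -> embedding of size output_features."""
    def __init__(self, output_features):
        super().__init__()
        self.resnet = torchvision.models.resnet152(pretrained=True)
        self.resnet.layer4.register_forward_hook(self.result)
        self.out = {}
        self.pool = nn.AdaptiveAvgPool2d(1)                 # (N, 2048, 7, 7) -> (N, 2048, 1, 1)
        self.classifier = nn.Linear(2048, output_features)  # 2048 matches layer4's channels

    def result(self, module, input, output):
        self.out['layer4'] = output

    def forward(self, x):
        self.resnet(x)                                # fills self.out via the hook
        h = self.pool(self.out['layer4']).flatten(1)  # (N, 2048)
        return self.classifier(h)

enc = CaptionEncoder(output_features=10)
print(enc(torch.rand(1, 3, 224, 224)).shape)  # torch.Size([1, 10])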

  • Thank you very, very much!! This is the broadest and best explanation I have found on the internet. It will help me a lot. – Rafi Aug 22 '21 at 10:51