
I created the following deep network with dropout layers:

import torch
import torch.nn as nn
import torch.nn.functional as F

class QNet_dropout(nn.Module):

    """
        An MLP with 2 hidden layers and dropout

        observation_dim (int): number of observation features
        action_dim (int): Dimension of each action
        seed (int): Random seed
    """

    def __init__(self, observation_dim, action_dim, seed):
        super(QNet_dropout, self).__init__()
        self.seed = torch.manual_seed(seed)
        self.fc1 = nn.Linear(observation_dim, 128)
        self.fc2 = nn.Dropout(0.5)
        self.fc3 = nn.Linear(128, 64)
        self.fc4 = nn.Dropout(0.5)
        self.fc5 = nn.Linear(64, action_dim)

    def forward(self, observations):
        """
           Forward propagation of neural network

        """

        x = F.relu(self.fc1(observations))
        x = F.linear(self.fc2(x))
        x = F.relu(self.fc3(x))
        x = F.linear(self.fc4(x))
        x = self.fc5(x)
        return x

However, when I tried to run the code, I got the following error:

/home/workspace/QNetworks.py in forward(self, observations)
     90 
     91         x = F.relu(self.fc1(observations))
---> 92         x = F.linear(self.fc2(x))
     93         x = F.relu(self.fc3(x))
     94         x = F.linear(self.fc4(x))

TypeError: linear() missing 1 required positional argument: 'weight'

It seems I didn't use/forward the dropout layers properly. What is the correct way to apply a dropout layer in the forward pass? Thanks!


1 Answer

The F.linear() function is being used incorrectly: its signature is F.linear(input, weight, bias=None), so calling it with a single argument raises the missing 'weight' TypeError you see. You should call the dropout modules you defined in __init__ (self.fc2, self.fc4) directly instead of going through torch.nn.functional. The dropout should also be applied after the ReLU, which you can call from torch.nn.functional.
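
For reference, a minimal sketch of what F.linear() actually computes; the weight and bias tensors here are made up purely for illustration:

import torch
import torch.nn.functional as F

x = torch.rand(8, 128)           # a batch of 8 inputs
weight = torch.rand(64, 128)     # shape (out_features, in_features)
bias = torch.rand(64)
out = F.linear(x, weight, bias)  # computes x @ weight.t() + bias
print(out.shape)                 # torch.Size([8, 64])

Here is the corrected network: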

import torch
import torch.nn as nn
import torch.nn.functional as F

class QNet_dropout(nn.Module):

    """
        An MLP with 2 hidden layers and dropout

        observation_dim (int): number of observation features
        action_dim (int): Dimension of each action
        seed (int): Random seed
    """

    def __init__(self, observation_dim, action_dim, seed):
        super(QNet_dropout, self).__init__()
        self.seed = torch.manual_seed(seed)
        self.fc1 = nn.Linear(observation_dim, 128)
        self.fc2 = nn.Dropout(0.5)
        self.fc3 = nn.Linear(128, 64)
        self.fc4 = nn.Dropout(0.5)
        self.fc5 = nn.Linear(64, action_dim)

    def forward(self, observations):
        """
           Forward propagation of neural network

        """
        x = self.fc2(F.relu(self.fc1(observations)))  # linear -> ReLU -> dropout
        x = self.fc4(F.relu(self.fc3(x)))             # linear -> ReLU -> dropout
        x = self.fc5(x)                               # output layer, no activation
        return x

observation_dim = 512
model = QNet_dropout(observation_dim, 10, 512)
batch_size = 8
inpt  = torch.rand(batch_size, observation_dim)
output = model(inpt)
print ("output shape: ", output.shape)