
Recently I noticed that when defining neural nets, we often create a separate ReLU object for each layer. Why can't we reuse the same ReLU object wherever it is needed?

For example, instead of writing this:

import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1     = nn.Linear(784, 500)
        self.ReLU_1  = nn.ReLU()
        self.fc2     = nn.Linear(500, 300)
        self.ReLU_2  = nn.ReLU()

    def forward(self, x):
        x = self.fc1(x)
        x = self.ReLU_1(x)
        x = self.fc2(x)
        x = self.ReLU_2(x)
        return x

why can't we use this:

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1  = nn.Linear(784, 500)
        self.ReLU = nn.ReLU()
        self.fc2  = nn.Linear(500, 300)

    def forward(self, x):
        x = self.fc1(x)
        x = self.ReLU(x)
        x = self.fc2(x)
        x = self.ReLU(x)
        return x

Is this something specific to PyTorch?

  • Yet better, just call the corresponding functions from `torch.nn.functional`, no pesky stateless objects... – dedObed Jun 26 '20 at 12:01
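For illustration, here is a minimal sketch of the functional style suggested in the comment above (the class name `Net` and the layer sizes follow the code in the question; `F` is the usual alias for `torch.nn.functional`):

import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        # only the layers that hold parameters need to be modules
        self.fc1 = nn.Linear(784, 500)
        self.fc2 = nn.Linear(500, 300)

    def forward(self, x):
        # F.relu is a plain function; no ReLU modules are stored on the model
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return x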

1 Answer


We can; both versions behave identically, since nn.ReLU has no parameters or internal state. The first variant is just for clarity: each activation appears as its own named submodule, for example when printing the model.
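As a quick check of this (a sketch, not part of the original answer), one can verify that `nn.ReLU` carries no parameters, so a shared instance and two separate instances produce identical outputs:

import torch
import torch.nn as nn

relu = nn.ReLU()
print(list(relu.parameters()))                # [] -- the module holds no learnable state

x = torch.randn(4, 500)
out_shared   = relu(relu(x))                  # one instance used twice
out_separate = nn.ReLU()(nn.ReLU()(x))        # two separate instances
print(torch.equal(out_shared, out_separate))  # True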

– roman