I am interested in defining L weights in a custom neural network in PyTorch.
If L is known in advance, it is no problem to define them one by one, but if L is not known I want to use a for loop to define them. My idea is to do something like this (which does not work):
import torch
import torch.nn as nn

class Network(nn.Module):
    def __init__(self, L):
        super(Network, self).__init__()
        self.nl = nn.ReLU()
        for i in range(L):
            namew = 'weight' + str(i)
            # Intended: create attributes named 'weight0', 'weight1', ...
            self.namew = torch.nn.Parameter(data=torch.Tensor(2, 2), requires_grad=True)
This is meant to produce the equivalent of the following (which works, but is limited to a fixed number of weights):
class Network(nn.Module):
    def __init__(self):
        super(Network, self).__init__()
        self.nl = nn.ReLU()
        self.weight1 = torch.nn.Parameter(data=torch.Tensor(2, 2), requires_grad=True)
        self.weight2 = torch.nn.Parameter(data=torch.Tensor(2, 2), requires_grad=True)
        self.weight3 = torch.nn.Parameter(data=torch.Tensor(2, 2), requires_grad=True)
The problem with my attempt is that instead of using the "dynamic" string stored in namew, Python assigns to the attribute literally named namew. The same attribute is therefore overwritten on every iteration, so instead of L weights, just 1 weight is defined.
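A minimal sketch of what I observe (module body trimmed to just the loop; L=5 is an arbitrary choice):

import torch
import torch.nn as nn

class Network(nn.Module):
    def __init__(self, L):
        super(Network, self).__init__()
        for i in range(L):
            namew = 'weight' + str(i)  # the name I want is never actually used
            self.namew = nn.Parameter(torch.Tensor(2, 2))  # always assigns to 'namew'

net = Network(L=5)
# Prints ['namew'] -- a single registered parameter, not weight0..weight4
print([name for name, _ in net.named_parameters()])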
Is there some way to solve this problem?
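For reference, the direction I was considering is Python's built-in setattr, but I am not sure whether this registers the parameters correctly with PyTorch, or whether a container such as nn.ParameterList would be more appropriate:

import torch
import torch.nn as nn

class Network(nn.Module):
    def __init__(self, L):
        super(Network, self).__init__()
        self.nl = nn.ReLU()
        for i in range(L):
            # setattr uses the *value* of the string as the attribute name,
            # so this should create weight0, weight1, ..., weight{L-1}
            setattr(self, 'weight' + str(i),
                    nn.Parameter(torch.Tensor(2, 2), requires_grad=True))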