When searching for ways to implement L1 regularization in PyTorch models, I came across this question, which is now two years old, so I was wondering whether there is anything new on this topic.
I also found this recent approach for dealing with the missing L1 function. However, I don't understand how to use it for a basic NN, as shown below.
    import torch.nn as nn
    import torch.nn.functional as F

    class FFNNModel(nn.Module):
        def __init__(self, input_dim, output_dim, hidden_dim, dropout_rate):
            super(FFNNModel, self).__init__()
            self.input_dim = input_dim
            self.output_dim = output_dim
            self.hidden_dim = hidden_dim
            self.dropout_rate = dropout_rate
            self.drop_layer = nn.Dropout(p=self.dropout_rate)

            # Build the fully connected stack from the list of hidden sizes.
            self.fully = nn.ModuleList()
            current_dim = input_dim
            for h_dim in hidden_dim:
                self.fully.append(nn.Linear(current_dim, h_dim))
                current_dim = h_dim
            self.fully.append(nn.Linear(current_dim, output_dim))

        def forward(self, x):
            # ReLU + dropout on every layer except the output layer.
            for layer in self.fully[:-1]:
                x = self.drop_layer(F.relu(layer(x)))
            # Softmax over the class dimension (dim=1 for batched input, not dim=0).
            x = F.softmax(self.fully[-1](x), dim=1)
            return x
I was hoping simply putting this before training would work:
    model = FFNNModel(30, 5, [100, 200, 300, 100], 0.2)
    regularizer = _Regularizer(model)
    regularizer = L1Regularizer(regularizer, lambda_reg=0.1)
followed by this in the training loop:
    out = model(inputs)
    loss = criterion(out, target) + regularizer.__add_l1()
Does anyone understand how to apply these 'ready to use' classes?
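For reference, the only approach I currently know is the fully manual one: compute the L1 norm of the parameters myself and add it to the data loss. A minimal sketch of that workaround (the `l1_lambda` value and the choice to skip bias terms are my own assumptions), which is the boilerplate I was hoping those classes would replace:

    l1_lambda = 0.001  # hypothetical regularization strength

    out = model(inputs)
    loss = criterion(out, target)

    # L1 penalty: sum of absolute values of all weight matrices (biases skipped).
    l1_penalty = sum(p.abs().sum() for p in model.parameters() if p.dim() > 1)
    loss = loss + l1_lambda * l1_penalty

    loss.backward()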