Following a previous question, I want to plot weights, biases, activations and gradients to achieve a similar result to this.
Using
for name, param in model.named_parameters():
    summary_writer.add_histogram(f'{name}.grad', param.grad, step_index)
as was suggested in the previous question gives sub-optimal results: layer names come out similar to '_decoder._decoder.4.weight', which is hard to follow, especially since the architecture keeps changing as the research evolves. The 4 in this run won't correspond to the 4 in the next run, and is really meaningless.
Thus, I wanted to give my own string names to each layer.
I found this PyTorch forum discussion, but no single best practice was agreed upon.
What is the recommended way to assign names to PyTorch layers?
Namely, layers defined in various ways:
- Sequential:
self._seq = nn.Sequential(nn.Linear(1, 2), nn.Linear(3, 4),)
- Dynamic:
self._dynamic = nn.ModuleList()
for _ in range(self._n_features):
    self._dynamic.append(nn.Conv1d(in_channels=5, out_channels=6, kernel_size=3, stride=1, padding=1))
- Direct:
self._direct = nn.Linear(7, 8)
- Other ways I didn't think about
I would like to be able to give a string name to each layer, regardless of which of the above ways it is defined in.
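For reference, a minimal sketch of the naming mechanisms I'm aware of (the class name, the names 'encoder_fc', 'feature_conv_0' and 'output_head', and the layer sizes are placeholders, not my real model); I don't know which of these, if any, is considered best practice:
from collections import OrderedDict
from torch import nn

class MyModel(nn.Module):
    def __init__(self, n_features=3):
        super().__init__()
        # Sequential: pass an OrderedDict so children get names instead of indices.
        self._seq = nn.Sequential(OrderedDict([
            ('encoder_fc', nn.Linear(1, 2)),
            ('decoder_fc', nn.Linear(3, 4)),
        ]))
        # Dynamic: ModuleDict registers each module under an explicit string key.
        self._dynamic = nn.ModuleDict()
        for i in range(n_features):
            self._dynamic[f'feature_conv_{i}'] = nn.Conv1d(
                in_channels=5, out_channels=6, kernel_size=3, stride=1, padding=1)
        # Direct: add_module registers a submodule under a chosen name.
        self.add_module('output_head', nn.Linear(7, 8))
With any of these, named_parameters() reports names like '_seq.encoder_fc.weight' or '_dynamic.feature_conv_0.weight' rather than bare indices, but I'm unsure whether this is the intended approach.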