The "problem" here isn't related to int
vs tuple
. In fact, if you print m
and m2
you'll see
>>> m
Conv2d(3, 3, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
>>> m2
Conv2d(3, 3, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
that the integer got expanded as the documentation promises.
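You can check the stored hyperparameters directly. Here is a minimal sketch, assuming m was created with plain ints and m2 with explicit tuples (matching the printed repr above):
import torch.nn as nn

m = nn.Conv2d(3, 3, kernel_size=3, padding=1)              # ints
m2 = nn.Conv2d(3, 3, kernel_size=(3, 3), padding=(1, 1))   # tuples

# Both layers store the expanded tuples, so the hyperparameters are identical.
print(m.kernel_size == m2.kernel_size)  # True
print(m.padding == m2.padding)          # True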
What actually differs is the initial weights, which are randomly initialized by default. You can view them via m.weight and m2.weight. These will differ every time you create a new Conv2d, even if you use the same arguments.
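You can confirm this by creating two layers with identical arguments and comparing their weights (a minimal sketch):
import torch
import torch.nn as nn

a = nn.Conv2d(3, 3, kernel_size=3, padding=1)
b = nn.Conv2d(3, 3, kernel_size=3, padding=1)

# Same arguments, but each layer draws its own random initial weights.
print(torch.equal(a.weight, b.weight))  # False (with overwhelming probability)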
If you want to play around with these objects in a predictable way, you can initialize the weights yourself; see How to initialize weights in PyTorch? For example:
m.weight.data.fill_(0.01)
m2.weight.data.fill_(0.01)
m.bias.data.fill_(0.1)
m2.bias.data.fill_(0.1)
and they should now be identical.
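As a sanity check (continuing with m and m2 from above, after filling the weights and biases), passing the same input through both layers should give the same output:
import torch

x = torch.randn(1, 3, 8, 8)          # arbitrary test input
print(torch.allclose(m(x), m2(x)))   # True: both layers now compute the same function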