import torch
import torch.nn as nn

data = torch.ones(3,3,6,6)
conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)

print(data[0].unsqueeze(0).shape)

for i in range(3):
    print((conv(data)[i] == conv(data[i].unsqueeze(0))).all())


Results:

torch.Size([1, 3, 6, 6])
tensor(False)
tensor(False)
tensor(False)

I expected this to print True, but it prints False instead. Any idea why?

    There's plenty of resources out there on why comparing floating point numbers using `==` is a bad idea. For example [here's a popular one for Java](https://stackoverflow.com/questions/1088216/whats-wrong-with-using-to-compare-floats-in-java), but the same applies to pytorch tensors. Basically you should be using something like [`torch.isclose`](https://pytorch.org/docs/stable/generated/torch.isclose.html) to compare tensors. – jodag May 07 '22 at 20:52
  • Could you please explain what the goal of your experiment is? Because, the compared tensors `conv(data)[i]` and `conv(data[i].unsqueeze(0))` don't even have the same size and the input data to convolutional layer `data` and `data[i].unsqueeze(0)` are tensors with different size and values. So, it is reasonable to see that they are not equal. – Dilara Gokay May 08 '22 at 14:15
  • I don't really have a precise goal; I'm just curious. Although they don't have precisely the same size, you can replace the line with `print((conv(data)[i].unsqueeze(0) == conv(data[i].unsqueeze(0))).all())`, and the output is the same. I don't really know why they would have different values, though. – Saad Jlil May 08 '22 at 16:55
  • @jodag Thanks, replacing the line with `print(torch.isclose(conv(data)[i].unsqueeze(0), conv(data[i].unsqueeze(0))))` prints all-`True` tensors. – Saad Jlil May 08 '22 at 16:58
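
The effect jodag describes can be reproduced without PyTorch at all. A likely explanation for the mismatch is that the convolution's multiply-accumulate operations run in a different order for a full batch than for a single sample, and floating-point addition is not associative, so the results can differ in the last bits. A minimal pure-Python sketch of the same phenomenon (using `math.isclose` as the stand-in for `torch.isclose`):

```python
import math

# Floating-point addition is not associative: summing the same
# numbers in a different order can produce different bit patterns.
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c   # 0.6000000000000001
right = a + (b + c)  # 0.6

print(left == right)             # False: exact comparison fails
print(abs(left - right))         # tiny difference, ~1e-16
print(math.isclose(left, right)) # True: equal within tolerance
```

This is why comparing tensors with `==` after two numerically equivalent but differently ordered computations can return `False`, while `torch.isclose` (or `torch.allclose`) reports them as equal.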
