
PyTorch is messing with my floats.

import torch
t = torch.FloatTensor([[1.3]])

print(t)
print(t.item())

prints

tensor([[1.3000]])
1.2999999523162842

I tried torch.Tensor([[1.3]]) as well and had the same issue. I don't get why 1.3 is impossible to represent. Also, torch.Tensor([[1.5]]) works just fine, which is weird.

  • 3
    That's just the nature of floating point numbers and it's not unique to pytorch. See this: https://stackoverflow.com/a/72729449/2542516 – Priyatham Nov 17 '22 at 01:54
  • try `float(numpy.float32("1.3"))` and you'll get the same number back. i.e. it's just using a single precision floating point number internally. double precision would be 1.3000000000000000444089209850062616169452667236328125, so still out a bit, even if it prints as 1.3 sometimes. Eric's answers on this site are normally very good, I'd recommend spending some time understanding the one you were pointed to above – Sam Mason Nov 17 '22 at 14:22
  • Okay that makes sense, thank you Priyatham and Sam! – username_entity Nov 17 '22 at 18:08
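As the comments explain, this is single-precision rounding, not PyTorch. A quick way to reproduce it without PyTorch or NumPy is a sketch using Python's standard struct module to round-trip a value through a 32-bit float:

```python
import struct

def float32_roundtrip(x):
    # Pack the value as a 32-bit IEEE 754 float (rounding it to the
    # nearest representable float32), then unpack back to a Python double
    return struct.unpack("f", struct.pack("f", x))[0]

# 1.3 has no finite binary expansion, so float32 stores the nearest
# representable value instead:
print(float32_roundtrip(1.3))  # 1.2999999523162842

# 1.5 = 1 + 1/2 is a sum of powers of two, so it is stored exactly:
print(float32_roundtrip(1.5))  # 1.5
```

This prints the same 1.2999999523162842 that .item() returns, which shows the value was already rounded when the tensor was created; tensor printing just displays it rounded to four decimal places.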
