import torch
torch.set_printoptions(precision=1, sci_mode=False)

numeric_seq_id = 2021080918959999952

t = torch.tensor(numeric_seq_id)
tt = torch.tensor(numeric_seq_id).float() # !!!

print(t, tt)

The output is:

tensor(2021080918959999952) tensor(2021080905052848128.)

We can see that tt's value changed after the .float() transform.

Why is there such a difference in the values?


PS: PyTorch version is 1.10.1, Python version is 3.8.
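For context, the dtypes involved can be inspected directly (a quick check added here for illustration, assuming the same setup):

import torch

x = 2021080918959999952

t = torch.tensor(x)   # Python int becomes torch.int64, stored exactly
tt = t.float()        # cast to torch.float32 (about 7 significant decimal digits)

print(t.dtype, tt.dtype)    # torch.int64 torch.float32
print(t.item() == x)        # True  -- int64 round-trips exactly
print(int(tt.item()) == x)  # False -- float32 cannot hold this integer exactly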

zheyuanWang

1 Answer


This is not PyTorch specific, but an artifact of how floats (and doubles) are represented in memory (see this question for more details). We can see the same behaviour in NumPy:

import numpy as np

np_int = np.int64(2021080918959999952)       # 64-bit integer: stored exactly
np_float = np.float32(2021080918959999952)   # 32-bit float: ~7 significant decimal digits
np_double = np.float64(2021080918959999952)  # 64-bit float: ~16 significant decimal digits

print(np_int, int(np_float), int(np_double))

Output:

2021080918959999952 2021080905052848128 2021080918960000000
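The size of that rounding step can be checked with numpy.spacing, which returns the gap between a value and the next representable one at that magnitude (a small illustrative sketch, not part of the original answer):

import numpy as np

x = 2021080918959999952  # a 19-digit integer, roughly 2**60.8

# Gap between adjacent representable values at this magnitude:
print(np.spacing(np.float32(x)))  # ~1.37e+11 (float32: 24-bit significand)
print(np.spacing(np.float64(x)))  # 256.0     (float64: 53-bit significand)

# Each conversion rounds to the nearest representable value, so the error
# is at most half of that gap:
print(abs(int(np.float32(x)) - x))  # 13907151824 < 1.37e+11 / 2
print(abs(int(np.float64(x)) - x))  # 48          < 256 / 2

The same applies on the PyTorch side: casting with .double() (float64) keeps the value within 256 of the original, while leaving the tensor as the default torch.int64 preserves it exactly.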
FlyingTeller