
I encountered the following strange issue and have no idea why it happens. I believe it is related to how PyTorch rounds large values near a power of two. Can anyone give me a hint? Thanks for the help!

large_power_of_2 = 2 ** 31

[screenshot of the unexpected comparison result]


1 Answer


It turns out that for large values, PyTorch does not store the exact value but the nearest value its default 32-bit float dtype can represent, and that rounding error is larger than you might expect:

import torch
torch.set_printoptions(precision=10, sci_mode=False)

large_power_of_2 = 2 ** 30
a = torch.zeros(3, 5)  # default dtype is torch.float32

for i in range(-10, 10):
    a[0][0] = large_power_of_2 + i  # assignment rounds to the nearest float32
    print('try to print a[0]. Expected: {}, but real is {}'.format(large_power_of_2 + i, a[0][0]))

output:

try to print a[0]. Expected: 1073741814, but real is 1073741824.0
try to print a[0]. Expected: 1073741815, but real is 1073741824.0
try to print a[0]. Expected: 1073741816, but real is 1073741824.0
try to print a[0]. Expected: 1073741817, but real is 1073741824.0
try to print a[0]. Expected: 1073741818, but real is 1073741824.0
try to print a[0]. Expected: 1073741819, but real is 1073741824.0
try to print a[0]. Expected: 1073741820, but real is 1073741824.0
try to print a[0]. Expected: 1073741821, but real is 1073741824.0
try to print a[0]. Expected: 1073741822, but real is 1073741824.0
try to print a[0]. Expected: 1073741823, but real is 1073741824.0
try to print a[0]. Expected: 1073741824, but real is 1073741824.0
try to print a[0]. Expected: 1073741825, but real is 1073741824.0
try to print a[0]. Expected: 1073741826, but real is 1073741824.0
try to print a[0]. Expected: 1073741827, but real is 1073741824.0
try to print a[0]. Expected: 1073741828, but real is 1073741824.0
try to print a[0]. Expected: 1073741829, but real is 1073741824.0
try to print a[0]. Expected: 1073741830, but real is 1073741824.0
try to print a[0]. Expected: 1073741831, but real is 1073741824.0
try to print a[0]. Expected: 1073741832, but real is 1073741824.0
try to print a[0]. Expected: 1073741833, but real is 1073741824.0

Since the stored float32 values are identical, the comparison returns True.
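If you need exact integers at this magnitude, one workaround (a sketch, assuming a recent PyTorch) is to allocate the tensor with a wider dtype such as `torch.float64` or `torch.int64`, both of which represent every integer up to at least 2**53 exactly:

```python
import torch

large_power_of_2 = 2 ** 30

# float64 has a 52-bit mantissa, so integers up to 2**53 are stored exactly
a = torch.zeros(3, 5, dtype=torch.float64)
a[0][0] = large_power_of_2 + 5
print(int(a[0][0]))  # 1073741829 — no rounding

# int64 stores the integer exactly as well
b = torch.zeros(3, 5, dtype=torch.int64)
b[0][0] = large_power_of_2 + 5
print(int(b[0][0]))  # 1073741829
```

Note that float64 tensors use twice the memory and are much slower on most GPUs, so this is only worth doing where the extra integer range actually matters.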

  • This happens because pytorch uses 32-bit floats by default instead of 64-bit like numpy. You start losing integer precision for float32 around `2**24`. See https://stackoverflow.com/questions/3793838/which-is-the-first-integer-that-an-ieee-754-float-is-incapable-of-representing-e – jodag Sep 29 '22 at 14:22
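The 2**24 boundary mentioned in the comment can be checked without PyTorch at all, using only Python's standard library: round-tripping an integer through an IEEE-754 binary32 value with `struct` shows exactly what a float32 tensor would store (a sketch; `to_float32` is a helper defined here, not a library function):

```python
import struct

def to_float32(x):
    # Pack as an IEEE-754 binary32 and unpack again to see what float32 stores
    return struct.unpack('f', struct.pack('f', x))[0]

print(to_float32(2 ** 24))      # 16777216.0 — still exactly representable
print(to_float32(2 ** 24 + 1))  # 16777216.0 — gap between floats is now 2
print(to_float32(2 ** 30 + 5))  # 1073741824.0 — gap at this magnitude is 128
```

Since float32 has a 23-bit mantissa, the spacing between consecutive representable values doubles at every power of two above 2**24, which is why every value from 2**30 - 10 to 2**30 + 9 collapses onto 2**30 in the output above.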