
I know floating-point multiplication has limited accuracy, but the gap here is slightly larger than I expected, and it also depends on the roll step.

import torch

x = torch.rand((1, 5))
y = torch.rand((5, 1))
print("%.10f" % torch.matmul(x, y))
>>> 1.2710412741
print("%.10f" % torch.matmul(torch.roll(x, 1, 1), torch.roll(y, 1, 0)))
>>> 1.2710412741
print("%.10f" % torch.matmul(torch.roll(x, 2, 1), torch.roll(y, 2, 0)))
>>> 1.2710413933

What causes the problem above? How can I get a more consistent result?

opflow

2 Answers


Floating-point addition is not associative, hence you're not guaranteed to get the same result when the summands are added in a different order. torch.roll changes the order in which matmul accumulates the products, which is why your result depends on the roll step.
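
For example, a minimal demonstration in plain Python (IEEE 754 double precision): regrouping the same three values changes the result.

a, b, c = 1e16, -1e16, 1.0
print((a + b) + c)  # 1.0
print(a + (b + c))  # 0.0, because b + c rounds back to -1e16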

If you want to mitigate this, you can use a compensated summation scheme such as the Kahan summation algorithm.
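
A minimal sketch of Kahan (compensated) summation in plain Python; the name kahan_sum is just illustrative, not a library API (the standard library also offers math.fsum for accurate summation):

def kahan_sum(values):
    # Keep a running compensation term that captures the
    # low-order bits lost when each value is added to the total.
    total = 0.0
    c = 0.0
    for v in values:
        y = v - c            # apply the compensation from the last step
        t = total + y        # low-order bits of y may be lost here
        c = (t - total) - y  # recover what was just lost
        total = t
    return total

print(sum([1e16, 1.0, 1.0, -1e16]))        # 0.0, the two 1.0s are lost
print(kahan_sum([1e16, 1.0, 1.0, -1e16]))  # 2.0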

But this all comes with a big caveat: if you really have to rely on bit-for-bit reproducible results, you should think about using a different representation for your numbers. Floating-point numbers are convenient for numerical computations, but if you use them, you have to deal with all kinds of different sources of error. I recommend familiarizing yourself with the inner workings of floating-point numbers, e.g. https://en.wikipedia.org/wiki/Floating-point_arithmetic
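
For instance, if exactness matters more than speed, one such alternative representation is Python's built-in fractions module, which stores values as exact rationals, so the result is identical in any order:

from fractions import Fraction

xs = [Fraction(1, 10), Fraction(2, 10), Fraction(3, 10)]
print(sum(xs))            # 3/5, exact
print(sum(reversed(xs)))  # 3/5, the same in any order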

flawr

Floats only carry a limited number of significant decimal digits: about 15-16 for a 64-bit double, and only about 7 for a 32-bit float (PyTorch's default dtype).

So mathematically equivalent sums such as 0.1 + 0.2 + 0.3 and 0.3 + 0.2 + 0.1 can show different results, because the intermediate sums are rounded differently.
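
For example, in plain Python:

print(0.1 + 0.2 + 0.3)  # 0.6000000000000001
print(0.3 + 0.2 + 0.1)  # 0.6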

Normally, the difference between 1.2710412741 and 1.2710413933 is really small (-1.192000000926896e-07) and doesn't cause an important problem. But if you want to double-check that your function is working correctly, you can test it with integers, where the arithmetic is exact:

x = torch.randint(1, 5, (1, 5))  # integer tensors: no rounding occurs
y = torch.randint(1, 5, (5, 1))
print(torch.matmul(x, y))
print(torch.matmul(torch.roll(x, 1, 1), torch.roll(y, 1, 0)))

Doing this, the output is always the same, because integer arithmetic is exact.
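
If your data has to stay floating point, a minimal sketch of the same check in double precision (torch.float64, about 15-16 significant digits), which shrinks the order sensitivity well below the 10 printed decimal places, though it does not eliminate it:

import torch

x = torch.rand((1, 5), dtype=torch.float64)
y = torch.rand((5, 1), dtype=torch.float64)
# The rolled version still accumulates the products in a different
# order, but the discrepancy is now far below the printed precision.
print("%.10f" % torch.matmul(x, y))
print("%.10f" % torch.matmul(torch.roll(x, 2, 1), torch.roll(y, 2, 0)))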