
I'm concerned about the fact that computing np.mean(np.sum(x, axis=1)) does not give the same result as np.sum(np.mean(x, axis=0)).

For instance:

import numpy as np

np.random.seed(123)
x = np.random.randn(1000, 100)

sm = np.sum(np.mean(x, axis=0))
ms = np.mean(np.sum(x, axis=1))

print('sm {}'.format(sm))
print('ms {}'.format(ms))
print('sm == ms {}'.format(sm == ms))

prints

sm 0.1314820175147663
ms 0.13148201751476632
sm == ms False

Is this a numpy issue or am I missing something?

(obviously all types are equal in this calculation, np.float64; casting x to np.float64 does not change anything)

Python 3.6.4, numpy 1.14.5

ted
    It can happen. Remember 0.1 + 0.2 != 0.3. Possible duplicate: [Is floating point math broken?](https://stackoverflow.com/questions/588004/is-floating-point-math-broken) – jpp Jun 15 '18 at 14:37
    Probably it's just the residual error – leoschet Jun 15 '18 at 14:38
  • I understand that, but why does the order of operations matter? (Edited to use `randn`, not `randint`.) I know 0.1 + 0.2 != 0.3, but I expected 0.1 + 0.2 == 0.2 + 0.1, to follow the comparison – ted Jun 15 '18 at 14:56
  • @ted, You will get floating point vs decimal errors along the way when you combine multiple operations. There's nothing to say your floating point error will be identical whichever path you take. This is true for `float` generally, nothing Python / NumPy related. – jpp Jun 15 '18 at 15:05
  • Ok. So it's expected. Cheers! – ted Jun 15 '18 at 15:07
  • if it is a problem you have `np.allclose` and `np.isclose` to test near-equality – bobrobbob Jun 15 '18 at 15:32
  • Yeah I had that but thanks :) – ted Jun 15 '18 at 15:38
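
The point made in the comments can be illustrated with a minimal sketch (not tied to the question's data): floating-point addition is not associative, so the same numbers summed in a different order can give slightly different results, and `np.isclose` is the appropriate comparison.

import numpy as np

# Float addition is not associative: grouping changes the result.
a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)

print(a == b)            # False: the two groupings differ in the last bits
print(a - b)             # a tiny residual on the order of 1e-16
print(np.isclose(a, b))  # True: equal up to floating-point tolerance

Since `np.mean(np.sum(x, axis=1))` and `np.sum(np.mean(x, axis=0))` accumulate the same values in different orders, the same effect applies, which is why `sm == ms` can be `False` while `np.isclose(sm, ms)` is `True`.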

0 Answers