I'm concerned that computing np.mean(np.sum(x, axis=1)) does not give the same result as np.sum(np.mean(x, axis=0)).
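To spell out why I expect them to match: in exact arithmetic both expressions reduce to the grand sum of x divided by the number of rows N, since

\[
\operatorname{mean}_i\Bigl(\sum_j x_{ij}\Bigr)
= \frac{1}{N}\sum_i\sum_j x_{ij}
= \sum_j\Bigl(\frac{1}{N}\sum_i x_{ij}\Bigr)
= \sum_j \operatorname{mean}_i\bigl(x_{ij}\bigr).
\]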
For instance:
import numpy as np
np.random.seed(123)
x = np.random.randn(1000, 100)   # 1000 x 100 array of float64
sm = np.sum(np.mean(x, axis=0))  # sum of the 100 column means
ms = np.mean(np.sum(x, axis=1))  # mean of the 1000 row sums
print('sm {}'.format(sm))
print('ms {}'.format(ms))
print('sm == ms {}'.format(sm == ms))
prints
sm 0.1314820175147663
ms 0.13148201751476632
sm == ms False
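The difference itself appears to be a single ULP. Continuing from the code above (np.spacing gives the gap from a float to the next representable one):

print('diff       {}'.format(abs(sm - ms)))        # ~2.8e-17
print('ULP at sm  {}'.format(np.spacing(sm)))      # same order of magnitude
print('isclose    {}'.format(np.isclose(sm, ms)))  # True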
Is this a numpy issue, or am I missing something?
(Obviously all dtypes in this calculation are np.float64; explicitly casting x to np.float64 does not change anything.)
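To verify the dtype claim, continuing from the code above:

print(x.dtype)                   # float64
print(np.sum(x, axis=1).dtype)   # float64
print(np.mean(x, axis=0).dtype)  # float64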
Python 3.6.4, numpy 1.14.5