When using Numba's @jit with NumPy's float32 data type, I'm getting what look like truncation issues. It's largely noise, since it's well past the decimal places I care about (around the 7th or 8th place), but I'd still like to know what's going on and whether I can fix it. As an aside: I have to use float32 to conserve memory.
Here's the code I'm using as a test:
import numpy as np
from test_numba import test_numba

np.random.seed(seed=1774)
number = 150
inArray = np.round(np.float32((np.random.rand(number) - .5) * 2), 4)  # set up a float32 array with 4 decimal places
numbaGet = test_numba(inArray)  # run it through the jitted function
print("Get:\t" + str(numbaGet) + " Type: " + str(type(numbaGet)))
print("Want:\t" + str(np.mean(inArray)) + " Type: " + str(type(np.mean(inArray))))  # compare to plain NumPy
This is combined with the following function (in test_numba.py):
import numpy as np
from numba import jit

@jit(nopython=True)  # also tried nogil=True, parallel=True, cache=True, and an explicit float32(float32) signature
def test_numba(inArray):
    # outArray = np.float32(np.mean(inArray))  # forcing float32 here did not change the result
    outArray = np.mean(inArray)
    return outArray
The output from this is:
Get: 0.0982406809926033 Type: <class 'float'>
Want: 0.09824067 Type: <class 'numpy.float32'>
That seems to indicate that Numba is making the result a Python float (float64, as far as I understand it), doing the math in that type, and somehow ending up with different trailing digits than NumPy's float32 mean.
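For what it's worth, the same kind of last-digit disagreement shows up in plain NumPy just by changing the accumulator precision, so I suspect it comes down to what type the summation happens in (this is my own sanity check, not something from Numba's docs):

```python
import numpy as np

np.random.seed(seed=1774)
inArray = np.round(np.float32((np.random.rand(150) - .5) * 2), 4)

# NumPy default: the mean of a float32 array is accumulated and returned as float32
mean32 = np.mean(inArray)
# Force a float64 accumulator, closer to what a Python float carries
mean64 = np.mean(inArray, dtype=np.float64)

print(mean32, type(mean32))  # float32 result
print(mean64, type(mean64))  # float64 result, with extra trailing digits
```

The two results agree to roughly float32 precision and then diverge, which looks a lot like my Get/Want pair.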
If I switch to float64, the difference is greatly reduced:
Get: 0.09824066666666667 Type: <class 'float'>
Want: 0.09824066666666668 Type: <class 'numpy.float64'>
I'm not sure what I'm doing wrong here. Again, in my case it's an ignorable issue (the inputs only have 4 decimal places to begin with), but I'd still like to know why!
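In case it helps, one workaround sketch I can think of (my own guess; I haven't verified that an explicit float32 accumulator survives @jit compilation unchanged) would be to compute the mean manually instead of calling np.mean:

```python
import numpy as np
# from numba import jit  # would decorate mean32 with @jit(nopython=True)

def mean32(arr):
    # Keep the running sum in float32 so the arithmetic is never promoted
    # to float64. Note NumPy's own np.mean uses pairwise summation, so the
    # last bit may still differ from np.mean(arr).
    acc = np.float32(0.0)
    for x in arr:
        acc += x
    return acc / np.float32(arr.size)

a = np.round(np.float32((np.random.rand(150) - .5) * 2), 4)
print(mean32(a), type(mean32(a)))  # stays a float32
```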