After getting good first results with pythran, I tried transonic to benefit from the JIT and the class support. Unfortunately, it does not run as expected.
If I use the @jit decorator, the decorated functions are compiled and cached, but during the first run of the code the compiled version is not used; instead, the function is processed by plain Python. After the first run, the cached version is used.
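For reference, this is roughly what the @jit variant of the same function looks like (a sketch: only the decorator differs from the MWE at the bottom, and the name looping_jit is mine for illustration):

import numpy as np
from transonic import jit

@jit
def looping_jit(np_array):
    # Same thresholding loop as in the MWE below.
    shape_x = np_array.shape[0]
    for x in range(shape_x):
        if np_array[x] < 0.5:
            np_array[x] = 0
        else:
            np_array[x] = 1
    return np_array

in_arr = np.random.rand(10**7)
looping_jit(in_arr)  # first run: plain Python; later runs: cached extension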
If I use the @boost decorator and run transonic runmwe.py, a compiled version is created in the __pythran__ folder, but when running the script with python runmwe.py I receive the following warning and the code is processed by plain Python:
WARNING: Pythran file does not seem to be up-to-date:
<module '__pythran__.runmwe_920d6d0a5cd396436d463468328e997b' from '__pythran__/runmwe_920d6d0a5cd396436d463468328e997b.cpython-38-x86_64-linux-gnu.so'>
Rerunning transonic runmwe.py just produces a warning that the code is already up-to-date.
Am I missing some configuration to use @jit and @boost properly, or is this the expected behavior and I am using transonic the wrong way?
Software used (from conda-forge):
transonic 0.4.5
pythran 0.9.7
python 3.8.6
MWE:
import numpy as np
from transonic import jit, boost

#transonic def looping(float[])
@boost
def looping(np_array):
    shape_x = np_array.shape[0]
    for x in range(shape_x):
        if np_array[x] < 0.5:
            np_array[x] = 0
        else:
            np_array[x] = 1
    return np_array

in_arr = np.random.rand(10**7)
looping(in_arr)
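To see whether the compiled extension is actually picked up, I compare the runtime of the loop with a plain NumPy equivalent. This timing check is not part of the MWE, just an illustrative sketch; it assumes the MWE above is saved as runmwe.py:

import time

import numpy as np

from runmwe import looping  # the MWE above, saved as runmwe.py

in_arr = np.random.rand(10**7)

t0 = time.perf_counter()
looping(in_arr.copy())
print(f"looping via transonic: {time.perf_counter() - t0:.3f} s")

# Pure-NumPy reference: if the pythranized version were used, the loop above
# should run in roughly this time range rather than taking several seconds.
t0 = time.perf_counter()
np.where(in_arr < 0.5, 0.0, 1.0)
print(f"np.where reference:    {time.perf_counter() - t0:.3f} s")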