
I am using Numba to make my code faster, but every time I run it I get a different result. I have checked, and when not using `@jit` my results are reproducible.

Does anyone know why?

```python
import numpy as np
from numba import jit

@jit(nopython=True)
def weight_matrix(n1, n2, user, u, d, n):
    b3 = np.zeros((d, n2 - n1))
    for col in range(n1, n2):
        v = user[:, col]
        vOmega = v[v != 0]        # observed (non-zero) entries of this column
        uOmega = u[v != 0, :]     # matching rows of u
        size1 = vOmega.size
        uOmega = np.reshape(uOmega, (size1, d))
        vOmega = np.reshape(vOmega, (size1, 1))
        # least-squares weights via the normal equations
        w = np.linalg.inv(uOmega.T @ uOmega) @ (uOmega.T) @ vOmega
        b3[:, col - n1] = w[:, 0]

    return b3
```

`user` is a fixed NumPy array, `u` is a random matrix (the seed is fixed), and `d`, `n1`, `n2`, `n` are fixed numbers.

Numba version: 0.55.1, NumPy version: 1.21.6.

I tried both Spyder and Jupyter to make sure the problem is not specific to my environment. Without `@jit` the code runs with no issue.
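
For reference, here is a minimal sketch of how I call the function. The sizes and random data below are placeholders for illustration, not my actual inputs:

```python
# Hypothetical driver with placeholder sizes (the real inputs are larger).
d, n = 5, 40                              # placeholder dimensions
n1, n2 = 0, 10                            # placeholder column range
rng = np.random.default_rng(0)            # fixed seed, as in my actual setup
user = rng.random((n, n2))
user[rng.random((n, n2)) < 0.3] = 0.0     # introduce zeros so the v != 0 masking matters
u = rng.random((n, d))

b3 = weight_matrix(n1, n2, user, u, d, n)
print(b3.sum())   # with @jit, this value changes between separate runs of the script
```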

Madi
    Create an [MRE](https://stackoverflow.com/help/minimal-reproducible-example) by adding a function call with specific arguments (and maybe a print of the results) and show it as properly formatted text in the question. – Michael Butscher Sep 02 '23 at 15:27
  • Unless your input is very small, I do not expect `np.linalg.inv((uOmega.T)@uOmega)@(uOmega.T)@vOmega` to be faster with Numba. Numba will just call BLAS like Numpy does and BLAS libraries are already well optimized. Can't this line be mathematically simplified? – Jérôme Richard Sep 02 '23 at 21:14
  • If `v` is a floating-point array, then your issue is very likely explained in: [Is floating point math broken?](https://stackoverflow.com/questions/588004/is-floating-point-math-broken) – Jérôme Richard Sep 02 '23 at 21:16
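
Following up on Jérôme Richard's comment above about simplifying that line mathematically: a minimal sketch of equivalent alternatives (the shapes below are placeholders standing in for `(size1, d)`; this is an illustration, not the original code):

```python
import numpy as np

rng = np.random.default_rng(0)
uOmega = rng.random((20, 5))    # placeholder shapes
vOmega = rng.random((20, 1))

# Original expression: normal equations with an explicit inverse
w_inv = np.linalg.inv(uOmega.T @ uOmega) @ (uOmega.T) @ vOmega

# Equivalent alternatives that avoid forming the inverse explicitly
w_solve = np.linalg.solve(uOmega.T @ uOmega, uOmega.T @ vOmega)
w_lstsq, *_ = np.linalg.lstsq(uOmega, vOmega, rcond=None)

print(np.allclose(w_inv, w_solve), np.allclose(w_inv, w_lstsq))
```

Solving the system directly is generally preferred over computing an explicit inverse, for both numerical stability and speed.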

0 Answers