
I am trying to use numpy.linalg.inv in some code which is then jitted with @numba.njit. However, I notice that the code does not get significantly faster. Moreover, when I run the same code several times, the timing varies drastically (by a factor of about 4). I wrote some toy code to check whether numpy.linalg.inv gets faster at all with numba:

    import numpy as np
    from numba import njit

    def matrinv(M):
        res = np.linalg.inv(M)
        return res

    @njit
    def matrinv_fast(M):
        res = np.linalg.inv(M)
        return res

The running times are almost the same (roughly 40 ms vs 35 ms). Is it because the existing NumPy functions are already precompiled, so numba cannot make them any faster? Or am I doing something wrong?

    NumPy functions already drop down to optimized C code. Numba can't make significant improvements on that. Numba is very useful for optimizing a subset of pure Python, especially loops, close to the performance of optimized C code. – BatWannaBe Nov 03 '21 at 08:37
    Since you are inverting a matrix, here is the obligatory reminder that you probably shouldn't. http://gregorygundersen.com/blog/2020/12/09/matrix-inversion/ TLDR: There are almost always better approaches than inverting a matrix, e.g. np.linalg.solve – Homer512 Nov 03 '21 at 12:03
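To illustrate the second comment's point, a short sketch comparing the two approaches (the matrix `A` and right-hand side `b` here are made up for the example). np.linalg.solve factorizes `A` once instead of forming the full inverse, which is generally both faster and numerically more accurate:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 500))
b = rng.standard_normal(500)

# Forming the inverse explicitly, then multiplying: more work, more rounding error.
x_inv = np.linalg.inv(A) @ b

# Solving the linear system directly (an LU factorization under the hood): preferred.
x_solve = np.linalg.solve(A, b)

# Both give the same solution up to floating-point error.
print(np.allclose(x_inv, x_solve, atol=1e-6))
```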

0 Answers