
I use the root function from scipy.optimize with the method "excitingmixing" in my code because other methods, like standard Newton, don't converge to the roots I am looking for.

However, I would like to speed up my code using numba, which doesn't support the scipy package. I tried to look up the "exciting mixing" algorithm in the documentation so that I could implement it myself:

https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.root.html

All I found there was the not very helpful statement that the method "uses a tuned diagonal Jacobian approximation".

I would be glad if someone could tell me more about the algorithm, or has an idea for speeding up the scipy call in another way.

As requested here is a minimal code example:

import numpy as np
from scipy import optimize
from numba import jit

@jit(nopython=True)
def func(x):
    a, b, c, d = x[0], x[1], x[2], x[3]

    da = a * (1 - b)
    db = b * (1 - c)
    dc = c
    dd = 1.0

    return np.array([da, db, dc, dd])

@jit(nopython=True)
def getRoot(x0):
    # this is the call numba cannot compile: scipy is not supported in nopython mode
    solution = optimize.root(func, x0, method="excitingmixing")
    return solution.x

root = getRoot([0.1, 0.1, 0.2, 0.4])
print(root)
Holgerillo
  • Consider running a profiler to establish whether it is the overhead of the `ExcitingMixing` optimiser that consumes the majority of the compute time or whether it is the evaluation of your objective function. If it is the latter, you could port your objective function to `numba` and use the standard algorithm provided by `scipy`. – Till Hoffmann May 22 '18 at 16:38
  • I am pretty sure that it is not my function. The function is already optimized, and most root-finding algorithms are much faster but don't converge to the roots I am looking for. – Holgerillo May 22 '18 at 16:55
  • Provide a full example. Many scipy functions can take a low-level callback function instead of a Python function. Here is an example for scipy.integrate.quad: https://stackoverflow.com/a/50097776/4045774 – max9111 May 22 '18 at 19:48

2 Answers


You can look in the source code of scipy to see the implementation of the excitingmixing option:

https://github.com/scipy/scipy/blob/c948e96ebb3454f6a82e9d14021cc601d7ce7a85/scipy/optimize/nonlin.py#L1272
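
From a reading of that code, the core update is small: the inverse Jacobian is approximated by a diagonal matrix diag(beta), so each step is roughly x <- x + beta * F(x), where beta_i is grown by alpha while the residual component F_i keeps its sign and reset to alpha when the sign flips (clipped at alphamax). If you do want to port it, a rough numba-compatible sketch of that idea might look like the following; it leaves out scipy's line search, tolerance bookkeeping, and automatic choice of alpha, so it is not a drop-in replacement:

import numpy as np
from numba import njit

@njit
def exciting_mixing(func, x0, alpha=0.5, alphamax=1.0, tol=1e-10, maxiter=1000):
    # Simplified sketch of the "excitingmixing" update: each component takes
    # the step beta[i] * F(x)[i]; beta[i] grows by alpha while F[i] keeps its
    # sign and is reset to alpha when the sign flips, clipped at alphamax.
    x = x0.copy()
    f = func(x)
    beta = np.full(x.shape[0], alpha)
    for _ in range(maxiter):
        x = x + beta * f
        f_new = func(x)
        for i in range(x.shape[0]):
            if f_new[i] * f[i] > 0.0:
                beta[i] = min(beta[i] + alpha, alphamax)
            else:
                beta[i] = alpha
        f = f_new
        if np.max(np.abs(f)) < tol:
            break
    return x

@njit
def demo(x):
    # toy system with a root at (1.0, 2.0), just to exercise the sketch
    return np.array([1.0 - x[0], 2.0 - x[1]])

print(exciting_mixing(demo, np.array([0.3, 0.4])))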

That said, you're likely not going to want to reimplement the entire root-finding algorithm in numba. A better strategy to test is to use numba to optimize the function that you pass to the scipy method. You will still pay some overhead for scipy calling a Python-level function, but you might see a performance increase if the bottleneck is evaluating the function and a numba-jitted version does that faster. I've found it best to just experiment with numba and measure with the timeit module.
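
For illustration, something along these lines (a toy system standing in for the real one; the optimize.root call stays in plain Python because numba cannot compile it):

import numpy as np
from scipy import optimize
from numba import njit

@njit
def func(x):
    # toy residuals with a root at (0.0, 2.0); substitute the real system here
    return np.array([1.0 - x[0] - 0.5 * x[1],
                     2.0 - x[1] + 0.1 * x[0]])

# only the callback is jitted; optimize.root itself runs as normal Python
solution = optimize.root(func, np.array([1.0, 1.0]), method="excitingmixing")
print(solution.x, solution.success)

Timing this with and without the @njit decorator on func will tell you whether the callback is actually where the time goes.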

JoshAdel
  • Thanks for the fast answer - I already optimized the function that is called from scipy, and the bottleneck really is the root-finding algorithm provided by scipy with the "excitingmixing" option. – Holgerillo May 22 '18 at 16:50

I wrote a little wrapper around Minpack, called NumbaMinpack, which can be called within numba-compiled functions: https://github.com/Nicholaswogan/NumbaMinpack.

You should try the lmdif method if Newton's method is failing you.

from NumbaMinpack import lmdif, hybrd, minpack_sig
from numba import njit, cfunc
import numpy as np

@cfunc(minpack_sig)
def myfunc(x, fvec, args):
    fvec[0] = x[0]**2 - args[0]
    fvec[1] = x[1]**2 - args[1]
    
funcptr = myfunc.address # pointer to myfunc

x_init = np.array([10.0,10.0]) # initial conditions
neqs = 2 # number of equations
args = np.array([30.0,8.0]) # data you want to pass to myfunc
@njit
def test():
    # solve with lmdif
    sol = lmdif(funcptr, x_init, neqs, args)
    # OR solve with hybrd
    sol = hybrd(funcptr, x_init, args) 
    return sol
test() # it works!
nicholaswogan