I asked the question Is fsolve good to any system of equations?, and got a satisfactory answer. The system I presented there,

x = A * exp(x + y)
y = 4 * exp(x + y)

is just a toy model similar to my real-case problem, and fsolve did the work with the code in the answer below:
from scipy.optimize import fsolve
import matplotlib.pyplot as plt
import numpy as np

def f(p, *args):
    # Residuals of the system x = A*exp(x+y), y = 4*exp(x+y)
    x, y = p
    A = args[0]
    return (x - A * np.exp(x + y), y - 4 * np.exp(x + y))

A = np.linspace(0, 4, 5)
X = []
Y = []
for a in A:
    # Solve the system for each value of the parameter A
    x, y = fsolve(f, (0.0, 0.0), args=(a,))
    X.append(x)
    Y.append(y)
    print(x, y)

plt.plot(A, X)
plt.plot(A, Y)
However, I read here stackoverflow.com/questions/6519380/… that brentq is much faster than fsolve. I then tried to use it, but I keep getting the error f(a) and f(b) must have different signs. I understand that brentq requires f to be continuous and f(a) and f(b) to have opposite signs, so I believe brentq is not a good choice for this system. Please correct me if I'm wrong here.
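To illustrate what I mean (this is only my understanding; the scalar function g below is a made-up example, not my system): brentq finds a root of a single scalar equation on a bracketing interval, and I don't see how to feed it the two coupled equations above.

from scipy.optimize import brentq
import numpy as np

# brentq finds a root of one scalar function g(x) on [a, b],
# and requires g(a) and g(b) to have opposite signs.
g = lambda x: x - 2 * np.exp(-x)   # made-up scalar example
r = brentq(g, 0.0, 2.0)            # g(0) < 0 and g(2) > 0, so this works
print(r)

# My system has two unknowns (x, y) and no obvious bracket [a, b]
# with a sign change, so brentq does not seem applicable here.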
In my real case I'm running into exactly what the answer to how to solve 3 nonlinear equations in python describes, i.e. "fsolve() is quite sensitive to initial conditions". I want to avoid having to "firstly minimize the sum-of-squares", as I have many more parameters than the OP of that question. How can I use optimize.root to produce a result similar to the one I got with fsolve in my original question?
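For concreteness, here is a minimal sketch of the kind of optimize.root call I have in mind for the toy system above (method='hybr' is just the default, the same MINPACK routine that fsolve uses; I'm not sure which method, if any, would be less sensitive to the initial guess):

from scipy.optimize import root
import matplotlib.pyplot as plt
import numpy as np

def f(p, A):
    # Same residuals as in the fsolve version above
    x, y = p
    return (x - A * np.exp(x + y), y - 4 * np.exp(x + y))

A = np.linspace(0, 4, 5)
X, Y = [], []
for a in A:
    sol = root(f, (0.0, 0.0), args=(a,), method='hybr')  # 'hybr' is the default
    x, y = sol.x
    X.append(x)
    Y.append(y)

plt.plot(A, X)
plt.plot(A, Y)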