SymPy + algebraic equation + floating point numbers => trouble. Floating point math does not work like normal math, and SymPy is designed for the latter. Small things like 16 (an integer) versus 16.0 (a float) make a lot of difference when solving equations with SymPy: ideally, you would have no floating point numbers there, creating exact rational numbers instead, like this:
from sympy import Rational
one_x = Rational('-0.08')  # the exact fraction -2/25, not a float
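To see why it matters, compare how solve treats exact versus floating point coefficients (a minimal toy example, not the equation from your question):

from sympy import Symbol, Rational, solve

x = Symbol('x')
solve(x**2 - Rational('0.5'), x)  # exact: [-sqrt(2)/2, sqrt(2)/2]
solve(x**2 - 0.5, x)              # floats: [-0.707106781186548, 0.707106781186548]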
However, you have floating point data and are looking for a floating point solution. This makes SymPy the wrong tool for the job. SymPy is for doing math with symbols, not for crunching floating point numbers. The right approach is to use an appropriate solver from SciPy, such as brentq. It takes a bracketing interval as input (the function must have opposite signs at the two ends). For example:
import numpy as np
from scipy.optimize import brentq

# slope, intercept, one_*, second_* are the floating point values from your code
eq = lambda x: np.sqrt((x - second_x)**2 + (slope*x + intercept - second_y)**2) + second_r - one_r - np.sqrt((x - one_x)**2 + (slope*x + intercept - one_y)**2)
brentq(eq, -10, 10)  # returns -0.049356742923277075
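Since brentq requires opposite signs at the two endpoints, a quick sanity check before calling it can save some head scratching (reusing the eq defined above):

a, b = -10, 10
assert eq(a) * eq(b) < 0, 'no sign change: widen or move the bracket'
root = brentq(eq, a, b)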
If you stick with SymPy, your equation gets outsourced to the mpmath library, which is much more limited in numerical root finding and optimization. To get a solution to converge with its methods, you'll need a really good starting point: apparently, one_x/2 is such a point.
from sympy import sqrt, Symbol, nsolve

x = Symbol('x')
# slope, intercept, one_*, second_* as in your code
nsolve(sqrt((x - second_x)**2 + (slope*x + intercept - second_y)**2) + second_r - one_r - sqrt((x - one_x)**2 + (slope*x + intercept - one_y)**2), one_x/2)
This returns -0.0493567429232771.
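If guessing a good starting point is hard, nsolve can instead be given a bracketing interval, in which case it uses mpmath's bisection-based solver: slower, but more robust. In the sketch below, expr is a placeholder for the same sqrt(...) expression passed to nsolve above:

nsolve(expr, (-10, 10), solver='bisect')  # expr: the expression from the call above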
By using sympy.solveset, which is intended for symbolic solving, you deprive yourself not only of SciPy's powerful numeric solvers, but also of the opportunity to set a good starting value for the numeric search, which sympy.nsolve provides. Hence the lack of convergence in this numerically tricky problem. By the way, this is what makes it numerically tricky: the function is nearly constant most of the time, with one rapid change.
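One way to see that flatness is to sample the function on a coarse grid and watch how little the values move away from the root (a quick diagnostic, reusing the SciPy-style eq from above):

import numpy as np
for xv in np.linspace(-10, 10, 11):
    print(f'{xv:6.1f}  {eq(xv)}')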
