89

What's the (best) way to solve a pair of non-linear equations using Python (NumPy, SciPy or SymPy)?

eg:

  • x + y^2 = 4
  • e^x + xy = 3

A code snippet which solves the above pair would be great.

Thanatos
AIB

9 Answers

94

For a numerical solution, you can use fsolve:

http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fsolve.html#scipy.optimize.fsolve

from scipy.optimize import fsolve
import math

def equations(p):
    x, y = p
    return (x + y**2 - 4, math.exp(x) + x*y - 3)

# (1, 1) is the initial guess handed to the solver
x, y = fsolve(equations, (1, 1))

# the residuals; both should be ~0 at the solution
print(equations((x, y)))
HYRY
  • I get (4.4508396968012676e-11, -1.0512035686360832e-11) as an answer, but this does not work: x+y^2 = 4 != 4.4508396968012676e-11+(-1.0512035686360832e-11)**2 = 4.4508396968123175e-11. equations() returns (0,0) according to what was entered and the original question, so apparently these two small #s are its attempt at that? Also what is the "1, 1" , and where does it come from. Just trying to understand... thanks. – Andrew Aug 08 '21 at 01:40
  • @Andrew, the output of `equations((x, y))` is the result of `x + y ** 2 - 4` and `math.exp(x) + x * y - 3`. This shows you that the 2 formulas that were set to 0 in the function `equations` are now 0 with the values found for `x` and `y`. If you `print((x, y))` you'll get the solutions you're looking for. – Jed Dec 01 '21 at 17:15
  • "for different sets of equations" - as required in open-topic msg - fsolve does not suits as it does suits for NON-polinomial equations! But the solution presented here is convinient for the presented example of system of equations. Thanks – JeeyCi Jul 20 '23 at 07:57
31

If you prefer SymPy, you can use nsolve.

>>> from sympy import symbols, nsolve, exp
>>> x, y = symbols('x y')
>>> nsolve([x + y**2 - 4, exp(x) + x*y - 3], [x, y], [1, 1])
[0.620344523485226]
[1.83838393066159]

The first argument is a list of equations, the second is a list of variables and the third is an initial guess.

Evgeni Sergeev
Krastanov
  • I get the error 'name y is not defined' with the code in this answer. – Sander Heinsalu Apr 09 '18 at 05:13
  • @SanderHeinsalu, just follow what the error message is saying. If "name y is not defined", define it (python can not magically know what you want undefined variables to be). For instance here you want y to be a symbol object you can use to build bigger symbolic objects: `y = Symbol('symbol_name_string')`. Probably you want to keep the same symbol name, so `y = Symbol('y')`. – Krastanov Apr 09 '18 at 16:57
30

Short answer: use fsolve

As mentioned in other answers, the simplest solution to the particular problem you have posed is to use something like fsolve:

from scipy.optimize import fsolve
from math import exp

def equations(vars):
    x, y = vars
    eq1 = x+y**2-4
    eq2 = exp(x) + x*y - 3
    return [eq1, eq2]

x, y =  fsolve(equations, (1, 1))

print(x, y)

Output:

0.6203445234801195 1.8383839306750887

Analytic solutions?

You ask how to "solve" the equations, but there are different kinds of solution. Since you mention SymPy, I should point out the biggest distinction in what this could mean: analytic versus numeric solutions. The particular example you have given does not have an (easy) analytic solution, but other systems of nonlinear equations do. When readily available analytic solutions exist, SymPy can often find them for you:

from sympy import *

x, y = symbols('x, y')
eq1 = Eq(x+y**2, 4)
eq2 = Eq(x**2 + y, 4)

sol = solve([eq1, eq2], [x, y])
pprint(sol)

Output:

⎡⎛ ⎛  5   √17⎞ ⎛3   √17⎞    √17   1⎞  ⎛ ⎛  5   √17⎞ ⎛3   √17⎞    1   √17⎞  ⎛ ⎛  3   √13⎞ ⎛√13   5⎞  1   √13⎞  ⎛ ⎛5   √13⎞ ⎛  √13   3⎞  1   √13⎞⎤
⎢⎜-⎜- ─ - ───⎟⋅⎜─ - ───⎟, - ─── - ─⎟, ⎜-⎜- ─ + ───⎟⋅⎜─ + ───⎟, - ─ + ───⎟, ⎜-⎜- ─ + ───⎟⋅⎜─── + ─⎟, ─ + ───⎟, ⎜-⎜─ - ───⎟⋅⎜- ─── - ─⎟, ─ - ───⎟⎥
⎣⎝ ⎝  2    2 ⎠ ⎝2    2 ⎠     2    2⎠  ⎝ ⎝  2    2 ⎠ ⎝2    2 ⎠    2    2 ⎠  ⎝ ⎝  2    2 ⎠ ⎝ 2    2⎠  2    2 ⎠  ⎝ ⎝2    2 ⎠ ⎝   2    2⎠  2    2 ⎠⎦

Note that in this example SymPy finds all solutions and does not need to be given an initial estimate.

You can evaluate these solutions numerically with evalf:

soln = [tuple(v.evalf() for v in s) for s in sol]

Output:

[(-2.56155281280883, -2.56155281280883), (1.56155281280883, 1.56155281280883), (-1.30277563773199, 2.30277563773199), (2.30277563773199, -1.30277563773199)]

Precision of numeric solutions

However most systems of nonlinear equations will not have a suitable analytic solution, so using SymPy as above is great when it works but not generally applicable. That is why we end up looking for numeric solutions, even though numeric solutions come with caveats: 1) we have no guarantee that we have found all solutions or the "right" solution when there are many; 2) we have to provide an initial guess, which isn't always easy.

Having accepted that we want numeric solutions, something like fsolve will normally do all you need. For this kind of problem SymPy will probably be much slower, but it can offer something else: finding the (numeric) solutions more precisely:

from sympy import *

x, y = symbols('x, y')
nsolve([Eq(x+y**2, 4), Eq(exp(x)+x*y, 3)], [x, y], [1, 1])
⎡0.620344523485226⎤
⎢                 ⎥
⎣1.83838393066159 ⎦

With greater precision:

nsolve([Eq(x+y**2, 4), Eq(exp(x)+x*y, 3)], [x, y], [1, 1], prec=50)
⎡0.62034452348522585617392716579154399314071550594401⎤
⎢                                                    ⎥
⎣ 1.838383930661594459049793153371142549403114879699 ⎦
Oscar Benjamin
  • Don't know why this isn't the most voted answer, however, is there a way to convert the analytical solutions given by SymPy to a list of approximate numerical values? As of my understanding, the only way to find all of the solutions is via the analytical method, but having those solutions converted could be very useful. – Edoardo Serra May 25 '20 at 02:38
  • You can numerically evaluate any sympy expression that does not have free symbols using `expr.evalf()`: https://docs.sympy.org/latest/modules/evalf.html – Oscar Benjamin May 25 '20 at 12:01
  • I've added an example with evalf – Oscar Benjamin May 25 '20 at 12:22
  • Thank you very much! – Edoardo Serra May 26 '20 at 06:58
  • Yet another question I'm sorry. How can I separate the real from the complex solutions, for example by displaying only the real ones? – Edoardo Serra May 26 '20 at 08:02
  • @Edoardo `np.real(x)` and `np.imag(x)` – eric Jul 12 '20 at 06:21
  • For sympy, you can use the `sympy.re` and `sympy.im` functions, too. (In numpy, I'd just access .real and .imag as attributes on the relevant values!) – creanion Oct 18 '21 at 21:00
  • If the symbols are declared as `symbols('x, y', real=True)` then solve will try to return only real solutions. Alternatively you can filter the solutions with something like `Reals & set(solve(z**4 - 1, z))` or `[s for s in sol if s.is_real]`. Note that sometimes it isn't easy to verify whether or not a solution expression is real. – Oscar Benjamin Oct 19 '21 at 11:25
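To make the comments above concrete, here is a small sketch of both filtering options, assuming the same system as in the answer (either `real=True` on the symbols, or an `is_real` filter on the solution tuples):

from sympy import symbols, solve

# Option 1: declare the symbols as real so solve tries to return only real solutions
x, y = symbols('x, y', real=True)
sol = solve([x + y**2 - 4, x**2 + y - 4], [x, y])

# Option 2: filter an existing solution list, keeping tuples whose entries are all real
real_sol = [s for s in sol if all(v.is_real for v in s)]
print(real_sol)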
5

An alternative to fsolve is root:

import numpy as np
from scipy.optimize import root    

def your_funcs(X):

    x, y = X
    # all RHS have to be 0
    f = [x + y**2 - 4,
         np.exp(x) + x * y - 3]

    return f

sol = root(your_funcs, [1.0, 1.0])
print(sol.x)

This will print

[0.62034452 1.83838393]

If you then check

print(your_funcs(sol.x))

you obtain

[4.4508396968012676e-11, -1.0512035686360832e-11]

confirming that the solution is correct.
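`root` also lets you pick the underlying solver and the tolerance explicitly (see the comment below); a minimal sketch reusing `your_funcs` from above:

# 'hybr' is the default method; 'lm' (Levenberg-Marquardt) is another option for square systems
sol = root(your_funcs, [1.0, 1.0], method='lm', tol=1e-12)
print(sol.x)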

Cleb
  • An advantage of `root` over `fsolve` is the easy ability to specify the resolution method and tolerance. – billjoie Nov 16 '21 at 03:44
4

Try this one, I assure you that it will work perfectly.

    import scipy.optimize as opt
    from numpy import exp
    import timeit

    st1 = timeit.default_timer()

    def f(variables) :
        (x,y) = variables

        first_eq = x + y**2 -4
        second_eq = exp(x) + x*y - 3
        return [first_eq, second_eq]

    solution = opt.fsolve(f, (0.1,1) )
    print(solution)


    st2 = timeit.default_timer()
    print("RUN TIME : {0}".format(st2-st1))

->

[ 0.62034452  1.83838393]
RUN TIME : 0.0009331008900937708

FYI, as mentioned above, you can also use Broyden's approximation by replacing 'fsolve' with 'broyden1'. It works. I did it.

I don't know exactly how Broyden's approximation works, but it took 0.02 s.

And I recommend you do not use SymPy's functions - convenient indeed, but in terms of speed they're quite slow. You will see.
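If you want to check the speed difference on your own machine, a rough comparison along these lines can be used (just a sketch; absolute timings depend on your machine and library versions):

import timeit
from math import exp
from scipy.optimize import fsolve
from sympy import symbols, nsolve, exp as sym_exp

def f(v):
    x, y = v
    return [x + y**2 - 4, exp(x) + x*y - 3]

xs, ys = symbols('x y')
eqs = [xs + ys**2 - 4, sym_exp(xs) + xs*ys - 3]

print("fsolve:", timeit.timeit(lambda: fsolve(f, (0.1, 1)), number=100))
print("nsolve:", timeit.timeit(lambda: nsolve(eqs, [xs, ys], [1, 1]), number=100))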

singh.indolia
Dane Lee
3

I got Broyden's method to work for coupled non-linear equations (generally involving polynomials and exponentials) in IDL, but I haven't tried it in Python:

http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.broyden1.html#scipy.optimize.broyden1

scipy.optimize.broyden1

scipy.optimize.broyden1(F, xin, iter=None, alpha=None, reduction_method='restart', max_rank=None, verbose=False, maxiter=None, f_tol=None, f_rtol=None, x_tol=None, x_rtol=None, tol_norm=None, line_search='armijo', callback=None, **kw)[source]

Find a root of a function, using Broyden’s first Jacobian approximation.

This method is also known as “Broyden’s good method”.
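A minimal sketch applying broyden1 to the system from the question (the initial guess (0.1, 1) mirrors the fsolve answer above; convergence from other starting points is not guaranteed):

import numpy as np
from scipy.optimize import broyden1

def F(v):
    x, y = v
    return [x + y**2 - 4, np.exp(x) + x*y - 3]

sol = broyden1(F, [0.1, 1.0])
print(sol)   # should be close to [0.62034452, 1.83838393]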

Kevin H
2

You can use the openopt package and its NLP method. It offers many solver algorithms for nonlinear problems, including:
goldenSection, scipy_fminbound, scipy_bfgs, scipy_cg, scipy_ncg, amsg2p, scipy_lbfgsb, scipy_tnc, bobyqa, ralg, ipopt, scipy_slsqp, scipy_cobyla, lincher, algencan, which you can choose from.
Some of the latter algorithms can solve constrained nonlinear programming problems. So, you can introduce your system of equations to openopt.NLP() with a function like this:

lambda x: [x[0] + x[1]**2 - 4, np.exp(x[0]) + x[0]*x[1] - 3]

Reza Saidafkan
2
from scipy.optimize import fsolve

def double_solve(f1,f2,x0,y0):
    func = lambda x: [f1(x[0], x[1]), f2(x[0], x[1])]
    return fsolve(func,[x0,y0])

def n_solve(functions,variables):
    func = lambda x: [ f(*x) for f in functions]
    return fsolve(func, variables)

f1 = lambda x,y : x**2+y**2-1
f2 = lambda x,y : x-y

res = double_solve(f1,f2,1,0)
res = n_solve([f1,f2],[1.0,0.0])
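Printing the result shows both helpers solve the same example (the intersection of the unit circle with the line x = y; for this starting guess, typically the positive root):

print(res)   # roughly [0.70710678, 0.70710678], i.e. x = y = sqrt(2)/2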
Victor
0

You can use nsolve from sympy, which is a numerical solver.

Example snippet:

from sympy import *

L = 4.11 * 10 ** 5
nu = 1
rho = 0.8175
mu = 2.88 * 10 ** -6
dP = 20000
eps = 4.6 * 10 ** -5

Re, D, f = symbols('Re, D, f')

nsolve((Eq(Re, rho * nu * D / mu),
       Eq(dP, f * L / D * rho * nu ** 2 / 2),
       Eq(1 / sqrt(f), -1.8 * log ( (eps / D / 3.) ** 1.11 + 6.9 / Re))),
      (Re, D, f), (1123, -1231, -1000))

where (1123, -1231, -1000) is the initial vector to find the root. The output (shown as an image in the original post) has imaginary parts around 10^(-20), so we can consider them zero, which means the roots are all real: Re ≈ 13602.938, D ≈ 0.047922 and f ≈ 0.0057.

user26742873