
To find the intersection of two polynomials, I tried to minimize the sum of squared differences, as in the code below: first I plot the known correct solution, then I print the result of the minimization.

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0,1,100)    # interval [0,1]

def polyD(x):
    return  1.115355004199118 - 1.597163991790283* x**1 + 0.6539311181514963* x**2
def polyS(x):
    return -0.03070291735792586 + 0.1601011622660309* x**1 + 0.8530319920733438* x**2

root= 0.61      # x - suppose found somewhere earlier...
y_root= polyD(root) # y
print(root, y_root) # x=0.61 y=0.38441273827121725

plt.plot(root, y_root, 'yo', x, polyD(x), 'r-', x, polyS(x), 'b-', ms=20)
plt.show()

####################

from scipy.optimize import minimize

x0=  0
res= minimize(lambda t: sum((polyD(x) - polyS(x))**2), x0)

print(res)
print(res.fun)  # 36.59096676853359 - why??

I could accept a small difference due to the algorithm, but here the result is off by about two orders of magnitude.

What is my mistake, and how can I correct it? Should I define constraints to keep x in [0, 1]? Or how else can I get the correct result for the intersection point?

P.S. Or perhaps it would be easier to solve with derivatives (minimizing to find an extremum)? EDIT: partial derivatives of the constrained function(s) (the Jacobian matrix) can be used to speed up execution - here
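
A minimal sketch of that Jacobian idea, assuming the polyD/polyS definitions above (`g` and `g_grad` are names introduced here for illustration; the gradient is just the chain rule applied to the squared difference):

from scipy.optimize import minimize
import numpy as np

def g(t):
    return np.sum((polyD(t) - polyS(t))**2)

def g_grad(t):
    # chain rule: d/dt (polyD - polyS)**2 = 2*(polyD - polyS)*(polyD' - polyS')
    diff = polyD(t) - polyS(t)
    diff_prime = (-1.597163991790283 - 0.1601011622660309) \
                 + 2*(0.6539311181514963 - 0.8530319920733438)*t
    return 2*diff*diff_prime

res = minimize(g, x0=[0.0], jac=g_grad, bounds=[(0, 1)])
print(res.x)  # expected ~0.61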

JeeyCi
  • Your lambda function specifies parameter `t`, but you are using `x` (which happens to be a vector you also defined earlier, which is why you had to take a `sum` here as well). – Mikael Öhman Jul 21 '23 at 10:31
  • And in the issue you link, the user is artificially limited to using `minimize`. You can of course use `fsolve` if you want. – Mikael Öhman Jul 21 '23 at 10:33
  • GREAT! thanks a lot for your fresh view of the code! I corrected it... it works: *plt.plot(res.x, polyD(res.x), 'yo', x, polyD(x), 'r-', x, polyS(x), 'b-', ms=20, )* – JeeyCi Jul 21 '23 at 10:54
  • yes, it also works with *fsolve*: `solution_x = opt.fsolve(f, x0=[1], full_output=True)` when `f= lambda x: sum((polyD(x) - polyS(x))**2)` – JeeyCi Jul 21 '23 at 11:33
  • ... or *fsolve* without a lambda, just with a func defined as `def f(x): d = polyD(x).sum(); s = polyS(x).sum(); res = [d - s]; return res`... *sum()* is important anyway – JeeyCi Jul 21 '23 at 11:35
  • "optimize.least_squares does the calculation of the chi-squared internally, while using scipy.optimize.minimize you have to calculate the chi-squared manually inside the function you want to minimize" [here](https://stackoverflow.com/q/49211783/15893581) – JeeyCi Jul 24 '23 at 08:47

3 Answers


Don't use minimize and don't use fsolve; use numpy's Polynomial and its built-in .roots():


from numpy.polynomial import Polynomial

poly_d = Polynomial((1.115355004199118, -1.597163991790283, 0.6539311181514963))
poly_s = Polynomial((-0.03070291735792586, 0.1601011622660309, 0.8530319920733438))

for x in (poly_d - poly_s).roots():
    print(f'd({x:.2f}) = {poly_d(x):.2f}')
    print(f's({x:.2f}) = {poly_s(x):.2f}')

Output:

d(-9.44) = 74.41
s(-9.44) = 74.41
d(0.61) = 0.38
s(0.61) = 0.38

Or, since this is only degree 2, you may also solve it analytically:

import numpy as np

# unpacking iterates the coefficients in ascending order: c + b*x + a*x**2
c, b, a = poly_d - poly_s
u = 0.5/a
d = np.sqrt(b*b - 4*a*c)  # square root of the discriminant
x0 = u*(d - b)
x1 = u*(-d - b)
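
A quick check of the analytic roots against the .roots() output above:

print(x0, x1)                  # expected ~-9.44 and ~0.61
print(poly_d(x1), poly_s(x1))  # both ~0.38 at the intersection in [0, 1]
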
Reinderien
  • I did, `roots = np.polynomial.polynomial.polyroots(np.polynomial.polynomial.polysub(poly_coeffS, poly_coeffD)); x_int = roots[(roots >= x.min()) & (roots <= x.max())]; y_int = np.polynomial.polynomial.polyval(x_int, poly_coeffD)` - but, as stated in the topic's title, I wanted to resolve my misunderstanding of the optimization in scipy. I used *np.polynomial* just to fit random initial data, which is why polynomials appear in the optimization problem; the equations are assumed to be arbitrary in reality – JeeyCi Jul 21 '23 at 13:10
  • OK, but if you want practice on `minimize`, use it when it's the right tool for the job. Here, it's not the right tool for the job. – Reinderien Jul 21 '23 at 13:20
  • `scipy.optimize.curve_fit()` & `scipy.optimize.root_scalar()` also suit such data (see the sketch after these comments)... though NumPy is faster (as it is written in C), SciPy "contains more fully-featured versions of the linear algebra modules, as well as many other numerical algorithms" - it is just a question of preference... the fsolve function returns the roots of a nonlinear equation defined by f(x)=0... I didn't care about this - I just wanted to solve/minimize any **Convex Problem** – JeeyCi Jul 21 '23 at 16:58
  • still not sure how to find a global minimum without [looping](https://people.duke.edu/~ccc14/sta-663/BlackBoxOptimization.html) over all roots... – JeeyCi Jul 21 '23 at 17:16
  • What does that even mean? There's more than one global minimum. This is not a job for an opaque minimizer. Minimizers generally aren't designed to do this, and only seek out one minimum. – Reinderien Jul 21 '23 at 22:40
  • well, concerning my last confusion, really there's a [Numerical Solution](https://math.stackexchange.com/questions/620541/finding-extremum-values-in-a-given-function-using-hessian-matrix?rq=1): if the problem is [Convex](https://math.stackexchange.com/a/1527388/1121385), then a local minimum (with Grad>0 curving up & Hessian flat [=0] at the stationary point) is a [global](https://math.stackexchange.com/a/2193848/1121385) minimum... and [check](https://math.stackexchange.com/a/402105/1121385) the unbounded condition... and [a solution with no calculus at all](https://math.stackexchange.com/a/4341324/1121385) – JeeyCi Jul 22 '23 at 07:06
  • @Reinderien: "more than one global minimum" [not local!] exists only in periodic functions, as I thought... – but really it can occur in other cases too - [e.g.](https://math.stackexchange.com/a/864689/1121385) link already attached – JeeyCi Jul 22 '23 at 07:52
  • really, there are different functions in [SciPy.Optimize](https://docs.scipy.org/doc/scipy-0.13.0/reference/optimize.html) (incl. global-minimum finding; or [to find a max](https://stackoverflow.com/a/43704112/15893581), multiply by -1, [e.g.](https://www.geeksforgeeks.org/maximize-optimization-using-scipy/)) – JeeyCi Jul 24 '23 at 03:00
  • found methods for [global](https://machinelearningmastery.com/function-optimization-with-scipy/) optimization - they are stochastic ones – JeeyCi Jul 24 '23 at 10:25
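
For completeness, a minimal sketch of the root_scalar approach mentioned in the comments above, assuming the polyD/polyS definitions from the question (the bracket (0, 1) is an assumption based on the plotted range; the difference changes sign on it):

from scipy.optimize import root_scalar

# bracketed root finding on the plain difference of the two polynomials
sol = root_scalar(lambda x: polyD(x) - polyS(x), bracket=(0, 1))
print(sol.root, polyD(sol.root))  # expected ~0.61, ~0.384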

Your code doesn't work because of this line:

lambda t: sum((polyD(x) - polyS(x))**2)

Notice anything wrong? You used x instead of t. Swap those xs for ts and you get the desired result.

Alternatively, you could use scipy.optimize.root. Below I show both methods, though I only apply bounds to the minimize method (a note on bounding the root result follows the output).

import numpy as np
from scipy.optimize import minimize, root


def polyD(x):
    return 1.115355004199118 - 1.597163991790283*x**1 + 0.6539311181514963*x**2


def polyS(x):
    return -0.03070291735792586 + 0.1601011622660309*x**1 + 0.8530319920733438*x**2


def f(x):
    # difference of the polynomials; an intersection is a root of f
    return polyD(x) - polyS(x)


def g(x):
    # squared difference for minimize; zero exactly at an intersection
    return np.sum(f(x)**2)


x0 = 0
min_res = minimize(g, x0, bounds=[(0, 1)])  # bounds restrict x to [0, 1]
root_res = root(f, x0)

print(min_res)
print(root_res)

Output:

  message: CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL
  success: True
   status: 0
      fun: 7.999974469252813e-13
        x: [ 6.100e-01]
      nit: 5
      jac: [-3.538e-06]
     nfev: 12
     njev: 6
 hess_inv: <1x1 LbfgsInvHessProduct with dtype=float64>

 message: The solution converged.
 success: True
  status: 1
     fun: [ 6.772e-15]
       x: [ 6.100e-01]
    nfev: 7
    fjac: [[-1.000e+00]]
       r: [ 2.000e+00]
     qtf: [ 6.131e-09]
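
Note that root itself takes no bounds argument, so one cheap post-check (a sketch, reusing root_res from above) is to verify that the returned root lies in the interval of interest:

x_star = root_res.x[0]
if 0 <= x_star <= 1:
    print(f'intersection at x = {x_star:.3f}')
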
jared
  • this was already SOLVED in the comments on the starting message - thanks to Mikael Öhman – JeeyCi Jul 22 '23 at 07:43
  • @JeeyCi (1) I don't know when those comments were written relative to my answer (it currently only says "2 days ago", but eventually it will give times again). (2) Questions really shouldn't be answered in the comments. (3) You cannot accept a comment as an answer to a question. – jared Jul 23 '23 at 18:25

Another solution, probably: use the difference of the functions as an eq constraint:

import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import minimize

x = np.linspace(0,1,100)

def fd(x):
    return  1.115355004199118 - 1.597163991790283* x**1 + 0.6539311181514963* x**2
def fs(x):
    return -0.03070291735792586 + 0.1601011622660309* x**1 + 0.8530319920733438* x**2

plt.plot(  x, fs(x), 'r-', x, fd(x), 'b-')
plt.show()

######################

cons = {'type': 'eq', 'fun': lambda x: fs(x) - fd(x)}

res = minimize(fs, [0.], constraints=cons, tol=1.4e-100)
if res.success: print(res.x, ' ~ ', fd(res.x), '\n')
else: print(res.message)

plt.plot(res.x, fd(res.x), 'yo', x, fd(x), 'r-', x, fs(x), 'b-', ms=20)
plt.show()

Really, the code design can vary, I suppose, as long as it gives the needed result.

As I understand it: we can use constraints for the ys & bounds for the xs. (?)
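
A minimal sketch of that combination, reusing fd/fs from above (the starting point 0.5 and the bounds (0, 1) are assumptions):

# bounds box in the x search region; the eq constraint forces fs(x) == fd(x)
res = minimize(fs, [0.5], bounds=[(0, 1)],
               constraints={'type': 'eq', 'fun': lambda x: fs(x) - fd(x)})
print(res.x, fd(res.x))  # expected ~0.61, ~0.384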

JeeyCi