For illustration purposes, we can print how G changes as minimize iterates toward the local minimum. For example, consider the case where the objective is to minimize a quadratic function of the form f(x) = 2*x**2 - 4*x + 7, which attains its minimum at x = 1. Note that the parameters of the function (2, -4 and 7) are supplied as extra arguments to obj_func below, and that an initial guess has to be supplied to start the algorithm. As the output shows, starting from the initial value of 10, the variable G descends to the minimizer.
from scipy.optimize import minimize

def obj_func(G, a, b, c):
    # Print the current value of the optimization variable at every call.
    print(G)
    return a*G**2 + b*G + c

initial_guess = 10
a, b, c = 2, -4, 7

result = minimize(obj_func, x0=initial_guess, args=(a, b, c))

print(f"\nMinimizer = {result.x}")
print(f" Minimum = {result.fun}")
which outputs the following (the values appear in nearly identical pairs because, with no analytic gradient supplied, minimize also evaluates the function at slightly perturbed points to estimate the derivative numerically):
[10.]
[10.00000001]
[8.99]
[8.99000001]
[6.82085113]
[6.82085114]
[1.62275043]
[1.62275045]
[1.]
[1.00000001]
Minimizer = [1.]
Minimum = 5.0
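If those extra gradient evaluations are unwanted, one option is to supply the derivative yourself through the jac argument. Below is a minimal sketch that reuses obj_func, a, b, c and initial_guess from above; obj_grad is just an illustrative name for the analytic derivative of the quadratic:

def obj_grad(G, a, b, c):
    # Analytic derivative of a*G**2 + b*G + c with respect to G.
    return 2*a*G + b

result = minimize(obj_func, x0=initial_guess, args=(a, b, c), jac=obj_grad)

With jac supplied, each print from obj_func corresponds to a single trial point rather than a function-plus-gradient pair.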
Another example: consider a two-variable function of the form f(x, y) = (x - 1)**2 + (y - 2)**2, which attains its minimum of 0 at (x, y) = (1, 2). Starting from an initial point of (x, y) = (0, 3), the iterates converge to the minimizer as shown below.
def obj_func(variable, x_offset, y_offset):
    # Unpack the optimization variables and print them at every call.
    x, y = variable
    print(f"x={x:.3f}, y={y:.3f}")
    return (x - x_offset)**2 + (y - y_offset)**2

initial_guess = [0, 3]
result = minimize(obj_func, initial_guess, args=(1, 2))
This prints the following, which shows the variables converging to the minimizer (each point appears three times because the two partial derivatives are also estimated from nearby evaluations, which are indistinguishable at three decimal places):
x=0.000, y=3.000
x=0.000, y=3.000
x=0.000, y=3.000
x=0.714, y=2.286
x=0.714, y=2.286
x=0.714, y=2.286
x=1.000, y=2.000
x=1.000, y=2.000
x=1.000, y=2.000
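If the repeated prints are distracting, minimize also accepts a callback that is called once per iteration with the current parameter vector. Here is a minimal sketch of that approach, reusing the same two-variable objective with the print removed; obj_func_quiet and report are just illustrative names:

from scipy.optimize import minimize

def obj_func_quiet(variable, x_offset, y_offset):
    x, y = variable
    return (x - x_offset)**2 + (y - y_offset)**2

def report(xk):
    # Invoked by minimize after each accepted iteration with the current point.
    print(f"x={xk[0]:.3f}, y={xk[1]:.3f}")

result = minimize(obj_func_quiet, [0, 3], args=(1, 2), callback=report)

This way only the accepted iterates are reported, not the trial points used for gradient estimation.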
An important note about minimize is that the initial guess has to be an educated one, especially if the objective function is complex; otherwise the algorithm may fail to converge, or may settle in a poor local minimum. I find a poor initial guess to be a major source of unsuccessful optimization runs.
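As a rough sketch of that sensitivity (the quartic below and both starting points are made up purely for illustration), the same call can end up in different minima depending on where it starts:

from scipy.optimize import minimize

def quartic(x):
    # A function with two local minima: a deeper one roughly at x = -1.3
    # and a shallower one roughly at x = 1.1.
    return x**4 - 3*x**2 + x

for guess in (-2, 2):
    result = minimize(quartic, x0=guess)
    print(f"start={guess:+d} -> x={result.x[0]:.3f}, f={result.fun:.3f}")

Starting from -2 the solver typically lands in the deeper minimum, while starting from +2 it stops at the shallower one; neither run fails outright, but only one finds the better optimum.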