Background
Auto-tuner for a car application: the application changes depending on the car model, so naturally the objective function changes as well. The problem is to tune the parameters to the optimum values for a specific car model. Input: car model; output: optimum parameters of the application for that car model. I want to solve this with optimization.
I'm trying to minimize a complex nonlinear function subject to two nonlinear constraints, one inequality and one equality constraint. The problem is not bounded per se, but I've put bounds on the parameters anyway to help speed up the optimization, since I know roughly where the correct parameters lie. The parameters are [x0, x1, x2, x3].
I've used the scipy.optimize.minimize() function with the SLSQP method and found good results when the problem is bounded correctly. However, scipy.optimize.minimize() is a local optimizer, and SLSQP works by solving a sequence of QP subproblems, which I don't think fits my problem. I've therefore started using a global optimization method with mystic (mystic differential evolution). Since I'm not an expert in global optimization, I naturally have some questions.
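For reference, the local approach I started from looked roughly like this (a sketch: the objective and constraint functions below are placeholders standing in for my real model, only the call structure is the point):

import numpy as np
from scipy.optimize import minimize

# Placeholder objective/constraints (my real objective comes from model_func.model).
def objective(x):
    return -(x[0] * np.sin(x[1] + x[3]) + x[2])

constraints = (
    {'type': 'ineq', 'fun': lambda x: 2 * np.pi - (x[0] + x[2])},  # SLSQP expects g(x) >= 0
    {'type': 'eq',   'fun': lambda x: x[0] + x[2] - np.pi},        # h(x) == 0
)
bounds = [(0, 2 * np.pi)] * 4

res = minimize(objective, x0=np.full(4, 0.5), method='SLSQP',
               bounds=bounds, constraints=constraints)
print(res.x, res.fun)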
The problem
If I choose the bounds too wide, the optimizer (mystic.differential_evolution) stops iterating after a while and prints:
STOP("ChangeOverGeneration with {'tolerance': 0.005, 'generations': 1500}")
When I run the solution that the optimizer found, the result is not as good as when I shrink the bounds. Obviously the global optimizer has not found the global optimum, yet it stopped iterating. I know that there are multiple parameter sets that yield the same global minimum.
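As far as I can tell, that STOP message is mystic's ChangeOverGeneration termination, which my diffev2 call below ends up configuring through its ftol/gtol keywords (my reading of the docs, roughly):

from mystic.termination import ChangeOverGeneration

# My understanding: diffev2(..., ftol=5e-3, gtol=1500) amounts to this condition:
# stop when the best cost changes by less than `tolerance` over `generations`
# consecutive generations.
termination = ChangeOverGeneration(tolerance=5e-3, generations=1500)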
Since the objective function may change with the car model, I want the bounds to remain relatively broad in case the global optimum moves, which would change the correct parameters.
Question
- How do I tune the settings of the optimizer to get it to keep searching and find the global optimum?
- Is the npop = 10*dim rule of thumb a good approach for this problem? (My reading of that rule is sketched right after this list.)
- Can I broaden the horizon of the optimizer's search to get it to find the optimal parameters it missed?
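For context, my reading of that rule of thumb applied here (4 parameters):

dim = 4            # parameters [x0, x1, x2, x3]
npop = 10 * dim    # rule of thumb -> 40, which is what I pass to diffev2 below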
Code
def optimize_mystic_wd(T, param_wp, w_max, safe_f):
    # mystic
    import mystic
    from mystic.monitors import VerboseLoggingMonitor
    from mystic.penalty import quadratic_inequality
    from mystic.penalty import quadratic_equality
    from mystic.solvers import diffev2
    import matplotlib.pyplot as plt
    from mpl_toolkits.mplot3d import Axes3D
    # tools
    from mystic.tools import getch
    import pylab
    import numpy as np

    def objective(x):
        # Maximize the model output by minimizing its negative.
        from model_func import model
        [val, _] = model(param_wp, x)
        return -val

    def penalty1(x):  # <= 0.0
        # Peak of wd(t) = x0*sin(x1*t + x3) + x2 over [0, T] must not exceed 2*pi.
        t = np.linspace(0, T, 100)
        wd = x[0] * np.sin(x[1] * t + x[3]) + x[2]
        index = np.argmax(wd)
        t_max = t[index]
        return (x[0] * np.sin(x[1] * t_max + x[3]) + x[2]) - 2 * np.pi

    def penalty2(x):  # == 0.0
        # Integral of wd(t) over [0, T] must equal 2*pi.
        return x[0] * (np.cos(x[3]) - np.cos(x[1] * T + x[3])) / x[1] + x[2] * T - 2 * np.pi

    @quadratic_inequality(penalty1, k=1e12)
    @quadratic_equality(penalty2, k=1e12)
    def penalty(x):
        return 0.0

    # Search bounds on [x0, x1, x2, x3]
    b1 = (0, 2 * np.pi)
    b2 = (0, 2 * np.pi / (2 * T))
    b3 = (0, 2 * np.pi)
    b4 = (0, 2 * np.pi / T)
    bounds = [b1, b2, b3, b4]

    stepmon = VerboseLoggingMonitor(1, 1)
    result = diffev2(objective, x0=bounds, bounds=bounds, penalty=penalty, npop=40,
                     gtol=1500, disp=True, full_output=True, itermon=stepmon,
                     handler=True, retall=True, maxiter=4000)
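    # Quick sanity check (not part of the original run): evaluate the raw constraint
    # functions and the objective at the returned solution. I'm assuming here that
    # result[0] is the parameter vector when full_output=True; adjust the indexing
    # if the returned tuple is ordered differently.
    x_opt = result[0]
    print("inequality residual (want <= 0):", penalty1(x_opt))
    print("equality residual (want ~= 0):", penalty2(x_opt))
    print("objective at solution:", objective(x_opt))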