
For this function:

import numpy as np

def my_function(param1, param2, param3, param4):
    return param1 + 3*param2 + 5*param3 + np.power(5, 3) + np.sqrt(param4)

print(my_function(1, 2, 3, 4))

This prints 149.0

How can I get it to return 100 instead of 149.0, or as close a value to 6 as possible, subject to the following conditions on my_function's parameters: param1 must be in the range 10-20, param2 must be in the range 20-30, param3 must be in the range 30-40, and param4 must be in the range 40-50?

I'm not asking for a specific solution to this problem, but what domain does it fall into? Reading https://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html and "Parameter Optimization in Python" suggests this is possible with out-of-the-box solutions (in low dimensions). Can genetic programming be applied to this problem?

blue-sky
  • It's a nonlinear optimization problem. https://en.wikipedia.org/wiki/Nonlinear_programming There's a toolbox of algorithms here, and GAs are not your go-to. – BadZen Feb 21 '18 at 19:43
  • `param3` is never used. – sascha Feb 21 '18 at 20:00
  • If you are asking how to solve value = my_function(...), that can be viewed as a root-finding problem (closely related to optimization). You can look at scipy: https://docs.scipy.org/doc/scipy-0.18.1/reference/optimize.html – jman Feb 21 '18 at 20:10
  • @BadZen thanks, something like https://stackoverflow.com/questions/21765794/python-constrained-non-linear-optimization ? Also, I'm curious as to why you don't recommend GAs. A GA could initialize the parameters to a set of random values within the ranges, and should then find the set of values that minimizes the function? – blue-sky Feb 21 '18 at 20:10
  • 1) This is, in its naive form, **unconstrained** optimization (with bounds). Much easier than constrained optimization. 2) GAs are just (personal opinion: dumb) incomplete heuristics without formal guarantees which only work in a limited, a-priori analyzed domain while being highly tuned. 3) As this looks non-convex, you need to decide whether local-opt solutions are okay (easy) or global opts are needed (not so easy). – sascha Feb 21 '18 at 20:13
  • What @sascha just said. :) You said you weren't interested in this problem in particular, but this *sort* of problem. I might have let on less than I should have that it gets more specialized / easier to solve / gains more specific algorithms the more you constrain the problem space in certain ways... – BadZen Feb 21 '18 at 20:16
  • I might add that this example is kinda useless, as the solution with all vars at their lower bounds will be the best for both of your example values. – sascha Feb 21 '18 at 20:21
  • @sascha setting param1 to 10, param2 to 20, param3 to 30, param4 to 40 is the best solution for this example? Is this what is meant by setting all 'vars @ lower-bound'? – blue-sky Feb 21 '18 at 20:29
  • Sure. Every component only affects the output in a growing way. Using all lower bounds gives the minimum objective subject to those bounds, and this objective is still far away from what you want. – sascha Feb 21 '18 at 20:32

3 Answers

1

Basically, you want to minimize the error between the function's output and the target value.

EDIT: The obvious choice here is to use gradient descent over the 4 parameters; a minimal sketch of that idea is shown below. If you do not want to do that and are asking more for a pragmatic solution, it follows after the sketch.
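A minimal sketch of that gradient-descent idea, just for illustration: it assumes a squared-error objective against a target of 100, a hand-rolled central-difference gradient, and simple clipping to the box bounds; the step size and iteration count are arbitrary choices.

import numpy as np

def my_function(p1, p2, p3, p4):
    return p1 + 3*p2 + 5*p3 + np.power(5, 3) + np.sqrt(p4)

TARGET = 100.0
LOWER = np.array([10.0, 20.0, 30.0, 40.0])
UPPER = np.array([20.0, 30.0, 40.0, 50.0])

def loss(x):
    # squared error between the function value and the target
    return (my_function(*x) - TARGET) ** 2

def num_grad(x, eps=1e-6):
    # central-difference approximation of the gradient of loss
    g = np.zeros_like(x)
    for i in range(len(x)):
        step = np.zeros_like(x)
        step[i] = eps
        g[i] = (loss(x + step) - loss(x - step)) / (2 * eps)
    return g

x = (LOWER + UPPER) / 2           # start in the middle of the box
for _ in range(1000):
    x = x - 1e-3 * num_grad(x)    # plain gradient step
    x = np.clip(x, LOWER, UPPER)  # project back into the bounds

print(x, my_function(*x))         # ends at the lower bounds, as the comments above predict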

The main problem here is that there are 4 parameters. To solve this, you can do the following:

  1. Fix any 3 parameters and leave the last one alone. We will try to find this value.
  2. Find the inverse function and solve for it, either explicitly or using a solver (which may be a numerical method like Newton-Raphson or Brent's method).

I will describe a process to demonstrate this idea. We will use SciPy's minimize_scalar, which (with method="bounded") employs a bounded Brent method.

For the sake of discussion, let's reduce your function to 2 parameters, and let's assume your function is:

import numpy as np

def f(p1, p2):
    return p1 + np.sqrt(p2)

You are basically asking how to find p1, p2 values such that f(p1, p2) = 100. Assume the ranges are the following:

  • range for p1: 10-20
  • range for p2: 20-30

Let's fix p1 to 10 (you are free to fix it to anything in this range). Now the function becomes

def g(p2):
    return 10 + np.sqrt(p2)

We want this to be as close to 100 as possible, so let's create an error function which measures how far our estimate is from 100.

def error(p2):
    return abs(100 - (10 + np.sqrt(p2)))  # distance from 100; we want to minimize this

You can find the value that minimizes this error, so that you get as close as possible to 100, through

from scipy import optimize
optimize.minimize_scalar(error, bounds=(20, 30), method="bounded")

which gives p2 ≈ 30 (its upper bound) as the value that minimizes the error.
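The same error-minimization idea also extends to all four original parameters at once with an out-of-the-box bounded minimizer. Here is a minimal sketch, assuming scipy.optimize.minimize with the L-BFGS-B method and a squared-error objective; for a target of 100 it drives every parameter to (or very near) its lower bound, as the comments under the question point out.

import numpy as np
from scipy import optimize

TARGET = 100.0

def my_function(p1, p2, p3, p4):
    return p1 + 3*p2 + 5*p3 + np.power(5, 3) + np.sqrt(p4)

def error(x):
    # squared distance between the function value and the target
    return (my_function(*x) - TARGET) ** 2

bounds = [(10, 20), (20, 30), (30, 40), (40, 50)]
x0 = [15, 25, 35, 45]  # any feasible starting point

res = optimize.minimize(error, x0, bounds=bounds, method="L-BFGS-B")
print(res.x, my_function(*res.x))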

Emmet B
  • So how will that work for his example? What will it provide? – sascha Feb 21 '18 at 20:57
  • the p2 value that gets as close to 100 as he asked, what else could it be? – Emmet B Feb 21 '18 at 20:59
  • For that you don't need code and your approach does not bring anything worthy (at least it's incomplete in terms of *how to transfer*) (that's just my opinion). The multivariate GD approach is local, but is that enough here? – sascha Feb 21 '18 at 21:04
  • I am just glad that at least you finally realized that it's your opinion. – Emmet B Feb 21 '18 at 21:12
0

Create a new function to optimize by adding a penalty for constraint violations.

param1 must be in the range 10-20, therefore to satisfy the constraint on param1 only, the new function to optimize would be

f(p1, p2, p3, p4) = my_function(p1, p2, p3, p4) + 1000*(p1 - 30)**2
param1 = 20 + p1

With the change of variable p1 = param1 - 20 to optimize over, you can play with the magnitude of the coefficient in front of the penalty term, which would depend on the optimization method used.

The square is needed so that the gradient exists for all p1.

Add other penalties to the new objective function as needed.
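A minimal sketch of this penalty idea in Python, assuming a quadratic penalty that is zero inside each range and grows outside of it, a squared-error objective against a target of 100, and SciPy's derivative-free Nelder-Mead as the unconstrained solver. The weight of 1000 is as arbitrary here as in the formula above; note that with any finite weight the unconstrained optimum can sit slightly outside the ranges, which is the usual trade-off of an exterior penalty.

import numpy as np
from scipy import optimize

TARGET = 100.0
RANGES = [(10, 20), (20, 30), (30, 40), (40, 50)]

def my_function(p1, p2, p3, p4):
    return p1 + 3*p2 + 5*p3 + np.power(5, 3) + np.sqrt(p4)

def penalty(x, lo, hi, weight=1000.0):
    # zero inside [lo, hi], quadratic (hence differentiable) outside it
    return weight * (max(lo - x, 0.0) ** 2 + max(x - hi, 0.0) ** 2)

def penalized_error(x):
    err = (my_function(*x) - TARGET) ** 2
    return err + sum(penalty(xi, lo, hi) for xi, (lo, hi) in zip(x, RANGES))

# unconstrained solver applied to the penalized objective
res = optimize.minimize(penalized_error, x0=[15, 25, 35, 45], method="Nelder-Mead")
print(res.x)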

alexprice
-1

Just for fun, a small demo in Julia (as was said, no concrete solution is being asked for).

This uses a global open-source solver which will work on small-scale problems like this (and transfers to more complex ones). Keep in mind that your examples are somewhat trivial (both targets result in the lower bounds for all variables; no optimization is needed to see that, and the code would output those as expected), so I'm using some other target value where there actually is something to optimize!

When the model gets more complex, global optimization becomes infeasible (it is very hard in theory; sometimes impossible). You can just switch the solver to Ipopt to obtain a local optimum.

This can also be done in Python using pyomo, but it's less nice. The same model and solver can be used; only the code changes.
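For illustration, a rough sketch of what the pyomo version of the model below might look like (this assumes the couenne executable is installed and on the PATH; pointing SolverFactory at "ipopt" instead would give a local optimum):

from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                           SolverFactory, minimize, sqrt, value)

TARGET = 387

m = ConcreteModel()
m.param1 = Var(bounds=(10, 20), initialize=10)
m.param2 = Var(bounds=(20, 30), initialize=20)
m.param3 = Var(bounds=(30, 40), initialize=30)
m.param4 = Var(bounds=(40, 50), initialize=40)
m.aux = Var()  # free auxiliary variable: the gap between the target and the function value

m.gap = Constraint(expr=m.aux == TARGET - (m.param1 + 3*m.param2 + 5*m.param3
                                           + 5**3 + sqrt(m.param4)))
m.obj = Objective(expr=m.aux**2, sense=minimize)

SolverFactory("couenne").solve(m)
print(value(m.param1), value(m.param2), value(m.param3), value(m.param4))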

Code

using JuMP, AmplNLWriter, CoinOptServices

TARGET = 387

m = Model(solver=AmplNLSolver(CoinOptServices.couenne))

@variable(m, 10 <= param1 <= 20, start=10)
@variable(m, 20 <= param2 <= 30, start=20)
@variable(m, 30 <= param3 <= 40, start=30)
@variable(m, 40 <= param4 <= 50, start=40)
@variable(m, aux)   # free auxiliary variable: the gap between TARGET and the function value

@NLconstraint(m, aux == TARGET - (param1 + 3*param2 + 5*param3 + 5^3 + sqrt(param4)))
@NLobjective(m, Min, aux^2)   # minimize the squared gap
solve(m)

println("objective: ", getobjectivevalue(m))
println("param1 = ", getvalue(param1))
println("param2 = ", getvalue(param2))
println("param3 = ", getvalue(param3))
println("param4 = ", getvalue(param4))

Out

Mailing list: couenne@list.coin-or.org
Instructions: http://www.coin-or.org/Couenne
couenne:
ANALYSIS TEST: Couenne: new cutoff value 0.0000000000e+000 (0.016 seconds)
NLP0012I
              Num      Status      Obj             It       time                 Location
NLP0014I             1         OPT 0        7 0.003
Loaded instance "C:\Users\Sascha\.julia\v0.5\AmplNLWriter\.solverdata\jl_21AE.tmp.nl"
Constraints:            1
Variables:              5 (0 integer)
Auxiliaries:            2 (0 integer)

Coin0506I Presolve 11 (0) rows, 4 (-3) columns and 22 (-3) elements
Clp0006I 0  Obj 0 Primal inf 0.0023740886 (2)
Clp0006I 1  Obj -4.0767235e-022
Clp0000I Optimal - objective value 0
Clp0032I Optimal objective 0 - 1 iterations time 0.012, Presolve 0.00
Clp0000I Optimal - objective value 0
NLP Heuristic: NLP0014I             2         OPT 0        3 0.001
no solution.
Cbc0010I After 0 nodes, 0 on tree, 1e+050 best solution, best possible 0 (0.01 seconds)
Clp0000I Optimal - objective value 3.90625e-007
Clp0006I 0  Obj 0 Primal inf 0.00098181331 (1)
Clp0006I 1  Obj -3.2730444e-022
Clp0000I Optimal - objective value 0
Optimality Based BT: 0 improved bounds
Cbc0004I Integer solution of 0 found after 2 iterations and 2 nodes (0.03 seconds)
Cbc0001I Search completed - best objective 0, took 2 iterations and 2 nodes (0.04 seconds)
Cbc0035I Maximum depth 0, 0 variables fixed on reduced cost

        "Finished"

Linearization cuts added at root node:         11
Linearization cuts added in total:             11  (separation time: 0s)
Total solve time:                           0.065s (0.065s in branch-and-bound)
Lower bound:                                    0
Upper bound:                                    0  (gap: 0.00%)
Branch-and-bound nodes:                         2
WARNING: Nonlinear solver does not provide dual solutions
objective: 0.0
param1 = 10.0
param2 = 20.0
param3 = 37.13508893593264
param4 = 40.0
sascha