I am trying to use mystic for some constrained optimization problems, but it will not respect my constraint, even after using mystic's simplify function. I reckon I must be missing something, but I can't figure out what it is.

Here is a simple example of code that should illustrate my issue:

# %pip install mystic
import mystic as my
import numpy as np

def objective(X):
  x0, x1, x2 = X
  return x0*x1*x2

bounds = [(0., 1e5)]*3

eqn = "x0 + x1 + x2 - 100 == 0.0"

constraint = my.symbolic.generate_constraint(
    my.symbolic.generate_solvers(my.symbolic.simplify(eqn))
)

mon = my.monitors.VerboseMonitor(10)

result = my.solvers.diffev2(
  objective, 
  x0=bounds, 
  bounds=bounds, 
  constraints=constraint, 
  disp=True, 
  full_output=True, 
  itermon=mon)#, map=p.map)
print (result)

This is a very simple objective function (my actual work involves something more complicated), which should give a result of x0 = x1 = x2 = 100/3 and f(x) = (100/3)^3, but mystic is ignoring the constraint and reporting that the function value is infinite. I have also tried making the objective function negative, making the constraint negative, and both at the same time; each time mystic ignored the constraint.

What is missing from this code to make mystic follow the simple summation constraint?

Edit: I found that removing the "s" from "constraints" in the diffev2 call caused mystic to stop reporting infinite function values. I also changed the objective to -x0*x1*x2, because mystic is attempting to minimize, so this should actually give the x0 = x1 = x2 result I was expecting. Unfortunately mystic now always converges to [100000, 100000, 100000], even when I change the constant in the constraint, change the objective by squaring one of the terms, or increase npop and gtol.
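
A quick sanity check (a minimal sketch, not part of the original attempt): the object returned by generate_constraint is itself a callable that should map any candidate point onto the constraint surface, so it can be tested on its own before being handed to the solver:

import mystic as my

eqn = "x0 + x1 + x2 - 100 == 0.0"
constraint = my.symbolic.generate_constraint(
    my.symbolic.generate_solvers(my.symbolic.simplify(eqn))
)

# the wrapper should project any input onto the plane x0 + x1 + x2 = 100
pt = constraint([10., 20., 30.])
print(pt, sum(pt))  # the components should sum to (approximately) 100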

Ken Nod
  • What mystic documentation URL(s) or blog entries did you rely on when writing that? – J_H Jul 25 '23 at 15:20
  • I read over a few stack overflow questions as well as the documentation and examples from the mystic website to try to cobble together an understanding. I heavily utilized Mike's answer on this question: https://stackoverflow.com/questions/21765794/python-constrained-non-linear-optimization especially the pressure vessel example that he provided. Based on your question, I'm guessing I didn't do a very good job of figuring out how everything was supposed to come together. – Ken Nod Jul 25 '23 at 15:31
  • Forgot, @J_H see my previous comment on the mystic constrained optimization question. – Ken Nod Jul 25 '23 at 15:39
  • I'm just looking for differences between "the thing that works" and "the simple thing I've coded up so far". The biggest one I noticed is that Mike has a `penalty()` decorated by `@quadratic_inequality(constraint, k=1e4)`. You might try that, rather than passing in `constraints=constraint`. I _guess_ I'm willing to believe that `npop` and `gtol` aren't relevant here. (A sketch of that penalty-style setup appears after these comments.) – J_H Jul 25 '23 at 15:39
  • @J_H that was my debugging thought as well, but I can't figure out what is materially different in my code for the task I am trying to accomplish. My understanding is that in mystic penalties are a separate kind of constraint that add a penalty term to the objective, rather than hard constraints, which should rule out solutions in the restricted region entirely. For the problem I'm dealing with, which has only a hard constraint, not having any penalties is part of the design. npop and gtol are parameters with defaults that don't affect the problem I'm dealing with. – Ken Nod Jul 25 '23 at 15:44
  • Well, it's just the usual debugging exercise, right? We draw the "ideal" on the whiteboard (here, some code from Mike), compare it with our current buggy situation, and look for differences in order to drive our situation closer to the ideal. Now, often enough I've claimed "there's no substantive difference between them, just this trivial spelling detail", and yet when I change the spelling, presto! it starts working. I advise you to mangle the current code until it works, and _then_ bring it back toward your current setup. We learn more from testing a Working thing than from a Broken thing. – J_H Jul 25 '23 at 15:47
  • @J_H spelling did seem to be part of the problem. Removing the "s" from "constraints" stopped the program from thinking the solution was infinite. Unfortunately it still doesn't give the right answer. I am consistently getting decimal answers like [0.34, 0.14, 0.05], which is both nowhere near an optimum and also nowhere near satisfying the equality constraint. – Ken Nod Jul 25 '23 at 16:06
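
Below is a minimal sketch of the penalty-style setup suggested in the comments above, adapted to the equality constraint in the question (quadratic_equality rather than the quadratic_inequality used in the linked pressure-vessel example). The objective is negated so the product is maximized, matching the author's later report in the answer comments that a penalty-based version converges to roughly [100/3, 100/3, 100/3]; the k, npop, and gtol values here are only illustrative:

import mystic as my
from mystic.penalty import quadratic_equality

def objective(X):
  x0, x1, x2 = X
  return -x0*x1*x2  # negated so that minimizing maximizes the product

def sum_condition(X):
  # equals 0.0 exactly when x0 + x1 + x2 == 100
  return X[0] + X[1] + X[2] - 100.0

@quadratic_equality(sum_condition, k=1e4)
def penalty(X):
  return 0.0

bounds = [(0., 1e5)]*3

result = my.solvers.diffev2(
  objective,
  x0=bounds,
  bounds=bounds,
  penalty=penalty,  # soft constraint: violations add a quadratic penalty to the cost
  npop=40,
  gtol=100,
  disp=True,
  full_output=True,
)
print(result)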

1 Answer

For the bounds you are using, the objective function's minimum is not at [100/3] * 3; the minimum value is 0, which is reached at multiple points such as [100, 0, 0], [0, 100, 0], and [0, 0, 100].

The issue you are running into is that npop is too small. It sets the size of the population of candidate points the solver samples. The default value is only 4, which is fine for 1-D problems, but for a multidimensional problem with such a large set of bounds you will need to start with more points.

Also, the optimizer will need additional generations to explore the search space for better locations. You can allow for this with gtol, which sets how many generations without improvement are required before the solver stops.

Using the following, it converged on the third attempt.

mon = my.monitors.VerboseMonitor(50)

result = my.solvers.diffev2(
  objective,
  x0=bounds,
  bounds=bounds,
  npop=2000,
  gtol=300,
  constraints=constraint,
  disp=True,
  full_output=True,
  itermon=mon,
  )

print(result)

Here is the output:

Generation 0 has ChiSquare: inf
Generation 50 has ChiSquare: 234.327267
Generation 100 has ChiSquare: 234.327267
Generation 150 has ChiSquare: 234.327267
Generation 200 has ChiSquare: 2.427155
Generation 250 has ChiSquare: 2.427155
Generation 300 has ChiSquare: 2.427155
Generation 350 has ChiSquare: 0.015526
Generation 400 has ChiSquare: 0.000000
Generation 450 has ChiSquare: 0.000000
Generation 500 has ChiSquare: 0.000000
Generation 550 has ChiSquare: 0.000000
Generation 600 has ChiSquare: 0.000000
Generation 650 has ChiSquare: 0.000000
STOP("ChangeOverGeneration with {'tolerance': 0.005, 'generations': 300}")
Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 655
         Function evaluations: 1312000
(array([0.00000000e+00, 6.62999242e-06, 9.99999934e+01]), 0.0, 655, 1312000, 0)

And it found one of the correct minima, at [0, 0, 100].

James
  • Thanks for the explanation about npop and gtol. When I run the code with your changes to npop and gtol, it still doesn't converge to the solution for me; instead I just keep getting three roughly equal decimals that don't satisfy the constraint. I tried upping gtol some more, but that just ended up with smaller decimals. – Ken Nod Jul 25 '23 at 16:36
  • Also, I was actually about to edit my question before seeing your answer, because I realized that I needed to add a negative to my objective to get the solution I was originally expecting (so the code can minimize -x0*x1*x2). Unfortunately, doing that results in the program converging to [100000, 100000, 100000] every time, even if I change the constant in the constraint. Any ideas? – Ken Nod Jul 25 '23 at 16:39
  • If you make your objective function negative, it will be minimized at the largest positive triplet available on the constraint, in this case [100000, 100000, 100000]. – James Jul 25 '23 at 16:59
  • so I see that it is maxing out the bounds--reducing the upper bound causes the triplet to reach the new upper bound instead of 100,000--but I don't understand what you mean by "available on the constraint." [100000, 100000, 100000] certainly does not satisfy x0 + x1 + x2 -100 = 0. So it appears that the code is now following the bounds but is still ignoring the constraint. – Ken Nod Jul 25 '23 at 17:34
  • Switching over to a penalty instead of a hard constraint works and gives me the correct solution of ~100/3, ~100/3, ~100/3, and is orders of magnitude faster, which I believe is better for my purposes, but I am still curious to know why the hard constraint doesn't work in my code. – Ken Nod Jul 25 '23 at 17:39