
I am trying to implement Collaborative Optimization and other multi-level architectures in OpenMDAO. I read here that this can be done by defining a separate solve_nonlinear method in a subclass of Problem.

The issue is that, while running the problem instance, the solve_nonlinear I defined is not being called. Here is the code -

from __future__ import print_function, division
import numpy as np
import time

from openmdao.api import Component,Group, IndepVarComp, ExecComp,\
    Problem, ScipyOptimizer, NLGaussSeidel, ScipyGMRES


class SellarDis1(Component):
    """Component containing Discipline 1."""

    def __init__(self):
        super(SellarDis1, self).__init__()

        self.add_param('z', val=np.zeros(2))
        self.add_param('x', val=0.0)
        self.add_param('y2', val=1.0)

        self.add_output('y1', val=1.0)

    def solve_nonlinear(self, params, unknowns, resids):
        """Evaluates the equation
        y1 = z1**2 + z2 + x1 - 0.2*y2"""

        z1 = params['z'][0]
        z2 = params['z'][1]
        x1 = params['x']
        y2 = params['y2']

        unknowns['y1'] = z1**2 + z2 + x1 - 0.2*y2

    def linearize(self, params, unknowns, resids):
        J = {}

        J['y1','y2'] = -0.2
        J['y1','z'] = np.array([[2*params['z'][0], 1.0]])
        J['y1','x'] = 1.0

        return J

class SellarDis2(Component):

    def __init__(self):
        super(SellarDis2, self).__init__()

        self.add_param('z', val=np.zeros(2))
        self.add_param('y1', val=1.0)

        self.add_output('y2', val=1.0)

    def solve_nonlinear(self, params, unknowns, resids):

        z1 = params['z'][0]
        z2 = params['z'][1]
        y1 = params['y1']
        # abs() guards against negative y1 during iteration (keeps sqrt real)
        y1 = abs(y1)

        unknowns['y2'] = y1**.5 + z1 + z2

    def linearize(self, params, unknowns, resids):
        J = {}

        J['y2', 'y1'] = 0.5*params['y1']**-0.5
        J['y2', 'z'] = np.array([[1.0, 1.0]])

        return J

class Sellar(Group):

    def __init__(self):
        super(Sellar, self).__init__()

        self.add('px', IndepVarComp('x', 1.0), promotes=['*'])
        self.add('pz', IndepVarComp('z', np.array([5.0,2.0])), promotes=['*'])

        self.add('d1', SellarDis1(), promotes=['*'])
        self.add('d2', SellarDis2(), promotes=['*'])

        self.add('obj_cmp', ExecComp('obj = x**2 + z[1] + y1 + exp(-y2)',
                                     z=np.array([0.0, 0.0]), x=0.0, y1=0.0, y2=0.0),
                 promotes=['*'])

        self.add('con_cmp1', ExecComp('con1 = 3.16 - y1'), promotes=['*'])
        self.add('con_cmp2', ExecComp('con2 = y2 - 24.0'), promotes=['*'])

        self.nl_solver = NLGaussSeidel()
        self.nl_solver.options['atol'] = 1.0e-12

        self.ln_solver = ScipyGMRES()

    def solve_nonlinear(self, params=None, unknowns=None, resids=None, metadata=None):

        print("Group's solve_nonlinear was called!!")
        # Discipline Optimizer would be called here?
        super(Sellar, self).solve_nonlinear(params, unknowns, resids)


class ModifiedProblem(Problem):

    def solve_nonlinear(self, params, unknowns, resids):

        print("Problem's solve_nonlinear was called!!")
        # or here ?
        super(ModifiedProblem, self).solve_nonlinear()


top = ModifiedProblem()
top.root = Sellar()

top.driver = ScipyOptimizer()
top.driver.options['optimizer'] = 'SLSQP'

top.driver.add_desvar('z', lower=np.array([-10.0, 0.0]),
                     upper=np.array([10.0, 10.0]))
top.driver.add_desvar('x', lower=0., upper=10.0)
top.driver.add_objective('obj')
top.driver.add_constraint('con1', upper=0.0)
top.driver.add_constraint('con2', upper=0.0)


top.setup(check=False)
top.run()

The output of above code is -

Group's solve_nonlinear was called!!
Group's solve_nonlinear was called!!
Group's solve_nonlinear was called!!
Group's solve_nonlinear was called!!
Group's solve_nonlinear was called!!
Group's solve_nonlinear was called!!
Group's solve_nonlinear was called!!
Optimization terminated successfully.    (Exit mode 0)
            Current function value: [ 3.18339395]
            Iterations: 6
            Function evaluations: 6
            Gradient evaluations: 6
Optimization Complete
-----------------------------------

which means that the solve_nonlinear defined in the subclass of Problem is never called. So, should I call the discipline optimizers in the Group subclass instead?

Also, how do I pass the target variables between the two optimization problems (system and disciplines), especially returning the optimized global variables from the individual disciplines back to the system optimizer?

Thanks to all.

Divya Manglam
  • see related answer for openmdao 2.0: https://stackoverflow.com/questions/42611927/openmdao-co-collaborative-optimization-on-sellar-test-case/48393272#48393272 – Justin Gray Jan 23 '18 at 02:30

1 Answer


You are right that solve_nonlinear on Problem is never called, because Problem is not an OpenMDAO component and doesn't have a solve_nonlinear method. What you want to do in order to run a submodel problem inside another problem is to encapsulate it in a Component instance. It would look something like this:

class SubOptimization(Component):

    def __init__(self):
        super(SubOptimization, self).__init__()

        # Inputs to this subprob
        self.add_param('z', val=np.zeros(2))
        self.add_param('x', val=0.0)
        self.add_param('y2', val=1.0)

        # Unknowns for this sub prob
        self.add_output('y1', val=1.0)

        self.problem = prob = Problem()
        prob.root = Group()
        prob.root.add('px', IndepVarComp('x', 1.0), promotes=['*'])
        prob.root.add('d1', SellarDis1(), promotes=['*'])

        # TODO - add cons/objs for sub prob

        prob.driver = ScipyOptimizer()
        prob.driver.options['optimizer'] = 'SLSQP'

        prob.driver.add_desvar('x', lower=0., upper=10.0)
        prob.driver.add_objective('obj')
        prob.driver.add_constraint('con1', upper=0.0)
        prob.driver.add_constraint('con2', upper=0.0)

        prob.setup()

        # Must finite difference across optimizer
        self.fd_options['force_fd'] = True

    def solve_nonlinear(self, params, unknowns, resids):

        prob = self.problem

        # Pass values into our problem
        prob['x'] = params['x']
        prob['z'] = params['z']
        prob['y2'] = params['y2']

        # Run problem
        prob.run()

        # Pull values from problem
        unknowns['y1'] = prob['y1']

You can place this component into your main Problem (along with one for discipline 2, though discipline 2 doesn't really need a sub-optimization since it has no local design variables) and optimize the global design variables around it. A rough sketch of that top-level wiring follows.
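
For reference, the top-level wiring could look roughly like this. This is a minimal, untested sketch: the system-level objective and the CO compatibility constraints are left as placeholders, and any name not defined above (e.g. SubOptimization2) is hypothetical.

# Sketch only (untested): the sub-optimization component dropped into
# the top-level (system) problem.
import numpy as np
from openmdao.api import Problem, Group, IndepVarComp, ScipyOptimizer

top = Problem()
top.root = root = Group()

# System-level copies of the shared design variables
root.add('pz', IndepVarComp('z', np.array([5.0, 2.0])), promotes=['*'])

# Discipline 1 wrapped in its own optimizer (the class defined above)
root.add('d1_opt', SubOptimization(), promotes=['*'])
# root.add('d2_opt', SubOptimization2(), promotes=['*'])  # hypothetical

top.driver = ScipyOptimizer()
top.driver.options['optimizer'] = 'SLSQP'
top.driver.add_desvar('z', lower=np.array([-10.0, 0.0]),
                      upper=np.array([10.0, 10.0]))
# TODO - system-level objective and compatibility constraints

top.setup()
top.run()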

One caveat: this isn't something I have tried (nor have I tested the incomplete code snippet above), but it should get you on the right track. It's possible you may run into a bug, since this pattern isn't exercised much. When I get some time, I will put together a CO test like this for the OpenMDAO test suite so that we are covered.

Kenneth Moore
  • Thanks!! It's working perfectly. Can you explain why a sub-optimization isn't necessary for discipline 2? Even though it has no local design variables, the local constraint would still need to be taken care of. – Divya Manglam Feb 12 '16 at 05:14
  • You are right. I haven't done collaborative optimization in a long time and was thinking it only optimized on 'x', but it looks like the locals optimize the global 'z' design vars and the coupling variables, while the outer optimizer drives the targets (see the sketch after this comment). – Kenneth Moore Feb 12 '16 at 15:55
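
To make that last comment concrete: in classic CO, the system level owns target copies of the shared and coupling variables and enforces interdisciplinary compatibility through constraints like J1 = (z_t - z1*)^2 + (y2_t - y2*)^2 driven to zero. Below is a hedged sketch in OpenMDAO 1.x terms; the variable names (z_t, z_star, y2_t, y2_star) are illustrative, not from the answer above.

# Illustrative only: a system-level compatibility constraint for CO.
# 'z_t'/'y2_t' are the system-level targets; 'z_star'/'y2_star' would
# be the optimized values returned by the discipline 1 sub-problem.
root.add('compat1', ExecComp('J1 = (z_t[0] - z_star[0])**2'
                             ' + (z_t[1] - z_star[1])**2'
                             ' + (y2_t - y2_star)**2',
                             z_t=np.zeros(2), z_star=np.zeros(2),
                             y2_t=0.0, y2_star=0.0))

# The system optimizer then drives the targets subject to compatibility:
# top.driver.add_constraint('compat1.J1', upper=1.0e-6)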