
I am trying to implement a cost function in a pydrake MathematicalProgram, but I run into problems whenever I divide by a decision variable or use abs(). A shortened version of my attempted implementation follows; I have tried to include only what seems relevant.

    T = 50
    na = 3
    nq = 5

    prog = MathematicalProgram()
    h = prog.NewContinuousVariables(rows=T, cols=1, name='h')
    qd = prog.NewContinuousVariables(rows=T+1, cols=nq, name='qd')
    d = prog.NewContinuousVariables(1, name='d')
    u = prog.NewContinuousVariables(rows=T, cols=na, name='u')

    def energyCost(vars):
       assert vars.size == 2*na + 1 + 1
       split_at = [na, 2*na, 2*na + 1]
       qd, u, h, d = np.split(vars, split_at)
       return np.abs([qd.dot(u)*h/d])

    for t in range(T):
       vars = np.concatenate((qd[t, 2:], u[t,:], h[t], d))
       prog.AddCost(energyCost, vars=vars)

    initial_guess = np.empty(prog.num_vars())
    solver = SnoptSolver()
    result = solver.Solve(prog, initial_guess)

The error I am getting is:

RuntimeError                              Traceback (most recent call last)
<ipython-input-55-111da18cdce0> in <module>()
     22 initial_guess = np.empty(prog.num_vars())
     23 solver = SnoptSolver()
---> 24 result = solver.Solve(prog, initial_guess)
     25 print(f'Solution found? {result.is_success()}.')

RuntimeError: PyFunctionCost: Output must be of .ndim = 0 (scalar) and .size = 1. Got .ndim = 2 and .size = 1 instead.

To the best of my knowledge the problem is the dimensions of the output, however I am unsure of how to proceed. I spent quite some time trying to fix this, but with little success. I also tried changing np.abs to pydrake.math.abs, but then I got the following error:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-56-c0c2f008616b> in <module>()
     22 initial_guess = np.empty(prog.num_vars())
     23 solver = SnoptSolver()
---> 24 result = solver.Solve(prog, initial_guess)
     25 print(f'Solution found? {result.is_success()}.')

<ipython-input-56-c0c2f008616b> in energyCost(vars)
     14     split_at = [na, 2*na, 2*na + 1]
     15     qd, u, h, d = np.split(vars, split_at)
---> 16     return pydrake.math.abs([qd.dot(u)*h/d])
     17 
     18 for t in range(T):

TypeError: abs(): incompatible function arguments. The following argument types are supported:
    1. (arg0: float) -> float
    2. (arg0: pydrake.autodiffutils.AutoDiffXd) -> pydrake.autodiffutils.AutoDiffXd
    3. (arg0: pydrake.symbolic.Expression) -> pydrake.symbolic.Expression

Invoked with: [array([<AutoDiffXd 1.691961398933386e-257 nderiv=8>], dtype=object)]

Any help would be greatly appreciated, thanks!

2 Answers

BTW, as Tobia has mentioned, dividing by a decision variable in the cost function can be problematic. There are two approaches to avoid the problem:

  1. Impose a bound on your decision variable that excludes 0. For example, say you want to optimize
    min f(x) / y
    
    If you can impose the bound y > 1, then SNOPT will never try y = 0, and you avoid the division-by-zero problem.
  2. Another trick is to introduce a new variable representing the result of the division, and then minimize that variable.

    For example, say you want to optimize

    min f(x) / y
    

    You could introduce a slack variable z = f(x) / y, and reformulate the problem as

    min z
    s.t. f(x) - y * z = 0
    
Hongkai Dai

Some observations:

  • The kind of cost function you are trying to use does not require a Python function to be enforced. You can just write (even though it would raise other errors as is) prog.AddCost(np.abs([qd[t, 2:].dot(u[t,:])*h[t]/d])).

  • The argument of prog.AddCost must be a Drake scalar expression. So be sure that your numpy matrix multiplications return a scalar. In the case above they return a (1,1) numpy array.

  • To minimize the absolute value, you need something a little more sophisticated than that. In the current form you are passing a nondifferentiable objective function: solvers do not quite like that. Say you want to minimize abs(x). A standard trick in optimization is to add an extra (slack) variable, say s, and add the constraints s >= x, s >= -x, and then minimize s itself. All these constraints and this objective are differentiable and linear.

  • Regarding the division of the objective by an optimization variable: whenever you can, you should avoid that. For example (I'm 90% sure), solvers like SNOPT or IPOPT set the initial guess to zero if you do not provide one. This implies that, if you do not provide a custom initial guess, the solver will hit a division by zero at the very first evaluation and crash.

Tobia Marcucci
    To build on Tobia's answer: `np.abs([1.])` will be a size-1 vector (`.shape == (1,), .ndim == 1`). If instead you do `np.abs(1.)` (no square braces), it should be a scalar (`.shape == (), .ndim == 0`). – Eric Cousineau May 14 '20 at 14:43