
I am working on a cost-minimizing function to help with allocation/weights in a portfolio of stocks. I have the following code for the objective function. It works when I try it with a smaller sample of 15 variables (stocks); however, when I try it with 55 stocks it fails. The num_assets variable below is the number of stocks in the portfolio.

import numpy as np
import scipy.optimize as sco

def get_metrics(weights):
    # returns_annualR, cov_matrixR, dailyDD (a pandas DataFrame of per-asset
    # daily drawdowns) and f() are defined elsewhere
    weights = np.array(weights)
    returnsR = np.dot(returns_annualR, weights)
    volatilityR = np.sqrt(np.dot(weights.T, np.dot(cov_matrixR, weights)))
    sharpeR = returnsR / volatilityR
    drawdownR = np.multiply(weights, dailyDD).sum(axis=1, skipna=True).min()
    drawdownR = f(drawdownR)
    calmarR = returnsR / drawdownR
    results = (sharpeR * 0.3) + (calmarR * 0.7)
    return np.array([returnsR, volatilityR, sharpeR, drawdownR, calmarR,
                     results])


def objective(weights):
    # index 5 of the get_metrics array is the combined score; negate it to maximize
    return get_metrics(weights)[5] * -1

def check_sum(weights):
    # returns 0 when the weights sum to 1 (intended as an equality
    # constraint, but never actually passed to the optimizer below)
    return np.sum(weights) - 1

bound = (0.0, 1.0)
bnds = tuple(bound for x in range(num_assets))
bx = list(bnds)

""" Custom step-function """
class RandomDisplacementBounds(object):
    """random displacement with bounds:  see: https://stackoverflow.com/a/21967888/2320035
        Modified! (dropped acceptance-rejection sampling for a more specialized approach)
    """
    def __init__(self, xmin, xmax, stepsize=0.5):
        self.xmin = xmin
        self.xmax = xmax
        self.stepsize = stepsize

    def __call__(self, x):
        """take a random step but ensure the new position is within the bounds """
        min_step = np.maximum(self.xmin - x, -self.stepsize)
        max_step = np.minimum(self.xmax - x, self.stepsize)

        random_step = np.random.uniform(low=min_step, high=max_step, size=x.shape)
        xnew = x + random_step

        return xnew

bounded_step = RandomDisplacementBounds(np.array([b[0] for b in bx]), np.array([b[1] for b in bx]))

minimizer_kwargs = {"method":"L-BFGS-B", "bounds": bnds}

globmin = sco.basinhopping(objective, 
                           x0=num_assets*[1./num_assets],
                           minimizer_kwargs=minimizer_kwargs,
                           take_step=bounded_step,
                           disp=True)

The output should be an array of weights that add up to 1 (100%). However, this is not happening.
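One way to guarantee this regardless of whether the optimizer honors constraints is to reparameterize: normalize the candidate vector inside the objective so that every point the optimizer evaluates is first mapped onto the unit simplex. A minimal sketch of that idea, assuming numpy is imported as np and get_metrics is defined as above:

def objective_normalized(x):
    # map any candidate point onto the unit simplex before scoring it
    x = np.asarray(x)
    total = x.sum()
    if total <= 0:
        # degenerate all-zero point; a large finite cost is safer than inf for L-BFGS-B
        return 1e10
    return get_metrics(x / total)[5] * -1

The reported optimum then has to be renormalized the same way, e.g. weights_opt = globmin.x / globmin.x.sum().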

  • Failed how? Crashed? Error message? No convergence? – Joe Sep 09 '19 at 13:08
  • Hi Joe - it does give an output. However, the output should add up to 1, which is not happening. Perhaps the question is: how do I impose a constraint as in the function "check_sum"? Are there other global minimization methods that allow for both bounds and constraints? I have tried SLSQP, but that fails with a "positive directional derivative for linesearch" error. – user9853666 Sep 11 '19 at 01:52
  • Basinhopping does not support constraints or bounds, see the docs: https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html#scipy.optimize.minimize – Joe Sep 11 '19 at 05:15
  • You can take a look at NOMAD or MADS. There is another good toolbox, [NLopt](https://nlopt.readthedocs.io/en/latest/) which has Python bindings, and there is a package here https://www.lfd.uci.edu/~gohlke/pythonlibs/#nlopt – Joe Sep 11 '19 at 05:19
  • https://nlopt.readthedocs.io/en/latest/NLopt_Algorithms/ lists the algorithms, take a look at DIRECT, which is a fine global optimizer. Basinhopping needs a rather fine adjustment of the parameters to work well. – Joe Sep 11 '19 at 05:23
  • And if you cannot find a global algorithm that supports bounds or constraints, you can introduce them into your objective function using a penalty function. Basically, you increase the value artificially whenever your criteria are not met, so the optimizer stays away from those regions. You can introduce the penalty as a "brick wall", but also as a smooth gradient; the algorithm might perform better with the latter option (see the sketch after these comments). – Joe Sep 11 '19 at 05:27
  • https://github.com/scipy/scipy/issues/7799#issuecomment-325301854 – Joe Sep 11 '19 at 05:28
  • Same problem as https://github.com/scipy/scipy/issues/7799? – endolith Oct 09 '22 at 13:33
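A minimal sketch of the penalty approach suggested in the comments, assuming the question's code (get_metrics, num_assets, bounded_step, minimizer_kwargs) is in scope; the penalty weight of 100.0 is an arbitrary illustration and usually needs tuning:

def objective_penalized(weights):
    # smooth quadratic penalty that grows as sum(weights) drifts away from 1
    penalty = 100.0 * (np.sum(weights) - 1.0) ** 2
    return get_metrics(weights)[5] * -1 + penalty

globmin = sco.basinhopping(objective_penalized,
                           x0=num_assets*[1./num_assets],
                           minimizer_kwargs=minimizer_kwargs,
                           take_step=bounded_step,
                           disp=True)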

1 Answer


This function is a failure on my end as well. It failed to choose lower values -- i.e., regardless of the sign of the optimization function's output, it persisted until the parameter I was optimizing was as bad as it could possibly be. I suspect that, since the step-taking code violates function encapsulation and relies on "function attributes" to adjust the stepsize, the developer may not have respected encapsulated function scope elsewhere, and surprising behavior is happening as a result.

Regardless, in terms of theory, anything else is just a (dubious) "performance gain" based on an estimated numerical partial second derivative (a numerical Hessian, or "estimated curvature" for us mere mortals). In discrete, chaotic (continuous phase-space), or mixed (continuous and discrete) search spaces with volatile curvature or planar regions, that estimate reduces to a randomly biased annealer, due to numerical underflow and loss of precision.

Anyway, import the following:

scipy.optimize.dual_annealing

https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.dual_annealing.html
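A minimal usage sketch: dual_annealing enforces the (0, 1) bounds itself but, like basinhopping, has no equality constraints, so it is combined here with the penalized objective sketched after the comments above (the penalty weight remains illustrative):

from scipy.optimize import dual_annealing  # available since SciPy 1.2

# objective_penalized and bnds as defined earlier on this page
result = dual_annealing(objective_penalized, bounds=list(bnds))
weights_opt = result.x / result.x.sum()  # renormalize the reported optimum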

– Chris