
I am trying to use scipy.optimize.minimize to find the optimal weights of 7 scores such that the weighted scores equal the expected returns score, with the constraint that all weights must sum to 1. (Currently using random scores for the model.)

Example: Weight1 * Score1 + ... + Weight7 * Score7 = Expected Return Score

import numpy as np
from scipy.optimize import minimize, Bounds
import random

scores = [random.random(),
         random.random(),
         random.random(),
         random.random(),
         random.random(),
         random.random(),
         random.random()]

normalisedReturns = random.random()

def objective(x):
    
    for i in range(len(scores)):
        scores_sum = 0
        scores_sum += (x[i] * scores[i])
        
        return scores_sum - normalisedReturns

x0 = [1,0,0,0,0,0,0]

opt_constraints = ({'type': 'eq','fun': lambda x: np.sum(x) - 1})
opt_bounds = Bounds(0,1)

sol = minimize(objective,x0,method = 'SLSQP',bounds = opt_bounds,constraints = opt_constraints)

Output

print (sol)
     fun: -0.9289009913526015
     jac: array([0.77359443, 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        ])
 message: 'Optimization terminated successfully'
    nfev: 24
     nit: 3
    njev: 3
  status: 0
 success: True
       x: array([1.27675648e-15, 1.66666667e-01, 1.66666667e-01, 1.66666667e-01,
       1.66666667e-01, 1.66666667e-01, 1.66666667e-01])

print (sum(sol.x))
    1.0000000000000002

There are 3 issues:

  1. The output x does not follow my Bounds(0,1).
  2. sum(sol.x) = 1 shows that it follows my opt_constraints, which doesn't make sense as the values of x are > 1.
  3. The objective function should ideally return 0, but it returns the negative value of normalisedReturns.

I apologize if I'm making beginner mistakes, as I'm just starting out! Can someone point out where I'm wrong, or suggest any additional resources I can refer to?

Thanks in advance!

1 Answer


Welcome to SO! First, please note that you can easily rewrite your objective function as a one-liner by using numpy.ndarrays and vectorized NumPy operations:

import numpy as np

scores = np.random.random(7)
normalisedReturns = np.random.random()

def objective(x):
    return np.sum(x * scores) - normalisedReturns

Regarding your issues:

  1. Your solution sol.x is feasible and within your bounds. I guess you are new to scientific notation, so be aware that 1.27675648e-15 is just a short form for 1.27675648 * 10**(-15), i.e. practically zero.
  2. The reason for your negative objective function value is simple: you are trying to solve an equation by minimizing the residual function of your equation. This is clearly the wrong approach from a mathematical point of view. Instead, you need to minimize some norm of the residual, e.g. lambda x: objective(x)**2 (see the sketch after this list). See here and here for a detailed explanation.
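
As a minimal sketch of that second point (assuming the vectorized objective from above; the equal-weight starting point x0 is just an illustrative choice), the squared residual can be minimized under the same bounds and sum-to-one constraint:

import numpy as np
from scipy.optimize import minimize, Bounds

scores = np.random.random(7)
normalisedReturns = np.random.random()

def objective(x):
    # residual of the equation: sum(weight_i * score_i) - normalisedReturns
    return np.sum(x * scores) - normalisedReturns

x0 = np.full(7, 1/7)  # feasible start: equal weights that already sum to 1
opt_constraints = {'type': 'eq', 'fun': lambda x: np.sum(x) - 1}
opt_bounds = Bounds(0, 1)

# minimize the squared residual instead of the residual itself
sol = minimize(lambda x: objective(x)**2, x0, method='SLSQP',
               bounds=opt_bounds, constraints=opt_constraints)

print(sol.x, objective(sol.x))

Note that since the weights are non-negative and sum to 1, the weighted score is a convex combination of the scores, so the residual can only be driven to zero if normalisedReturns lies between the smallest and largest score; otherwise the solver returns the closest attainable value.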
joni