
I am performing portfolio optimization, and I would like to extend the discussion here with the following:

I have a vector of weights w_bench that is used as a benchmark. I would like to optimize a portfolio weight vector w_pf that satisfies

sum(pmin(w_bench, w_pf)) > 0.7

pmin here is the pairwise minimum. This forces the optimized portfolio weights w_pf to be similar to the benchmark weights w_bench, and the right-hand side (0.7 in this case) controls how closely they need to match: as that value gets larger, the two portfolios are required to be more similar.
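
To make the constraint concrete, here is a quick R check of what it evaluates to; the w_pf values are just the rounded weights from the Excel table below, so this is only an illustration:

w_bench <- c(0.40, 0.50, 0.10)
w_pf    <- c(0.32, 0.31, 0.38)
pmin(w_bench, w_pf)       # pairwise minimum: 0.32 0.31 0.10
sum(pmin(w_bench, w_pf))  # 0.73, so the constraint > 0.7 holds for these weights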

Initially I thought I could easily do this with the fPortfolio package (still trying), but so far no dice. I also think that solving this with quadprog would be a lot more intuitive, but I do not know how to incorporate the pmin constraint into that setup.

Excel implementation:

Covariance matrix:

0.003015254   -0.000235924  0.000242836
-0.000235924  0.002910845   0.000411308
0.000242836   0.000411308   0.002027183

Weights:

    w_pf    w_bench min
V1   0.32    0.40    0.32 
V2   0.31    0.50    0.31 
V3   0.38    0.10    0.10 
Ss   1.00    1.00    0.72 

Minimize the portfolio variance, =MMULT(TRANSPOSE(H8:H10), MMULT(H3:J5, H8:H10)), subject to the constraints Ss(w_pf) = 1 and Ss(min) > 0.7, where Ss denotes the column sum.
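
For reference, the MMULT formula above is just the quadratic form w' Σ w; assuming the covariance matrix is stored in cov.mat and the portfolio weights in w.pf, the equivalent R expression would be:

# Portfolio variance, the quantity being minimized (same as the Excel MMULT formula)
t(w.pf) %*% cov.mat %*% w.pf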

1 Answer


As you note, the tricky constraint is sum(pmin(w_bench, w_pf)) > 0.7 (it turns out to be very tough to enforce a strict inequality, so I will use >= instead of >; you could of course re-solve with >= 0.7 + epsilon for some small epsilon). To handle it, we create a new variable y_i for each element i of the portfolio and add the constraints y_i <= wpf_i (i.e. wpf_i - y_i >= 0) and y_i <= wbench_i (i.e. -y_i >= -wbench_i), where wpf_i is the proportion of asset i in the selected portfolio (a decision variable) and wbench_i is its proportion in the benchmark portfolio (input data). These constraints force y_i to be no larger than the smaller of the two weights. Finally, we add the constraint \sum_i y_i >= 0.7, requiring these minimum values to sum to at least 0.7.
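
Putting these pieces together, the quadratic program we want to solve is (a sketch, writing Sigma for the covariance matrix):

minimize     w_pf' Sigma w_pf
subject to   sum_i wpf_i   =  1
             sum_i y_i    >=  0.7
             wpf_i - y_i  >=  0            for each i
             -y_i         >= -wbench_i     for each i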

All that remains is to implement this in the quadprog package. Setting up with your problem data:

cov.mat <- rbind(c( 0.003015254, -0.000235924, 0.000242836),
                 c(-0.000235924,  0.002910845, 0.000411308),
                 c( 0.000242836,  0.000411308, 0.002027183))
w.bench <- c(.4, .5, .1)
n <- length(w.bench)

Since we are adding new variables, we will pad the covariance matrix (which will be placed in the optimization objective) with 0's in the rows and columns corresponding to those new variables. We can do this with:

(cov.mat.exp <- cbind(rbind(cov.mat, matrix(0, n, n)), matrix(0, 2*n, n)))
#              [,1]         [,2]        [,3] [,4] [,5] [,6]
# [1,]  0.003015254 -0.000235924 0.000242836    0    0    0
# [2,] -0.000235924  0.002910845 0.000411308    0    0    0
# [3,]  0.000242836  0.000411308 0.002027183    0    0    0
# [4,]  0.000000000  0.000000000 0.000000000    0    0    0
# [5,]  0.000000000  0.000000000 0.000000000    0    0    0
# [6,]  0.000000000  0.000000000 0.000000000    0    0    0

Now we want to create a constraint matrix for all our constraints:

(consts <- rbind(rep(c(1, 0), c(n, n)),
                 rep(c(0, 1), c(n, n)),
                 cbind(matrix(0, n, n), -diag(n)),
                 cbind(diag(n), -diag(n))))
#      [,1] [,2] [,3] [,4] [,5] [,6]
# [1,]    1    1    1    0    0    0
# [2,]    0    0    0    1    1    1
# [3,]    0    0    0   -1    0    0
# [4,]    0    0    0    0   -1    0
# [5,]    0    0    0    0    0   -1
# [6,]    1    0    0   -1    0    0
# [7,]    0    1    0    0   -1    0
# [8,]    0    0    1    0    0   -1
(rhs <- c(1, 0.7, -w.bench, rep(0, n)))
# [1]  1.0  0.7 -0.4 -0.5 -0.1  0.0  0.0  0.0

The first row enforces the portfolio weights summing to 1, the second enforces \sum_i y_i >= 0.7, the next three are the -y_i >= -wbench_i constraints, and the last three are the wpf_i - y_i >= 0 constraints.

All that remains is to fit these into the format expected by the solve.QP function:

library(quadprog)
mod <- solve.QP(cov.mat.exp, rep(0, 2*n), t(consts), rhs, 1)
# Error in solve.QP(cov.mat.exp, rep(0, 2 * n), t(consts), rhs, 1) : 
#   matrix D in quadratic function is not positive definite!

Oof! Because we padded the covariance matrix with extra 0's for our new variables, it is positive semidefinite but not positive definite. Let's add a tiny positive constant to the main diagonal and try again:

library(quadprog)
mod <- solve.QP(cov.mat.exp + 1e-8*diag(2*n), rep(0, 2*n), t(consts), rhs, 1)
(w.pf <- head(mod$solution, n))
# [1] 0.3153442 0.3055084 0.3791474
(y <- tail(mod$solution, n))
# [1] 0.3 0.3 0.1
(opt.variance <- as.vector(t(w.pf) %*% cov.mat %*% w.pf))
# [1] 0.0009708365
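
As a quick sanity check, we can also evaluate the original constraint directly on the recovered weights:

sum(pmin(w.bench, w.pf))
# roughly 0.72, above the required 0.7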

We can see that this wasn't a particularly interesting case, because the constraint we worked so hard to add wasn't binding. Let's increase the right-hand side from 0.7 to 0.9 to see the constraint in action:

(rhs <- c(1, 0.9, -w.bench, rep(0, n)))
# [1]  1.0  0.9 -0.4 -0.5 -0.1  0.0  0.0  0.0
mod <- solve.QP(cov.mat.exp + 1e-8*diag(2*n), rep(0, 2*n), t(consts), rhs, 1)
(w.pf <- head(mod$solution, n))
# [1] 0.3987388 0.4012612 0.2000000
(y <- tail(mod$solution, n))
# [1] 0.3987388 0.4012612 0.1000000
(opt.variance <- as.vector(t(w.pf) %*% cov.mat %*% w.pf))
# [1] 0.00105842

In this case the constraint was binding: for y_1 and y_2 the minimum is taken from our new portfolio weights, while for y_3 it is taken from the benchmark weight. The optimal portfolio variance increased by about 9.0% relative to the previous solution as a result of the tighter constraint.
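
The 9.0% figure is simply the ratio of the two optimal variances printed above:

# Relative increase in variance from tightening the right-hand side to 0.9
0.00105842 / 0.0009708365 - 1
# roughly 0.090, i.e. about a 9.0% increase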

  • Well done, thank you. I'm away from my machine, but I'll thoroughly learn it asap. – AK88 Jun 30 '17 at 15:13
  • `quadprog` documentation doesn't give detailed information about the function. I would appreciate if you could recommend resources to learn more about padding the covar matrix, setting up the constraints vector and `bvec`, and specification of `meq`. Thanks again. – AK88 Jul 01 '17 at 07:59
  • @AK88 as you can read from `?quadprog`, `solve.QP(D, d, A, b, 0)` solves the QP `min -d'x + 1/2 * x'Dx s.t. A'x >= b`, and you can change the last argument to `k` to make the first `k` constraints equality instead of inequality. All the other stuff (padding covariance matrix and setting up the constraint vector) rely on basic R matrix manipulations. There are many R matrix tutorials on the internet that might help with that. – josliber Jul 01 '17 at 13:49
  • This looks good. Have you implemented this outside R as well? In Python, I am using cvxpy and trying to constrain on active share >= threshold and am struggling. – ThatQuantDude Jul 01 '17 at 13:58
  • @Markus.london I am sure you can also do this in python, but the details of how to do that would probably be best asked in a separate question. – josliber Jul 01 '17 at 14:10
  • @josliber I did a few days ago (https://quant.stackexchange.com/questions/34915/active-share-portfolio-constraint). I do not really understand how these helper variables make the constraint convex again. – ThatQuantDude Jul 01 '17 at 14:12