As you note, the tricky constraint is that sum(pmin(w_bench, w_pf)) > 0.7 (actually, it turns out to be very tough to enforce a strict inequality in a solver, so I will be doing >= instead of >; you could of course re-solve with >= 0.7+epsilon for some small epsilon). To approach this, we will create a new variable y_i for each element i in our portfolio, and we will add the constraints y_i <= wpf_i (aka wpf_i - y_i >= 0) and y_i <= wbench_i (aka -y_i >= -wbench_i), where wpf_i is the proportion of i in our selected portfolio (a decision variable) and wbench_i is the proportion of i in the benchmark portfolio (input data). This constrains y_i to be no larger than the minimum of these two values. Finally, we will add the constraint \sum_i y_i >= 0.7, requiring that these minimum values sum to at least 0.7.
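To make the logic concrete before implementing it, here is a tiny illustration with made-up weights (the values below are hypothetical, not from your problem): because each y_i is capped by both weights, the largest feasible y_i is pmin(wpf_i, wbench_i), so forcing the y_i to sum to at least 0.7 is the same as forcing the overlap itself to be at least 0.7.
w.pf.demo    <- c(0.35, 0.45, 0.20)        # hypothetical portfolio weights
w.bench.demo <- c(0.40, 0.50, 0.10)        # hypothetical benchmark weights
(y.demo <- pmin(w.pf.demo, w.bench.demo))  # largest y_i allowed by y_i <= wpf_i and y_i <= wbench_i
# [1] 0.35 0.45 0.10
sum(y.demo)
# [1] 0.9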
All that remains is to implement this in the quadprog package. Setting up with your problem data:
cov.mat <- rbind(c( 0.003015254, -0.000235924, 0.000242836),
                 c(-0.000235924,  0.002910845, 0.000411308),
                 c( 0.000242836,  0.000411308, 0.002027183))
w.bench <- c(.4, .5, .1)
n <- length(w.bench)
Since we are adding new variables, we will pad the covariance matrix (which will be placed in the optimization objective) with 0's in the rows and columns corresponding to those new variables. We can do this with:
(cov.mat.exp <- cbind(rbind(cov.mat, matrix(0, n, n)), matrix(0, 2*n, n)))
# [,1] [,2] [,3] [,4] [,5] [,6]
# [1,] 0.003015254 -0.000235924 0.000242836 0 0 0
# [2,] -0.000235924 0.002910845 0.000411308 0 0 0
# [3,] 0.000242836 0.000411308 0.002027183 0 0 0
# [4,] 0.000000000 0.000000000 0.000000000 0 0 0
# [5,] 0.000000000 0.000000000 0.000000000 0 0 0
# [6,] 0.000000000 0.000000000 0.000000000 0 0 0
Now we want to create a constraint matrix for all our constraints:
(consts <- rbind(rep(c(1, 0), c(n, n)),             # row 1: sum of portfolio weights
                 rep(c(0, 1), c(n, n)),             # row 2: sum of the y_i
                 cbind(matrix(0, n, n), -diag(n)),  # rows 3-5: -y_i >= -wbench_i
                 cbind(diag(n), -diag(n))))         # rows 6-8: wpf_i - y_i >= 0
# [,1] [,2] [,3] [,4] [,5] [,6]
# [1,] 1 1 1 0 0 0
# [2,] 0 0 0 1 1 1
# [3,] 0 0 0 -1 0 0
# [4,] 0 0 0 0 -1 0
# [5,] 0 0 0 0 0 -1
# [6,] 1 0 0 -1 0 0
# [7,] 0 1 0 0 -1 0
# [8,] 0 0 1 0 0 -1
(rhs <- c(1, 0.7, -w.bench, rep(0, n)))
# [1] 1.0 0.7 -0.4 -0.5 -0.1 0.0 0.0 0.0
The first row will enforce the portfolio weights summing to 1, the next will enforce \sum_i y_i >= 0.7, the next three are the -y_i >= -wbench_i constraints, and the last three are the wpf_i - y_i >= 0 constraints.
Now we just need to fit these into the format expected by the solve.QP function (the final 1 is the meq argument, marking only the first constraint, the weights summing to 1, as an equality; the rest are treated as >=):
library(quadprog)
mod <- solve.QP(cov.mat.exp, rep(0, 2*n), t(consts), rhs, 1)
# Error in solve.QP(cov.mat.exp, rep(0, 2 * n), t(consts), rhs, 1) :
# matrix D in quadratic function is not positive definite!
Oof! Because we padded the covariance matrix with extra 0's for our new variables, it is positive semidefinite but not positive definite. Let's add a tiny positive constant to the main diagonal and try again:
library(quadprog)
mod <- solve.QP(cov.mat.exp + 1e-8*diag(2*n), rep(0, 2*n), t(consts), rhs, 1)
(w.pf <- head(mod$solution, n))
# [1] 0.3153442 0.3055084 0.3791474
(y <- tail(mod$solution, n))
# [1] 0.3 0.3 0.1
(opt.variance <- as.vector(t(w.pf) %*% cov.mat %*% w.pf))
# [1] 0.0009708365
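As a quick sanity check (not required, just reassurance), we can verify that the recovered weights satisfy the constraints we encoded:
sum(w.pf)                 # should be 1 (full investment)
sum(pmin(w.pf, w.bench))  # roughly 0.72, comfortably above 0.7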
We can see that this wasn't a particularly interesting case, because the constraint we worked so hard to add wasn't binding. Let's increase the right-hand-side from 0.7 to 0.9 to see the constraint in action:
(rhs <- c(1, 0.9, -w.bench, rep(0, n)))
# [1] 1.0 0.9 -0.4 -0.5 -0.1 0.0 0.0 0.0
mod <- solve.QP(cov.mat.exp + 1e-8*diag(2*n), rep(0, 2*n), t(consts), rhs, 1)
(w.pf <- head(mod$solution, n))
# [1] 0.3987388 0.4012612 0.2000000
(y <- tail(mod$solution, n))
# [1] 0.3987388 0.4012612 0.1000000
(opt.variance <- as.vector(t(w.pf) %*% cov.mat %*% w.pf))
# [1] 0.00105842
In this case, the constraint was binding; the minimum value taken by y_1 and y_2 is from our new portfolio, and the minimum value taken by y_3 is from the benchmark portfolio. We see that the variance of the optimal portfolio had a relative increase of 9.0% due to the constraint.
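If you want to double-check the binding case and the 9.0% figure yourself, something like this does it (the 0.0009708365 is just the variance from the 0.7 run above, pasted in since opt.variance has been overwritten):
sum(pmin(w.pf, w.bench))         # roughly 0.9: the overlap constraint is now binding
opt.variance / 0.0009708365 - 1  # roughly 0.090, i.e. the 9.0% relative increase over the 0.7 case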