I'm doing some experimenting, and I'm well aware that constraining coefficients is rarely needed, but here goes.
With the data below, I have used quadprog to fit a linear model. Note that X1 is simply the intercept.
library(quadprog)

X1 <- 1   # intercept (recycled to a full column of 1s by cbind() below)
X2 <- c(4.374, 2.3708, -7.3033, 12.0803, -0.4098, 53.0631, 13.1304, 7.3617, 16.6252, 27.3394)
X3 <- c(2.6423, 2.6284, 36.9398, 15.9278, 18.3124, 54.5039, 3.764, 19.0552, 25.4906, 13.0112)
X4 <- c(4.381, 3.144, 9.506, 15.329, 21.008, 38.091, 22.399, 13.223, 17.419, 19.87)
X <- as.matrix(cbind(X1,X2,X3,X4))
Y <- as.matrix(c(37.7,27.48,24.08,25.97,16.65,73.77,45.10,53.35,61.95,71.15))
# Unconstrained fit: Dmat = X'X, dvec = Y'X, and no constraints (zero-column Amat, empty bvec)
M1 <- solve.QP(t(X) %*% X, t(Y) %*% X, matrix(0, 4, 0), c())$solution
The output reads [1] 37.3468790 1.2872473 -0.0177749 -0.5988443, where the values are the estimated coefficients on X1, X2, X3 and X4.
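Just to check my understanding (this comparison is my own addition): with no constraints, the quadratic program should reduce to ordinary least squares, since Dmat = X'X and dvec = Y'X, so lm() should return the same four coefficients as M1.

coef(lm(as.vector(Y) ~ X2 + X3 + X4))   # should match M1 (up to rounding)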
The challenge is to subject certain coefficients to constraints. I know that I should be altering the Amat and bvec arguments (according to Linear regression with constraints on the coefficients). However, I'm not sure how to set them up so that the following constraints are met.
Constraints on the coefficients (subject to)…
X2 <= 0.899
0 <= X3 <= 0.500
0 <= X4 <= 0.334
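Here is my current attempt (a sketch, based on my reading of the solve.QP() documentation that each column of Amat defines one inequality t(a) %*% b >= b0, so upper bounds are written as negated lower bounds; M2 is just my name for the constrained solution):

# Coefficients are ordered (X1, X2, X3, X4); the intercept is left unconstrained.
# Each column of Amat is one inequality t(a) %*% b >= b0.
Amat <- cbind(
  c(0, -1,  0,  0),   # -b2 >= -0.899  i.e.  b2 <= 0.899
  c(0,  0,  1,  0),   #  b3 >=  0
  c(0,  0, -1,  0),   # -b3 >= -0.500  i.e.  b3 <= 0.500
  c(0,  0,  0,  1),   #  b4 >=  0
  c(0,  0,  0, -1)    # -b4 >= -0.334  i.e.  b4 <= 0.334
)
bvec <- c(-0.899, 0, -0.500, 0, -0.334)
M2 <- solve.QP(t(X) %*% X, t(Y) %*% X, Amat, bvec)$solution
M2

Is this the right way to translate the upper bounds, or does solve.QP() expect a different sign convention here?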