
I've used GPyOpt to optimise a many-dimensional model

    opt = BayesianOptimization(f=my_eval_func, domain=domain, constraints=constraints)
    opt.run_optimization(max_iter=20)
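For context, a minimal self-contained version of this setup (the toy objective and two-dimensional domain below are just placeholders for my actual model, and the constraints are omitted) looks like:

    import numpy as np
    from GPyOpt.methods import BayesianOptimization

    # Toy objective: GPyOpt passes a 2D array of candidate points and
    # expects one objective value per row.
    def my_eval_func(x):
        return np.sum(np.square(x - 0.5), axis=1, keepdims=True)

    domain = [{'name': 'x1', 'type': 'continuous', 'domain': (0, 1)},
              {'name': 'x2', 'type': 'continuous', 'domain': (0, 1)}]

    opt = BayesianOptimization(f=my_eval_func, domain=domain)
    opt.run_optimization(max_iter=20)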

After doing so I can retrieve the optimal co-ordinates with opt.x_opt, and the model cost with opt.fx_opt. However, I'm also interested in the variance of fx at this optimal location. How do I achieve this?

beldaz

1 Answer


I solved this for myself by applying the internal GP model to the optimised x_opt variable, i.e. m.model.predict(m.x_opt). However, the results appear to be in a normalised and offset coordinate space, requiring a linear transformation back to the expected values, e.g.:

import numpy as np

def get_opt_est(m):
    # Rescale the model's (normalised) predictions back onto the scale of
    # the observed objective values.
    obs = []   # observed objective values
    pred = []  # model predictions at the evaluated points
    for x, y in zip(m.X, m.Y):
        obs.append(y[0])
        pred.append(m.model.predict(x)[0][0])
    scale = (np.max(obs) - np.min(obs)) / (np.max(pred) - np.min(pred))
    offset = np.min(obs) - np.min(pred) * scale
    p = m.model.predict(m.x_opt)
    return (p[0][0] * scale + offset, p[1][0] * scale)

print("Predicted loss and variance is",get_opt_est(opt))
beldaz
    Note that `opt.model.predict(x)` returns prediction and [standard deviation](https://gpyopt.readthedocs.io/en/latest/GPyOpt.models.html#GPyOpt.models.base.BOModel.predict) at x, not variance. So I believe you just need to square the returned value. I don't think there is any other transformation going on there – Andrei Apr 09 '19 at 13:11
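Following that comment, a minimal sketch of querying the model directly at the optimum and squaring the returned standard deviation (assuming, as the comment suggests, that no further rescaling is needed) would be:

    mean, sd = opt.model.predict(opt.x_opt)  # GPyOpt returns mean and standard deviation
    variance = sd[0][0] ** 2                 # square the standard deviation to get the variance
    print("Predicted loss:", mean[0][0], "variance:", variance)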