
Both GPRegression (GPy) and GaussianProcessRegressor (scikit-learn) use similar initial values and the same optimizer (L-BFGS). Why do the results vary significantly?

#!pip -qq install pods
#!pip -qq install GPy
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel as C
from sklearn.preprocessing import StandardScaler
import pods
data = pods.datasets.olympic_marathon_men()
X = StandardScaler().fit_transform(data['X'])
y = data['Y']
# scikit-learn
model = GaussianProcessRegressor(C()*RBF(), n_restarts_optimizer=20, random_state=0)
model.fit(X, y)
print(model.kernel_)

# GPy
from GPy.models import GPRegression
from GPy.kern import RBF as GPyRBF
model = GPRegression(X, y, GPyRBF(1))
model.optimize_restarts(20, verbose=0)
print(model.kern)

Results

2.89**2 * RBF(length_scale=0.173)
  rbf.         |               value  |  constraints  |  priors
  variance     |  25.399509298957504  |      +ve      |        
  lengthscale  |   4.279767394389103  |      +ve      |        
gehbiszumeis
Zeel B Patel
2 Answers


You may try the noiseless version of the GP with GPy by explicitly fixing the noise to 0; you will then obtain the same hyperparameter-tuning results with scikit-learn and GPy:

# scikit-learn
model = GaussianProcessRegressor(C()*RBF(), n_restarts_optimizer=20, random_state=0) # don't add noise
model.fit(X, y)
print(model.kernel_)
# 2.89**2 * RBF(length_scale=0.173)

# GPy
model = GPRegression(X, y, GPyRBF(1))
model['.*Gaussian_noise'] = 0 # make noise zero
model['.*noise'].fix()
model.optimize_restarts(20, verbose=0)
print(model.kern)
#  rbf.         |               value  |  constraints  |  priors
#  variance     |   8.343280650322102  |      +ve      |        
#  lengthscale  |  0.1731764533721659  |      +ve      |        

The optimal values for the RBF variance (2.89**2 = 8.3521 vs. 8.3433) and the lengthscale (0.173 vs. 0.1732) are approximately the same in both libraries, as can be seen above.
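As a quick sanity check (the numbers below are copied from the outputs above), note that scikit-learn prints the fitted ConstantKernel as an amplitude (`2.89**2`), while GPy reports the RBF variance directly, so the scikit-learn value must be squared before comparing:

```python
import math

# scikit-learn reports "2.89**2 * RBF(...)": the constant factor is printed
# as an amplitude squared. GPy reports the variance (8.3433) directly.
sklearn_variance = 2.89 ** 2           # = 8.3521
gpy_variance = 8.343280650322102       # from the GPy output above

# The small gap comes from rounding in scikit-learn's printed repr.
print(math.isclose(sklearn_variance, gpy_variance, rel_tol=1e-2))  # True
```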

Alternatively, use an explicit white-noise kernel with scikit-learn:

# scikit-learn
from sklearn.gaussian_process.kernels import WhiteKernel as W
model = GaussianProcessRegressor(C()*RBF()+W(), n_restarts_optimizer=20, random_state=0) 
model.fit(X, y)
print(model.kernel_)
# 5.04**2 * RBF(length_scale=4.28) + WhiteKernel(noise_level=0.0485)

# GPy
model = GPRegression(X, y, GPyRBF(1))
model.optimize_restarts(20, verbose=0)
print(model.kern)
#  rbf.         |               value  |  constraints  |  priors
#  variance     |    25.3995066661936  |      +ve      |        
#  lengthscale  |  4.2797670212128756  |      +ve      |  

The optimal values for the RBF variance (5.04**2 = 25.4016 vs. 25.3995) and the lengthscale (4.28 vs. 4.2798) are again approximately the same, as can be seen above.

Sandipan Dey

Using the GPy RBF() kernel is equivalent to using scikit-learn's ConstantKernel()*RBF() + WhiteKernel(), because GPy adds likelihood noise internally. Accounting for this, I was able to get comparable results from both libraries.
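To see the role of the WhiteKernel term concretely, here is a small self-contained sketch on synthetic data with a known noise level (not the marathon dataset): the learned `noise_level` plays the same role as GPy's internal Gaussian likelihood noise.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel as C

# Synthetic noisy sine data (illustration only): true noise std = 0.3,
# so the true noise variance is 0.3**2 = 0.09.
rng = np.random.RandomState(0)
X = np.linspace(0, 10, 100)[:, None]
y = np.sin(X).ravel() + rng.normal(0, 0.3, X.shape[0])

# ConstantKernel()*RBF() + WhiteKernel() lets scikit-learn optimize an
# explicit noise term, mirroring GPy's Gaussian likelihood.
model = GaussianProcessRegressor(C() * RBF() + WhiteKernel(),
                                 n_restarts_optimizer=5, random_state=0)
model.fit(X, y)

# kernel_ is Sum(Product(C, RBF), WhiteKernel); k2 is the WhiteKernel part.
print(model.kernel_.k2.noise_level)  # learned noise variance, near 0.09
```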

Zeel B Patel