Suppose I fit the following neural network for a binary classification problem:

from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(21, input_dim=19, init='uniform', activation='relu'))
model.add(Dense(80, init='uniform', activation='relu'))
model.add(Dense(80, init='uniform', activation='relu'))
model.add(Dense(1, init='uniform', activation='sigmoid'))
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Fit the model
model.fit(x2, training_target, nb_epoch=10, batch_size=32, verbose=0,
          validation_split=0.1, shuffle=True, callbacks=[hist])

How would I boost this neural network using AdaBoost? Does Keras have any built-in support for this?

ishido

3 Answers

This can be done as follows. First, define the model inside a function, so the scikit-learn wrapper can build a fresh copy whenever it needs one:

from keras.models import Sequential
from keras.layers import Dense, Dropout

def simple_model():
    # create model
    model = Sequential()
    model.add(Dense(25, input_dim=x_train.shape[1], kernel_initializer='normal', activation='relu'))
    model.add(Dropout(0.2))
    model.add(Dense(10, kernel_initializer='normal', activation='relu'))
    model.add(Dense(1, kernel_initializer='normal'))
    # Compile model
    model.compile(loss='mean_squared_error', optimizer='adam')
    return model

Then put it inside the sklearn wrapper:

from keras.wrappers.scikit_learn import KerasRegressor

ann_estimator = KerasRegressor(build_fn=simple_model, epochs=100, batch_size=10, verbose=0)

Finally, boost it:

from sklearn.ensemble import AdaBoostRegressor

boosted_ann = AdaBoostRegressor(base_estimator=ann_estimator)
boosted_ann.fit(rescaledX, y_train.values.ravel())  # scale your training data first
boosted_ann.predict(rescaledX_Test)
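Since the question is about binary classification, the same pattern applies with KerasClassifier and AdaBoostClassifier. As a runnable sketch of the boosting step itself, here it is with a stock scikit-learn estimator standing in for the wrapped network (a KerasClassifier built from a model function would be passed as the base estimator in exactly the same way):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

# Toy data shaped like the question's problem: 19 features, binary target.
X, y = make_classification(n_samples=200, n_features=19, random_state=0)

# Any scikit-learn-compatible estimator can be boosted; a wrapped Keras
# model would be passed as the first argument in the same way.
boosted = AdaBoostClassifier(DecisionTreeClassifier(max_depth=2),
                             n_estimators=25, random_state=0)
boosted.fit(X, y)
preds = boosted.predict(X)
```

The estimator is passed positionally here because the keyword was renamed from `base_estimator` to `estimator` in newer scikit-learn releases.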
owise

Keras itself does not implement AdaBoost. However, Keras models are compatible with scikit-learn, so you can probably use AdaBoostClassifier from there: link. Use your model as the base_estimator after you compile it, and fit the AdaBoostClassifier instance instead of the model.

This way, however, you will not be able to use the arguments you pass to fit, such as number of epochs or batch_size, so the defaults will be used. If the defaults are not good enough, you might need to build your own class that implements the scikit-learn interface on top of your model and passes proper arguments to fit.
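A minimal sketch of such a class (the name `BoostableNet` and the `build_fn` convention are illustrative, not part of either library): inheriting from BaseEstimator provides the get_params/set_params that AdaBoost needs in order to clone the estimator each round, and fit forwards epochs/batch_size to the underlying model:

```python
import numpy as np
from sklearn.base import BaseEstimator, ClassifierMixin

class BoostableNet(BaseEstimator, ClassifierMixin):
    """Hypothetical wrapper: build_fn is any zero-argument function
    returning a compiled model with Keras-style fit/predict."""

    def __init__(self, build_fn=None, epochs=10, batch_size=32):
        # BaseEstimator derives get_params/set_params from these
        # __init__ arguments, which AdaBoost uses when cloning.
        self.build_fn = build_fn
        self.epochs = epochs
        self.batch_size = batch_size

    def fit(self, X, y, sample_weight=None):
        self.model_ = self.build_fn()
        # Forward the fit arguments that a bare model would otherwise
        # only receive as defaults; AdaBoost supplies sample_weight.
        self.model_.fit(X, y, epochs=self.epochs,
                        batch_size=self.batch_size,
                        sample_weight=sample_weight)
        self.classes_ = np.unique(y)
        return self

    def predict(self, X):
        return (np.asarray(self.model_.predict(X)).ravel() > 0.5).astype(int)
```

With this in place, `AdaBoostClassifier(BoostableNet(build_fn=simple_model, epochs=10))` would train each boosting round with the epochs and batch size you chose rather than the defaults.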

Ishamael
  • Hi, thank you for your answer. When I insert: `bdt = AdaBoostClassifier(base_estimator=model)` `bdt.fit(x2, training_target)` where model is my compiled keras network, it gives me the error: *TypeError: Cannot clone object '' (type ): it does not seem to be a scikit-learn estimator as it does not implement a 'get_params' methods.* – ishido Aug 23 '16 at 08:12
  • Apparently, by themselves keras classifiers are not scikit-learn compatible. See this article for details on how to make them work together: https://keras.io/scikit-learn-api/ – Ishamael Aug 23 '16 at 16:44

Apparently, neural networks are not compatible with sklearn's AdaBoost out of the box; see https://github.com/scikit-learn/scikit-learn/issues/1752
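One concrete reason (a sketch of the check, not the whole story in the linked issue): AdaBoost reweights the training samples each round, so scikit-learn verifies up front that the base estimator's fit() accepts a sample_weight argument, and an estimator whose fit lacks that parameter is rejected:

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.utils.validation import has_fit_parameter

# AdaBoost performs this check before fitting: the base estimator's
# fit() must expose a sample_weight parameter for reweighting.
print(has_fit_parameter(DecisionTreeClassifier(), "sample_weight"))  # True
print(has_fit_parameter(KNeighborsClassifier(), "sample_weight"))    # False
```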

  • Welcome to Stack Overflow! This is a borderline [link-only answer](//meta.stackexchange.com/q/8231). You should expand your answer to include as much information here as possible, and use the link only for reference. – Filnor Nov 24 '17 at 15:22