
I am working on text classification, and after the feature extraction step I ended up with pretty big matrices; for that reason I tried to use incremental learning, as follows:

import xgboost as xgb
from sklearn.model_selection import ShuffleSplit, train_test_split
from sklearn.metrics import accuracy_score as acc


def incremental_learning2(X, y):
    # split data into training and testing sets
    # then split training set in half

    X_train, X_test, y_train, y_test = train_test_split(X,
                                                        y, test_size=0.1,
                                                        random_state=0)

    X_train_1, X_train_2, y_train_1, y_train_2 = train_test_split(X_train, 
                                                     y_train, 
                                                     test_size=0.5,
                                                     random_state=0)

    xg_train_1 = xgb.DMatrix(X_train_1, label=y_train_1)
    xg_train_2 = xgb.DMatrix(X_train_2, label=y_train_2)
    xg_test = xgb.DMatrix(X_test, label=y_test)

    #params = {'objective': 'reg:linear', 'verbose': False}
    params = {}


    model_1 = xgb.train(params, xg_train_1, 30)
    model_1.save_model('model_1.model')

    # ================= train two versions of the model =====================#
    model_2_v1 = xgb.train(params, xg_train_2, 30)
    model_2_v2 = xgb.train(params, xg_train_2, 30, xgb_model='model_1.model')

    #Predictions
    y_pred = model_2_v2.predict(X_test)

    kfold = StratifiedKFold(n_splits=10, random_state=1).split(X_train, y_train)
    scores = []

    for k, (train, test) in enumerate(kfold):
        model_2_v2.fit(X_train[train], y_train[train])
        score = model_2_v2.score(X_train[test], y_train[test])
        scores.append(score)

        print('Fold: %s, Class dist.: %s, Acc: %.3f' % (k+1, np.bincount(y_train[train]), score))
    print('\nCV accuracy: %.3f +/- %.3f' % (np.mean(scores), np.std(scores)))

With regard to the above code: I tried to do cross validation and to predict some instances, but it is not working. How can I fix the above code in order to get cross-validated metrics and predictions after fitting and updating the GBM model on a very large dataset?

  • Can you elaborate on "it is not working"? Also, you'll probably get more eyes on the question if you add the `python` tag. – Tchotchke Mar 31 '17 at 20:36
  • Yeah... when I say "it is not working" I mean that the above code doesn't work. @Tchotchke – tumbleweed Apr 01 '17 at 00:42
  • There are many ways in which something may not work. Are you getting an error, and if so, where? Are you not getting the results you expect, etc.? The more detail you give, the easier it is for people to help you. – Tchotchke Apr 03 '17 at 12:36

1 Answer


This is the solution I came up with. First we import the necessary modules and define a simple function to calculate the root mean square error:

import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split as tts
from sklearn.model_selection import StratifiedKFold

def rmse(a, b):
    return np.sqrt(((a - b) ** 2).mean())

The root mean square error can be calculated in different ways (look into this thread for details), but for clarity I have chosen an explicit formulation.
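
For instance, the same quantity can also be obtained through scikit-learn's mean_squared_error (a minimal sketch; the arrays a and b below are made-up illustration data, not part of the original post):

import numpy as np
from sklearn.metrics import mean_squared_error

a = np.array([1.0, 2.0, 3.0])   # toy ground-truth values
b = np.array([1.1, 1.9, 3.2])   # toy predictions

# Square root of the mean squared error: same value as the rmse helper above
print(np.sqrt(mean_squared_error(a, b)))
print(rmse(a, b))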

And here's a quick and dirty version of your function. I tried to keep the structure of your code the same, but for the sake of readability I have performed some refactoring.

def incremental_learning2(X, y, n_splits=10, params={}):
    # Initialize score arrays
    sc_1, sc_2_v1, sc_2_v2 = (np.zeros(n_splits) for i in range(3))
    # Create cross-validator
    kfold = StratifiedKFold(n_splits=n_splits, random_state=0).split(X, y)
    # Iterate through folds
    for k, (train, test) in enumerate(kfold):
        # Split data
        X_test, y_test = X[test], y[test]    
        splits = tts(X[train], y[train], test_size=0.5, random_state=0)
        X_train_1, X_train_2, y_train_1, y_train_2 = splits
        # Create data matrices
        xg_train_1 = xgb.DMatrix(X_train_1, label=y_train_1)
        xg_train_2 = xgb.DMatrix(X_train_2, label=y_train_2)
        xg_test = xgb.DMatrix(X_test, label=y_test)    
        # Fit models
        model_1 = xgb.train(params, xg_train_1, 30)        
        model_1.save_model('model_1.model')
        model_2_v1 = xgb.train(params, xg_train_2, 30)
        model_2_v2 = xgb.train(params, xg_train_2, 30, xgb_model='model_1.model')
        # Make predictions and compute scores
        preds = (m.predict(xg_test) for m in [model_1, model_2_v1, model_2_v2])
        sc_1[k], sc_2_v1[k], sc_2_v2[k] = (rmse(p, y_test) for p in preds)
    # Return scores
    return sc_1, sc_2_v1, sc_2_v2
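
As a side note, one way to convince yourself that the xgb_model argument really continues boosting from the saved model (rather than training from scratch) is to count the trees in the resulting boosters. The snippet below is a minimal, self-contained sketch on made-up random data, assuming the default objective where each boosting round adds a single tree:

import numpy as np
import xgboost as xgb

rng = np.random.RandomState(0)
d_1 = xgb.DMatrix(rng.rand(100, 5), label=rng.rand(100))
d_2 = xgb.DMatrix(rng.rand(100, 5), label=rng.rand(100))

m_1 = xgb.train({}, d_1, 30)                          # 30 boosting rounds
m_1.save_model('m_1.model')
m_2 = xgb.train({}, d_2, 30, xgb_model='m_1.model')   # 30 more rounds on top of m_1

# m_1 should hold 30 trees; m_2 should hold 60 (the 30 inherited ones plus 30 new ones)
print(len(m_1.get_dump()), len(m_2.get_dump()))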

I have also improved the output format to display results in the form of a table. This functionality is implemented in a separate function:

def display_results(a, b, c):
    def hline(): 
        print('-'*50)
    print('Cross-validation root mean square error\n')    
    print('Fold\tmodel_v1\tmodel_2_v1\tmodel_2_v2')
    hline()
    for k, (ak, bk, ck) in enumerate(zip(a, b, c)):
        print('%s\t%.3f\t\t%.3f\t\t%.3f' % (k+1, ak, bk, ck))        
    hline()
    print('Avg\t%.3f\t\t%.3f\t\t%.3f' % tuple(np.mean(s) for s in [a, b, c]))
    print('Std\t%.3f\t\t%.3f\t\t%.3f' % tuple(np.std(s) for s in [a, b, c]))

Demo

As you didn't share your dataset, I had to generate mock data in order to test my code.

from sklearn.datasets import make_blobs
X, y = make_blobs(n_samples=500000, centers=50, random_state=0)

scores_1, scores_2_v1, scores_2_v2 = incremental_learning2(X, y)
display_results(scores_1, scores_2_v1, scores_2_v2)

The code above runs without errors and the output looks like this:

Cross-validation root mean square error

Fold    model_v1        model_2_v1      model_2_v2
--------------------------------------------------
1       9.127           9.136           9.116
2       9.168           9.155           9.128
3       9.117           9.095           9.080
4       9.107           9.113           9.089
5       9.122           9.126           9.109
6       9.096           9.099           9.084
7       9.148           9.163           9.145
8       9.089           9.090           9.069
9       9.128           9.122           9.108
10      9.185           9.162           9.160
--------------------------------------------------
Avg     9.129           9.126           9.109
Std     0.029           0.026           0.028

Remarks

  • For comparison purposes I have also cross-validated model_1.
  • In the sample run, model_1 and model_2_v1 show roughly the same error, whereas model_2_v2 performs slightly better, as one could reasonably expect.
  • I played around with the size of the dataset (n_samples) and the number of classes (centers) and, interestingly enough, when the values of those parameters are reduced, model_2_v2 is the least accurate of the three.
  • Using a different configuration, i.e. properly setting the keyword argument params, should hopefully make things work as expected; a possible starting point is sketched below.
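
For instance, a possible starting point (hypothetical values, not tuned for this dataset) could look like this:

params = {'objective': 'reg:linear',  # explicit regression objective, as in the commented-out params of the question
          'eta': 0.1,                 # lower learning rate, so continued training refines rather than overwrites
          'max_depth': 6}

scores_1, scores_2_v1, scores_2_v2 = incremental_learning2(X, y, n_splits=10, params=params)
display_results(scores_1, scores_2_v1, scores_2_v2)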