I am trying to find the best hyperparameters for an XGBClassifier so that I can identify the most predictive attributes. I am using RandomizedSearchCV to search the parameter space and validating with KFold.
Since I run this process a total of five times (numFolds = 5), I want the best result from each fold saved to a dataframe called collector (defined below). In other words, on each iteration the best parameters and the corresponding score should be appended to collector.
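To make the intent concrete, each fold should contribute one row to collector: the best hyperparameters RandomizedSearchCV found on that fold plus its score. Roughly this (just to illustrate the shape I am after, not my actual code; clf here is the fitted search object from the snippet below):

# Illustration only: the row I'd like each fold to add to collector
fold_row = dict(clf.best_params_)    # best hyperparameters for this fold
fold_row['score'] = clf.best_score_  # that fold's best roc_auc

Here is my attempt: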
import numpy as np
import pandas as pd
import xgboost as xgb
from scipy import stats
from scipy.stats import randint
from sklearn import cross_validation
from sklearn.model_selection import RandomizedSearchCV
from sklearn.metrics import precision_score, recall_score, accuracy_score, f1_score, roc_auc_score
clf_xgb = xgb.XGBClassifier(objective = 'binary:logistic')
param_dist = {'n_estimators': stats.randint(150, 1000),
              'learning_rate': stats.uniform(0.01, 0.6),
              'subsample': stats.uniform(0.3, 0.9),
              'max_depth': [3, 4, 5, 6, 7, 8, 9],
              'colsample_bytree': stats.uniform(0.5, 0.9),
              'min_child_weight': [1, 2, 3, 4]
              }
clf = RandomizedSearchCV(clf_xgb, param_distributions = param_dist, n_iter = 25, scoring = 'roc_auc', error_score = 0, verbose = 3, n_jobs = -1)
numFolds = 5
folds = cross_validation.KFold(n = len(X), shuffle = True, n_folds = numFolds)
collector = pd.DataFrame()
estimators = []
results = np.zeros(len(X))
score = 0.0
for train_index, test_index in folds:
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = y[train_index], y[test_index]
    clf.fit(X_train, y_train)

    estimators.append(clf.best_estimator_)
    estcoll = pd.DataFrame(estimators)
    estcoll['score'] = score
    pd.concat([collector, estcoll])
    print "\n", len(collector), "\n"

score /= numFolds
For some reason nothing is being saved to the collector dataframe. What am I doing wrong?
Also, I have about 350 attributes to cycle through, with roughly 3.5K rows in the training set and 2K in the test set. Would running this through a Bayesian hyperparameter optimization process potentially improve my results, or would it only save processing time?
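In case it helps, this is the kind of Bayesian setup I had in mind (using scikit-optimize's BayesSearchCV; I have not actually run this, so treat it as a sketch that assumes skopt is installed and that it can be dropped in roughly where RandomizedSearchCV sits now):

# Sketch of the Bayesian route I'm considering (scikit-optimize); untested.
from skopt import BayesSearchCV
from skopt.space import Real, Integer

bayes_clf = BayesSearchCV(
    xgb.XGBClassifier(objective='binary:logistic'),
    search_spaces={'n_estimators': Integer(150, 1000),
                   'learning_rate': Real(0.01, 0.6, prior='log-uniform'),
                   'subsample': Real(0.3, 0.9),
                   'max_depth': Integer(3, 9),
                   'colsample_bytree': Real(0.5, 0.9),
                   'min_child_weight': Integer(1, 4)},
    n_iter=25,
    scoring='roc_auc',
    cv=numFolds,
    n_jobs=-1)
# bayes_clf.fit(X_train, y_train)  # would replace clf.fit(...) inside the loop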