In the scikit-learn documentation example http://scikit-learn.org/stable/auto_examples/model_selection/grid_search_digits.html, a train_test_split is done before the grid search. The grid search is then fit on the training set and evaluated on the test set produced by the train_test_split.
I would like to know whether it is possible, and advisable, to use k-fold cross-validation in place of the train_test_split, so that the grid search is fit and tested on several different folds rather than on a single split (and thereby obtain the best score and parameters across the folds).
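
For reference, here is a minimal sketch of what I have in mind (nested cross-validation): GridSearchCV runs the hyperparameter search on inner folds, while an outer KFold plays the role the held-out test set plays in the linked example. I am assuming the digits dataset and a parameter grid like the one in that example:

    from sklearn.datasets import load_digits
    from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
    from sklearn.svm import SVC

    X, y = load_digits(return_X_y=True)

    # Parameter grid roughly as in the linked example (an assumption on my part)
    param_grid = {"kernel": ["rbf"], "C": [1, 10, 100], "gamma": [1e-3, 1e-4]}

    # Inner loop: grid search selects hyperparameters on each training portion
    inner_cv = KFold(n_splits=5, shuffle=True, random_state=0)
    clf = GridSearchCV(SVC(), param_grid, cv=inner_cv)

    # Outer loop: each outer test fold stands in for the single test set,
    # so the scores are not biased by the hyperparameter search itself
    outer_cv = KFold(n_splits=5, shuffle=True, random_state=0)
    scores = cross_val_score(clf, X, y, cv=outer_cv)
    print(scores.mean(), scores.std())

One thing I am unsure about: the best parameters found by the inner grid search can differ from one outer fold to the next, so is it legitimate to report a single "best score and parameters" from this setup, or should the outer scores only be used to estimate generalization performance?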