
I'd like some confirmation on whether my implementation of supervised learning via 10-fold cross-validation on a DNN in Python using tflearn is correct. The code runs and gets some pretty good results, with training accuracy reaching 95.6% and validation accuracy reaching 98.4%, but I am uncertain whether this is the right way to do it in tflearn (see the code below).

The biggest mystery is how to use model.fit on a tflearn DNN to train the same model multiple times, each time with hand-picked training and validation data.

My reasoning in the following piece of code is that I have already divided my data into 11 parts of input (X) and output (Y). One part is held out as a test set (never touched during training or validation), and the remaining 10 parts are used iteratively: in each run, 1 part serves as the validation set and the other 9 parts as the training set. I reason that in tflearn this can be done by calling the model.fit method 10 times on the same model, each time with the changed training and validation sets. (A toy sketch of the same fold bookkeeping with scikit-learn is shown after the code.)

# imports needed for this snippet
import tensorflow as tf
import tflearn
from copy import deepcopy

# create the network
network = create_original_Dexpression_network(dropout_keep_prob)


# create a custom tensorflow session to manage the used resources
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.Session(config = config)

# Final definition of model checkpoints and other configurations
model = tflearn.DNN(network, checkpoint_path=tf_checkpoints,
                    max_checkpoints=1, tensorboard_verbose=2, tensorboard_dir="./tflearn_logs/")

# Dividing the data in to test, training and validation sets
X_parts = data[0] #contains 11 np.arrays with input data
Y_parts = data[1] #contains 11 np.arrays with output data

test_part_index = 5 #the index of the part used as test data (input and output)

train_val_parts_indices = list(range(len(X_parts)))  # create a list of the indices of all parts
train_val_parts_indices.remove(test_part_index)      # remove the index of the part used for testing

print("train_val_parts_indices",train_val_parts_indices)


# fit k times (k = 10) each time with a different validation part and different training parts of the data
for i in range(len(train_val_parts_indices)):
    print( "run " , i)
    current_val_part_index = train_val_parts_indices[i]            #select a part to serve as validation 
    current_train_part_indices = deepcopy(train_val_parts_indices) #copy all the possible parts from train_val_parts_indices
    current_train_part_indices.remove(current_val_part_index)      #remove the part used for validation in this run

    print("current_val_part_index ",current_val_part_index)
    print("current_train_part_indices",current_train_part_indices)

    # create the trainings input and output from the selected parts 
    X_train = create_array_out_list_of_parts(X_parts, current_train_part_indices).reshape((-1, 224, 224, 1))
    Y_train = create_array_out_list_of_parts(Y_parts, current_train_part_indices).reshape((-1, 7))

    # create the validation parts from the selected part
    X_val = X_parts[current_val_part_index]
    Y_val = Y_parts[current_val_part_index]

    #  check the shapes
    print("X_train.shape ", X_train.shape)
    print("Y_train.shape ", Y_train.shape)
    print("X_val.shape   ", X_val.shape)
    print("Y_val.shape     ", Y_val.shape)  

    # use this data configuration to fit the model, training for 1 epoch
    model.fit(X_train, Y_train, n_epoch=1,
              validation_set=(X_val, Y_val),
              shuffle=True, show_metric=True, batch_size=50,
              snapshot_step=2000, snapshot_epoch=True, run_id=RUNID)

# Save the model
model.save(tf_checkpoints + '/' + RUNID + '.model')
print("finished training and saving")
The purpose of cross-validation is not to train a model, but to evaluate its performance. To avoid copy-pasting, see the text and the useful link to an in-depth discussion [here](https://stackoverflow.com/questions/46456381/cross-validation-in-lightgbm/50316411#50316411). I do not know if `tflearn.DNN.fit()` resets the network weights at the start of the call. If not, you re-use the same data across folds (k-1 out of the total k folds each time, not the *whole* dataset), which would explain why your validation performance comes out better than your training performance. – Mischa Lisovyi May 20 '18 at 09:38
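
If `tflearn.DNN.fit()` does indeed keep the weights between calls, one way to get a genuine cross-validation estimate would be to rebuild the model inside every fold and only collect the per-fold scores. This is just a sketch along those lines, reusing the helper functions and index lists from the question above; N_EPOCHS is a hypothetical constant and the averaging at the end is not part of the original code:

import tensorflow as tf
import tflearn

fold_accuracies = []
for i in train_val_parts_indices:
    train_indices = [j for j in train_val_parts_indices if j != i]

    # build this fold's training and validation data with the question's helper
    X_train = create_array_out_list_of_parts(X_parts, train_indices).reshape((-1, 224, 224, 1))
    Y_train = create_array_out_list_of_parts(Y_parts, train_indices).reshape((-1, 7))
    X_val, Y_val = X_parts[i], Y_parts[i]

    # start from fresh, untrained weights in every fold
    tf.reset_default_graph()
    network = create_original_Dexpression_network(dropout_keep_prob)  # helper from the question
    model = tflearn.DNN(network, tensorboard_verbose=0)

    model.fit(X_train, Y_train, n_epoch=N_EPOCHS,          # N_EPOCHS: hypothetical constant
              validation_set=(X_val, Y_val),
              shuffle=True, show_metric=True, batch_size=50)

    # record this fold's validation accuracy
    fold_accuracies.append(model.evaluate(X_val, Y_val)[0])

print("mean 10-fold CV accuracy:", sum(fold_accuracies) / len(fold_accuracies))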

0 Answers