My question is simple and straightforward: what does the batch size specify while training and predicting with a neural network? And how can I visualize it to get a clear picture of how the data is fed to the network?
Suppose I have an autoencoder:
import tflearn

encoder = tflearn.input_data(shape=[None, 41])
encoder = tflearn.fully_connected(encoder, 41, activation='relu')
My input is a CSV file with 41 features per row. As far as I understand, when my batch size is 1 the network takes one row from the CSV file and feeds its 41 features to the 41 neurons of the first layer.
But when I increase the batch size to 100, how are the 41 features of each of the 100 samples in a batch fed to the network?
model.fit(test_set, test_labels_set, n_epoch=1, validation_set=(valid_set, valid_labels_set),
          run_id="auto_encoder", batch_size=100, show_metric=True, snapshot_epoch=False)
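To make sure I am picturing the data flow correctly, here is a rough sketch of how I imagine the batching works (plain NumPy with made-up numbers, not my actual pipeline):

import numpy as np

# Made-up stand-in for my CSV data: 1000 rows, 41 features each
data = np.random.rand(1000, 41)
batch_size = 100

for start in range(0, len(data), batch_size):
    batch = data[start:start + batch_size]
    # Each batch has shape (100, 41): 100 rows, each with 41 features.
    # With batch_size=1 the slice would have shape (1, 41) instead.
    print(batch.shape)

In other words, I assume each batch simply fills the None dimension of shape=[None, 41], so the first layer sees 100 rows of 41 features at once, but I am not sure whether that is what actually happens.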
Will there be any normalization of the batches, or any other operations performed on them?
The number of epochs is the same in both cases.