I'm running a convolutional neural network (this one) on my own image data, with input shape (channels, height, width) = (3, 30, 30). I have 76,960 training samples, 19,240 test samples, and 39 classes. The last few blocks of code are:
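For reference, here is a minimal sketch of the label layout I'm assuming (the array names are placeholders, not from my actual script): `categorical_crossentropy` expects one-hot labels, so integer class IDs 0..38 are expanded into vectors of length 39.

```python
import numpy as np

n_classes = 39
labels = np.array([0, 5, 38])                 # example integer class IDs
# One-hot encode: row i has a single 1 at column labels[i]
y = np.eye(n_classes, dtype='float32')[labels]
# y.shape == (3, 39)
```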
# Train the model using stochastic gradient descent with momentum
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
cnn.compile(loss='categorical_crossentropy',
            optimizer=sgd,
            metrics=['accuracy'])

batch_size = 128
nb_epoch = 50
cnn.fit(x_train, y_train,
        batch_size=batch_size,
        nb_epoch=nb_epoch,
        validation_data=(x_test, y_test),
        shuffle=True)
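In case it matters: as I understand the Keras SGD optimizer, `decay` shrinks the learning rate once per update (batch), as `lr_t = lr / (1 + decay * t)`. A quick back-of-the-envelope check (numbers from my setup above) suggests the decay I'm using barely moves the learning rate over the whole run, so I doubt decay is the culprit:

```python
# Assumed Keras legacy SGD schedule: lr_t = lr / (1 + decay * t),
# where t counts gradient updates (batches), not epochs.
lr, decay = 0.01, 1e-6
updates_per_epoch = 76960 // 128      # 601 batches per epoch
t = 50 * updates_per_epoch            # total updates after 50 epochs
lr_final = lr / (1 + decay * t)       # ~0.0097, barely below 0.01
```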
Training loss and accuracy change over the epochs, but validation accuracy only changes once, from 0.3387 in the 1st epoch to 0.3357 in the 2nd, and then stays at exactly 0.3357 for every remaining epoch.
I've tried varying the batch size (32, 128, and 256), varying the learning rate from 1e-6 to 0.1 (multiplying by 10 at each step), and training with and without data normalization (basic mean subtraction and division by the standard deviation). None of these have helped.
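For completeness, this is roughly the normalization I tried (a sketch with placeholder array names, statistics computed per channel on the training set and applied to both sets):

```python
import numpy as np

def normalize(x_train, x_test):
    # x has shape (n_samples, channels, height, width); compute mean/std
    # per channel over all samples and pixels of the TRAINING set only.
    mean = x_train.mean(axis=(0, 2, 3), keepdims=True)
    std = x_train.std(axis=(0, 2, 3), keepdims=True) + 1e-7  # avoid /0
    return (x_train - mean) / std, (x_test - mean) / std

# Small dummy arrays standing in for my real data:
x_train = np.random.rand(8, 3, 30, 30).astype('float32') * 255
x_test = np.random.rand(4, 3, 30, 30).astype('float32') * 255
x_train_n, x_test_n = normalize(x_train, x_test)
```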