I get a strange result when evaluating my model on the training data set. I want to develop a CNN with the following structure:
Input --> Conv1D --> MaxPool1D --> Flatten --> Dense --> Dense
This is my model:
from keras.models import Model
from keras.optimizers import Adam

model = Model(inputs=[inputLayerU, inputLayerM], outputs=outputLayer)
model.compile(loss='categorical_crossentropy', optimizer=Adam(lr=0.01), metrics=['accuracy'])
model.fit([inputU, inputM], outputY, epochs=100, steps_per_epoch=500)
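The layer definitions for inputLayerU, inputLayerM and outputLayer are omitted above; a minimal sketch of the described Input --> Conv1D --> MaxPool1D --> Flatten --> Dense --> Dense stack applied to the two inputs could look like this (all shapes, filter counts, and the Concatenate merge are placeholders, not my actual values):

from keras.layers import Input, Conv1D, MaxPooling1D, Flatten, Dense, Concatenate

# Placeholder shapes and filter counts -- the real ones differ
inputLayerU = Input(shape=(100, 1))
inputLayerM = Input(shape=(100, 1))

def conv_branch(x):
    # Conv1D --> MaxPool1D --> Flatten, applied to each input branch
    x = Conv1D(filters=32, kernel_size=3, activation='relu')(x)
    x = MaxPooling1D(pool_size=2)(x)
    return Flatten()(x)

# The two branches are merged before the Dense layers (merge strategy assumed)
merged = Concatenate()([conv_branch(inputLayerU), conv_branch(inputLayerM)])
hidden = Dense(64, activation='relu')(merged)
outputLayer = Dense(3, activation='softmax')(hidden)  # 3 classes as a placeholder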
and this is the result of training the model:
Epoch 95/100
500/500 [==============================] - 329s 659ms/step - loss: 0.5058 - acc: 0.8845
Epoch 96/100
500/500 [==============================] - 329s 659ms/step - loss: 0.4137 - acc: 0.9259
Epoch 97/100
500/500 [==============================] - 329s 659ms/step - loss: 0.3221 - acc: 0.9534
Epoch 98/100
500/500 [==============================] - 329s 659ms/step - loss: 0.2938 - acc: 0.9596
Epoch 99/100
500/500 [==============================] - 330s 659ms/step - loss: 0.4707 - acc: 0.9352
Epoch 100/100
500/500 [==============================] - 329s 659ms/step - loss: 0.4324 - acc: 0.9543
I save the model and weights, then load them and evaluate the model on the same training data set.
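The saving step follows the usual Keras to_json / save_weights pattern (the architecture filename below is a placeholder; only the weights filename matches what is loaded later):

# Save architecture as JSON and weights as HDF5
model_json = model.to_json()
with open("model.json", "w") as json_file:  # architecture filename is a placeholder
    json_file.write(model_json)
model.save_weights("GCN-conv1d-acc.h5")

Loading and evaluating the saved model: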
from keras.models import model_from_json

# loaded_model_json holds the architecture string read back from the saved JSON file
loaded_model = model_from_json(loaded_model_json)
loaded_model.load_weights("GCN-conv1d-acc.h5")
loaded_model.compile(loss='categorical_crossentropy', optimizer=Adam(lr=0.01), metrics=['accuracy'])
score = loaded_model.evaluate(inputTrain, outTrain, steps=500, verbose=0)
However, I get this result (loss and accuracy):
[7.320816993713379, 0.3042338788509369]
I expected to get results close to the training metrics, but these are far off. I checked these posts:
Strange behaviour of the loss function in keras model, with pretrained convolutional base
http://blog.datumbox.com/the-batch-normalization-layer-of-keras-is-broken/
They say Keras has some problems with batch normalization and dropout layers; however, I use neither of them.
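As a sanity check, evaluating the in-memory model right after fit (before any saving or loading) should show whether the gap comes from serialization or from the evaluation itself; something along these lines:

# Sketch: evaluate the freshly trained, in-memory model on the same training data
score_in_memory = model.evaluate([inputU, inputM], outputY, verbose=0)
print(score_in_memory)  # [loss, accuracy]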