During training, Keras reports a categorical_accuracy of 100%. But when I run that same training data through the model and save the output to a file, quite a few samples end up classified into the wrong class. I've checked the labels in the input file, and they are correct.
What does categorical_accuracy actually measure? Is there a better metric for debugging an LSTM? Here is my model and the training output:
from keras.models import Sequential
from keras.layers import LSTM, Dense
from keras.metrics import categorical_accuracy

model = Sequential()
# batch_input_shape pins the batch size to 7763; each sample has
# (TIME_STEP + 1) timesteps of 10 features.
model.add(LSTM(64, batch_input_shape=(7763, TimeStep.TIME_STEP + 1, 10), return_sequences=True, activation='relu'))
model.add(LSTM(128, activation='relu', return_sequences=True))
model.add(LSTM(64, activation='relu', return_sequences=True))
model.add(LSTM(32, activation='relu'))  # last LSTM returns only the final state
model.add(Dense(3, activation='softmax'))  # 3-class output
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=[categorical_accuracy])
history = model.fit(TimeStep.fodder, TimeStep.target, epochs=300, batch_size=7763)
Epoch 400/400
7763/31052 [======>.......................] - ETA: 1s - loss: 2.7971e-04 - categorical_accuracy: 1.0000
15526/31052 [==============>...............] - ETA: 1s - loss: 3.0596e-04 - categorical_accuracy: 1.0000
23289/31052 [=====================>........] - ETA: 0s - loss: 3.0003e-04 - categorical_accuracy: 1.0000
31052/31052 [==============================] - 2s 78us/step - loss: 2.9869e-04 - categorical_accuracy: 1.0000
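For reference, this is roughly the check I run on the training data after fitting (a minimal sketch; the actual file writing is omitted, and the confusion_matrix call assumes scikit-learn is installed):

import numpy as np
from sklearn.metrics import confusion_matrix

# Re-run the trained model on the training inputs and compare the
# predicted class (argmax over the softmax output) with the true label.
preds = model.predict(TimeStep.fodder, batch_size=7763)
pred_classes = np.argmax(preds, axis=-1)
true_classes = np.argmax(TimeStep.target, axis=-1)  # targets are one-hot

mismatches = np.where(pred_classes != true_classes)[0]
print('mismatched samples:', len(mismatches), 'of', len(true_classes))

# Per-class breakdown of where the errors land.
print(confusion_matrix(true_classes, pred_classes))

This is the count that disagrees with the 100% categorical_accuracy reported above.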