I am attempting to print the predicted probabilities of each class outcome from my trained model when I present it with new raw data. This is a multi-class classification problem with 8 outputs and 21 inputs.
At the moment I am only able to print a single outcome when I present new data, for example:
"Example 0 prediction: 1 (15.0%)"
Instead, I would expect to see something similar to the output below, where the probabilities of each class (0, 1, 2, 3, 4, 6, Wide, Out) are shown:
Example 0 prediction 0: (12.5%), prediction 1: (12.5%), prediction 2: (12.5%), prediction 3: (12.5%), prediction 4: (12.5%), prediction 6: (12.5%), prediction Wide: (12.5%), prediction Out: (12.5%)
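For illustration, the kind of loop I have in mind would be roughly along these lines, assuming probs holds the eight per-class probabilities for a single example (the names here are placeholders, not working code from my script):

# rough sketch of the desired printout; probs is a hypothetical 8-element probability vector
parts = ["prediction {}: ({:.1f}%)".format(name, 100 * p) for name, p in zip(class_names, probs)]
print("Example {} ".format(i) + ", ".join(parts))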
Please note I have tried searching for similar issues, including here, here and here, as well as consulting the TensorFlow documentation. However, these mainly discuss alterations to the model itself (e.g. a softmax activation on the final layer, categorical cross-entropy as the loss function, etc.) so that probabilities are generated in the first place.
I have included the model architecture as well as the prediction code for full visibility.
Model:
import tensorflow as tf
from tensorflow.keras import callbacks
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, BatchNormalization, Activation, Dropout

# stop training when the validation loss stops improving
earlystopping = callbacks.EarlyStopping(monitor="val_loss",
                                        mode="min", patience=125,
                                        restore_best_weights=True)

# define the Keras model
model = Sequential()
model.add(Dense(50, input_dim=21))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.5, input_shape=(50,)))
model.add(Dense(50))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.5, input_shape=(50,)))
model.add(Dense(8, activation='softmax'))

# compile and fit the Keras model
model.compile(loss='categorical_crossentropy', optimizer='Adam', metrics=['accuracy'])
model.fit(X, dummy_y, validation_split=0.25, epochs=1000, batch_size=100, verbose=1, callbacks=[earlystopping])

# evaluate on the training data
_, accuracy3 = model.evaluate(X, dummy_y, verbose=0)
print('Accuracy: %.2f' % (accuracy3 * 100))
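As a quick sanity check on the architecture (a rough sketch, assuming X is the same NumPy array passed to model.fit above), the final Dense(8, activation='softmax') layer should already return eight values per example that sum to 1:

# sketch: confirm the output layer returns one value per class
sample_out = model(X[:1], training=False)
print(sample_out.shape)                    # expected: (1, 8)
print(float(tf.reduce_sum(sample_out)))    # expected: ~1.0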
Making predictions:
class_names = ['0', '1', '2', '3', '4', '6', 'Wide', 'Out']

predict_dataset = tf.convert_to_tensor([
    [1, 5, 1, 0.459, 0.322, 0.041, 0.002, 0.103, 0.032, 0.041, 14, 0.404, 0.284, 0.052, 0.008, 0.128, 0.044, 0.037, 0.043, 54, 0],
    [1, 18, 5, 0.512, 0.286, 0, 0, 0.083, 0.024, 0.095, 13, 0.24, 0.44, 0.08, 0, 0.08, 0.08, 0, 0.08, 173, 3],
    [2, 11, 13, 0.5, 0.417, 0, 0, 0.083, 0, 0.083, 82, 0.35, 0.36, 0.042, 0.003, 0.135, 0.039, 0.051, 0.02, 51, 7]
])

predictions = model(predict_dataset, training=False)

for i, logits in enumerate(predictions):
    class_idx = tf.argmax(logits).numpy()
    p = tf.nn.softmax(logits)[class_idx]
    name = class_names[class_idx]
    print("Example {} prediction: {} ({:4.1f}%)".format(i, name, 100 * p))
Output:
Example 0 prediction: 1 (15.0%)
Example 1 prediction: 1 (16.0%)
Example 2 prediction: 0 (16.9%)
I have tried making changes to the for loop that works with the logits, but I am still unable to get it to print every outcome with its associated probability.
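In case it helps narrow things down, the raw predictions tensor can also be inspected directly; a minimal sketch using the same predictions variable as above:

# sketch: dump the raw output of the model call for all three examples
print(predictions.shape)     # one row of 8 values per example
print(predictions.numpy())   # the raw per-class values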
Any guidance is much appreciated.