I have
y_true = 16
and
y_pred = array([1.1868494e-08, 1.8747659e-09, 1.2777099e-11, 3.6140797e-08,
6.5852622e-11, 2.2888577e-10, 1.4515833e-09, 2.8392664e-09,
4.7054605e-10, 9.5605066e-11, 9.3647139e-13, 2.6149302e-10,
2.5338919e-14, 4.8815413e-10, 3.9381631e-14, 2.1434269e-06,
9.9999785e-01, 3.0857247e-08, 1.3536775e-09, 4.6811921e-10,
3.0638234e-10, 2.0818169e-09, 2.9950772e-10, 1.0457132e-10,
3.2959850e-11, 3.4232595e-10, 5.1689473e-12], dtype=float32)
When I compute the loss with tf.keras.losses.categorical_crossentropy(to_categorical(y_true, num_classes=27), y_pred, from_logits=True), the value I get is 2.3575358.
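For reference, here is a self-contained snippet that reproduces this call exactly (the values are copied from above):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.utils import to_categorical

y_true = 16
# The 27 predicted probabilities listed above.
y_pred = np.array([1.1868494e-08, 1.8747659e-09, 1.2777099e-11, 3.6140797e-08,
                   6.5852622e-11, 2.2888577e-10, 1.4515833e-09, 2.8392664e-09,
                   4.7054605e-10, 9.5605066e-11, 9.3647139e-13, 2.6149302e-10,
                   2.5338919e-14, 4.8815413e-10, 3.9381631e-14, 2.1434269e-06,
                   9.9999785e-01, 3.0857247e-08, 1.3536775e-09, 4.6811921e-10,
                   3.0638234e-10, 2.0818169e-09, 2.9950772e-10, 1.0457132e-10,
                   3.2959850e-11, 3.4232595e-10, 5.1689473e-12], dtype=np.float32)

# One-hot encode the target and compute the Keras loss.
loss = tf.keras.losses.categorical_crossentropy(
    to_categorical(y_true, num_classes=27), y_pred, from_logits=True)
print(loss.numpy())  # prints 2.3575358
```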
But if I apply the categorical cross-entropy formula by hand,
-np.sum(to_categorical(y_true, num_classes=27) * np.log(y_pred))
I get the value 2.1457695e-06.
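Continuing from the snippet above (same y_true and y_pred), the by-hand version is:

```python
# Manual categorical cross-entropy: -sum(one_hot_target * log(probabilities)).
manual_loss = -np.sum(to_categorical(y_true, num_classes=27) * np.log(y_pred))
print(manual_loss)  # prints 2.1457695e-06
```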
Now, my question is: why does tf.keras.losses.categorical_crossentropy give a different value?
The strange thing is that my model reaches 100% accuracy even though the loss is stuck at 2.3575. Below is a plot of accuracy and loss during training.
What formula does TensorFlow use to calculate categorical cross-entropy?
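For clarity, the definition I used for the manual calculation is the textbook one, with $t$ the one-hot target and $p$ the predicted probability vector:

$$\mathrm{CE}(t, p) = -\sum_{i=1}^{27} t_i \log p_i$$

Since $t$ is one-hot at index 16, this reduces to -log(y_pred[16]) = -log(0.99999785) ≈ 2.1457695e-06, which matches my manual result but not the Keras one.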